# OptimaChat Overview
OptimaChat is the built-in chat interface for OptimaGPT. It lets you have conversations with the AI models running on your OptimaNode, directly in a web browser or through the OptimaChatApp desktop application.
## What OptimaChat provides
- A conversation interface connected to your organisation's locally hosted AI models
- Support for tools — AI-driven actions such as searching, fetching data, or calling external services
- A context window visualiser showing how much of the model's memory the current conversation is using
- Controls for adjusting how the model responds, including temperature and a custom system prompt
- A conversation history saved automatically as you chat
## Accessing OptimaChat
Browser: Navigate to your OptimaGPT address (e.g. https://optima.yourcompany.com). After logging in, you land directly on the Chat page.
Desktop app: Open OptimaChatApp and sign in with your OptimaGPT credentials. See Installing the Desktop App.
## The chat interface
The Chat page is divided into two areas:

Conversation area (left) — where you type messages and read responses. The model selector at the top lets you choose which AI model to use for the conversation.
Settings panel (right) — a collapsible panel containing:

- The context window visualiser
- Display toggles for Thinking Blocks and Tool Blocks
- The system prompt editor
The settings panel can be collapsed to give the conversation area more space.
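Conceptually, the context window visualiser shows the ratio of tokens the conversation has consumed to the model's context limit. The sketch below is only an illustration of that idea, not OptimaChat's actual implementation; the function and parameter names are hypothetical.

```python
def context_usage(used_tokens: int, context_limit: int) -> float:
    """Fraction of the model's context window consumed by the
    current conversation (0.0 = empty, 1.0 = full)."""
    if context_limit <= 0:
        raise ValueError("context_limit must be positive")
    # Clamp so a conversation truncated server-side never reports >100%.
    return min(used_tokens / context_limit, 1.0)

def usage_bar(used_tokens: int, context_limit: int, width: int = 20) -> str:
    """Render a text progress bar resembling the visualiser's fill state."""
    filled = round(context_usage(used_tokens, context_limit) * width)
    return "[" + "#" * filled + "-" * (width - filled) + "]"
```

For example, a conversation that has used 4,096 tokens of an 8,192-token context window would render as a half-filled bar.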
## Navigation
The left navigation bar has three items:
| Item | Description |
|---|---|
| Chat | The main conversation interface |
| Search | Search across conversations |
| Settings | Personal settings (animations and similar preferences) |
## Choosing a model
Use the model selector dropdown at the top of the conversation area to choose which AI model handles your messages. The list shows all models currently online and available on the connected nodes.
Different models have different capabilities, sizes, and response styles. If you are unsure which to use, ask your administrator which model is recommended for your use case.
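The selector's behaviour described above amounts to filtering the full model list down to those currently online. As a rough sketch (the `ModelInfo` record and its fields are hypothetical, not OptimaGPT's actual API):

```python
from dataclasses import dataclass

@dataclass
class ModelInfo:
    name: str
    online: bool  # whether any connected node is currently serving it

def selectable_models(models: list[ModelInfo]) -> list[str]:
    """Names shown in the selector: only models that are online
    and available on the connected nodes."""
    return [m.name for m in models if m.online]
```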
## What happens when you send a message
1. Your message is sent to the Gateway.
2. The Gateway routes it to an available Node running the selected model.
3. The model generates a response, which streams back to your screen token by token.
4. If tools are enabled and the model decides to use one, an agent run appears in the conversation showing the tool activity before the final response.
5. The conversation is saved automatically.
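The flow above can be sketched as a minimal simulation. This is an illustration under stated assumptions only: the names (`route_to_node`, `stream_response`, `send_message`) and the echo behaviour are hypothetical and do not reflect the real Gateway protocol.

```python
from typing import Iterator

def route_to_node(nodes: dict[str, list[str]], model: str) -> str:
    """Step 2: pick the first available node serving the requested model."""
    for node, served_models in nodes.items():
        if model in served_models:
            return node
    raise LookupError(f"no node is serving {model!r}")

def stream_response(text: str) -> Iterator[str]:
    """Step 3: yield the response piece by piece (here, word by word,
    standing in for token-by-token streaming)."""
    for token in text.split():
        yield token

def send_message(nodes: dict[str, list[str]], model: str,
                 message: str, history: list) -> str:
    node = route_to_node(nodes, model)                    # steps 1-2
    tokens = stream_response(f"echo from {node}: {message}")
    reply = " ".join(tokens)                              # steps 3-4
    history.append((message, reply))                      # step 5: auto-saved
    return reply
```

A usage example: with `nodes = {"node-a": ["small-model"], "node-b": ["big-model"]}`, sending `"hello"` to `"big-model"` routes to `node-b`, streams the reply, and appends one entry to the conversation history.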