Listing Models
Use the models endpoint to discover which AI models are currently available on your OptimaGPT deployment.
Endpoint
GET /optima/v1/models
Requires authentication. See Authentication.
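A minimal sketch of building the request in Python, assuming the deployment accepts a Bearer token in the `Authorization` header (this page defers the exact scheme to the Authentication section; the base URL and token below are placeholders):

```python
import urllib.request

BASE_URL = "https://optima.example.com"  # placeholder deployment address
API_TOKEN = "YOUR_API_TOKEN"             # placeholder credential

def build_models_request(base_url: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for the models endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/optima/v1/models",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = build_models_request(BASE_URL, API_TOKEN)
# Send with urllib.request.urlopen(req) once BASE_URL and API_TOKEN are real.
```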
Response
The response contains two arrays: one for language models (used with /v1/chat/completions) and one for embedding models (used with /v1/embeddings).
Example response:
    {
      "object": "list",
      "chat_models": [
        {
          "id": "llama-3.1-8b-q4",
          "name": "Llama 3.1 8B Q4",
          "object": "chat_model",
          "owned_by": "optima",
          "ctx_max": 8192,
          "description": "Llama 3.1 8B, Q4_K_M quantisation"
        },
        {
          "id": "qwen2.5-32b-q4",
          "name": "Qwen 2.5 32B Q4",
          "object": "chat_model",
          "owned_by": "optima",
          "ctx_max": 16384,
          "description": null
        }
      ],
      "embed_models": [
        {
          "id": "nomic-embed-text",
          "name": "Nomic Embed Text",
          "object": "embedding_model",
          "owned_by": "optima",
          "dimensions": 768,
          "description": null
        }
      ]
    }
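A short sketch of working with this response in Python, using the two models from the example above (only the fields needed here are reproduced):

```python
# Trimmed copy of the example response above.
response = {
    "object": "list",
    "chat_models": [
        {"id": "llama-3.1-8b-q4", "ctx_max": 8192},
        {"id": "qwen2.5-32b-q4", "ctx_max": 16384},
    ],
    "embed_models": [
        {"id": "nomic-embed-text", "dimensions": 768},
    ],
}

def chat_model_ids(resp: dict) -> list[str]:
    """All chat model ids currently available on the deployment."""
    return [m["id"] for m in resp["chat_models"]]

def largest_context_model(resp: dict) -> str:
    """Id of the chat model with the largest context window."""
    return max(resp["chat_models"], key=lambda m: m["ctx_max"])["id"]

print(chat_model_ids(response))        # ['llama-3.1-8b-q4', 'qwen2.5-32b-q4']
print(largest_context_model(response)) # qwen2.5-32b-q4
```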
Chat model fields
| Field | Description |
|---|---|
| id | The model identifier to use in API requests |
| name | Human-readable display name |
| object | Always "chat_model" |
| owned_by | Always "optima" |
| ctx_max | Maximum context window size in tokens |
| description | Optional description set in the executor configuration |
Embedding model fields
| Field | Description |
|---|---|
| id | The model identifier to use in API requests |
| name | Human-readable display name |
| object | Always "embedding_model" |
| owned_by | Always "optima" |
| dimensions | The dimensionality of the output embedding vectors |
| description | Optional description set in the executor configuration |
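The dimensions field matters when sizing downstream storage (for example, a vector index must be created with the same dimensionality). A small lookup helper, sketched against the example listing above:

```python
def embedding_dimensions(resp: dict, model_id: str) -> int:
    """Return the output vector size for an embedding model in the listing."""
    for model in resp["embed_models"]:
        if model["id"] == model_id:
            return model["dimensions"]
    raise KeyError(f"embedding model not listed: {model_id}")

listing = {"embed_models": [{"id": "nomic-embed-text", "dimensions": 768}]}
print(embedding_dimensions(listing, "nomic-embed-text"))  # 768
```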
Which models appear here
Only models with a running executor appear in the response. Models that are installed but stopped do not appear. If a model you expect to see is missing, check with your administrator that its executor is online.
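Because stopped models are omitted from the listing, a client can treat presence in the response as an availability check before sending a request. A minimal sketch, matching the response shape shown earlier:

```python
def is_model_available(resp: dict, model_id: str) -> bool:
    """True only if the model appears in the listing, i.e. its executor is running."""
    running = resp.get("chat_models", []) + resp.get("embed_models", [])
    return any(m["id"] == model_id for m in running)

listing = {
    "chat_models": [{"id": "llama-3.1-8b-q4"}],
    "embed_models": [{"id": "nomic-embed-text"}],
}
print(is_model_available(listing, "llama-3.1-8b-q4"))  # True
print(is_model_available(listing, "mistral-7b"))       # False
```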
Using the model ID
The id field from this response is what you pass as the model parameter in /v1/chat/completions and /v1/embeddings requests.
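For example, a chat completions request body built around a listed model id might look like the sketch below; the messages shape follows the common OpenAI-compatible convention that the /v1/chat/completions path suggests, which this page does not itself spell out:

```python
def chat_request_body(model_id: str, prompt: str) -> dict:
    """Build a minimal /v1/chat/completions payload around a listed model id."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_request_body("llama-3.1-8b-q4", "Summarise this document.")
print(body["model"])  # llama-3.1-8b-q4
```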