Add Custom Models
1 When to add a custom model
Add a custom model when you want OpAgent to use your own provider key, a company gateway, or an OpenAI-compatible endpoint instead of only OpAgent-hosted models.
Common cases:
- you already have an OpenAI, Anthropic, Gemini, or compatible provider account;
- your team has an internal model gateway;
- you want a separate model for cost, speed, or privacy reasons.
2 Open the Models page
Open the OpAgent app menu from the logo in the title bar, then choose Models.
If you are not logged in, log in first: OpAgent uses the model catalog to look up model IDs and capabilities.
3 Add a provider model
1. Refresh models: click refresh if the OpAgent model list is empty or outdated.
2. Choose a provider target: select New provider to create a new endpoint, or choose an existing provider to add another model under it.
3. Enter a provider name: use a readable name, for example OpenAI, Company Gateway, or Local Proxy.
4. Choose the model: select the model ID from the OpAgent catalog, picking the ID that matches the model served by your endpoint.
5. Choose the API protocol: select the protocol your endpoint supports:
   - OpenAI Completions for /chat/completions-compatible endpoints;
   - OpenAI Responses for /responses-compatible endpoints;
   - Anthropic Messages for native Anthropic Messages endpoints;
   - Gemini Native for Google Gemini native endpoints.
6. Enter the base URL and API key: enter the endpoint base URL and the API key from your provider.
7. Add the model: click Add model and make sure the model is enabled.
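As a rough guide, each protocol option calls a different endpoint path relative to the base URL you enter in the next step. A minimal sketch, with paths taken from the public OpenAI, Anthropic, and Gemini APIs (a company gateway may route differently):

```python
# Endpoint path each protocol option calls, relative to the base URL.
# Paths reflect the public provider APIs; a private gateway may
# expose different routes.
PROTOCOL_PATHS = {
    "OpenAI Completions": "/chat/completions",
    "OpenAI Responses": "/responses",
    "Anthropic Messages": "/messages",  # the Anthropic base URL already ends in /v1
    "Gemini Native": "/v1beta/models/{model}:generateContent",
}
```

If a request fails later, comparing the selected protocol against this table is a quick first check.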
4 Set the default chat model
After adding a model, click Set default on the model row. The default model is used when a conversation or agent does not have a more specific model selected.
You can still choose a different model from the conversation panel for a single chat.
5 Base URL examples
Use the base URL expected by your provider:

- OpenAI-compatible: https://api.openai.com/v1
- Anthropic native: https://api.anthropic.com/v1
- Gemini native: https://generativelanguage.googleapis.com
- Company gateway: the base URL provided by your team
If a request fails, check that the protocol and base URL belong together. For example, an OpenAI-compatible proxy usually needs OpenAI Completions or OpenAI Responses, not Anthropic Messages.
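The pairing rule above amounts to a simple URL join: the full request URL is the base URL plus the protocol's endpoint path. A small illustrative sketch (the paths reflect the public APIs; your gateway may differ):

```python
def endpoint_url(base_url: str, path: str) -> str:
    """Join a provider base URL and an endpoint path without
    duplicating or dropping slashes."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")

# Matching pairing: an OpenAI-compatible base with an OpenAI protocol.
endpoint_url("https://api.openai.com/v1", "chat/completions")
# -> "https://api.openai.com/v1/chat/completions"

# Mismatched pairing: the same base with the Anthropic Messages path
# yields a URL an OpenAI-compatible proxy will typically reject.
endpoint_url("https://api.openai.com/v1", "messages")
# -> "https://api.openai.com/v1/messages"
```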
6 Troubleshooting
- The model does not appear in chat: make sure it is enabled.
- The model cannot be selected for custom inline completion: make sure it is enabled first.
- The request fails immediately: check API key and base URL.
- The provider returns protocol errors: choose a different API protocol.
- You do not see model IDs: log in and refresh the OpAgent model catalog.
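For the "request fails immediately" case, one way to isolate key and base URL problems is to call the models-list route directly, outside OpAgent. A minimal sketch for an OpenAI-compatible endpoint (GET /models is part of the public OpenAI API; other protocols and some gateways do not expose it):

```python
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the OpenAI-compatible models list.
    A 401 response points at the API key; a 404 at the base URL."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# To run the check (requires network access and a real key):
# with urllib.request.urlopen(build_models_request(
#         "https://api.openai.com/v1", "YOUR_KEY")) as resp:
#     print(resp.status)
```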