Add Custom Models

Add a custom model when you want OpAgent to use your own provider key, a company gateway, or an OpenAI-compatible endpoint instead of only OpAgent-hosted models.

Common cases:

  • you already have an OpenAI, Anthropic, Gemini, or compatible provider account;
  • your team has an internal model gateway;
  • you want a separate model for cost, speed, or privacy reasons.

Open the OpAgent app menu from the logo in the title bar, then choose Models.

If you are not logged in, log in first; OpAgent reads the available model IDs and their capabilities from its model catalog.

  1. Refresh models

    Click refresh if the OpAgent model list is empty or outdated.

  2. Choose a provider target

    Select New provider to create a new endpoint, or choose an existing provider to add another model under it.

  3. Enter provider name

    Use a readable name, for example OpenAI, Company Gateway, or Local Proxy.

  4. Choose the model

    Select the model ID from the OpAgent catalog. Pick the ID that matches the model served by your endpoint.

  5. Choose API protocol

    Select the protocol your endpoint supports:

    • OpenAI Completions for /chat/completions compatible endpoints;
    • OpenAI Responses for /responses compatible endpoints;
    • Anthropic Messages for native Anthropic Messages endpoints;
    • Gemini Native for Google Gemini native endpoints.

  6. Enter Base URL and API key

    Enter the endpoint base URL and the API key from your provider.

  7. Add model

    Click Add model. Make sure the model is enabled.
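Before adding the model, it can help to verify the base URL and API key outside OpAgent. A minimal sketch for an OpenAI-compatible endpoint using curl; the base URL, key, and model ID below are placeholders, so substitute the values from your own provider:

```shell
# Sketch: send one request to an OpenAI Completions-style endpoint
# to confirm the base URL and key work. Requires network access and
# a real API key; the values below are placeholders.
BASE_URL="https://api.openai.com/v1"
API_KEY="sk-..."          # placeholder, not a real key
MODEL_ID="gpt-4o-mini"    # example model ID; use the one from the catalog

curl -sS "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$MODEL_ID\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}]}"
```

A successful reply is a JSON completion; an authentication error usually means the key or base URL is wrong.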

After adding a model, click Set default on the model row. The default model is used when a conversation or agent does not have a more specific model selected.

You can still choose a different model from the conversation panel for a single chat.

Use the base URL expected by your provider:

  • OpenAI-compatible: https://api.openai.com/v1
  • Anthropic native: https://api.anthropic.com/v1
  • Gemini native: https://generativelanguage.googleapis.com
  • Company gateway: the base URL provided by your team
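One way to sanity-check the pairing of base URL and protocol: each protocol implies a request path that is appended to the base URL. A sketch of that mapping; the `endpoint_path` helper and the protocol labels are illustrative assumptions, not part of OpAgent:

```shell
#!/bin/sh
# Illustrative mapping from API protocol to the request path appended
# to the base URL (inferred from the protocol names; not OpAgent code).
endpoint_path() {
  case "$1" in
    openai-completions) echo "/chat/completions" ;;
    openai-responses)   echo "/responses" ;;
    anthropic-messages) echo "/messages" ;;
    *) echo "unknown protocol: $1" >&2; return 1 ;;
  esac
}

# Example: an OpenAI-compatible base URL with the Completions protocol.
BASE_URL="https://api.openai.com/v1"
echo "${BASE_URL}$(endpoint_path openai-completions)"
```

If the combined URL does not match a path your provider documents, the protocol and base URL do not belong together. (Gemini native endpoints embed the model ID in the path, so they do not fit this simple concatenation.)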

If a request fails, check that the protocol and base URL belong together. For example, an OpenAI-compatible proxy usually needs OpenAI Completions or OpenAI Responses, not Anthropic Messages.

Troubleshooting:

  • The model does not appear in chat: make sure it is enabled.
  • The model cannot be selected as custom inline completion: make sure it is enabled first.
  • The request fails immediately: check API key and base URL.
  • The provider returns protocol errors: choose a different API protocol.
  • You do not see model IDs: log in and refresh the OpAgent model catalog.