Deep Agents work with any LangChain chat model that supports tool calling.

Supported models

Specify models in provider:model format (for example, google_genai:gemini-3.1-pro-preview, openai:gpt-5.4, or anthropic:claude-sonnet-4-6). For valid provider strings, see the model_provider parameter of init_chat_model. For provider-specific configuration, see chat model integrations.
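A spec is the provider string and model name joined by the first colon. The split shown below is an illustrative sketch of how the format decomposes, not the library's actual parser (resolution is handled by init_chat_model):

```python
# Illustrative decomposition of a provider:model spec; the real
# resolution happens inside init_chat_model, this split is an assumption.
spec = "google_genai:gemini-3.1-pro-preview"
provider, model_name = spec.split(":", 1)
print(provider)    # google_genai
print(model_name)  # gemini-3.1-pro-preview
```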

Suggested models

These models perform well on the Deep Agents eval suite, which tests basic agent operations. Passing these evals is necessary but not sufficient for strong performance on longer, more complex tasks.
  • Google: gemini-3.1-pro-preview, gemini-3-flash-preview
  • OpenAI: gpt-5.4, gpt-4o, o4-mini, gpt-5.2-codex, gpt-4o-mini, o3
  • Anthropic: claude-opus-4-6, claude-opus-4-5, claude-sonnet-4-6, claude-sonnet-4, claude-sonnet-4-5, claude-haiku-4-5, claude-opus-4-1
  • Open-weight: GLM-5, Kimi-K2.5, MiniMax-M2.5, qwen3.5-397B-A17B, devstral-2-123B
Open-weight models are available through providers like Baseten, Fireworks, OpenRouter, and Ollama.

Configure model parameters

Pass a model string to create_deep_agent in provider:model format, or pass a configured model instance for full control. Under the hood, model strings are resolved via init_chat_model. To configure model-specific parameters, use init_chat_model or instantiate a provider model class directly:
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent

model = init_chat_model(
    model="google_genai:gemini-3.1-pro-preview",
    thinking_level="medium",  
)
agent = create_deep_agent(model=model)
Available parameters vary by provider. See the chat model integrations page for provider-specific configuration options.

Provider profiles

ProviderProfile is a public beta API and may be updated in future releases.
Provider profiles let you package model setup for a provider or a specific model. They apply when the harness turns a string spec like "openai:gpt-5.4" into a chat model, and shape how the client is built:
  • init_kwargs — default kwargs forwarded to init_chat_model
  • pre_init — side effects to run before construction (for example, credential validation for a clearer error than the SDK would give)
  • init_kwargs_factory — kwargs derived from runtime state (for example, headers pulled from environment variables)
Register a profile under a provider name like "openai" for provider-wide defaults, or under a fully qualified provider:model key like "openai:gpt-5.4" for per-model overrides. Registrations are additive: re-registering under an existing key merges on top of the prior registration:
  • init_kwargs dicts merge key-wise (your value wins on a shared key)
  • pre_init callables chain (the existing hook runs first, then the new one)
  • init_kwargs_factory callables chain, with their outputs merged every time resolve_model runs
from deepagents import ProviderProfile, register_provider_profile

# `temperature=0` is forwarded whenever the harness builds an
# `openai:*` model from a string spec.
register_provider_profile(
    "openai",
    ProviderProfile(init_kwargs={"temperature": 0}),
)
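The additive merge semantics described above can be sketched with plain dicts and callables. This mirrors the documented rules only; it is not the library's internal merge code:

```python
# Sketch of additive registration: re-registering under an existing key
# merges on top of the prior registration. Plain values stand in for
# ProviderProfile fields.

def merge_init_kwargs(existing: dict, new: dict) -> dict:
    # key-wise merge: the newer registration wins on shared keys
    return {**existing, **new}

def chain_pre_init(existing, new):
    # the existing hook runs first, then the new one
    def chained():
        existing()
        new()
    return chained

old = {"temperature": 0, "max_tokens": 512}
override = {"temperature": 0.2}
merged = merge_init_kwargs(old, override)
print(merged)  # {'temperature': 0.2, 'max_tokens': 512}

calls = []
chained = chain_pre_init(lambda: calls.append("existing"), lambda: calls.append("new"))
chained()
print(calls)  # ['existing', 'new']
```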
This is useful when you want model selection to carry the right defaults automatically, without repeating setup code everywhere you create an agent. If you pass a preconfigured chat model instance directly, that instance’s settings take precedence — ProviderProfile is only consulted when the harness constructs the model from a string spec. For harness behavior after model creation, see Harness profiles.
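As a concrete sketch of pre_init and init_kwargs_factory hooks, the callables below validate a credential and derive headers from the environment. The env var names, the header key, and the zero-arg signatures are all assumptions for illustration, not the library's required interface:

```python
import os

def validate_credentials() -> None:
    # pre_init-style hook: fail fast with a clearer error than the SDK's.
    # MY_PROVIDER_API_KEY is a hypothetical variable name.
    if not os.environ.get("MY_PROVIDER_API_KEY"):
        raise RuntimeError("MY_PROVIDER_API_KEY is not set")

def headers_from_env() -> dict:
    # init_kwargs_factory-style hook: kwargs derived from runtime state.
    # The header key is illustrative.
    return {"default_headers": {"x-org-id": os.environ.get("MY_ORG_ID", "")}}

# These would then be attached when registering, e.g.:
# register_provider_profile(
#     "my_provider",
#     ProviderProfile(pre_init=validate_credentials,
#                     init_kwargs_factory=headers_from_env),
# )

os.environ["MY_PROVIDER_API_KEY"] = "demo-key"
os.environ["MY_ORG_ID"] = "acme"
validate_credentials()     # passes once the key is present
print(headers_from_env())  # {'default_headers': {'x-org-id': 'acme'}}
```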
Distributable profiles can register themselves via importlib.metadata entry points instead of requiring callers to run register_provider_profile by hand. Declare an entry point in the distribution’s own pyproject.toml under the deepagents.provider_profiles group:
[project.entry-points."deepagents.provider_profiles"]
my_provider = "my_pkg.profiles:register"
The target resolves to a zero-arg callable that performs the registrations when deepagents.profiles is imported:
from deepagents import ProviderProfile, register_provider_profile

def register() -> None:
    register_provider_profile(
        "my_provider",
        ProviderProfile(init_kwargs={"temperature": 0}),
    )

Select a model at runtime

If your application lets users choose a model (for example, from a dropdown in the UI), use middleware to swap the model at runtime without rebuilding the agent. Pass the user's model selection through runtime context, then override the model on each invocation with middleware built from the @wrap_model_call decorator:
from dataclasses import dataclass
from typing import Callable

from langchain.agents.middleware import ModelRequest, ModelResponse, wrap_model_call
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent


@dataclass
class Context:
    model: str

@wrap_model_call
def configurable_model(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
    model_name = request.runtime.context.model
    model = init_chat_model(model_name)
    return handler(request.override(model=model))

agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    middleware=[configurable_model],
    context_schema=Context,
)

# Invoke with the user's model selection
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Hello!"}]},
    context=Context(model="openai:gpt-5.4"),
)
For more dynamic model patterns (for example routing based on conversation complexity or cost optimization), see Dynamic model in the LangChain agents guide.
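A complexity-based routing rule of that kind can plug into the same @wrap_model_call pattern shown above, choosing the spec before the handler runs. The threshold and model choices here are illustrative assumptions:

```python
def pick_model(message_count: int) -> str:
    # Hypothetical routing rule: escalate long conversations to a
    # stronger model. The threshold and model names are illustrative.
    return "openai:gpt-5.4" if message_count > 10 else "openai:gpt-4o-mini"

print(pick_model(3))   # openai:gpt-4o-mini
print(pick_model(25))  # openai:gpt-5.4
```

Inside the middleware, such a rule would pick the spec from the request (for example, from the length of its message list) before calling request.override.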

Learn more

  • Models in LangChain: chat model features including tool calling, structured output, and multimodality