docs/models/index.md

If you need prefix-based model routing (for example mixing `openai/...` and `litellm/...` model names in one run), use [`MultiProvider`][agents.MultiProvider] and set `openai_use_responses_websocket=True` there instead.
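For example, a single `MultiProvider` can serve agents whose model names use different prefixes in the same run. This is a minimal sketch; the model names are illustrative, and running it requires the corresponding API keys:

```python
from agents import Agent, MultiProvider, RunConfig, Runner

# One provider routes both prefixes: "openai/..." goes to the OpenAI
# provider (here over websocket transport), "litellm/..." goes to LiteLLM.
provider = MultiProvider(openai_use_responses_websocket=True)

triage = Agent(
    name="Triage",
    instructions="Route the request.",
    model="openai/gpt-4.1-mini",
)

result = await Runner.run(
    triage,
    "Hello",
    run_config=RunConfig(model_provider=provider),
)
```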
`MultiProvider` keeps two historical defaults:
- `openai/...` is treated as an alias for the OpenAI provider, so `openai/gpt-4.1` is routed as model `gpt-4.1`.
- Unknown prefixes raise `UserError` instead of being passed through.
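The two defaults above can be sketched as a plain function. This is an illustration of the routing rule, not the SDK's actual implementation:

```python
def route_model_name(name: str) -> str:
    """Hypothetical sketch of MultiProvider's default prefix handling."""
    prefix, sep, rest = name.partition("/")
    if not sep:
        return name  # no prefix: the default OpenAI provider gets the name as-is
    if prefix == "openai":
        return rest  # alias: "openai/gpt-4.1" is routed as "gpt-4.1"
    # The real MultiProvider also knows registered prefixes such as "litellm/";
    # anything else raises (UserError in the SDK, ValueError in this sketch).
    raise ValueError(f"unknown model prefix: {prefix!r}")
```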
When you point the OpenAI provider at an OpenAI-compatible endpoint that expects literal namespaced model IDs, opt into the pass-through behavior explicitly. In websocket-enabled setups, keep `openai_use_responses_websocket=True` on the `MultiProvider` as well:
```python
from agents import Agent, MultiProvider, RunConfig, Runner
provider = MultiProvider(
    openai_base_url="https://openrouter.ai/api/v1",
    openai_api_key="...",
    openai_use_responses_websocket=True,
    openai_prefix_mode="model_id",
    unknown_prefix_mode="model_id",
)
agent = Agent(
    name="Assistant",
    instructions="Be concise.",
    model="openai/gpt-4.1",
)
result = await Runner.run(
    agent,
    "Hello",
    run_config=RunConfig(model_provider=provider),
)
```
Use `openai_prefix_mode="model_id"` when a backend expects the literal `openai/...` string. Use `unknown_prefix_mode="model_id"` when the backend expects other namespaced model IDs such as `openrouter/openai/gpt-4.1-mini`. These options also work on `MultiProvider` outside websocket transport; this example keeps websocket enabled because it is part of the transport setup described in this section. The same options are also available on [`responses_websocket_session()`][agents.responses_websocket_session].
If you use a custom OpenAI-compatible endpoint or proxy, websocket transport also requires a compatible websocket `/responses` endpoint. In those setups you may need to set `websocket_base_url` explicitly.
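As a sketch, such a proxy setup might look like the following. The exact parameter name and placement of `websocket_base_url` are assumptions here; check your SDK version, and note that the host and path are placeholders:

```python
from agents import MultiProvider

# Hypothetical proxy setup: the HTTP base URL and the websocket /responses
# endpoint are served from different URLs, so both are set explicitly.
provider = MultiProvider(
    openai_base_url="https://proxy.example.com/v1",
    openai_api_key="...",
    openai_use_responses_websocket=True,
    websocket_base_url="wss://proxy.example.com/v1",  # assumed parameter placement
)
```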