If you prefer environment-based endpoint configuration, the default OpenAI provider also reads `OPENAI_BASE_URL`. When you enable Responses websocket transport, it also reads `OPENAI_WEBSOCKET_BASE_URL` for the websocket `/responses` endpoint.
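For example (`OPENAI_BASE_URL` is the standard variable; `OPENAI_WEBSOCKET_BASE_URL` is the additional one described here, and both URLs below are placeholders):

```shell
export OPENAI_BASE_URL="https://proxy.example.com/v1"
# Only read when Responses websocket transport is enabled:
export OPENAI_WEBSOCKET_BASE_URL="wss://proxy.example.com/v1"
```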
Finally, you can also customize the OpenAI API that is used. By default, we use the OpenAI Responses API. You can override this to use the Chat Completions API by using the [set_default_openai_api()][agents.set_default_openai_api] function.
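For example, to switch the SDK to the Chat Completions API globally:

```python
from agents import set_default_openai_api

# Use the Chat Completions API instead of the default Responses API.
set_default_openai_api("chat_completions")
```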
```python
from agents import set_tracing_export_api_key

set_tracing_export_api_key("sk-...")
```
If your model traffic uses one key or client but tracing should use a different OpenAI key, pass `use_for_tracing=False` when setting the default key or client, then configure tracing separately. The same pattern works with [`set_default_openai_key()`][agents.set_default_openai_key] if you are not using a custom client.
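A minimal sketch of that split, assuming one key for model traffic and a separate key for trace export:

```python
from agents import set_default_openai_key, set_tracing_export_api_key

# Model traffic uses this key; don't reuse it for trace export.
set_default_openai_key("sk-model-...", use_for_tracing=False)

# Trace export uses a different OpenAI key.
set_tracing_export_api_key("sk-tracing-...")
```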
If you need to attribute traces to a specific organization or project when using the default exporter, set these environment variables before your app starts:
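For example, a sketch of that setup; the variable names `OPENAI_ORG_ID` and `OPENAI_PROJECT_ID` are assumptions here, so verify them against your SDK version:

```shell
# Assumed variable names for attributing exported traces to an org/project.
export OPENAI_ORG_ID="org-..."
export OPENAI_PROJECT_ID="proj_..."
```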
---

**`docs/models/index.md`**
OpenAI-backed providers also accept optional agent registration config. This is an advanced option for cases where your OpenAI setup expects provider-level registration metadata such as a harness ID.
If you need prefix-based model routing (for example mixing `openai/...` and `any-llm/...` model names in one run), use [`MultiProvider`][agents.MultiProvider] and set `openai_use_responses_websocket=True` there instead.
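A sketch of that routing setup, assuming the `openai_use_responses_websocket` flag described in this changeset:

```python
from agents import Agent, MultiProvider, RunConfig

# "openai/..." model names go to the OpenAI provider over websocket
# transport; other prefixes are routed to their own providers as usual.
provider = MultiProvider(openai_use_responses_websocket=True)
run_config = RunConfig(model_provider=provider)

agent = Agent(name="Assistant", model="openai/gpt-4.1-mini")
```

Pass `run_config` to `Runner.run(...)` as usual; the provider decides per model name which backend handles the call.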
Use `openai_prefix_mode="model_id"` when a backend expects the literal `openai/...` string. Use `unknown_prefix_mode="model_id"` when the backend expects other namespaced model IDs such as `openrouter/openai/gpt-4.1-mini`. These options also work on `MultiProvider` outside websocket transport; this example keeps websocket enabled because it is part of the transport setup described in this section. The same options are also available on [`responses_websocket_session()`][agents.responses_websocket_session].
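A sketch combining both options, assuming the parameter names used in this changeset:

```python
from agents import MultiProvider

# Pass the literal "openai/..." string through to the backend, and forward
# other namespaced IDs (e.g. "openrouter/openai/gpt-4.1-mini") unchanged.
provider = MultiProvider(
    openai_use_responses_websocket=True,
    openai_prefix_mode="model_id",
    unknown_prefix_mode="model_id",
)
```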
If you need the same provider-level registration metadata while routing through `MultiProvider`, pass `openai_agent_registration=OpenAIAgentRegistrationConfig(...)` and it will be forwarded to the underlying OpenAI provider.
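A sketch of that forwarding, assuming the `OpenAIAgentRegistrationConfig` shape from this changeset; the import path and the `harness_id` field are illustrative assumptions, not a confirmed API:

```python
from agents import MultiProvider
# Assumed import path for the registration config from this changeset:
from agents.models.openai_provider import OpenAIAgentRegistrationConfig

# Forwarded to the underlying OpenAI provider; "harness_id" is a
# hypothetical field standing in for your registration metadata.
provider = MultiProvider(
    openai_agent_registration=OpenAIAgentRegistrationConfig(harness_id="my-harness"),
)
```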
If you use a custom OpenAI-compatible endpoint or proxy, websocket transport also requires a compatible websocket `/responses` endpoint. In those setups you may need to set `websocket_base_url` explicitly.
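For example, when pointing at a proxy (a sketch assuming the `websocket_base_url` and websocket-enable parameters from this changeset; the URLs are placeholders):

```python
from agents.models.openai_provider import OpenAIProvider

provider = OpenAIProvider(
    base_url="https://proxy.example.com/v1",
    # Assumed parameter names from this changeset; the proxy must expose a
    # websocket-compatible /responses endpoint at this base URL.
    websocket_base_url="wss://proxy.example.com/v1",
    use_responses_websocket=True,
)
```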