Commit 6b14ebc

docs: clarify OpenAI provider configuration guidance (#2901)
1 parent 47ecb38 commit 6b14ebc

File tree

3 files changed (+49 −1 lines changed)


docs/config.md

Lines changed: 22 additions & 0 deletions
````diff
@@ -32,6 +32,13 @@ custom_client = AsyncOpenAI(base_url="...", api_key="...")
 set_default_openai_client(custom_client)
 ```
 
+If you prefer environment-based endpoint configuration, the default OpenAI provider also reads `OPENAI_BASE_URL`. When you enable Responses websocket transport, it also reads `OPENAI_WEBSOCKET_BASE_URL` for the websocket `/responses` endpoint.
+
+```bash
+export OPENAI_BASE_URL="https://your-openai-compatible-endpoint.example/v1"
+export OPENAI_WEBSOCKET_BASE_URL="wss://your-openai-compatible-endpoint.example/v1"
+```
+
 Finally, you can also customize the OpenAI API that is used. By default, we use the OpenAI Responses API. You can override this to use the Chat Completions API by using the [set_default_openai_api()][agents.set_default_openai_api] function.
 
 ```python
@@ -50,6 +57,21 @@ from agents import set_tracing_export_api_key
 set_tracing_export_api_key("sk-...")
 ```
 
+If your model traffic uses one key or client but tracing should use a different OpenAI key, pass `use_for_tracing=False` when setting the default key or client, then configure tracing separately. The same pattern works with [`set_default_openai_key()`][agents.set_default_openai_key] if you are not using a custom client.
+
+```python
+from openai import AsyncOpenAI
+from agents import (
+    set_default_openai_client,
+    set_tracing_export_api_key,
+)
+
+custom_client = AsyncOpenAI(base_url="https://your-openai-compatible-endpoint.example/v1", api_key="provider-key")
+set_default_openai_client(custom_client, use_for_tracing=False)
+
+set_tracing_export_api_key("sk-tracing")
+```
+
 If you need to attribute traces to a specific organization or project when using the default exporter, set these environment variables before your app starts:
 
 ```bash
````
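As a rough illustration of the environment-based configuration added in the first hunk, a provider could resolve its endpoints like this. This is a hedged sketch only: the resolution order, the fallback defaults, and the `resolve_endpoints` helper are assumptions for illustration, not the SDK's implementation.

```python
import os

# Hypothetical sketch: resolve HTTP and websocket endpoints from the
# environment. The default values and the https->wss fallback are
# assumptions, not taken from the SDK source.
def resolve_endpoints(use_websocket: bool = False) -> dict:
    base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    endpoints = {"http": base_url}
    if use_websocket:
        # Websocket transport reads a separate variable for the
        # websocket /responses endpoint.
        endpoints["websocket"] = os.environ.get(
            "OPENAI_WEBSOCKET_BASE_URL",
            base_url.replace("https://", "wss://", 1),
        )
    return endpoints
```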

docs/models/index.md

Lines changed: 26 additions & 0 deletions
````diff
@@ -133,6 +133,30 @@ result = await Runner.run(
 )
 ```
 
+OpenAI-backed providers also accept optional agent registration config. This is an advanced option for cases where your OpenAI setup expects provider-level registration metadata such as a harness ID.
+
+```python
+from agents import (
+    Agent,
+    OpenAIAgentRegistrationConfig,
+    OpenAIProvider,
+    RunConfig,
+    Runner,
+)
+
+provider = OpenAIProvider(
+    use_responses_websocket=True,
+    agent_registration=OpenAIAgentRegistrationConfig(harness_id="your-harness-id"),
+)
+
+agent = Agent(name="Assistant")
+result = await Runner.run(
+    agent,
+    "Hello",
+    run_config=RunConfig(model_provider=provider),
+)
+```
+
 #### Advanced routing with `MultiProvider`
 
 If you need prefix-based model routing (for example mixing `openai/...` and `any-llm/...` model names in one run), use [`MultiProvider`][agents.MultiProvider] and set `openai_use_responses_websocket=True` there instead.
@@ -170,6 +194,8 @@ result = await Runner.run(
 
 Use `openai_prefix_mode="model_id"` when a backend expects the literal `openai/...` string. Use `unknown_prefix_mode="model_id"` when the backend expects other namespaced model IDs such as `openrouter/openai/gpt-4.1-mini`. These options also work on `MultiProvider` outside websocket transport; this example keeps websocket enabled because it is part of the transport setup described in this section. The same options are also available on [`responses_websocket_session()`][agents.responses_websocket_session].
 
+If you need the same provider-level registration metadata while routing through `MultiProvider`, pass `openai_agent_registration=OpenAIAgentRegistrationConfig(...)` and it will be forwarded to the underlying OpenAI provider.
+
 If you use a custom OpenAI-compatible endpoint or proxy, websocket transport also requires a compatible websocket `/responses` endpoint. In those setups you may need to set `websocket_base_url` explicitly.
 
 #### Notes
````
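The prefix-based routing and `openai_prefix_mode` semantics this hunk documents can be sketched as a toy standalone function. Everything here is an illustrative assumption (the routing table, the stripping behavior, and the `route_model` name); it is not `MultiProvider`'s actual implementation.

```python
# Toy sketch of prefix-based model routing. Returns (provider, model_id)
# for a possibly-prefixed model name; all behavior here is assumed for
# illustration, not taken from the SDK.
def route_model(name: str, openai_prefix_mode: str = "strip") -> tuple[str, str]:
    prefix, sep, rest = name.partition("/")
    if sep and prefix == "openai":
        # "model_id" mode forwards the literal "openai/..." string to
        # backends that expect it; otherwise the prefix is stripped.
        model_id = name if openai_prefix_mode == "model_id" else rest
        return ("openai", model_id)
    if sep and prefix == "any-llm":
        return ("any-llm", rest)
    # Unprefixed names go to the default OpenAI provider in this sketch.
    return ("openai", name)
```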

docs/running_agents.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -163,7 +163,7 @@ Use `tool_error_formatter` to customize the message that is returned to the mode
 The formatter receives [`ToolErrorFormatterArgs`][agents.run_config.ToolErrorFormatterArgs] with:
 
 - `kind`: The error category. Today this is `"approval_rejected"`.
-- `tool_type`: The tool runtime (`"function"`, `"computer"`, `"shell"`, or `"apply_patch"`).
+- `tool_type`: The tool runtime (`"function"`, `"computer"`, `"shell"`, `"apply_patch"`, or `"custom"`).
 - `tool_name`: The tool name.
 - `call_id`: The tool call ID.
 - `default_message`: The SDK's default model-visible message.
```
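The field list in this hunk can be exercised with a small custom formatter. The dataclass below is a self-contained stand-in mirroring the documented fields, not the SDK's `ToolErrorFormatterArgs` class, and the formatter logic is purely illustrative.

```python
from dataclasses import dataclass

# Stand-in for agents.run_config.ToolErrorFormatterArgs, mirroring the
# fields documented above (the real class may differ).
@dataclass
class ToolErrorArgs:
    kind: str             # error category, e.g. "approval_rejected"
    tool_type: str        # "function", "computer", "shell", "apply_patch", or "custom"
    tool_name: str        # the tool name
    call_id: str          # the tool call ID
    default_message: str  # the SDK's default model-visible message

def tool_error_formatter(args: ToolErrorArgs) -> str:
    # Customize only approval rejections; fall back to the SDK's
    # default message for any other category.
    if args.kind == "approval_rejected":
        return (
            f"Call {args.call_id} to {args.tool_type} tool "
            f"'{args.tool_name}' was not approved by the user."
        )
    return args.default_message
```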
