This affects OpenAI Responses models resolved by the default OpenAI provider (including string model names such as `"gpt-5.2"`).
You can also configure websocket transport per provider or per run:

```python
from agents import Agent, OpenAIProvider, RunConfig, Runner

provider = OpenAIProvider(
    use_responses_websocket=True,
    # Optional; if omitted, OPENAI_WEBSOCKET_BASE_URL is used when set.
    websocket_base_url="wss://your-proxy.example/v1",
)

agent = Agent(name="Assistant")
result = await Runner.run(
    agent,
    "Hello",
    run_config=RunConfig(model_provider=provider),
)
```
If you need prefix-based model routing (for example mixing `openai/...` and `litellm/...` model names in one run), use [`MultiProvider`][agents.MultiProvider] and set `openai_use_responses_websocket=True` there instead.
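Prefix-based routing amounts to splitting the model name on its first `/` and dispatching on the prefix. A stand-alone sketch of that rule (the `split_model_prefix` helper below is illustrative, not the SDK's implementation):

```python
def split_model_prefix(model_name: str) -> tuple[str, str]:
    """Split a prefixed model name into (provider_key, model_name).

    Names without a '/' fall back to the default "openai" provider,
    mirroring how plain string names like "gpt-5.2" resolve.
    """
    prefix, sep, rest = model_name.partition("/")
    if not sep:
        return "openai", model_name
    return prefix, rest


print(split_model_prefix("litellm/anthropic/claude-3-5-sonnet"))
print(split_model_prefix("gpt-4.1"))
```

Note that only the first `/` matters, so a LiteLLM name can itself contain further slashes.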
Notes:
- This is the Responses API over websocket transport, not the [Realtime API](../realtime/guide.md).
- Install the `websockets` package if it is not already available in your environment.
- You can use [`Runner.run_streamed()`][agents.run.Runner.run_streamed] directly after enabling websocket transport. For multi-turn workflows where you want to reuse the same websocket connection across turns (and nested agent-as-tool calls), the [`responses_websocket_session()`][agents.responses_websocket_session] helper is recommended. See the [Running agents](../running_agents.md) guide and [`examples/basic/stream_ws.py`](https://github.com/openai/openai-agents-python/tree/main/examples/basic/stream_ws.py).
## Non-OpenAI models
You can use most other non-OpenAI models via the [LiteLLM integration](./litellm.md). First, install the litellm dependency group:
---

**`docs/release.md`**
## Breaking change changelog
### 0.10.0
This minor release does **not** introduce a breaking change, but it includes a significant new feature area for OpenAI Responses users: websocket transport support for the Responses API.
Highlights:
- Added websocket transport support for OpenAI Responses models (opt-in; HTTP remains the default transport).
- Added a `responses_websocket_session()` helper / `ResponsesWebSocketSession` for reusing a shared websocket-capable provider and `RunConfig` across multi-turn runs.
- Added a new websocket streaming example (`examples/basic/stream_ws.py`) covering streaming, tools, approvals, and follow-up turns.
### 0.9.0
In this version, Python 3.9 is no longer supported, as that Python release reached end of life three months ago. Please upgrade to a newer runtime version.
---

**`docs/running_agents.md`**
Streaming allows you to additionally receive streaming events as the LLM runs. Once the stream is done, the [`RunResultStreaming`][agents.result.RunResultStreaming] will contain the complete information about the run, including all the new outputs produced. You can call `.stream_events()` for the streaming events. Read more in the [streaming guide](streaming.md).
### Responses WebSocket transport (optional helper)
If you enable the OpenAI Responses websocket transport, you can keep using the normal `Runner` APIs. The websocket session helper is recommended for connection reuse, but it is not required.
This is the Responses API over websocket transport, not the [Realtime API](realtime/guide.md).
#### Pattern 1: No session helper (works)
Use this when you just want websocket transport and do not need the SDK to manage a shared provider/session for you.
```python
import asyncio

from agents import Agent, Runner, set_default_openai_responses_transport

# Enable websocket transport for the default OpenAI Responses provider.
# The exact argument form was elided from this excerpt and is assumed here.
set_default_openai_responses_transport("websocket")

agent = Agent(name="Assistant")


async def main():
    result = Runner.run_streamed(agent, "Summarize recursion in one sentence.")
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            continue
        print(event.type)


asyncio.run(main())
```
This pattern is fine for single runs. If you call `Runner.run()` / `Runner.run_streamed()` repeatedly, each run may reconnect unless you manually reuse the same `RunConfig` / provider instance.
#### Pattern 2: Use `responses_websocket_session()` (recommended for multi-turn reuse)
Use [`responses_websocket_session()`][agents.responses_websocket_session] when you want a shared websocket-capable provider and `RunConfig` across multiple runs (including nested agent-as-tool calls that inherit the same `run_config`).
```python
import asyncio

from agents import Agent, responses_websocket_session


async def main():
    agent = Agent(name="Assistant")

    # The helper yields an object (`ws` here) whose run methods share one
    # websocket connection; constructor arguments were elided from this
    # excerpt, so none are shown.
    async with responses_websocket_session() as ws:
        first = ws.run_streamed(agent, "Say hello in one short sentence.")
        async for _event in first.stream_events():
            pass

        second = ws.run_streamed(
            agent,
            "Now say goodbye.",
            previous_response_id=first.last_response_id,
        )
        async for _event in second.stream_events():
            pass


asyncio.run(main())
```
Finish consuming streamed results before the context exits. Exiting the context while a websocket request is still in flight may force-close the shared connection.
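This ordering constraint is generic to any shared connection, not specific to the SDK. A minimal pure-asyncio analogue (a toy `SharedConnection`, not an SDK class) sketches both the safe order and the failure mode:

```python
import asyncio


class SharedConnection:
    """Toy stand-in for a shared websocket: closing it mid-stream loses events."""

    def __init__(self) -> None:
        self.closed = False

    async def stream(self, n: int):
        # Async generator standing in for `stream_events()`.
        for i in range(n):
            if self.closed:
                raise RuntimeError("connection force-closed mid-stream")
            yield i


async def main() -> None:
    # Safe: drain the stream fully before closing (i.e. before the
    # session context exits).
    conn = SharedConnection()
    drained = [event async for event in conn.stream(3)]
    conn.closed = True
    print("drained:", drained)

    # Unsafe: closing while a stream is still being consumed raises.
    conn2 = SharedConnection()
    stream = conn2.stream(3)
    print("first event:", await stream.__anext__())
    conn2.closed = True  # analogous to exiting the context too early
    try:
        await stream.__anext__()
    except RuntimeError as exc:
        print("error:", exc)


asyncio.run(main())
```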
## Run config
The `run_config` parameter lets you configure some global settings for the agent run: