Commit 78f20f4

docs: update details; change the default model for translation
1 parent 6865e21 commit 78f20f4

5 files changed: 51 additions & 7 deletions

docs/human_in_the_loop.md

Lines changed: 15 additions & 1 deletion

@@ -123,7 +123,21 @@ To stream output while waiting for approvals, call `Runner.run_streamed`, consum
 
 ## Long-running approvals
 
-`RunState` is designed to be durable. Use `state.to_json()` or `state.to_string()` to store pending work in a database or queue and recreate it later with `RunState.from_json(...)` or `RunState.from_string(...)`. Pass `context_override` if you do not want to persist sensitive context data in the serialized payload.
+`RunState` is designed to be durable. Use `state.to_json()` or `state.to_string()` to store pending work in a database or queue and recreate it later with `RunState.from_json(...)` or `RunState.from_string(...)`.
+
+Useful serialization options:
+
+- `context_serializer`: Customize how non-mapping context objects are serialized.
+- `strict_context=True`: Fail serialization or deserialization unless the context is already a
+  mapping or you provide the appropriate serializer/deserializer.
+- `context_override`: Replace the serialized context when loading state. This is useful when you
+  do not want to restore the original context object, but it does not remove that context from an
+  already serialized payload.
+- `include_tracing_api_key=True`: Include the tracing API key in the serialized trace payload
+  when you need resumed work to keep exporting traces with the same credentials.
+
+`RunState` also preserves trace metadata and server-managed conversation settings, so a resumed run
+can continue the same trace and the same `conversation_id` / `previous_response_id` chain.
 
 ## Versioning pending tasks
 
docs/results.md

Lines changed: 8 additions & 2 deletions

@@ -55,7 +55,11 @@ The [`input`][agents.result.RunResultBase.input] property contains the original
 
 ### Interruptions and resuming runs
 
-If a run pauses for tool approval, pending approvals are exposed in [`interruptions`][agents.result.RunResultBase.interruptions]. Convert the result into a [`RunState`][agents.run_state.RunState] with `to_state()`, approve or reject the interruption(s), and resume with `Runner.run(...)` or `Runner.run_streamed(...)`.
+If a run pauses for tool approval, pending approvals are exposed in
+[`RunResult.interruptions`][agents.result.RunResult.interruptions] or
+[`RunResultStreaming.interruptions`][agents.result.RunResultStreaming.interruptions]. Convert the
+result into a [`RunState`][agents.run_state.RunState] with `to_state()`, approve or reject the
+interruption(s), and resume with `Runner.run(...)` or `Runner.run_streamed(...)`.
 
 ```python
 from agents import Agent, Runner
@@ -70,7 +74,9 @@ if result.interruptions:
     result = await Runner.run(agent, state)
 ```
 
-Both [`RunResult`][agents.result.RunResult] and [`RunResultStreaming`][agents.result.RunResultStreaming] support `to_state()`.
+Both [`RunResult`][agents.result.RunResult] and
+[`RunResultStreaming`][agents.result.RunResultStreaming] support `to_state()`. For durable
+approval workflows, see the [human-in-the-loop guide](human_in_the_loop.md).
 
 ### Convenience helpers
 
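The approve-or-reject step can be illustrated with a pure-Python policy that splits pending approvals by an allow-list. `PendingApproval` and `SAFE_TOOLS` are stand-ins invented for this sketch, not SDK types; in a real run the items would come from `result.interruptions`.

```python
from dataclasses import dataclass

@dataclass
class PendingApproval:
    """Stand-in for the approval items exposed via `result.interruptions`."""
    tool_name: str

# Hypothetical allow-list of tools safe to approve without a human.
SAFE_TOOLS = {"get_weather", "search_docs"}

def split_approvals(interruptions: list[PendingApproval]):
    """Partition pending tool calls into auto-approved and rejected groups."""
    approved = [i for i in interruptions if i.tool_name in SAFE_TOOLS]
    rejected = [i for i in interruptions if i.tool_name not in SAFE_TOOLS]
    return approved, rejected
```

In a real workflow you would then mark each item on the `RunState` (approving or rejecting it, per the human-in-the-loop guide) before resuming with `Runner.run(agent, state)`.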
docs/running_agents.md

Lines changed: 11 additions & 3 deletions

@@ -25,7 +25,11 @@ Read more in the [results guide](results.md).
 
 ### The agent loop
 
-When you use the run method in `Runner`, you pass in a starting agent and input. The input can either be a string (which is considered a user message), or a list of input items, which are the items in the OpenAI Responses API.
+When you use the run method in `Runner`, you pass in a starting agent and input. The input can be:
+
+- a string (treated as a user message),
+- a list of input items in the OpenAI Responses API format, or
+- a [`RunState`][agents.run_state.RunState] when resuming an interrupted run.
 
 The runner then runs a loop:
 
@@ -123,7 +127,7 @@ Use `RunConfig` to override behavior for a single run without changing each agen
 - [`model_provider`][agents.run.RunConfig.model_provider]: A model provider for looking up model names, which defaults to OpenAI.
 - [`model_settings`][agents.run.RunConfig.model_settings]: Overrides agent-specific settings. For example, you can set a global `temperature` or `top_p`.
 - [`session_settings`][agents.run.RunConfig.session_settings]: Overrides session-level defaults (for example, `SessionSettings(limit=...)`) when retrieving history during a run.
-- [`session_input_callback`][agents.run.RunConfig.session_input_callback]: Customize how new user input is merged with session history before each turn when using Sessions.
+- [`session_input_callback`][agents.run.RunConfig.session_input_callback]: Customize how new user input is merged with session history before each turn when using Sessions. The callback can be sync or async.
 
 ##### Guardrails, handoffs, and model input shaping
 
@@ -345,6 +349,10 @@ async def main():
     print(f"Assistant: {result.final_output}")
 ```
 
+If a run pauses for approval and you resume from a [`RunState`][agents.run_state.RunState], the
+SDK keeps the saved `conversation_id` / `previous_response_id` / `auto_previous_response_id`
+settings so the resumed turn continues in the same server-managed conversation.
+
 !!! note
 
     The SDK automatically retries `conversation_locked` errors with backoff. In server-managed
@@ -378,7 +386,7 @@ result = Runner.run_sync(
 )
 ```
 
-Set the hook per run via `run_config` or as a default on your `Runner` to redact sensitive data, trim long histories, or inject additional system guidance.
+Set the hook per run via `run_config` to redact sensitive data, trim long histories, or inject additional system guidance.
 
 ## Errors and recovery
 
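The three accepted input forms can be shown with a small normalization sketch. This is not the SDK's actual code: `RunState` here is a stand-in class, and the dict shape follows the Responses API's user-message items.

```python
class RunState:
    """Stand-in for `agents.run_state.RunState` in this sketch."""
    pass

def normalize_input(user_input):
    # Resuming an interrupted run: pass the state through unchanged.
    if isinstance(user_input, RunState):
        return user_input
    # A bare string is treated as a single user message.
    if isinstance(user_input, str):
        return [{"role": "user", "content": user_input}]
    # Otherwise assume it is already a list of Responses API input items.
    return list(user_input)
```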
docs/scripts/translate_docs.py

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@
 # logging.basicConfig(level=logging.INFO)
 # logging.getLogger("openai").setLevel(logging.DEBUG)
 
-OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-5.2")
+OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-5.3-codex")
 
 ENABLE_CODE_SNIPPET_EXCLUSION = True
 # gpt-4.5 needed this for better quality

docs/streaming.md

Lines changed: 16 additions & 0 deletions

@@ -35,6 +35,22 @@ if __name__ == "__main__":
 
 [`RunItemStreamEvent`][agents.stream_events.RunItemStreamEvent]s are higher level events. They inform you when an item has been fully generated. This allows you to push progress updates at the level of "message generated", "tool ran", etc, instead of each token. Similarly, [`AgentUpdatedStreamEvent`][agents.stream_events.AgentUpdatedStreamEvent] gives you updates when the current agent changes (e.g. as the result of a handoff).
 
+### Run item event names
+
+`RunItemStreamEvent.name` uses a fixed set of semantic event names:
+
+- `message_output_created`
+- `handoff_requested`
+- `handoff_occured`
+- `tool_called`
+- `tool_output`
+- `reasoning_item_created`
+- `mcp_approval_requested`
+- `mcp_approval_response`
+- `mcp_list_tools`
+
+`handoff_occured` is intentionally misspelled for backward compatibility.
+
 For example, this will ignore raw events and stream updates to the user.
 
 ```python
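A consumer can branch on these names directly. The mapping below is an illustrative sketch: only the event names come from the documented list, and the progress wording is made up.

```python
# Map the documented `RunItemStreamEvent.name` values to user-facing
# progress messages (message wording is illustrative, not from the SDK).
PROGRESS_MESSAGES = {
    "message_output_created": "message generated",
    "handoff_requested": "handoff requested",
    "handoff_occured": "handoff occurred",  # SDK spelling kept for compatibility
    "tool_called": "tool called",
    "tool_output": "tool finished",
    "reasoning_item_created": "reasoning step",
    "mcp_approval_requested": "MCP approval requested",
    "mcp_approval_response": "MCP approval answered",
    "mcp_list_tools": "MCP tools listed",
}

def progress_line(event_name: str) -> str:
    """Return a display string for a run item event name."""
    return PROGRESS_MESSAGES.get(event_name, f"unhandled event: {event_name}")
```

Using a dict with a fallback keeps the consumer forward-compatible if a future SDK release adds new event names.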
