Commit da3d45c

docs: reorganize navigation and clarify runtime guides (#2568)

1 parent 6c711b0 commit da3d45c

16 files changed: +306 -212 lines

docs/config.md

Lines changed: 9 additions & 1 deletion
@@ -1,5 +1,13 @@
 # Configuring the SDK
 
+This page covers SDK-wide defaults that you usually set once during application startup, such as the default OpenAI key or client, the default OpenAI API shape, tracing export defaults, and logging behavior.
+
+If you need to configure a specific agent or run instead, start with:
+
+- [Running agents](running_agents.md) for `RunConfig`, sessions, and conversation-state options.
+- [Models](models/index.md) for model selection and provider configuration.
+- [Tracing](tracing.md) for per-run tracing metadata and custom trace processors.
+
 ## API keys and clients
 
 By default, the SDK uses the `OPENAI_API_KEY` environment variable for LLM requests and tracing. The key is resolved when the SDK first creates an OpenAI client (lazy initialization), so set the environment variable before your first model call. If you are unable to set that environment variable before your app starts, you can use the [set_default_openai_key()][agents.set_default_openai_key] function to set the key.
@@ -30,7 +38,7 @@ set_default_openai_api("chat_completions")
 
 ## Tracing
 
-Tracing is enabled by default. It uses the OpenAI API keys from the section above by default (i.e. the environment variable or the default key you set). You can specifically set the API key used for tracing by using the [`set_tracing_export_api_key`][agents.set_tracing_export_api_key] function.
+Tracing is enabled by default. By default it uses the same OpenAI API key as your model requests from the section above (that is, the environment variable or the default key you set). You can specifically set the API key used for tracing with the [`set_tracing_export_api_key`][agents.set_tracing_export_api_key] function.
 
 ```python
 from agents import set_tracing_export_api_key

docs/guardrails.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ Tool guardrails wrap **function tools** and let you validate or block tool calls
 
 - Input tool guardrails run before the tool executes and can skip the call, replace the output with a message, or raise a tripwire.
 - Output tool guardrails run after the tool executes and can replace the output or raise a tripwire.
-- Tool guardrails apply only to function tools created with [`function_tool`][agents.function_tool]; hosted tools (`WebSearchTool`, `FileSearchTool`, `HostedMCPTool`, `CodeInterpreterTool`, `ImageGenerationTool`) and local runtime tools (`ComputerTool`, `ShellTool`, `ApplyPatchTool`, `LocalShellTool`) do not use this guardrail pipeline.
+- Tool guardrails apply only to function tools created with [`function_tool`][agents.function_tool]; hosted tools (`WebSearchTool`, `FileSearchTool`, `HostedMCPTool`, `CodeInterpreterTool`, `ImageGenerationTool`) and built-in execution tools (`ComputerTool`, `ShellTool`, `ApplyPatchTool`, `LocalShellTool`) do not use this guardrail pipeline.
 
 See the code snippet below for details.
 
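The bullets above describe the tool-guardrail control flow for function tools: the input guardrail can skip the call, replace the output, or trip; the output guardrail can rewrite the result. A toy sketch of that pipeline (illustrative only — `run_tool_with_guardrails`, the string-based outcomes, and the exception name are hypothetical, not the SDK's API):

```python
from typing import Callable


class TripwireTriggered(Exception):
    """Raised when a guardrail trips and the run should stop."""


def run_tool_with_guardrails(
    tool: Callable[[str], str],
    args: str,
    input_guardrail: Callable[[str], tuple[str, str]],
    output_guardrail: Callable[[str], str],
) -> str:
    # Input guardrail runs BEFORE the tool: it can skip the call,
    # replace the output with a message, or raise a tripwire.
    action, message = input_guardrail(args)
    if action == "tripwire":
        raise TripwireTriggered(message)
    if action == "replace":
        return message  # tool is skipped; message becomes the output

    output = tool(args)
    # Output guardrail runs AFTER the tool: it can rewrite the output
    # (or raise TripwireTriggered itself).
    return output_guardrail(output)
```

A guardrail that returns `("replace", "redacted")` short-circuits the tool entirely, which matches the "replace the output with a message" behavior described above.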

docs/index.md

Lines changed: 6 additions & 0 deletions
@@ -54,3 +54,9 @@ print(result.final_output)
 ```bash
 export OPENAI_API_KEY=sk-...
 ```
+
+## Start here
+
+- Build your first text-based agent with the [Quickstart](quickstart.md).
+- Build a low-latency voice agent with the [Realtime agents quickstart](realtime/quickstart.md).
+- If you want a speech-to-text / agent / text-to-speech pipeline instead, see the [Voice pipeline quickstart](voice/quickstart.md).

docs/ja/multi_agent.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 search:
   exclude: true
 ---
-# Orchestrating multiple agents
+# Agent orchestration
 
 Orchestration refers to the flow of agents in your app: which agents run, in what order, and how you decide what happens next. There are two main ways to orchestrate agents.
 
@@ -38,4 +38,4 @@ While LLM-based orchestration is powerful, orchestrating via code
 - Run an agent that performs the task and an agent that evaluates and gives feedback together in a `while` loop, repeating until the evaluator judges that the output meets the criteria.
 - Run multiple agents in parallel (e.g. via Python primitives like `asyncio.gather`). This is useful for speed when you have multiple tasks that do not depend on each other.
 
-We have a number of code examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).
+We have a number of code examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).

docs/ko/multi_agent.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 search:
   exclude: true
 ---
-# Multi-agent orchestration
+# Agent orchestration
 
 Orchestration refers to the flow of agents in your app: which agents do you run, in what order, and how do you decide what happens next? There are two main ways to orchestrate agents:
 
@@ -38,4 +38,4 @@ While LLM-based orchestration is powerful, orchestrating via code
 - Run an agent that performs the task together with an agent that evaluates and gives feedback in a `while` loop, repeating until the evaluator says the output passes the given criteria.
 - Run multiple agents in parallel, e.g. using Python primitives like `asyncio.gather`. This is useful for speed when you have multiple tasks that do not depend on each other.
 
-There are a number of code examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).
+There are a number of code examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).

docs/ko/tools.md

Lines changed: 2 additions & 2 deletions
@@ -456,7 +456,7 @@ def get_user_profile(user_id: str) -> str:
 
 ## Agents as tools
 
-In some workflows, instead of handing off control, you may want a central agent to multi-agent orchestrate a network of specialized agents. You can do this by modeling agents as tools.
+In some workflows, instead of handing off control, you may want a central agent to orchestrate a network of specialized agents. You can do this by modeling agents as tools.
 
 ```python
 from agents import Agent, Runner
@@ -737,4 +737,4 @@ agent = Agent(
 - [Codex tool API reference](ref/extensions/experimental/codex/codex_tool.md)
 - [ThreadOptions reference](ref/extensions/experimental/codex/thread_options.md)
 - [TurnOptions reference](ref/extensions/experimental/codex/turn_options.md)
-- See `examples/tools/codex.py` and `examples/tools/codex_same_thread.py` for full runnable samples
+- See `examples/tools/codex.py` and `examples/tools/codex_same_thread.py` for full runnable samples
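The "Agents as tools" hunk above contrasts handing off control with a central agent orchestrating specialists. A toy stdlib sketch of the control-flow difference — plain functions stand in for agents, and none of these names are SDK API:

```python
# Toy model: each "agent" is just a function from a question to an answer.
def history_specialist(q: str) -> str:
    return f"history answer to: {q}"


def math_specialist(q: str) -> str:
    return f"math answer to: {q}"


def manager_with_agents_as_tools(q: str) -> str:
    """Agents-as-tools: the manager calls a specialist like a tool and
    keeps ownership of the final, user-facing answer."""
    draft = history_specialist(q)
    return f"manager summary of ({draft})"


def triage_with_handoff(q: str) -> str:
    """Handoff: the triage step picks a specialist, and the specialist's
    answer goes to the user directly."""
    specialist = math_specialist if "math" in q else history_specialist
    return specialist(q)
```

In the first shape the manager's voice reaches the user; in the second the specialist's does — the same distinction the docs draw between `Agent.as_tool()` and handoffs.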

docs/multi_agent.md

Lines changed: 24 additions & 1 deletion
@@ -1,4 +1,4 @@
-# Orchestrating multiple agents
+# Agent orchestration
 
 Orchestration refers to the flow of agents in your app. Which agents run, in what order, and how do they decide what happens next? There are two main ways to orchestrate agents:
 
@@ -17,6 +17,19 @@ An agent is an LLM equipped with instructions, tools and handoffs. This means th
 - Code execution to do data analysis
 - Handoffs to specialized agents that are great at planning, report writing and more.
 
+### Core SDK patterns
+
+In the Python SDK, two orchestration patterns come up most often:
+
+| Pattern | How it works | Best when |
+| --- | --- | --- |
+| Agents as tools | A manager agent keeps control of the conversation and calls specialist agents through `Agent.as_tool()`. | You want one agent to own the final answer, combine outputs from multiple specialists, or enforce shared guardrails in one place. |
+| Handoffs | A triage agent routes the conversation to a specialist, and that specialist becomes the active agent for the rest of the turn. | You want the specialist to respond directly, keep prompts focused, or swap instructions without the manager narrating the result. |
+
+Use **agents as tools** when a specialist should help with a bounded subtask but should not take over the user-facing conversation. Use **handoffs** when routing itself is part of the workflow and you want the chosen specialist to own the next part of the interaction.
+
+You can also combine the two. A triage agent might hand off to a specialist, and that specialist can still call other agents as tools for narrow subtasks.
+
 This pattern is great when the task is open-ended and you want to rely on the intelligence of an LLM. The most important tactics here are:
 
 1. Invest in good prompts. Make it clear what tools are available, how to use them, and what parameters it must operate within.
@@ -25,6 +38,8 @@ This pattern is great when the task is open-ended and you want to rely on the in
 4. Have specialized agents that excel in one task, rather than having a general purpose agent that is expected to be good at anything.
 5. Invest in [evals](https://platform.openai.com/docs/guides/evals). This lets you train your agents to improve and get better at tasks.
 
+If you want the core SDK primitives behind this style of orchestration, start with [tools](tools.md), [handoffs](handoffs.md), and [running agents](running_agents.md).
+
 ## Orchestrating via code
 
 While orchestrating via LLM is powerful, orchestrating via code makes tasks more deterministic and predictable, in terms of speed, cost and performance. Common patterns here are:
@@ -35,3 +50,11 @@ While orchestrating via LLM is powerful, orchestrating via code makes tasks more
 - Running multiple agents in parallel, e.g. via Python primitives like `asyncio.gather`. This is useful for speed when you have multiple tasks that don't depend on each other.
 
 We have a number of examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).
+
+## Related guides
+
+- [Agents](agents.md) for composition patterns and agent configuration.
+- [Tools](tools.md#agents-as-tools) for `Agent.as_tool()` and manager-style orchestration.
+- [Handoffs](handoffs.md) for delegation between specialist agents.
+- [Running agents](running_agents.md) for per-run orchestration controls and conversation state.
+- [Quickstart](quickstart.md) for a minimal end-to-end handoff example.
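The code-orchestration patterns in the hunks above — an evaluator in a `while` loop, and parallel independent agents via `asyncio.gather` — can be sketched with plain async functions. The stub "agents" and the `orchestrate` helper below are hypothetical stand-ins, not SDK calls:

```python
import asyncio


async def research_agent(topic: str) -> str:
    # Stand-in for an agent run that gathers notes on the topic.
    return f"notes on {topic}"


async def outline_agent(topic: str) -> str:
    # Stand-in for an independent agent run producing an outline.
    return f"outline for {topic}"


async def evaluate(draft: str) -> bool:
    # Stand-in evaluator: accept once the draft mentions both inputs.
    return "notes" in draft and "outline" in draft


async def orchestrate(topic: str) -> str:
    # Parallel fan-out for independent subtasks (asyncio.gather).
    notes, outline = await asyncio.gather(
        research_agent(topic), outline_agent(topic)
    )
    # Evaluator-in-a-while-loop: revise until the check passes.
    draft = notes
    while not await evaluate(draft):
        draft = f"{draft}; {outline}"
    return draft


print(asyncio.run(orchestrate("the Roman Empire")))
```

With real agents, each stub becomes a `Runner.run(...)` call; the deterministic loop-and-gather structure around them is what makes code orchestration predictable in speed and cost.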

docs/quickstart.md

Lines changed: 76 additions & 98 deletions
@@ -34,158 +34,136 @@ export OPENAI_API_KEY=sk-...
 
 ## Create your first agent
 
-Agents are defined with instructions, a name, and optional config (such as `model_config`)
+Agents are defined with instructions, a name, and optional configuration such as a specific model.
 
 ```python
 from agents import Agent
 
 agent = Agent(
-    name="Math Tutor",
-    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
+    name="History Tutor",
+    instructions="You answer history questions clearly and concisely.",
 )
 ```
 
-## Add a few more agents
+## Run your first agent
 
-Additional agents can be defined in the same way. `handoff_descriptions` provide additional context for determining handoff routing
+Use [`Runner`][agents.run.Runner] to execute the agent and get a [`RunResult`][agents.result.RunResult] back.
 
 ```python
-from agents import Agent
+import asyncio
+from agents import Agent, Runner
 
-history_tutor_agent = Agent(
+agent = Agent(
     name="History Tutor",
-    handoff_description="Specialist agent for historical questions",
-    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
+    instructions="You answer history questions clearly and concisely.",
 )
 
-math_tutor_agent = Agent(
-    name="Math Tutor",
-    handoff_description="Specialist agent for math questions",
-    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
-)
-```
-
-## Define your handoffs
-
-On each agent, you can define an inventory of outgoing handoff options that the agent can choose from to decide how to make progress on their task.
+async def main():
+    result = await Runner.run(agent, "When did the Roman Empire fall?")
+    print(result.final_output)
 
-```python
-triage_agent = Agent(
-    name="Triage Agent",
-    instructions="You determine which agent to use based on the user's homework question",
-    handoffs=[history_tutor_agent, math_tutor_agent]
-)
+if __name__ == "__main__":
+    asyncio.run(main())
 ```
 
-## Run the agent orchestration
+For a second turn, you can either pass `result.to_input_list()` back into `Runner.run(...)`, attach a [session](sessions/index.md), or reuse OpenAI server-managed state with `conversation_id` / `previous_response_id`. The [running agents](running_agents.md) guide compares these approaches.
+
+## Give your agent tools
 
-Let's check that the workflow runs and the triage agent correctly routes between the two specialist agents.
+You can give an agent tools to look up information or perform actions.
 
 ```python
-from agents import Runner
+import asyncio
+from agents import Agent, Runner, function_tool
 
-async def main():
-    result = await Runner.run(triage_agent, "who was the first president of the united states?")
-    print(result.final_output)
-```
 
-## Add a guardrail
+@function_tool
+def history_fun_fact() -> str:
+    """Return a short history fact."""
+    return "Sharks are older than trees."
 
-You can define custom guardrails to run on the input or output.
 
-```python
-from agents import GuardrailFunctionOutput, Agent, Runner
-from pydantic import BaseModel
+agent = Agent(
+    name="History Tutor",
+    instructions="Answer history questions clearly. Use history_fun_fact when it helps.",
+    tools=[history_fun_fact],
+)
 
 
-class HomeworkOutput(BaseModel):
-    is_homework: bool
-    reasoning: str
+async def main():
+    result = await Runner.run(
+        agent,
+        "Tell me something surprising about ancient life on Earth.",
+    )
+    print(result.final_output)
 
-guardrail_agent = Agent(
-    name="Guardrail check",
-    instructions="Check if the user is asking about homework.",
-    output_type=HomeworkOutput,
-)
 
-async def homework_guardrail(ctx, agent, input_data):
-    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)
-    final_output = result.final_output_as(HomeworkOutput)
-    return GuardrailFunctionOutput(
-        output_info=final_output,
-        tripwire_triggered=not final_output.is_homework,
-    )
+if __name__ == "__main__":
+    asyncio.run(main())
 ```
 
-## Put it all together
+## Add a few more agents
 
-Let's put it all together and run the entire workflow, using handoffs and the input guardrail.
+Additional agents can be defined in the same way. `handoff_description` gives the routing agent extra context about when to delegate.
 
 ```python
-from agents import Agent, InputGuardrail, GuardrailFunctionOutput, Runner
-from agents.exceptions import InputGuardrailTripwireTriggered
-from pydantic import BaseModel
-import asyncio
-
-class HomeworkOutput(BaseModel):
-    is_homework: bool
-    reasoning: str
+from agents import Agent
 
-guardrail_agent = Agent(
-    name="Guardrail check",
-    instructions="Check if the user is asking about homework.",
-    output_type=HomeworkOutput,
+history_tutor_agent = Agent(
+    name="History Tutor",
+    handoff_description="Specialist agent for historical questions",
+    instructions="You answer history questions clearly and concisely.",
 )
 
 math_tutor_agent = Agent(
     name="Math Tutor",
     handoff_description="Specialist agent for math questions",
-    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
-)
-
-history_tutor_agent = Agent(
-    name="History Tutor",
-    handoff_description="Specialist agent for historical questions",
-    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
+    instructions="You explain math step by step and include worked examples.",
 )
+```
 
+## Define your handoffs
 
-async def homework_guardrail(ctx, agent, input_data):
-    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)
-    final_output = result.final_output_as(HomeworkOutput)
-    return GuardrailFunctionOutput(
-        output_info=final_output,
-        tripwire_triggered=not final_output.is_homework,
-    )
+On an agent, you can define an inventory of outgoing handoff options that it can choose from while solving the task.
 
+```python
 triage_agent = Agent(
     name="Triage Agent",
-    instructions="You determine which agent to use based on the user's homework question",
+    instructions="Route each homework question to the right specialist.",
    handoffs=[history_tutor_agent, math_tutor_agent],
-    input_guardrails=[
-        InputGuardrail(guardrail_function=homework_guardrail),
-    ],
 )
+```
+
+## Run the agent orchestration
+
+The runner handles executing individual agents, any handoffs, and any tool calls.
+
+```python
+import asyncio
+from agents import Runner
+
 
 async def main():
-    # Example 1: History question
-    try:
-        result = await Runner.run(triage_agent, "who was the first president of the united states?")
-        print(result.final_output)
-    except InputGuardrailTripwireTriggered as e:
-        print("Guardrail blocked this input:", e)
-
-    # Example 2: General/philosophical question
-    try:
-        result = await Runner.run(triage_agent, "What is the meaning of life?")
-        print(result.final_output)
-    except InputGuardrailTripwireTriggered as e:
-        print("Guardrail blocked this input:", e)
+    result = await Runner.run(
+        triage_agent,
+        "Who was the first president of the United States?",
+    )
+    print(result.final_output)
+    print(f"Answered by: {result.last_agent.name}")
+
 
 if __name__ == "__main__":
     asyncio.run(main())
 ```
 
+## Reference examples
+
+The repository includes full scripts for the same core patterns:
+
+- [`examples/basic/hello_world.py`](https://github.com/openai/openai-agents-python/tree/main/examples/basic/hello_world.py) for the first run.
+- [`examples/basic/tools.py`](https://github.com/openai/openai-agents-python/tree/main/examples/basic/tools.py) for function tools.
+- [`examples/agent_patterns/routing.py`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns/routing.py) for multi-agent routing.
+
 ## View your traces
 
 To review what happened during your agent run, navigate to the [Trace viewer in the OpenAI Dashboard](https://platform.openai.com/traces) to view traces of your agent runs.
@@ -195,5 +173,5 @@ To review what happened during your agent run, navigate to the [Trace viewer in
 Learn how to build more complex agentic flows:
 
 - Learn about how to configure [Agents](agents.md).
-- Learn about [running agents](running_agents.md).
+- Learn about [running agents](running_agents.md) and [sessions](sessions/index.md).
 - Learn about [tools](tools.md), [guardrails](guardrails.md) and [models](models/index.md).
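The quickstart diff above mentions carrying conversation state into a second turn by passing `result.to_input_list()` back into the next run. A rough stdlib sketch of the idea — a growing list of input items; the real items returned by `to_input_list()` carry more fields than these plain role/content dicts, and `run_turn` is a hypothetical stand-in for a model call:

```python
# Toy conversation state: a growing list of role/content items, the idea
# behind result.to_input_list() (the SDK's real items are richer).
history: list[dict[str, str]] = []


def run_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = f"echo: {user_text}"  # stand-in for the model's answer
    history.append({"role": "assistant", "content": reply})
    return reply


run_turn("When did the Roman Empire fall?")
run_turn("And who was its last emperor?")
# After two turns, history holds all four items, so the second call
# had the first exchange available as context.
```

Sessions and server-managed state (`conversation_id` / `previous_response_id`) automate exactly this accumulation so you don't thread the list by hand.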
