`docs/config.md`: 9 additions, 1 deletion

# Configuring the SDK

This page covers SDK-wide defaults that you usually set once during application startup, such as the default OpenAI key or client, the default OpenAI API shape, tracing export defaults, and logging behavior.

If you need to configure a specific agent or run instead, start with:

- [Running agents](running_agents.md) for `RunConfig`, sessions, and conversation-state options.
- [Models](models/index.md) for model selection and provider configuration.
- [Tracing](tracing.md) for per-run tracing metadata and custom trace processors.

## API keys and clients

By default, the SDK uses the `OPENAI_API_KEY` environment variable for LLM requests and tracing. The key is resolved when the SDK first creates an OpenAI client (lazy initialization), so set the environment variable before your first model call. If you are unable to set that environment variable before your app starts, you can use the [set_default_openai_key()][agents.set_default_openai_key] function to set the key.
Tracing is enabled by default and uses the same OpenAI API key as your model requests (that is, the environment variable or the default key you set). You can set a separate API key for tracing with the [`set_tracing_export_api_key`][agents.set_tracing_export_api_key] function.
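As a sketch, an application might wire both of these defaults at startup with the two functions named above; the key strings below are placeholders for values from your own secret store:

```python
from agents import set_default_openai_key, set_tracing_export_api_key

# Placeholder values: load real keys from your secret manager or env.
set_default_openai_key("sk-model-key-placeholder")

# Optional: export traces with a different key than model requests.
set_tracing_export_api_key("sk-tracing-key-placeholder")
```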
`docs/guardrails.md`: 1 addition, 1 deletion

Tool guardrails wrap **function tools** and let you validate or block tool calls:

- Input tool guardrails run before the tool executes and can skip the call, replace the output with a message, or raise a tripwire.
- Output tool guardrails run after the tool executes and can replace the output or raise a tripwire.
- Tool guardrails apply only to function tools created with [`function_tool`][agents.function_tool]; hosted tools (`WebSearchTool`, `FileSearchTool`, `HostedMCPTool`, `CodeInterpreterTool`, `ImageGenerationTool`) and built-in execution tools (`ComputerTool`, `ShellTool`, `ApplyPatchTool`, `LocalShellTool`) do not use this guardrail pipeline.
`docs/multi_agent.md`: 24 additions, 1 deletion

# Agent orchestration

Orchestration refers to the flow of agents in your app. Which agents run, in what order, and how do they decide what happens next? There are two main ways to orchestrate agents:

An agent is an LLM equipped with instructions, tools and handoffs. This means that it can autonomously plan how to tackle an open-ended task, for example using:

- Code execution to do data analysis
- Handoffs to specialized agents that are great at planning, report writing and more.

### Core SDK patterns

In the Python SDK, two orchestration patterns come up most often:

| Pattern | How it works | Best when |
| --- | --- | --- |
| Agents as tools | A manager agent keeps control of the conversation and calls specialist agents through `Agent.as_tool()`. | You want one agent to own the final answer, combine outputs from multiple specialists, or enforce shared guardrails in one place. |
| Handoffs | A triage agent routes the conversation to a specialist, and that specialist becomes the active agent for the rest of the turn. | You want the specialist to respond directly, keep prompts focused, or swap instructions without the manager narrating the result. |

Use **agents as tools** when a specialist should help with a bounded subtask but should not take over the user-facing conversation. Use **handoffs** when routing itself is part of the workflow and you want the chosen specialist to own the next part of the interaction.

You can also combine the two. A triage agent might hand off to a specialist, and that specialist can still call other agents as tools for narrow subtasks.

This pattern is great when the task is open-ended and you want to rely on the intelligence of an LLM. The most important tactics here are:

1. Invest in good prompts. Make it clear what tools are available, how to use them, and what parameters it must operate within.
4. Have specialized agents that excel in one task, rather than having a general-purpose agent that is expected to be good at anything.
5. Invest in [evals](https://platform.openai.com/docs/guides/evals). This lets you train your agents to improve and get better at tasks.

If you want the core SDK primitives behind this style of orchestration, start with [tools](tools.md), [handoffs](handoffs.md), and [running agents](running_agents.md).

## Orchestrating via code
While orchestrating via LLM is powerful, orchestrating via code makes tasks more deterministic and predictable, in terms of speed, cost and performance. Common patterns here are:
- Running multiple agents in parallel, e.g. via Python primitives like `asyncio.gather`. This is useful for speed when you have multiple tasks that don't depend on each other.
We have a number of examples in [`examples/agent_patterns`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns).
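The parallel pattern above can be sketched with plain `asyncio`; the two coroutines below are stand-ins for independent `Runner.run(...)` calls, so the sketch runs without the SDK:

```python
import asyncio


async def run_task(name: str, delay: float) -> str:
    # Stand-in for an independent agent run (e.g. `await Runner.run(...)`).
    await asyncio.sleep(delay)
    return f"{name} done"


async def main() -> list[str]:
    # gather starts both coroutines concurrently, so total wall time is
    # roughly the longest delay rather than the sum of the delays.
    return await asyncio.gather(
        run_task("outline", 0.02),
        run_task("research", 0.01),
    )


print(asyncio.run(main()))  # ['outline done', 'research done']
```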
## Related guides

- [Agents](agents.md) for composition patterns and agent configuration.
- [Tools](tools.md#agents-as-tools) for `Agent.as_tool()` and manager-style orchestration.
- [Handoffs](handoffs.md) for delegation between specialist agents.
- [Running agents](running_agents.md) for per-run orchestration controls and conversation state.
- [Quickstart](quickstart.md) for a minimal end-to-end handoff example.

`docs/quickstart.md`

Agents are defined with instructions, a name, and optional configuration such as a specific model.

```python
from agents import Agent

agent = Agent(
    name="History Tutor",
    instructions="You answer history questions clearly and concisely.",
)
```

## Run your first agent

Use [`Runner`][agents.run.Runner] to execute the agent and get a [`RunResult`][agents.result.RunResult] back.

```python
import asyncio

from agents import Agent, Runner

agent = Agent(
    name="History Tutor",
    instructions="You answer history questions clearly and concisely.",
)


async def main():
    result = await Runner.run(agent, "When did the Roman Empire fall?")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

For a second turn, you can either pass `result.to_input_list()` back into `Runner.run(...)`, attach a [session](sessions/index.md), or reuse OpenAI server-managed state with `conversation_id` / `previous_response_id`. The [running agents](running_agents.md) guide compares these approaches.
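As a sketch of the first option, a second turn built from `result.to_input_list()` could look like this (the questions are illustrative, and running it requires an OpenAI API key):

```python
import asyncio

from agents import Agent, Runner

agent = Agent(
    name="History Tutor",
    instructions="You answer history questions clearly and concisely.",
)


async def main():
    first = await Runner.run(agent, "When did the Roman Empire fall?")
    # Carry the first turn's items forward, then append the follow-up turn.
    follow_up = first.to_input_list() + [
        {"role": "user", "content": "And who was emperor at the time?"}
    ]
    second = await Runner.run(agent, follow_up)
    print(second.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```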

## Give your agent tools

You can give an agent tools to look up information or perform actions.

```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def history_fun_fact() -> str:
    """Return a short history fact."""
    return "Sharks are older than trees."


agent = Agent(
    name="History Tutor",
    instructions="Answer history questions clearly. Use history_fun_fact when it helps.",
    tools=[history_fun_fact],
)


async def main():
    result = await Runner.run(
        agent,
        "Tell me something surprising about ancient life on Earth.",
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

The runner handles executing individual agents, any handoffs, and any tool calls.

```python
import asyncio

from agents import Runner

# triage_agent is the routing agent defined earlier in the quickstart.


async def main():
    result = await Runner.run(
        triage_agent,
        "Who was the first president of the United States?",
    )
    print(result.final_output)
    print(f"Answered by: {result.last_agent.name}")


if __name__ == "__main__":
    asyncio.run(main())
```

## Reference examples

The repository includes full scripts for the same core patterns:

- [`examples/basic/hello_world.py`](https://github.com/openai/openai-agents-python/tree/main/examples/basic/hello_world.py) for the first run.
- [`examples/basic/tools.py`](https://github.com/openai/openai-agents-python/tree/main/examples/basic/tools.py) for function tools.
- [`examples/agent_patterns/routing.py`](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns/routing.py) for multi-agent routing.

## View your traces
To review what happened during your agent run, open the [Trace viewer in the OpenAI Dashboard](https://platform.openai.com/traces) and inspect the trace of your run.

Learn how to build more complex agentic flows:

- Learn how to configure [Agents](agents.md).
- Learn about [running agents](running_agents.md) and [sessions](sessions/index.md).
- Learn about [tools](tools.md), [guardrails](guardrails.md) and [models](models/index.md).