
Commit 2b32271

Update README.md to be more user-friendly
1 parent bb14973 commit 2b32271

1 file changed: 13 additions & 259 deletions

File tree

README.md

````diff
@@ -10,12 +10,15 @@ The OpenAI Agents SDK is a lightweight yet powerful framework for building multi
 ### Core concepts:
 
 1. [**Agents**](https://openai.github.io/openai-agents-python/agents): LLMs configured with instructions, tools, guardrails, and handoffs
-2. [**Handoffs**](https://openai.github.io/openai-agents-python/handoffs/): A specialized tool call used by the Agents SDK for transferring control between agents
-3. [**Guardrails**](https://openai.github.io/openai-agents-python/guardrails/): Configurable safety checks for input and output validation
-4. [**Sessions**](#sessions): Automatic conversation history management across agent runs
-5. [**Tracing**](https://openai.github.io/openai-agents-python/tracing/): Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
+1. [**Agents as tools / Handoffs**](https://openai.github.io/openai-agents-python/handoffs/): Delegating to other agents for specific tasks
+1. [**Tools**](https://openai.github.io/openai-agents-python/tools/): Various Tools let agents take actions (functions, MCP, hosted tools)
+1. [**Guardrails**](https://openai.github.io/openai-agents-python/guardrails/): Configurable safety checks for input and output validation
+1. [**Human in the loop**](https://openai.github.io/openai-agents-python/human_in_the_loop/): Built-in mechanisms for involving humans across agent runs
+1. [**Sessions**](https://openai.github.io/openai-agents-python/sessions/): Automatic conversation history management across agent runs
+1. [**Tracing**](https://openai.github.io/openai-agents-python/tracing/): Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
+1. [**Realtime Agents**](https://openai.github.io/openai-agents-python/realtime/quickstart/): Build powerful voice agents with full features
 
-Explore the [examples](examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.
+Explore the [examples](https://github.com/openai/openai-agents-python/tree/main/examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.
 
 ## Get started
 
````

````diff
@@ -29,9 +32,7 @@ source .venv/bin/activate # On Windows: .venv\Scripts\activate
 pip install openai-agents
 ```
 
-For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`.
-
-For Redis session support, install with the optional `redis` group: `pip install 'openai-agents[redis]'`.
+For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`. For Redis session support, install with the optional `redis` group: `pip install 'openai-agents[redis]'`.
 
 ### uv
 
````

````diff
@@ -42,11 +43,9 @@ uv init
 uv add openai-agents
 ```
 
-For voice support, install with the optional `voice` group: `uv add 'openai-agents[voice]'`.
-
-For Redis session support, install with the optional `redis` group: `uv add 'openai-agents[redis]'`.
+For voice support, install with the optional `voice` group: `uv add 'openai-agents[voice]'`. For Redis session support, install with the optional `redis` group: `uv add 'openai-agents[redis]'`.
 
-## Hello world example
+## Run your first agent
 
 ```python
 from agents import Agent, Runner
````
````diff
@@ -63,254 +62,9 @@ print(result.final_output)
 
 (_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)
 
-(_For Jupyter notebook users, see [hello_world_jupyter.ipynb](examples/basic/hello_world_jupyter.ipynb)_)
-
-## Handoffs example
-
-```python
-from agents import Agent, Runner
-import asyncio
-
-spanish_agent = Agent(
-    name="Spanish agent",
-    instructions="You only speak Spanish.",
-)
-
-english_agent = Agent(
-    name="English agent",
-    instructions="You only speak English",
-)
-
-triage_agent = Agent(
-    name="Triage agent",
-    instructions="Handoff to the appropriate agent based on the language of the request.",
-    handoffs=[spanish_agent, english_agent],
-)
-
-
-async def main():
-    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
-    print(result.final_output)
-    # ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?
-
-
-if __name__ == "__main__":
-    asyncio.run(main())
-```
-
-## Functions example
-
-```python
-import asyncio
-
-from agents import Agent, Runner, function_tool
-
-
-@function_tool
-def get_weather(city: str) -> str:
-    return f"The weather in {city} is sunny."
-
-
-agent = Agent(
-    name="Hello world",
-    instructions="You are a helpful agent.",
-    tools=[get_weather],
-)
-
-
-async def main():
-    result = await Runner.run(agent, input="What's the weather in Tokyo?")
-    print(result.final_output)
-    # The weather in Tokyo is sunny.
-
-
-if __name__ == "__main__":
-    asyncio.run(main())
-```
-
-## The agent loop
-
-When you call `Runner.run()`, we run a loop until we get a final output.
-
-1. We call the LLM, using the model and settings on the agent, and the message history.
-2. The LLM returns a response, which may include tool calls.
-3. If the response has a final output (see below for more on this), we return it and end the loop.
-4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
-5. We process the tool calls (if any) and append the tool responses messages. Then we go to step 1.
-
-There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
-
-### Final output
-
-Final output is the last thing the agent produces in the loop.
-
-1. If you set an `output_type` on the agent, the final output is when the LLM returns something of that type. We use [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) for this.
-2. If there's no `output_type` (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered as the final output.
-
-As a result, the mental model for the agent loop is:
-
-1. If the current agent has an `output_type`, the loop runs until the agent produces structured output matching that type.
-2. If the current agent does not have an `output_type`, the loop runs until the current agent produces a message without any tool calls/handoffs.
-
-## Common agent patterns
-
-The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in [`examples/agent_patterns`](examples/agent_patterns).
-
-## Tracing
-
-The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration), [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent), and many more. For more details about how to customize or disable tracing, see [Tracing](https://openai.github.io/openai-agents-python/tracing), which also includes a larger list of [external tracing processors](https://openai.github.io/openai-agents-python/tracing/#external-tracing-processors-list).
-
-## Long running agents & human-in-the-loop
-
-There are several options for long-running agents. Refer to [the documentation](https://openai.github.io/openai-agents-python/running_agents/#long-running-agents-human-in-the-loop) for details.
-
-## Sessions
-
-The Agents SDK provides built-in session memory to automatically maintain conversation history across multiple agent runs, eliminating the need to manually handle `.to_input_list()` between turns.
-
-### Quick start
-
-```python
-from agents import Agent, Runner, SQLiteSession
-
-# Create agent
-agent = Agent(
-    name="Assistant",
-    instructions="Reply very concisely.",
-)
-
-# Create a session instance
-session = SQLiteSession("conversation_123")
-
-# First turn
-result = await Runner.run(
-    agent,
-    "What city is the Golden Gate Bridge in?",
-    session=session
-)
-print(result.final_output) # "San Francisco"
-
-# Second turn - agent automatically remembers previous context
-result = await Runner.run(
-    agent,
-    "What state is it in?",
-    session=session
-)
-print(result.final_output) # "California"
-
-# Also works with synchronous runner
-result = Runner.run_sync(
-    agent,
-    "What's the population?",
-    session=session
-)
-print(result.final_output) # "Approximately 39 million"
-```
-
-### Session options
-
-- **No memory** (default): No session memory when session parameter is omitted
-- **`session: Session = DatabaseSession(...)`**: Use a Session instance to manage conversation history
-
-```python
-from agents import Agent, Runner, SQLiteSession
-
-# SQLite - file-based or in-memory database
-session = SQLiteSession("user_123", "conversations.db")
-
-# Redis - for scalable, distributed deployments
-# from agents.extensions.memory import RedisSession
-# session = RedisSession.from_url("user_123", url="redis://localhost:6379/0")
-
-agent = Agent(name="Assistant")
-
-# Different session IDs maintain separate conversation histories
-result1 = await Runner.run(
-    agent,
-    "Hello",
-    session=session
-)
-result2 = await Runner.run(
-    agent,
-    "Hello",
-    session=SQLiteSession("user_456", "conversations.db")
-)
-```
-
-### Custom session implementations
-
-You can implement your own session memory by creating a class that follows the `Session` protocol:
-
-```python
-from agents.memory import Session
-from typing import List
-
-class MyCustomSession:
-    """Custom session implementation following the Session protocol."""
-
-    def __init__(self, session_id: str):
-        self.session_id = session_id
-        # Your initialization here
-
-    async def get_items(self, limit: int | None = None) -> List[dict]:
-        # Retrieve conversation history for the session
-        pass
-
-    async def add_items(self, items: List[dict]) -> None:
-        # Store new items for the session
-        pass
-
-    async def pop_item(self) -> dict | None:
-        # Remove and return the most recent item from the session
-        pass
+(_For Jupyter notebook users, see [hello_world_jupyter.ipynb](https://github.com/openai/openai-agents-python/blob/main/examples/basic/hello_world_jupyter.ipynb)_)
 
-    async def clear_session(self) -> None:
-        # Clear all items for the session
-        pass
-
-# Use your custom session
-agent = Agent(name="Assistant")
-result = await Runner.run(
-    agent,
-    "Hello",
-    session=MyCustomSession("my_session")
-)
-```
-
-## Development (only needed if you need to edit the SDK/examples)
-
-0. Ensure you have [`uv`](https://docs.astral.sh/uv/) installed.
-
-```bash
-uv --version
-```
-
-1. Install dependencies
-
-```bash
-make sync
-```
-
-2. (After making changes) lint/test
-
-```
-make check # run tests linter and typechecker
-```
-
-Or to run them individually:
-
-```
-make tests # run tests
-make mypy # run typechecker
-make lint # run linter
-make format-check # run style checker
-```
-
-Format code if `make format-check` fails above by running:
-
-```
-make format
-```
+Explore the [examples](https://github.com/openai/openai-agents-python/tree/main/examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.
 
 ## Acknowledgements
 
````
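The diff removes the README's prose description of the agent loop (call the LLM, follow a handoff, run tool calls, stop at a final output, bounded by `max_turns`). That control flow can be sketched as a standalone toy, with no dependency on the SDK; `Agent`, `run`, and `scripted_model` here are illustrative names, not the SDK's actual API:

```python
# Toy sketch of the agent loop described in the removed README section:
# call the model, follow handoffs, run tools, return the final output.
# All names here are illustrative, not the Agents SDK's real API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    tools: dict = field(default_factory=dict)     # tool name -> callable
    handoffs: dict = field(default_factory=dict)  # route key -> Agent


def run(agent: Agent, user_input: str, model, max_turns: int = 10) -> str:
    """Loop until the model returns a plain message (the final output)."""
    history = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        response = model(agent, history)  # step 1: call the LLM
        if "handoff" in response:         # step 4: switch to the new agent
            agent = agent.handoffs[response["handoff"]]
        elif "tool" in response:          # step 5: run the tool, append result
            result = agent.tools[response["tool"]](*response["args"])
            history.append({"role": "tool", "content": result})
        else:                             # step 3: plain message = final output
            return response["content"]
    raise RuntimeError("max_turns exceeded without a final output")


def scripted_model(agent: Agent, history: list) -> dict:
    """Fake LLM: request the weather tool once, then echo the tool's result."""
    if any(m["role"] == "tool" for m in history):
        return {"content": history[-1]["content"]}
    return {"tool": "get_weather", "args": ("Tokyo",)}


weather_agent = Agent(
    name="weather",
    tools={"get_weather": lambda city: f"The weather in {city} is sunny."},
)

print(run(weather_agent, "What's the weather in Tokyo?", scripted_model))
# -> The weather in Tokyo is sunny.
```

A scripted stand-in for the LLM makes the tool-then-answer control flow observable without any API calls.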
