docs/guardrails.md (+52 −1)
@@ -41,6 +41,57 @@ Output guardrails run in 3 steps:
Output guardrails always run after the agent completes, so they don't support the `run_in_parallel` parameter.
## Tool guardrails
Tool guardrails wrap **function tools** and let you validate or block tool calls before and after execution. They are configured on the tool itself and run every time that tool is invoked.
- Input tool guardrails run before the tool executes and can skip the call, replace the output with a message, or raise a tripwire.
- Output tool guardrails run after the tool executes and can replace the output or raise a tripwire.
- Tool guardrails apply only to function tools created with [`function_tool`][agents.function_tool]; hosted tools (`WebSearchTool`, `FileSearchTool`, `HostedMCPTool`, `CodeInterpreterTool`, `ImageGenerationTool`) and local runtime tools (`ComputerTool`, `ShellTool`, `ApplyPatchTool`, `LocalShellTool`) do not use this guardrail pipeline.
If the input or output fails the guardrail, the guardrail can signal this with a tripwire. As soon as we see a guardrail whose tripwire has triggered, we immediately raise an `{Input,Output}GuardrailTripwireTriggered` exception and halt agent execution.
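
A minimal sketch of attaching an input tool guardrail follows. The decorator, data, and output types (`tool_input_guardrail`, `ToolInputGuardrailData`, `ToolGuardrailFunctionOutput`), their module path, and the `tool_input_guardrails=` parameter are assumptions based on this section's description; check the SDK reference for the exact names:

```python
from agents import function_tool
from agents.tool_guardrails import (  # assumed module path
    ToolGuardrailFunctionOutput,
    ToolInputGuardrailData,
    tool_input_guardrail,
)


@tool_input_guardrail
def block_secrets(data: ToolInputGuardrailData) -> ToolGuardrailFunctionOutput:
    # Runs before the tool executes.
    if "password" in str(data.context.tool_arguments):  # assumed field name
        # Skip the call and hand a replacement message back to the model.
        return ToolGuardrailFunctionOutput.reject_content(
            message="Tool call blocked: arguments may contain a secret.",
        )
    return ToolGuardrailFunctionOutput(output_info="clean")


@function_tool(tool_input_guardrails=[block_secrets])
def lookup_account(query: str) -> str:
    """Look up an account record."""
    return f"Results for {query}"
```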
@@ -161,4 +212,4 @@ async def main():
1. This is the actual agent's output type.
2. This is the guardrail's output type.
3. This is the guardrail function that receives the agent's output, and returns the result.
4. This is the actual agent that defines the workflow.
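
The four callouts above annotate a code example this diff view elides. For context, here is a sketch of the output-guardrail pattern they describe, following the SDK's public `@output_guardrail` API (class, agent, and instruction strings are illustrative):

```python
from pydantic import BaseModel

from agents import (
    Agent,
    GuardrailFunctionOutput,
    RunContextWrapper,
    Runner,
    output_guardrail,
)


class MessageOutput(BaseModel):  # (1) the actual agent's output type
    response: str


class MathOutput(BaseModel):  # (2) the guardrail's output type
    reasoning: str
    is_math: bool


guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the output includes any math.",
    output_type=MathOutput,
)


@output_guardrail
async def math_guardrail(  # (3) receives the agent's output, returns the result
    ctx: RunContextWrapper, agent: Agent, output: MessageOutput
) -> GuardrailFunctionOutput:
    result = await Runner.run(guardrail_agent, output.response, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_math,
    )


agent = Agent(  # (4) the actual agent that defines the workflow
    name="Customer support agent",
    instructions="You are a customer support agent. Help users with their questions.",
    output_type=MessageOutput,
    output_guardrails=[math_guardrail],
)
```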
docs/handoffs.md (+2 −0)
@@ -8,6 +8,8 @@ Handoffs are represented as tools to the LLM. So if there's a handoff to an agen
All agents have a [`handoffs`][agents.agent.Agent.handoffs] param, which can either take an `Agent` directly, or a `Handoff` object that customizes the Handoff.
If you pass plain `Agent` instances, their [`handoff_description`][agents.agent.Agent.handoff_description] (when set) is appended to the default tool description. Use it to hint when the model should pick that handoff without writing a full `handoff()` object.
You can create a handoff using the [`handoff()`][agents.handoffs.handoff] function provided by the Agents SDK. This function allows you to specify the agent to hand off to, along with optional overrides and input filters.
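
For example, a minimal sketch mixing both forms (agent names here are illustrative):

```python
from agents import Agent, handoff

billing_agent = Agent(
    name="Billing agent",
    # When passed directly, this is appended to the default handoff tool description.
    handoff_description="Handles billing and invoice questions.",
)
refund_agent = Agent(name="Refund agent")

triage_agent = Agent(
    name="Triage agent",
    # Plain Agent instances and handoff() objects can be mixed freely.
    handoffs=[billing_agent, handoff(refund_agent)],
)
```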
docs/mcp.md (+4 −0)
@@ -181,6 +181,10 @@ The constructor accepts additional options:
## 3. HTTP with SSE MCP servers
!!! warning
    The MCP project has deprecated the Server-Sent Events transport. Prefer Streamable HTTP or stdio for new integrations and keep SSE only for legacy servers.
If the MCP server implements the HTTP with SSE transport, instantiate
[`MCPServerSse`][agents.mcp.server.MCPServerSse]. Apart from the transport, the API is identical to the Streamable HTTP server.
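
For a legacy server, usage might look like this sketch (the URL and names are placeholders):

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerSse


async def main() -> None:
    # SSE is deprecated upstream; keep this only for legacy servers.
    async with MCPServerSse(
        name="Legacy SSE server",
        params={"url": "http://localhost:8000/sse"},  # placeholder URL
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the server's tools to answer the question.",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "What tools are available?")
        print(result.final_output)


asyncio.run(main())
```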
docs/models/litellm.md (+10 −0)
@@ -88,3 +88,13 @@ agent = Agent(
With `include_usage=True`, LiteLLM requests report token and request counts through `result.context_wrapper.usage` just like the built-in OpenAI models.
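
A sketch of reading those counts (the model string is illustrative, and the provider's API key is assumed to be configured in the environment):

```python
import asyncio

from agents import Agent, ModelSettings, Runner
from agents.extensions.models.litellm_model import LitellmModel


async def main() -> None:
    agent = Agent(
        name="Assistant",
        instructions="Reply concisely.",
        model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620"),  # illustrative
        model_settings=ModelSettings(include_usage=True),
    )
    result = await Runner.run(agent, "Say hello.")
    usage = result.context_wrapper.usage
    print(f"requests={usage.requests} input={usage.input_tokens} output={usage.output_tokens}")


asyncio.run(main())
```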
## Troubleshooting
If you see Pydantic serializer warnings from LiteLLM responses, enable a small compatibility patch by setting: