Commit 1215783

docs: add tool search coverage across Python guides (#2622)
1 parent 279a69f commit 1215783

8 files changed

Lines changed: 91 additions & 0 deletions

docs/agents.md

Lines changed: 2 additions & 0 deletions
@@ -311,6 +311,8 @@ Supplying a list of tools doesn't always mean the LLM will use a tool. You can f
3. `none`, which requires the LLM to _not_ use a tool.
4. Setting a specific string e.g. `my_tool`, which requires the LLM to use that specific tool.

When you are using OpenAI Responses tool search, named tool choices are more limited: you cannot target bare namespace names or deferred-only tools with `tool_choice`, and `tool_choice="tool_search"` does not target [`ToolSearchTool`][agents.tool.ToolSearchTool]. In those cases, prefer `auto` or `required`. See [Hosted tool search](tools.md#hosted-tool-search) for the Responses-specific constraints.
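As an illustration of these constraints, here is a small hypothetical helper (not part of the SDK) that encodes which `tool_choice` values remain safe under Responses tool search; the namespace and tool names are made up:

```python
# Hypothetical stand-ins for an agent's configuration (illustrative only).
NAMESPACES = {"crm"}                      # bare namespace names: not targetable
DEFERRED_ONLY = {"get_customer_profile"}  # deferred-only tools: not targetable


def is_safe_tool_choice(choice: str) -> bool:
    """Return True if `choice` is safe to use with Responses tool search."""
    if choice in ("auto", "required", "none"):
        return True
    if choice == "tool_search":  # does not target ToolSearchTool
        return False
    # Named choices must point at a real top-level callable tool.
    return choice not in NAMESPACES and choice not in DEFERRED_ONLY
```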
```python
from agents import Agent, Runner, function_tool, ModelSettings
```

docs/context.md

Lines changed: 2 additions & 0 deletions
@@ -124,6 +124,8 @@ plus additional fields specific to the current tool call:
- `tool_name` – the name of the tool being invoked
- `tool_call_id` – a unique identifier for this tool call
- `tool_arguments` – the raw argument string passed to the tool
- `tool_namespace` – the Responses namespace for the tool call, when the tool was loaded through `tool_namespace()` or another namespaced surface
- `qualified_tool_name` – the tool name qualified with the namespace when one is available

Use `ToolContext` when you need tool-level metadata during execution.
For general context sharing between agents and tools, `RunContextWrapper` remains sufficient. Because `ToolContext` extends `RunContextWrapper`, it can also expose `.tool_input` when a nested `Agent.as_tool()` run supplied structured input.
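The relationship between the two namespace-related fields can be sketched as follows; the dotted `namespace.name` format is an assumption for illustration, not SDK source:

```python
from typing import Optional


def qualify(tool_name: str, tool_namespace: Optional[str]) -> str:
    # Assumed composition of qualified_tool_name: fall back to the
    # bare tool name when no namespace applies.
    return f"{tool_namespace}.{tool_name}" if tool_namespace else tool_name
```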

docs/examples.md

Lines changed: 1 addition & 0 deletions
@@ -89,6 +89,7 @@ Check out a variety of sample implementations of the SDK in the examples section
- Hosted container shell with inline skills (`examples/tools/container_shell_inline_skill.py`)
- Hosted container shell with skill references (`examples/tools/container_shell_skill_reference.py`)
- Local shell with local skills (`examples/tools/local_shell_skill.py`)
- Tool search with namespaces and deferred tools (`examples/tools/tool_search.py`)
- Computer use
- Image generation
- Experimental Codex tool workflows (`examples/tools/codex.py`)

docs/mcp.md

Lines changed: 2 additions & 0 deletions
@@ -102,6 +102,8 @@ asyncio.run(main())

The hosted server exposes its tools automatically; you do not add it to `mcp_servers`.

If you want hosted tool search to load a hosted MCP server lazily, set `tool_config["defer_loading"] = True` and add [`ToolSearchTool`][agents.tool.ToolSearchTool] to the agent. This is supported only on OpenAI Responses models. See [Tools](tools.md#hosted-tool-search) for the complete tool-search setup and constraints.
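As a hedged sketch, a deferred hosted MCP `tool_config` might look like the following; the server label and URL are placeholders, and only `defer_loading` is the flag discussed here:

```python
# Placeholder hosted MCP configuration (illustrative values).
tool_config = {
    "type": "mcp",
    "server_label": "docs_server",
    "server_url": "https://example.com/mcp",
    "require_approval": "never",
    "defer_loading": True,  # let tool search load this server on demand
}
```

You would pass a dict like this to `HostedMCPTool(tool_config=...)` alongside `ToolSearchTool()` in the agent's `tools` list.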
### Streaming hosted MCP results

Hosted tools support streaming results in exactly the same way as function tools. Use `Runner.run_streamed` to

docs/models/index.md

Lines changed: 10 additions & 0 deletions
@@ -73,6 +73,16 @@ For lower latency, using `reasoning.effort="none"` with `gpt-5.4` is recommended

If you pass a non–GPT-5 model name without custom `model_settings`, the SDK reverts to generic `ModelSettings` compatible with any model.

### Responses-only tool search features

The following tool features are supported only with OpenAI Responses models:

- [`ToolSearchTool`][agents.tool.ToolSearchTool]
- [`tool_namespace()`][agents.tool.tool_namespace]
- `@function_tool(defer_loading=True)` and other deferred-loading Responses tool surfaces

These features are rejected on Chat Completions models and on non-Responses backends. When you use deferred-loading tools, add `ToolSearchTool()` to the agent and let the model load tools through `auto` or `required` tool choice instead of forcing bare namespace names or deferred-only function names. See [Tools](../tools.md#hosted-tool-search) for the setup details and current constraints.

### Responses WebSocket transport

By default, OpenAI Responses API requests use HTTP transport. You can opt in to websocket transport when using OpenAI-backed models.

docs/results.md

Lines changed: 3 additions & 0 deletions
@@ -63,12 +63,15 @@ Unlike the JavaScript SDK, Python does not expose a separate `output` property f

- [`MessageOutputItem`][agents.items.MessageOutputItem] for assistant messages
- [`ReasoningItem`][agents.items.ReasoningItem] for reasoning items
- [`ToolSearchCallItem`][agents.items.ToolSearchCallItem] and [`ToolSearchOutputItem`][agents.items.ToolSearchOutputItem] for Responses tool search requests and loaded tool-search results
- [`ToolCallItem`][agents.items.ToolCallItem] and [`ToolCallOutputItem`][agents.items.ToolCallOutputItem] for tool calls and their results
- [`ToolApprovalItem`][agents.items.ToolApprovalItem] for tool calls that paused for approval
- [`HandoffCallItem`][agents.items.HandoffCallItem] and [`HandoffOutputItem`][agents.items.HandoffOutputItem] for handoff requests and completed transfers

Choose `new_items` over `to_input_list()` whenever you need agent associations, tool outputs, handoff boundaries, or approval boundaries.

When you use hosted tool search, inspect `ToolSearchCallItem.raw_item` to see the search request the model emitted, and `ToolSearchOutputItem.raw_item` to see which namespaces, functions, or hosted MCP servers were loaded for that turn.
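For example, a duck-typed helper along these lines (an illustrative sketch, not SDK code) can collect the raw tool-search payloads from `new_items`:

```python
def tool_search_activity(new_items):
    # Match items by class name so this sketch stays independent of imports.
    calls = [item.raw_item for item in new_items
             if type(item).__name__ == "ToolSearchCallItem"]
    outputs = [item.raw_item for item in new_items
               if type(item).__name__ == "ToolSearchOutputItem"]
    return calls, outputs
```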
## Continue or resume the conversation

### Next-turn agent

docs/streaming.md

Lines changed: 4 additions & 0 deletions
@@ -65,6 +65,8 @@ For a full pause/resume walkthrough, see the [human-in-the-loop guide](human_in_
- `handoff_requested`
- `handoff_occured`
- `tool_called`
- `tool_search_called`
- `tool_search_output_created`
- `tool_output`
- `reasoning_item_created`
- `mcp_approval_requested`
@@ -73,6 +75,8 @@ For a full pause/resume walkthrough, see the [human-in-the-loop guide](human_in_

`handoff_occured` is intentionally misspelled for backward compatibility.

When you use hosted tool search, `tool_search_called` is emitted when the model issues a tool-search request and `tool_search_output_created` is emitted when the Responses API returns the loaded subset.
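A minimal event-name handler for a streamed run might special-case the two tool-search events like this (a sketch; the descriptions are informal):

```python
def describe_run_item_event(name: str) -> str:
    # Map the two tool-search event names to human-readable summaries.
    if name == "tool_search_called":
        return "model issued a tool-search request"
    if name == "tool_search_output_created":
        return "Responses API returned the loaded subset"
    return f"other event: {name}"
```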
For example, this will ignore raw events and stream updates to the user.

```python

docs/tools.md

Lines changed: 67 additions & 0 deletions
@@ -15,6 +15,7 @@ Use this page as a catalog, then jump to the section that matches the runtime yo
| If you want to... | Start here |
| --- | --- |
| Use OpenAI-managed tools (web search, file search, code interpreter, hosted MCP, image generation) | [Hosted tools](#hosted-tools) |
| Defer large tool surfaces until runtime with tool search | [Hosted tool search](#hosted-tool-search) |
| Run tools in your own process or environment | [Local runtime tools](#local-runtime-tools) |
| Wrap Python functions as tools | [Function tools](#function-tools) |
| Let one agent call another without a handoff | [Agents as tools](#agents-as-tools) |
@@ -29,6 +30,7 @@ OpenAI offers a few built-in tools when using the [`OpenAIResponsesModel`][agent
- The [`CodeInterpreterTool`][agents.tool.CodeInterpreterTool] lets the LLM execute code in a sandboxed environment.
- The [`HostedMCPTool`][agents.tool.HostedMCPTool] exposes a remote MCP server's tools to the model.
- The [`ImageGenerationTool`][agents.tool.ImageGenerationTool] generates images from a prompt.
- The [`ToolSearchTool`][agents.tool.ToolSearchTool] lets the model load deferred tools, namespaces, or hosted MCP servers on demand.

Advanced hosted search options:

@@ -54,6 +56,69 @@ async def main():

```python
    print(result.final_output)
```

### Hosted tool search

Tool search lets OpenAI Responses models defer large tool surfaces until runtime, so the model loads only the subset it needs for the current turn. This is useful when you have many function tools, namespace groups, or hosted MCP servers and want to reduce tool-schema tokens without exposing every tool up front.

Start with hosted tool search when the candidate tools are already known at the time you build the agent. If your application needs to decide what to load dynamically, the Responses API also supports client-executed tool search, but the standard `Runner` does not auto-execute that mode.

```python
from typing import Annotated

from agents import Agent, Runner, ToolSearchTool, function_tool, tool_namespace


@function_tool(defer_loading=True)
def get_customer_profile(
    customer_id: Annotated[str, "The customer ID to look up."],
) -> str:
    """Fetch a CRM customer profile."""
    return f"profile for {customer_id}"


@function_tool(defer_loading=True)
def list_open_orders(
    customer_id: Annotated[str, "The customer ID to look up."],
) -> str:
    """List open orders for a customer."""
    return f"open orders for {customer_id}"


crm_tools = tool_namespace(
    name="crm",
    description="CRM tools for customer lookups.",
    tools=[get_customer_profile, list_open_orders],
)

agent = Agent(
    name="Operations assistant",
    model="gpt-5.4",
    instructions="Load the crm namespace before using CRM tools.",
    tools=[*crm_tools, ToolSearchTool()],
)

result = await Runner.run(agent, "Look up customer_42 and list their open orders.")
print(result.final_output)
```

What to know:

- Hosted tool search is available only with OpenAI Responses models. The current Python SDK support depends on `openai>=2.25.0`.
- Add exactly one `ToolSearchTool()` when you configure deferred-loading surfaces on an agent.
- Searchable surfaces include `@function_tool(defer_loading=True)`, `tool_namespace(name=..., description=..., tools=[...])`, and `HostedMCPTool(tool_config={..., "defer_loading": True})`.
- Deferred-loading function tools must be paired with `ToolSearchTool()`. Namespace-only setups may also use `ToolSearchTool()` to let the model load the right group on demand.
- `tool_namespace()` groups `FunctionTool` instances under a shared namespace name and description. This is usually the best fit when you have many related tools, such as `crm`, `billing`, or `shipping`.
- OpenAI's official best-practice guidance is [Use namespaces where possible](https://developers.openai.com/api/docs/guides/tools-tool-search#use-namespaces-where-possible).
- Prefer namespaces or hosted MCP servers over many individually deferred functions when possible. They usually give the model a better high-level search surface and better token savings.
- Namespaces can mix immediate and deferred tools. Tools without `defer_loading=True` remain callable immediately, while deferred tools in the same namespace are loaded through tool search.
- As a rule of thumb, keep each namespace fairly small, ideally fewer than 10 functions.
- Named `tool_choice` cannot target bare namespace names or deferred-only tools. Prefer `auto`, `required`, or a real top-level callable tool name.
- `ToolSearchTool(execution="client")` is for manual Responses orchestration. If the model emits a client-executed `tool_search_call`, the standard `Runner` raises instead of executing it for you.
- Tool search activity appears in [`RunResult.new_items`](results.md#new-items) and in [`RunItemStreamEvent`](streaming.md#run-item-event-names) with dedicated item and event types.
- See `examples/tools/tool_search.py` for complete runnable examples covering both namespaced loading and top-level deferred tools.
- Official platform guide: [Tool search](https://developers.openai.com/api/docs/guides/tools-tool-search).
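Two of the rules above (pairing deferred tools with exactly one `ToolSearchTool()`, and keeping namespaces under roughly 10 functions) can be encoded in a hypothetical lint helper; nothing like this ships in the SDK:

```python
def lint_tool_search_setup(
    num_tool_search_tools: int,
    num_deferred_tools: int,
    namespace_sizes: dict,
) -> list:
    """Return human-readable warnings for a tool-search configuration."""
    warnings = []
    # Rule: deferred-loading tools require exactly one ToolSearchTool().
    if num_deferred_tools > 0 and num_tool_search_tools != 1:
        warnings.append("pair deferred-loading tools with exactly one ToolSearchTool()")
    # Rule of thumb: keep each namespace fairly small.
    for name, size in namespace_sizes.items():
        if size >= 10:
            warnings.append(f"namespace {name!r} has {size} functions; keep it under 10")
    return warnings
```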
121+
57122
### Hosted container shell + skills
58123

59124
`ShellTool` also supports OpenAI-hosted container execution. Use this mode when you want the model to run shell commands in a managed container instead of your local runtime.
@@ -168,6 +233,8 @@ You can use any Python function as a tool. The Agents SDK will setup the tool au

We use Python's `inspect` module to extract the function signature, along with [`griffe`](https://mkdocstrings.github.io/griffe/) to parse docstrings and `pydantic` for schema creation.

When you are using OpenAI Responses models, `@function_tool(defer_loading=True)` hides a function tool until `ToolSearchTool()` loads it. You can also group related function tools with [`tool_namespace()`][agents.tool.tool_namespace]. See [Hosted tool search](#hosted-tool-search) for the full setup and constraints.

```python
import json
```
