
Commit 6bd0b37

Add non-OpenAI provider setup example
Add a minimal code example to the Non-OpenAI models section demonstrating how to configure a non-OpenAI provider using `OpenAIChatCompletionsModel` with a custom `base_url` and `api_key`, and how to disable tracing with `set_tracing_disabled()`.
1 parent dddbce1 commit 6bd0b37

1 file changed

Lines changed: 19 additions & 0 deletions

File tree

docs/models/index.md

@@ -199,6 +199,25 @@ You can integrate other LLM providers with these built-in paths:

In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).

``` python
from agents import Agent, AsyncOpenAI, OpenAIChatCompletionsModel, set_tracing_disabled

set_tracing_disabled(disabled=True)

provider = AsyncOpenAI(
    api_key="YOUR_API_KEY",
    base_url="YOUR_PROVIDER_BASE_URL",
)

model = OpenAIChatCompletionsModel(
    model="model-name",
    openai_client=provider,
)

agent = Agent(name="Helping Agent", instructions="You are a Helping Agent", model=model)
```

!!! note

    In these examples, we use the Chat Completions API/model, because many LLM providers still do not support the Responses API. If your LLM provider does support it, we recommend using Responses.
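The diff's example hardcodes the provider credentials as placeholder strings. In practice, keys and endpoints are usually pulled from the environment rather than committed to source. A minimal stdlib sketch of that pattern, where the variable names (`EXAMPLE_API_KEY`, `EXAMPLE_BASE_URL`) and the default URL are illustrative assumptions, not part of the SDK:

```python
import os


def provider_settings(
    key_var: str = "EXAMPLE_API_KEY",       # hypothetical env var names
    url_var: str = "EXAMPLE_BASE_URL",
    default_url: str = "https://api.example.com/v1",
) -> dict:
    """Collect api_key/base_url keyword arguments for an AsyncOpenAI client
    from environment variables, failing loudly if the key is missing."""
    api_key = os.environ.get(key_var)
    if not api_key:
        raise RuntimeError(f"Set {key_var} before constructing the client")
    return {
        "api_key": api_key,
        "base_url": os.environ.get(url_var, default_url),
    }


# Stand-in value so the sketch runs; a real key would already be exported.
os.environ["EXAMPLE_API_KEY"] = "sk-test"
settings = provider_settings()
print(settings["base_url"])  # falls back to the default URL
```

The resulting dict can then be splatted into the constructor, e.g. `AsyncOpenAI(**provider_settings())`, keeping the secret out of the repository.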
