
OpenAIResponsesAgent Class

OpenAI Responses Agent class, enabling interaction with OpenAI's specialized "Responses" API. Supports file/web search tools, function calls, and structured or free-form text outputs.

Constructor

OpenAIResponsesAgent(
    *,
    ai_model_id: str,
    client: AsyncOpenAI,
    arguments: KernelArguments | None = None,
    description: str | None = None,
    function_choice_behavior: FunctionChoiceBehavior | None = None,
    id: str | None = None,
    instruction_role: str | None = None,
    instructions: str | None = None,
    kernel: Kernel | None = None,
    metadata: dict[str, str] | None = None,
    name: str | None = None,
    plugins: list[KernelPlugin | object] | dict[str, KernelPlugin | object] | None = None,
    polling_options: RunPollingOptions | None = None,
    prompt_template_config: PromptTemplateConfig | None = None,
    store_enabled: bool | None = None,
    temperature: float | None = None,
    text: ResponseTextConfigParam | None = None,
    tools: list[ToolParam] | None = None,
    top_p: float | None = None,
    **kwargs: Any,
)

Keyword-Only Parameters

ai_model_id
Required

The AI model ID to use for "Responses" calls.

client
Required

The openai.AsyncOpenAI client.

arguments

Optional KernelArguments for custom behavior or prompt overrides.

description

Agent description.

function_choice_behavior

Configuration for function selection / plugin usage.

id

The unique ID for this agent instance.

instruction_role

The role used for the instructions block (system or developer).

instructions

Plain instructions for the agent. Overridden by prompt_template_config if present.

kernel

The Kernel instance, if any, used for plugin management or function calls.

metadata

Optional additional metadata for the agent.

name

Name of the agent instance.

plugins

A list or dict of KernelPlugins to attach to the agent or its kernel.

polling_options

Options controlling how the agent polls for completions / tool calls.

prompt_template_config

A prompt template configuration object; if set, it overrides instructions.

store_enabled

Whether to store conversation state in the service or run in ephemeral mode.

temperature

The model temperature controlling random sampling.

text

The response text config dict or object.

tools

A list of tool definitions for extended functionalities.

top_p

Nucleus sampling parameter, an alternative to temperature, controlling the cumulative-probability cutoff of the token distribution.

kwargs

Additional advanced configuration keyword arguments.

Methods

setup_resources

Creates an AsyncOpenAI client and a model ID from provided arguments or environment. Returns the client and model ID.

configure_file_search_tool

Generates a file search tool definition for embedding-based retrieval from vector stores.

configure_web_search_tool

Generates a web search tool definition, optionally including user location and context size.

configure_computer_use_tool

Generates a computer use tool definition (not yet implemented).

configure_response_format

Configures structured or free-form response output formats using JSON schemas or model types.

get_response

Get a single response message from the agent for the given messages and optional arguments.

invoke

Invoke the agent and yield each complete response message returned by the model.

invoke_stream

Stream partial message chunks from the agent as they are produced, optionally collecting results.

_prepare_input_message

Prepares and normalizes user messages into a ChatHistory object before model invocation.

_generate_structured_output_response_format_schema

Wraps a JSON schema in strict formatting for use in structured response mode.

setup_resources

Creates an AsyncOpenAI client and a model ID from provided arguments or environment. Returns the client and model ID.

static setup_resources(...) -> tuple[AsyncOpenAI, str]

configure_file_search_tool

Generates a file search tool definition for embedding-based retrieval from vector stores.

static configure_file_search_tool(...) -> FileSearchToolParam
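The returned FileSearchToolParam is a plain dict following the Responses API tool schema. A minimal sketch of the equivalent shape, built by hand (the vector store ID and result limit are illustrative, not values from this library):

```python
# Sketch of a file search tool definition as consumed by the Responses API.
# "vs_abc123" is a hypothetical vector store ID; max_num_results is optional.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],
    "max_num_results": 10,
}
```

Passing such a definition in the agent's tools list enables embedding-based retrieval over the referenced vector stores.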

configure_web_search_tool

Generates a web search tool definition, optionally including user location and context size.

static configure_web_search_tool(...) -> WebSearchToolParam
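The optional user location and context size map onto fields of the web search tool definition. A hand-built sketch, assuming the Responses API web search schema (the city value is illustrative):

```python
# Sketch of a web search tool definition with the two optional knobs:
# search_context_size trades cost for retrieval depth, and user_location
# biases results toward an approximate locale.
web_search_tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",  # "low" | "medium" | "high"
    "user_location": {
        "type": "approximate",
        "city": "Seattle",  # illustrative value
    },
}
```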

configure_computer_use_tool

Generates a computer use tool definition (not yet implemented).

static configure_computer_use_tool() -> ComputerToolParam

configure_response_format

Configures structured or free-form response output formats using JSON schemas or model types.

static configure_response_format(...) -> dict[str, Any] | None
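For structured output, the resulting configuration carries a strict JSON schema describing the expected response shape. A sketch of such a configuration built by hand (the schema name and fields are illustrative):

```python
# Sketch of a structured-output response format: a strict JSON schema that
# the model's output must conform to. All field names here are illustrative.
response_format = {
    "type": "json_schema",
    "name": "weather_report",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "temperature_c": {"type": "number"},
        },
        "required": ["city", "temperature_c"],
        "additionalProperties": False,
    },
}
```

In free-form mode no schema is supplied and the method can return None, leaving the model to produce plain text.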

get_response

Get a single response message from the agent for the given messages and optional arguments.

async get_response(...) -> AgentResponseItem[ChatMessageContent]

invoke

Invoke the agent and yield each complete response message returned by the model.

async invoke(...) -> AsyncIterable[AgentResponseItem[ChatMessageContent]]

invoke_stream

Stream partial message chunks from the agent as they are produced, optionally collecting results.

async invoke_stream(...) -> AsyncIterable[AgentResponseItem[StreamingChatMessageContent]]
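The difference between invoke and invoke_stream is in how callers consume the results: complete messages versus partial chunks accumulated into one. The consumption pattern can be illustrated with a stand-in async generator (a stub only; the real method yields AgentResponseItem-wrapped StreamingChatMessageContent):

```python
import asyncio


async def fake_invoke_stream():
    # Stand-in for invoke_stream: yields partial chunks as they arrive.
    for chunk in ["The capital ", "of France ", "is Paris."]:
        yield chunk


async def main() -> str:
    # Accumulate streamed chunks into the full message, mirroring how a
    # caller would collect streaming content from the agent.
    parts = []
    async for chunk in fake_invoke_stream():
        parts.append(chunk)
    return "".join(parts)


result = asyncio.run(main())
print(result)  # The capital of France is Paris.
```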

_prepare_input_message

Prepares and normalizes user messages into a ChatHistory object before model invocation.

_prepare_input_message(messages: str | ChatMessageContent | list[str | ChatMessageContent] | None = None) -> ChatHistory

_generate_structured_output_response_format_schema

Wraps a JSON schema in strict formatting for use in structured response mode.

static _generate_structured_output_response_format_schema(name: str, schema: dict) -> dict
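Strict formatting amounts to pinning the schema down so the model cannot emit extra or missing fields. A plausible sketch of such a wrapper (the function body is an assumption for illustration, not this library's actual implementation):

```python
def wrap_strict(name: str, schema: dict) -> dict:
    # Illustrative strict wrapper: forbid unlisted properties and mark
    # every declared property as required, as strict mode demands.
    strict_schema = {
        **schema,
        "additionalProperties": False,
        "required": list(schema.get("properties", {})),
    }
    return {"type": "json_schema", "name": name, "strict": True, "schema": strict_schema}


wrapped = wrap_strict("person", {"type": "object", "properties": {"age": {"type": "integer"}}})
```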

Attributes

ai_model_id

The model ID used by the agent for all completions.

ai_model_id: str

client

The AsyncOpenAI client instance backing all model calls.

client: AsyncOpenAI

function_choice_behavior

The behavior used to determine tool/function usage by the model.

function_choice_behavior: FunctionChoiceBehavior

instruction_role

The role assigned to the instruction message, typically "developer" or "system".

instruction_role: str

metadata

Optional key-value metadata stored with the agent.

metadata: dict[str, Any]

temperature

Sampling temperature used to modulate response randomness.

temperature: float | None

top_p

Top-p sampling cutoff used to control nucleus sampling.

top_p: float | None

plugins

List of plugins made available to the agent.

plugins: list[Any]

polling_options

Options governing polling interval and timeout for the agent.

polling_options: RunPollingOptions

store_enabled

Indicates whether agent responses are persisted to storage.

store_enabled: bool

text

Text formatting options passed with the response configuration.

text: dict[str, Any]

tools

A list of tools (functions) that the agent can invoke during execution.

tools: list[ToolParam]