TextCompletionClientBase Class

Base class for text completion AI services.

Create a new model by parsing and validating input data from keyword arguments.

Raises a pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Constructor

TextCompletionClientBase(*, ai_model_id: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)], service_id: str = '')

Keyword-Only Parameters

Name Description
ai_model_id
    str (required). The ID of the AI model; must be a non-empty string (leading and trailing whitespace is stripped).
service_id
    str (optional). Identifier of the service; defaults to ''.
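To give a feel for the contract, here is a minimal, self-contained sketch of a subclass in the shape this base class documents. The stub base class, the simplified PromptExecutionSettings and TextContent stand-ins, the number_of_responses field, and the EchoTextCompletion connector are all hypothetical illustrations, not semantic_kernel's actual implementation (the real base class is a pydantic model and enforces the ai_model_id constraint via StringConstraints).

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class PromptExecutionSettings:
    # Simplified stand-in for semantic_kernel's PromptExecutionSettings.
    number_of_responses: int = 1


@dataclass
class TextContent:
    # Simplified stand-in for semantic_kernel's TextContent.
    text: str
    ai_model_id: str


class TextCompletionClientBase(ABC):
    # Structural sketch only: mirrors the documented keyword-only
    # constructor and the non-empty ai_model_id constraint.
    def __init__(self, *, ai_model_id: str, service_id: str = ""):
        ai_model_id = ai_model_id.strip()
        if not ai_model_id:
            raise ValueError("ai_model_id must be a non-empty string")
        self.ai_model_id = ai_model_id
        self.service_id = service_id

    @abstractmethod
    async def get_text_contents(
        self, prompt: str, settings: PromptExecutionSettings
    ) -> list[TextContent]: ...


class EchoTextCompletion(TextCompletionClientBase):
    # Toy connector that echoes the prompt back as its "completion",
    # producing the number of results the settings ask for.
    async def get_text_contents(
        self, prompt: str, settings: PromptExecutionSettings
    ) -> list[TextContent]:
        return [
            TextContent(text=prompt, ai_model_id=self.ai_model_id)
            for _ in range(settings.number_of_responses)
        ]


client = EchoTextCompletion(ai_model_id="echo-1", service_id="echo")
results = asyncio.run(
    client.get_text_contents("hello", PromptExecutionSettings(number_of_responses=2))
)
print([r.text for r in results])  # one entry per requested response
```

A real connector would replace the echo logic with a call to its model endpoint; the shape of the constructor and the async method is what the kernel relies on.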

Methods

get_streaming_text_content

The method called from the kernel to get a streamed response from a text-optimized LLM.

get_streaming_text_contents

Create streaming text contents, in the number specified by the settings.

get_text_content

The method called from the kernel to get a response from a text-optimized LLM.

get_text_contents

Create text contents, in the number specified by the settings.

get_streaming_text_content

The method called from the kernel to get a streamed response from a text-optimized LLM.

async get_streaming_text_content(prompt: str, settings: PromptExecutionSettings) -> AsyncGenerator[StreamingTextContent | None, Any]

Parameters

Name Description
prompt
    str (required). The prompt to send to the LLM.
settings
    PromptExecutionSettings (required). Settings for the request.

Returns

Type Description
AsyncGenerator[StreamingTextContent | None, Any]

A stream representing the response(s) from the LLM.
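Because the return value is an async generator, a caller consumes it with `async for` inside a coroutine. The sketch below is self-contained: fake_streaming_text_content is a hypothetical stand-in that yields plain string chunks rather than real StreamingTextContent objects.

```python
import asyncio


async def fake_streaming_text_content(prompt: str):
    # Hypothetical stand-in for a connector's get_streaming_text_content:
    # yields the response piece by piece instead of all at once.
    for chunk in prompt.split():
        yield chunk


async def collect(prompt: str) -> str:
    parts = []
    # The documented return type is an async generator, so the caller
    # iterates it with `async for` and assembles the pieces.
    async for chunk in fake_streaming_text_content(prompt):
        parts.append(chunk)
    return " ".join(parts)


result = asyncio.run(collect("streamed response from the model"))
print(result)
```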

get_streaming_text_contents

Create streaming text contents, in the number specified by the settings.

async get_streaming_text_contents(prompt: str, settings: PromptExecutionSettings) -> AsyncGenerator[list[StreamingTextContent], Any]

Parameters

Name Description
prompt
    str (required). The prompt to send to the LLM.
settings
    PromptExecutionSettings (required). Settings for the request.

Returns

Type Description
AsyncGenerator[list[StreamingTextContent], Any]

A stream of lists of streaming text contents, in the number specified by the settings.
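Unlike the singular method, each item this generator yields is a list with one partial chunk per requested completion. The sketch below is hypothetical (plain strings instead of StreamingTextContent, and a made-up `n` parameter in place of real settings) but mirrors the documented AsyncGenerator[list[...], Any] shape.

```python
import asyncio


async def fake_streaming_text_contents(prompt: str, n: int):
    # Hypothetical stand-in: each yielded item is a list holding one
    # partial chunk for each of the n requested completions.
    for word in prompt.split():
        yield [f"{word}#{i}" for i in range(n)]


async def collect(prompt: str, n: int) -> list[list[str]]:
    # Fan the per-chunk lists back out into one stream per completion.
    streams: list[list[str]] = [[] for _ in range(n)]
    async for chunk_list in fake_streaming_text_contents(prompt, n):
        for i, chunk in enumerate(chunk_list):
            streams[i].append(chunk)
    return streams


streams = asyncio.run(collect("a b", 2))
print(streams)  # [['a#0', 'b#0'], ['a#1', 'b#1']]
```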

get_text_content

The method called from the kernel to get a response from a text-optimized LLM.

async get_text_content(prompt: str, settings: PromptExecutionSettings) -> TextContent | None

Parameters

Name Description
prompt
    str (required). The prompt to send to the LLM.
settings
    PromptExecutionSettings (required). Settings for the request.

Returns

Type Description
TextContent | None

The text content representing the response from the LLM, or None if no response was produced.

get_text_contents

Create text contents, in the number specified by the settings.

async get_text_contents(prompt: str, settings: PromptExecutionSettings) -> list[TextContent]

Parameters

Name Description
prompt
    str (required). The prompt to send to the LLM.
settings
    PromptExecutionSettings (required). Settings for the request.

Returns

Type Description
list[TextContent]

A list of text contents representing the response(s) from the LLM, in the number specified by the settings.

Attributes

ai_model_id

ai_model_id: Annotated[str, StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)]

service_id

service_id: str