models.anthropic
- class agentopera.models.anthropic.AnthropicChatCompletionClient(**kwargs: Unpack[AnthropicClientConfiguration])[source]
Bases: BaseAnthropicChatCompletionClient
Chat completion client for Anthropic’s Claude models.
- Parameters:
model (str) – The Claude model to use (e.g., "claude-3-sonnet-20240229", "claude-3-opus-20240229")
api_key (str, optional) – Anthropic API key. Required if not in environment variables.
base_url (str, optional) – Override the default API endpoint.
max_tokens (int, optional) – Maximum tokens in the response. Default is 4096.
temperature (float, optional) – Controls randomness. Lower is more deterministic. Default is 1.0.
top_p (float, optional) – Controls diversity via nucleus sampling. Default is 1.0.
top_k (int, optional) – Controls diversity via top-k sampling. Default is -1 (disabled).
model_info (ModelInfo, optional) – The capabilities of the model. Required if using a custom model.
To use this client, you must install the Anthropic extension:
pip install "agentopera[anthropic]"
Example:
from agentopera.models.anthropic import AnthropicChatCompletionClient
from agentopera.core.types.models import UserMessage

anthropic_client = AnthropicChatCompletionClient(
    model="claude-3-sonnet-20240229",
    api_key="your-api-key",  # Optional if ANTHROPIC_API_KEY is set in the environment
)

result = await anthropic_client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)
print(result)
To load the client from a configuration:
from agentopera.core.types.models import ChatCompletionClient

config = {
    "provider": "AnthropicChatCompletionClient",
    "config": {"model": "claude-3-sonnet-20240229"},
}

client = ChatCompletionClient.load_component(config)
The client supports function calling with Claude models that have tool-use capability:
from agentopera.core.tools import FunctionTool


def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"The weather in {location} is sunny."


tool = FunctionTool(get_weather)

result = await anthropic_client.create(
    [UserMessage(content="What's the weather in Paris?", source="user")],
    tools=[tool],
)
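When Claude elects to call a tool, the returned CreateResult carries the call rather than plain text. Below is a minimal sketch of dispatching on the result; it assumes that content is either a str or a list of FunctionCall objects and that FunctionCall is importable from agentopera.core, both assumptions mirroring conventions in similar frameworks:

from agentopera.core import FunctionCall  # assumed import path

result = await anthropic_client.create(
    [UserMessage(content="What's the weather in Paris?", source="user")],
    tools=[tool],
)

if isinstance(result.content, list):
    # The model requested one or more tool invocations.
    for call in result.content:
        if isinstance(call, FunctionCall):
            print(f"Tool requested: {call.name}, arguments: {call.arguments}")
else:
    print(result.content)  # plain text answer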
- component_type = 'model'
- component_config_schema
- component_provider_override = 'agentopera.agents.models.anthropic.AnthropicChatCompletionClient'
- class agentopera.models.anthropic.BaseAnthropicChatCompletionClient(client: AsyncAnthropic, *, create_args: Dict[str, Any], model_info: ModelInfo | None = None)[source]
Bases: ChatCompletionClient
- async create(messages: Sequence[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage | VercelMessage], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) → CreateResult [source]
- async create_stream(messages: Sequence[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage | VercelMessage], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None, max_consecutive_empty_chunk_tolerance: int = 0) → AsyncGenerator[str | CreateResult, None] [source]
Creates an AsyncGenerator that yields a stream of completions based on the provided messages and tools.
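A minimal consumption sketch, assuming the generator yields incremental str chunks and finishes with a single CreateResult (the reading suggested by the AsyncGenerator[str | CreateResult, None] annotation); anthropic_client and UserMessage are reused from the example above:

final_result = None
async for item in anthropic_client.create_stream(
    [UserMessage(content="Write a haiku about Paris.", source="user")]
):
    if isinstance(item, str):
        print(item, end="", flush=True)  # incremental text chunk
    else:
        final_result = item  # terminal CreateResult with full content and usage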
- count_tokens(messages: Sequence[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage | VercelMessage], *, tools: Sequence[Tool | ToolSchema] = []) → int [source]
Estimate the number of tokens used by messages and tools.
Note: This is an estimation based on common tokenization patterns and may not perfectly match Anthropic’s exact token counting for Claude models.
- remaining_tokens(messages: Sequence[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage | VercelMessage], *, tools: Sequence[Tool | ToolSchema] = []) → int [source]
Calculate the remaining tokens based on the model’s token limit.
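A short budgeting sketch using both estimators; since the counts are approximations (see the note above), leave a safety margin rather than filling the context window exactly:

messages = [UserMessage(content="Summarize the French Revolution.", source="user")]

estimated = anthropic_client.count_tokens(messages)
headroom = anthropic_client.remaining_tokens(messages)
print(f"Estimated prompt tokens: {estimated}; remaining budget: {headroom}")

if headroom < 512:  # illustrative threshold, not an API constant
    print("Prompt is close to the model's context limit; consider truncating.")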
- actual_usage() → RequestUsage [source]
- total_usage() → RequestUsage [source]
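Neither method is documented here beyond its signature. A plausible usage sketch, assuming actual_usage() reports the most recent request and total_usage() accumulates across the client's lifetime (the common convention for this method pair), with prompt_tokens and completion_tokens assumed as RequestUsage fields:

result = await anthropic_client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)

last = anthropic_client.actual_usage()     # most recent request (assumed semantics)
lifetime = anthropic_client.total_usage()  # cumulative across the client (assumed)
print(f"Last call: {last.prompt_tokens} prompt / {last.completion_tokens} completion")
print(f"Lifetime:  {lifetime.prompt_tokens} prompt / {lifetime.completion_tokens} completion")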
- property capabilities: ModelCapabilities
- class agentopera.models.anthropic.AnthropicClientConfiguration[source]
Bases: BaseAnthropicClientConfiguration
- tools: List[Dict[str, Any]] | None
- tool_choice: Literal['auto', 'any', 'none'] | Dict[str, Any] | None
- model: str
- max_tokens: int | None
- temperature: float | None
- top_p: float | None
- top_k: int | None
- stop_sequences: List[str] | None
- response_format: ResponseFormat | None
- metadata: Dict[str, str] | None
- api_key: str
- base_url: str | None
- model_capabilities: ModelCapabilities
- timeout: float | None
- max_retries: int | None
- default_headers: Dict[str, str] | None
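These fields correspond to the keyword arguments accepted by AnthropicChatCompletionClient. A sketch passing several of them, with all values chosen purely for illustration:

from agentopera.models.anthropic import AnthropicChatCompletionClient

client = AnthropicChatCompletionClient(
    model="claude-3-sonnet-20240229",
    api_key="your-api-key",
    max_tokens=1024,
    temperature=0.2,
    stop_sequences=["\n\nHuman:"],  # illustrative stop sequence
    timeout=30.0,
    max_retries=3,
    default_headers={"X-Request-Source": "docs-example"},  # illustrative header
)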
- class agentopera.models.anthropic.AnthropicClientConfigurationConfigModel(*, model: str, max_tokens: int | None = 4096, temperature: float | None = 1.0, top_p: float | None = None, top_k: int | None = None, stop_sequences: List[str] | None = None, response_format: ResponseFormat | None = None, metadata: Dict[str, str] | None = None, api_key: str | None = None, base_url: str | None = None, model_capabilities: ModelCapabilities | None = None, model_info: ModelInfo | None = None, timeout: float | None = None, max_retries: int | None = None, default_headers: Dict[str, str] | None = None, tools: List[Dict[str, Any]] | None = None, tool_choice: Literal['auto', 'any', 'none'] | Dict[str, Any] | None = None)[source]
Bases: BaseAnthropicClientConfigurationConfigModel
- tools: List[Dict[str, Any]] | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict (pydantic.config.ConfigDict).
- tool_choice: Literal['auto', 'any', 'none'] | Dict[str, Any] | None
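This model validates the dictionary passed under the "config" key when loading the client as a component. A minimal sketch, assuming standard Pydantic v2 behavior for construction and serialization:

from agentopera.models.anthropic import AnthropicClientConfigurationConfigModel

config_model = AnthropicClientConfigurationConfigModel(
    model="claude-3-sonnet-20240229",
    max_tokens=2048,
    temperature=0.5,
)

# model_dump(exclude_none=True) is the standard Pydantic v2 way to get the
# plain dict shape expected under the "config" key of a component configuration.
print(config_model.model_dump(exclude_none=True))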
- class agentopera.models.anthropic.CreateArgumentsConfigModel(*, model: str, max_tokens: int | None = 4096, temperature: float | None = 1.0, top_p: float | None = None, top_k: int | None = None, stop_sequences: List[str] | None = None, response_format: ResponseFormat | None = None, metadata: Dict[str, str] | None = None)[source]
Bases: BaseModel
- model: str
- max_tokens: int | None
- temperature: float | None
- top_p: float | None
- top_k: int | None
- stop_sequences: List[str] | None
- response_format: ResponseFormat | None
- metadata: Dict[str, str] | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict (pydantic.config.ConfigDict).
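A short sketch of this per-request argument model, again assuming standard Pydantic v2 semantics; note that max_tokens defaults to 4096 per the signature above:

from agentopera.models.anthropic import CreateArgumentsConfigModel

args = CreateArgumentsConfigModel(
    model="claude-3-sonnet-20240229",
    temperature=0.7,
    stop_sequences=["END"],  # illustrative stop sequence
)
print(args.max_tokens)  # 4096, the default declared in the signature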