API Reference
This section provides detailed documentation for the BorgLLM API.
- class borgllm.borgllm.BorgLLM(*args, **kwargs)
Bases: object
- property config: LLMConfig | None
Public property to access the configuration.
- get(name: str | None = None, approximate_tokens: int | None = None, timeout: float | None = None, allow_await_cooldown: bool = True) → LLMProviderConfig
Return the provider configuration for name (or let BorgLLM apply its default selection logic when name is None). With allow_await_cooldown=True, the call waits for any active rate-limit cooldown to expire before returning.
- classmethod get_instance(config_path: str = 'borg.yaml', initial_config_data: Dict | None = None)
Get the singleton BorgLLM instance.
- property providers: Dict[str, LLMProviderConfig]
Public property to access the real providers.
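A minimal usage sketch of the configuration API, assuming a borg.yaml in the working directory that defines a provider named "openai" (the provider name is illustrative):

```python
from borgllm.borgllm import BorgLLM

# Obtain the process-wide singleton; the configuration file is loaded on first use.
config = BorgLLM.get_instance(config_path="borg.yaml")

# Resolve a provider configuration. With allow_await_cooldown=True (the
# default), the call waits out any active 429 cooldown instead of failing.
provider = config.get("openai", approximate_tokens=1000, timeout=30.0)

print(provider)                # the resolved LLMProviderConfig
print(list(config.providers))  # names of all configured real providers
```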
LangChain integration for BorgLLM configuration provider.
This module provides automatic LLM client creation with built-in 429 error handling and configuration updates.
- class borgllm.langchain.BorgLLMLangChainClient(borgllm_config: BorgLLM, provider_name: str | None = None, overrides: Dict[str, Any] | None = None, *, name: str | None = None, cache: BaseCache | bool | None = None, verbose: bool = <factory>, callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None, tags: list[str] | None = None, metadata: dict[str, Any] | None = None, custom_get_token_ids: Callable[[str], list[int]] | None = None, callback_manager: BaseCallbackManager | None = None, rate_limiter: BaseRateLimiter | None = None, disable_streaming: bool | Literal['tool_calling'] = False, client: Any = None, async_client: Any = None, root_client: Any = None, root_async_client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float | None = None, model_kwargs: dict[str, Any] = <factory>, api_key: SecretStr | None = <factory>, base_url: str | None = None, organization: str | None = None, openai_proxy: str | None = <factory>, timeout: float | tuple[float, float] | Any | None = None, stream_usage: bool = False, max_retries: int | None = None, presence_penalty: float | None = None, frequency_penalty: float | None = None, seed: int | None = None, logprobs: bool | None = None, top_logprobs: int | None = None, logit_bias: dict[int, int] | None = None, streaming: bool = False, n: int | None = None, top_p: float | None = None, max_completion_tokens: int | None = None, reasoning_effort: str | None = None, reasoning: dict[str, Any] | None = None, tiktoken_model_name: str | None = None, default_headers: Mapping[str, str] | None = None, default_query: Mapping[str, object] | None = None, http_client: Any | None = None, http_async_client: Any | None = None, stop_sequences: list[str] | str | None = None, extra_body: Mapping[str, Any] | None = None, include_response_headers: bool = False, disabled_params: dict[str, Any] | None = None, include: list[str] | None = None, service_tier: str | None = None, store: bool | None = None, truncation: str | None = None, use_previous_response_id: bool = False, use_responses_api: bool | None = None, output_version: Literal['v0', 'responses/v1'] = 'v0', **kwargs)
Custom LangChain OpenAI client that integrates with BorgLLM configuration.
Automatically handles:
- Configuration updates for each call
- 429 error detection and notification to BorgLLM
- Automatic retry with updated configuration
- __init__(borgllm_config: BorgLLM, provider_name: str | None = None, overrides: Dict[str, Any] | None = None, **kwargs)
Initialize the LangChain client with BorgLLM integration.
- Parameters:
borgllm_config – The BorgLLM configuration instance
provider_name – Optional name of the provider to use. If None, BorgLLM will use its default logic.
overrides – Optional dictionary of settings to override.
**kwargs – Additional arguments passed to ChatOpenAI
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow', 'populate_by_name': True, 'protected_namespaces': (), 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model; a dictionary conforming to pydantic.config.ConfigDict.
- property provider_name: str
Get the provider name.
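Clients are normally obtained through create_llm (documented below), but a direct-construction sketch, assuming an existing BorgLLM instance whose configuration defines an "openai" provider, looks like this:

```python
from borgllm.borgllm import BorgLLM
from borgllm.langchain import BorgLLMLangChainClient

config = BorgLLM.get_instance(config_path="borg.yaml")

# Extra keyword arguments are forwarded to the underlying ChatOpenAI client.
client = BorgLLMLangChainClient(
    borgllm_config=config,
    provider_name="openai",          # illustrative; must exist in borg.yaml
    overrides={"temperature": 0.2},  # assumed override key for this sketch
)
print(client.provider_name)  # -> "openai"
```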
- borgllm.langchain.create_llm(provider_name: str | None = None, config_file: str = 'borg.yaml', initial_config_data: Dict[str, Any] | None = None, overrides: Dict[str, Any] | None = None, **kwargs) → BorgLLMLangChainClient
Convenience function to create a LangChain LLM client directly.
- Parameters:
provider_name – Optional name of the provider to use. If None, BorgLLM will use its default logic.
config_file – Path to the BorgLLM configuration file.
initial_config_data – Optional initial configuration data as dictionary.
overrides – Optional dictionary of settings to override.
**kwargs – Additional arguments passed to the LangChain client
- Returns:
A LangChain OpenAI client configured with the specified provider
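A usage sketch, again assuming a borg.yaml that defines an "openai" provider; the provider name and override keys are illustrative:

```python
from borgllm.langchain import create_llm

# Build a client bound to a named provider; extra kwargs reach the LangChain client.
llm = create_llm(
    provider_name="openai",
    config_file="borg.yaml",
    overrides={"temperature": 0.2},
)

# The client behaves like any LangChain chat model: on a 429 it notifies
# BorgLLM and retries with an updated configuration.
response = llm.invoke("Say hello in one word.")
print(response.content)
```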
Note
The BorgLLMLangChainClient class is a low-level component and is typically not manually initialized by users. It is primarily used internally by the create_llm convenience function to manage LLM configurations and handle rate limiting.