✨ Key Features

🔄 Unified Interface: A single API for multiple LLM providers, including OpenAI, Anthropic, Google, Groq, and more.

🔑 API Key Rotation: Automatic round-robin rotation across multiple API keys, with built-in rate limit handling.

🧠 LangChain Compatible: Seamless integration with the LangChain framework as a drop-in replacement for ChatOpenAI (see the sketch after this list).

🛡️ Provider Fallback: Automatic switching to alternative providers or models on failures or rate limits.

Zero Configuration: Works out of the box with environment variables, or configure via borg.yml for advanced features.

🔍 Virtual Providers: Create custom fallback strategies and merge multiple providers seamlessly.
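
Because create_llm returns a standard LangChain chat model (see Quick Start below), it drops into any LangChain pipeline unchanged. A minimal sketch of that drop-in behavior using LangChain's LCEL chaining syntax; the chain itself is illustrative and not part of BorgLLM:

from borgllm import create_llm
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# BorgLLM hands back a ChatOpenAI-compatible instance...
llm = create_llm("openai:gpt-4o", temperature=0.7)

# ...so it composes with LangChain primitives like any other chat model.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "BorgLLM unifies access to many LLM providers."}))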

🔌 Supported Providers & Models

BorgLLM supports 17+ LLM providers out of the box.

Any model identifier from these providers will also work out of the box, even if it is not explicitly listed in the documentation.

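For instance, a model released after this page was written can still be requested by its raw identifier. The model name below is a hypothetical placeholder, used only to show that unlisted names are passed through to the provider:

from borgllm import create_llm

# "some-future-model" is not a real model; any identifier the provider
# accepts is forwarded as-is.
llm = create_llm("groq:some-future-model")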

🚀 Quick Start

Installation

pip install borgllm

Basic Usage - Zero Configuration

from borgllm import create_llm
from langchain_core.messages import HumanMessage

# Works with any provider - just specify provider:model
llm = create_llm("openai:gpt-4o", temperature=0.7)

# It's a standard LangChain ChatOpenAI instance
response = llm.invoke([HumanMessage(content="Hello! How are you?")])
print(response.content)

# Try different providers seamlessly
anthropic_llm = create_llm("anthropic:claude-sonnet-4", temperature=0.7)
google_llm = create_llm("google:gemini-2.5-pro", temperature=0.7)
groq_llm = create_llm("groq:llama-3.3-70b-versatile", temperature=0.7)
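
Zero configuration works by reading provider credentials from environment variables. A minimal sketch, assuming the conventional <PROVIDER>_API_KEY naming; check the configuration docs for the exact variable each provider expects:

import os

# Assumed conventional names; the exact variables are provider-specific.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["GROQ_API_KEY"] = "gsk-..."

from borgllm import create_llm

llm = create_llm("openai:gpt-4o")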

Advanced Features with borg.yml

# borg.yml - Advanced configuration
llm:
  providers:
    - name: "local_qwen"
      base_url: "http://localhost:1234/v1"
      model: "qwen/qwen3-8b"
      temperature: 0.7
      max_tokens: 8192

  virtual:
    - name: "smart-fallback"
      upstreams:
        # Try fast/cheap provider first
        - name: "groq:llama-3.3-70b-versatile"
        # Fall back to more capable model if needed
        - name: "openai:gpt-4o"
        # Use local model as final fallback
        - name: "local_qwen"
  
  default_model: "smart-fallback"

Using Virtual Providers

from borgllm import create_llm
from langchain_core.messages import HumanMessage

# With borg.yml configuration above
llm = create_llm()  # Uses default "smart-fallback" virtual provider

# Automatically handles:
# - Rate limits (switches to next provider)
# - Context window limits (uses appropriate model)
# - Provider failures (seamless fallback)
response = llm.invoke([HumanMessage(content="Write a detailed analysis...")])
print(response.content)
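
Providers defined in borg.yml can also be requested by name instead of relying on the default. A hedged sketch, assuming configured names resolve the same way as "provider:model" strings; verify against the BorgLLM docs:

# Assumption: names from borg.yml are accepted directly by create_llm.
fallback_llm = create_llm("smart-fallback")  # the virtual provider by name
local_llm = create_llm("local_qwen")         # the custom local provider
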
📂 See More Examples

Explore comprehensive examples including custom providers, virtual providers, API key rotation, and advanced configurations.