Overview

VibeKit Proxy works with any framework or library that makes HTTP requests to AI providers. Point the framework's base-URL setting at the proxy; everything else in your setup stays the same.
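
For example, with the official OpenAI and Anthropic Python SDKs, the only change is the base_url argument (API keys are read from the environment as usual):

from openai import OpenAI
from anthropic import Anthropic

# OpenAI SDK: point base_url at the proxy (note the /v1 path)
openai_client = OpenAI(base_url="http://localhost:8080/v1")

# Anthropic SDK: the proxy root, without /v1
anthropic_client = Anthropic(base_url="http://localhost:8080")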

LangChain

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# OpenAI models
openai_llm = ChatOpenAI(
    model="gpt-4",
    base_url="http://localhost:8080/v1"  # Add this line
)

# Anthropic models
anthropic_llm = ChatAnthropic(
    model="claude-3-5-sonnet-20241022",
    base_url="http://localhost:8080"  # Add this line
)
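
Both wrappers are then used exactly as before, for example:

# Requests flow through the proxy transparently
reply = openai_llm.invoke("Hello")
print(reply.content)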

LlamaIndex

from llama_index.llms.openai import OpenAI
from llama_index.llms.anthropic import Anthropic

# OpenAI models
openai_llm = OpenAI(
    model="gpt-4",
    api_base="http://localhost:8080/v1"  # Add this line
)

# Anthropic models (note: this integration uses base_url, not api_base)
anthropic_llm = Anthropic(
    model="claude-3-5-sonnet-20241022",
    base_url="http://localhost:8080"  # Add this line
)
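
The standard LlamaIndex LLM interface then routes through the proxy:

response = openai_llm.complete("Hello")
print(response.text)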

Haystack

from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

generator = OpenAIGenerator(
    model="gpt-4",
    api_key=Secret.from_token("your-api-key"),
    api_base_url="http://localhost:8080/v1"  # Add this line
)
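
The generator returns a dict with a "replies" list:

result = generator.run(prompt="Hello")
print(result["replies"][0])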

CrewAI

from crewai import Agent, LLM

# OpenAI models
openai_llm = LLM(
    model="openai/gpt-4",
    base_url="http://localhost:8080/v1"  # Add this line
)

# Anthropic models
anthropic_llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022", 
    base_url="http://localhost:8080"  # Add this line
)

agent = Agent(
    role="Customer Service Agent",
    goal="Help customers securely",
    llm=openai_llm
)
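
A minimal end-to-end run with the standard Task and Crew objects:

from crewai import Task, Crew

task = Task(
    description="Answer the customer's question",
    expected_output="A short, helpful reply",
    agent=agent
)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()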

AutoGen

from autogen import ConversableAgent

# OpenAI configuration
openai_config = {
    "model": "gpt-4",
    "api_key": "your-api-key",
    "base_url": "http://localhost:8080/v1"  # Add this line
}

agent = ConversableAgent(
    name="assistant",
    llm_config={"config_list": [openai_config]}
)
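
A single reply can be generated directly, without starting a chat loop:

reply = agent.generate_reply(
    messages=[{"role": "user", "content": "Hello"}]
)
print(reply)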

Guidance

import guidance

# Configure OpenAI with proxy (legacy guidance < 0.1 API;
# newer releases moved to guidance.models)
guidance.llm = guidance.llms.OpenAI(
    model="gpt-4",
    api_base="http://localhost:8080/v1"  # Add this line
)

Instructor

import instructor
from openai import OpenAI

# Patch OpenAI client with proxy
client = instructor.from_openai(
    OpenAI(base_url="http://localhost:8080/v1")  # Add this line
)
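
Structured extraction then works unchanged; a minimal sketch with a hypothetical UserInfo model:

from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

user = client.chat.completions.create(
    model="gpt-4",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "Jane is 31 years old."}]
)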

LiteLLM

import litellm

# Pass the proxy URL per call; OpenAI-compatible
# endpoints are served under the /v1 path
response = litellm.completion(
    model="gpt-4",
    api_base="http://localhost:8080/v1",  # Add this line
    messages=[{"role": "user", "content": "Hello"}]
)
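
A global default can also be set with litellm.api_base, but because the OpenAI path includes /v1 and the Anthropic path does not, per-call overrides are safer. For Anthropic models:

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    api_base="http://localhost:8080",  # Anthropic endpoints use the proxy root
    messages=[{"role": "user", "content": "Hello"}]
)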

Generic HTTP Clients

For any framework that uses HTTP clients directly:

import requests

# Direct HTTP requests to proxy
response = requests.post(
    "http://localhost:8080/v1/chat/completions",  # Use proxy URL
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}]
    }
)
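
Anthropic-style requests work the same way, assuming the proxy forwards the Messages API unchanged; note the different path and headers:

# Anthropic Messages API through the proxy (base URL has no /v1
# prefix, but the Messages endpoint itself is /v1/messages)
response = requests.post(
    "http://localhost:8080/v1/messages",
    headers={
        "x-api-key": "your-api-key",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json"
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello"}]
    }
)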

Environment Configuration

Set up proxy URLs for any framework using environment variables:

# Development
VIBEKIT_PROXY_URL=http://localhost:8080
OPENAI_BASE_URL=http://localhost:8080/v1
ANTHROPIC_BASE_URL=http://localhost:8080

# Production  
VIBEKIT_PROXY_URL=https://your-proxy-domain.com
OPENAI_BASE_URL=https://your-proxy-domain.com/v1
ANTHROPIC_BASE_URL=https://your-proxy-domain.com
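
The official OpenAI and Anthropic Python SDKs read OPENAI_BASE_URL and ANTHROPIC_BASE_URL automatically, so no code changes are needed once the variables are set:

from openai import OpenAI
from anthropic import Anthropic

# No explicit base_url: the SDKs pick up
# OPENAI_BASE_URL and ANTHROPIC_BASE_URL from the environment
openai_client = OpenAI()
anthropic_client = Anthropic()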

Integration Pattern

Most AI frameworks follow this pattern (a minimal helper sketch follows the lists below):
  1. Find the configuration option for the base URL, API base, or endpoint
  2. Replace the default URL with your VibeKit Proxy URL
  3. Keep everything else the same: API keys, models, parameters
Common configuration parameter names:
  • base_url, baseURL, base_URL
  • api_base, api_base_url, apiBase
  • endpoint, host, url
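
As a sketch of that pattern, a small helper can derive provider-specific base URLs from VIBEKIT_PROXY_URL; the helper name and layout are illustrative, not part of VibeKit:

import os

def proxy_base_url(provider: str) -> str:
    """Hypothetical helper: build a provider-specific base URL
    from the VIBEKIT_PROXY_URL environment variable."""
    root = os.environ.get("VIBEKIT_PROXY_URL", "http://localhost:8080")
    # OpenAI-compatible APIs expect the /v1 path; Anthropic uses the root
    return f"{root}/v1" if provider == "openai" else root

openai_base = proxy_base_url("openai")        # http://localhost:8080/v1
anthropic_base = proxy_base_url("anthropic")  # http://localhost:8080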