```python
import litellm

# Set proxy URL for all LiteLLM calls
litellm.api_base = "http://localhost:8080"

# Use with any supported provider
response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
```
Set up proxy URLs for any framework using environment variables:
Environment Variables
```bash
# Development
VIBEKIT_PROXY_URL=http://localhost:8080
OPENAI_BASE_URL=http://localhost:8080/v1
ANTHROPIC_BASE_URL=http://localhost:8080

# Production
VIBEKIT_PROXY_URL=https://your-proxy-domain.com
OPENAI_BASE_URL=https://your-proxy-domain.com/v1
ANTHROPIC_BASE_URL=https://your-proxy-domain.com
```
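For frameworks that don't read these variables automatically, you can resolve the proxy URL yourself at startup. The sketch below is a minimal, hypothetical helper (the `resolve_base_url` function and its precedence order are illustrative, not part of any SDK): it prefers `VIBEKIT_PROXY_URL`, then the provider-specific override, then a default endpoint.

```python
import os

def resolve_base_url(default="https://api.openai.com/v1"):
    """Pick the API base URL from the environment.

    Precedence (illustrative): VibeKit proxy, then the
    provider-specific override, then the provider default.
    """
    return (
        os.environ.get("VIBEKIT_PROXY_URL")
        or os.environ.get("OPENAI_BASE_URL")
        or default
    )

# Example: with VIBEKIT_PROXY_URL set, the proxy wins
os.environ["VIBEKIT_PROXY_URL"] = "http://localhost:8080"
print(resolve_base_url())  # http://localhost:8080
```

You would then pass the resolved URL to your client's `base_url` (or equivalent) parameter, keeping development and production configuration identical apart from the environment.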