Chub AI Integration
Connect Chub AI to LLM API for AI-powered capabilities
Chub AI is a platform for discovering, sharing, and chatting with AI characters. It offers an extensive library of community-created character cards and supports bring-your-own-API-key connections so users can power their chats with their preferred AI models.
Chub AI's API Connections feature supports OpenAI-compatible endpoints, letting you use LLM API to access any model through a single key.
Prerequisites
- An LLM API account with an API key
- Chub AI installed or accessible
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Click Create Key to generate a new key
- Copy your new API key immediately — it will only be shown once
- Store the key securely (e.g., in a password manager or a .env file)
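If you keep the key in a .env file, your tooling can read it at startup instead of hard-coding it. A minimal sketch of loading the key with only the Python standard library (the file name, variable name, and key value are illustrative assumptions):

```python
import os

def load_env(path=".env"):
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Example: write a sample .env and read the key back.
with open(".env", "w") as f:
    f.write("# local secrets\nLLM_API_KEY=sk-example-123\n")

api_key = load_env()["LLM_API_KEY"]
print(api_key)  # sk-example-123
```

Libraries such as python-dotenv offer the same behavior with more edge-case handling; the point is simply that the key lives outside your source code.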
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
Configure LLM API in Chub AI
- Log in to your Chub AI account at chub.ai.
- Navigate to Settings or API Connections.
- Select "OpenAI" or "OpenAI Compatible" as the API type.
- Enter the following details:
- API Key: paste the key you copied from app.llmapi.ai/api-keys
- Reverse Proxy / Base URL: https://api.llmapi.ai/v1
- Enter the model ID you want to use (e.g., openai/gpt-4o, anthropic/claude-3-5-sonnet).
- Click "Save" to apply your settings.
- Start chatting; all requests will now go through LLM API.
Test the Integration
Send a short test message in any chat to verify that Chub AI can reach LLM API. A successful reply confirms that requests are being routed through the gateway.
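You can also verify the endpoint outside Chub AI. A sketch of the OpenAI-style chat completion request that the gateway expects, built with the Python standard library; the endpoint path, model ID, and key are assumptions based on the settings above:

```python
import json
import urllib.request

BASE_URL = "https://api.llmapi.ai/v1"  # gateway endpoint from the setup above

def build_chat_request(api_key, model, user_message):
    """Construct an OpenAI-compatible chat completion request (not yet sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("sk-example-123", "openai/gpt-4o", "Hello!")
print(req.full_url)  # https://api.llmapi.ai/v1/chat/completions

# To actually send the test request (requires a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the sent request returns a normal chat completion JSON, the same key and base URL will work when entered in Chub AI.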
LLM API works as a drop-in replacement for OpenAI's API. Any Chub AI feature that supports OpenAI will work seamlessly with LLM API.
Benefits of Using LLM API with Chub AI
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.