AnythingLLM Integration
Connect AnythingLLM to LLM API for AI-powered capabilities
AnythingLLM is a full-stack desktop and Docker application for building private AI assistants. It supports document ingestion, vector search, and chat capabilities with multiple LLM providers --- all running locally or connecting to cloud APIs.
AnythingLLM's provider configuration supports custom OpenAI-compatible endpoints, making it easy to connect to LLM API.
Prerequisites
- An LLM API account with an API key
- AnythingLLM installed or accessible
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Click Create Key to generate a new key
- Copy your new API key immediately — it will only be shown once
- Store the key securely (e.g., in a password manager or a .env file)
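As a sketch of the last step, you can load the key from an environment variable rather than hardcoding it. The variable name `LLM_API_KEY` and the helper functions below are illustrative, not part of any official tooling:

```python
import os

def load_llm_api_key(env_var: str = "LLM_API_KEY") -> str:
    """Read the API key from the environment; fail fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} to your LLM API key")
    return key

def mask_key(key: str) -> str:
    """Return a masked form safe for logs, e.g. 'sk-e...1234'."""
    return key[:4] + "..." + key[-4:] if len(key) > 8 else "****"

# Example usage (assumes LLM_API_KEY is exported in your shell):
# print(mask_key(load_llm_api_key()))
```

Keeping the key out of source files means it never lands in version control, and the masked form lets you confirm which key is loaded without exposing it in logs.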
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
Configure LLM API in AnythingLLM
- Open AnythingLLM (desktop app or web interface).
- Navigate to Settings → LLM Preference.
- Select "Generic OpenAI" as the LLM provider.
- Enter the following details:
- Base URL: https://api.llmapi.ai/v1
- API Key: paste the key you copied from app.llmapi.ai/api-keys
- Model: the model ID you wish to use (e.g., openai/gpt-4o)
- Set the token limit according to your chosen model.
- Click "Save changes" to apply.
- Start a new workspace chat --- all AI requests will now go through LLM API.
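For reference, the settings entered above can be mirrored as a plain data structure, which is handy when documenting or scripting the same configuration. The dictionary name and the token-limit value are illustrative; set the limit to match your chosen model's context window:

```python
# Mirror of the AnythingLLM "Generic OpenAI" settings described above.
# The dict name and token_limit value are examples, not an official schema.
ANYTHINGLLM_LLM_SETTINGS = {
    "provider": "generic-openai",
    "base_url": "https://api.llmapi.ai/v1",
    "api_key": "<your key from app.llmapi.ai/api-keys>",
    "model": "openai/gpt-4o",  # any model ID exposed by LLM API
    "token_limit": 128000,     # match your chosen model's context window
}
```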
Test the Integration
Verify that AnythingLLM can communicate with LLM API by sending a short chat message in a workspace and confirming you receive a response.
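The same check can be reproduced outside AnythingLLM with a raw HTTP request against the endpoint configured above. This is a minimal sketch using only the standard library; it builds the request but only sends it if you uncomment the final lines with a valid key:

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str = "openai/gpt-4o") -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completion request to LLM API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Reply with the single word: pong"}],
    }
    return urllib.request.Request(
        "https://api.llmapi.ai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send the request (requires a valid key and network access):
# with urllib.request.urlopen(build_chat_request("sk-...")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this request succeeds from the command line but AnythingLLM still fails, the problem is in the AnythingLLM settings (base URL, key, or model ID) rather than the API itself.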
AnythingLLM's "Generic OpenAI" provider works with any OpenAI-compatible API. You can combine LLM API with AnythingLLM's local document processing for a powerful private AI assistant.
Benefits of Using LLM API with AnythingLLM
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.