Aider Integration
Connect Aider to LLM API for AI-powered capabilities
Aider is an AI pair programming tool that works in your terminal. It lets you edit code in your local git repos by chatting with large language models, supporting multiple providers including OpenAI, Anthropic, and any OpenAI-compatible API.
Aider supports custom OpenAI-compatible endpoints through environment variables and configuration files, making it simple to connect to LLM API.
Prerequisites
- An LLM API account with an API key
- Aider installed (e.g., via pip install aider-chat)
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Click Create Key to Start
- Copy your new API key immediately — it will only be shown once
- Store the key securely (e.g., in a password manager or .env file)
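For example, a minimal .env file might look like the following (the key value is a placeholder; the variable name is the OpenAI-compatible convention Aider reads):

```bash
# .env ; keep this file out of version control
OPENAI_API_KEY=your-llm-api-key-here
```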
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
Configure LLM API in Aider
Option A --- Environment variables (recommended)
- Set the following environment variables in your shell or .env file:

```bash
export OPENAI_API_KEY="your-llm-api-key-here"
export OPENAI_API_BASE="https://api.llmapi.ai/v1"
```

- Launch Aider with the model flag:

```bash
aider --model openai/gpt-4o
```

Option B --- YAML config file
- Add to your .aider.conf.yml:

```yaml
openai-api-key: your-llm-api-key-here
openai-api-base: https://api.llmapi.ai/v1
model: openai/gpt-4o
```

Option C --- Command line flags
```bash
aider --openai-api-key your-llm-api-key-here \
  --openai-api-base https://api.llmapi.ai/v1 \
  --model openai/gpt-4o
```

Test the Integration
Verify that Aider can communicate with LLM API by starting a session and sending a simple request, for example asking it to summarize a file in your repo.
Setting OPENAI_API_BASE overrides the default OpenAI endpoint, so Aider sends all requests to LLM API instead.
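As a quick sanity check before launching Aider, a short script can confirm that the environment variables from Option A are set and point at the expected gateway. The helper below is purely illustrative and not part of Aider:

```python
import os

def check_llm_api_env() -> list:
    """Return a list of configuration problems (empty means ready)."""
    problems = []
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    base = os.environ.get("OPENAI_API_BASE", "")
    if not base.startswith("https://"):
        problems.append("OPENAI_API_BASE should be an https:// URL")
    elif "api.llmapi.ai" not in base:
        problems.append("OPENAI_API_BASE does not point at LLM API")
    return problems

if __name__ == "__main__":
    issues = check_llm_api_env()
    if issues:
        for issue in issues:
            print("config problem:", issue)
    else:
        print("environment looks ready; run: aider --model openai/gpt-4o")
```

If the script reports no problems, launching aider in your repo should route every model request through LLM API.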
Benefits of Using LLM API with Aider
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.