Zed Integration
Connect Zed to LLM API for AI-powered capabilities
Zed is a high-performance, modern code editor built for speed and collaboration. It includes built-in AI features through its Agent Panel, supporting multiple LLM providers including OpenAI, Anthropic, Google, and any OpenAI-compatible endpoint.
Zed's settings file lets you add custom OpenAI-compatible providers, making it easy to connect LLM API for AI-assisted coding.
Prerequisites
- An LLM API account with an API key
- Zed installed on your machine
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Click Create Key to generate a new API key
- Copy your new API key immediately — it will only be shown once
- Store the key securely (e.g., in a password manager or `.env` file)
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
Configure LLM API in Zed
- Open Zed and access the Agent Panel settings (run the command `agent: open settings`).
- Add the following configuration to your Zed settings file:
```json
{
  "language_models": {
    "openai_compatible": {
      "LLM API": {
        "api_url": "https://api.llmapi.ai/v1",
        "available_models": [
          {
            "name": "openai/gpt-4o",
            "display_name": "GPT-4o via LLM API",
            "max_tokens": 128000
          }
        ]
      }
    }
  }
}
```

- Set the API key as an environment variable:

```shell
export LLM_API_API_KEY="your-llm-api-key-here"
```

- Restart Zed. Your LLM API models will appear in the Agent Panel model dropdown.
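If you want more than one model in the dropdown, you can list additional entries under `available_models`. A sketch, assuming these model identifiers are available through your gateway (the exact names and token limits here are illustrative, not confirmed):

```json
"available_models": [
  {
    "name": "openai/gpt-4o",
    "display_name": "GPT-4o via LLM API",
    "max_tokens": 128000
  },
  {
    "name": "anthropic/claude-3-5-sonnet",
    "display_name": "Claude 3.5 Sonnet via LLM API",
    "max_tokens": 200000
  }
]
```

Each entry becomes a separate option in the Agent Panel, all authenticated with the same key.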
Test the Integration
Verify that Zed can communicate with LLM API by sending a prompt from the Agent Panel. All Agent Panel requests are now routed through LLM API.
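You can also exercise the same endpoint directly from a terminal, outside of Zed. A minimal sketch using curl, assuming `LLM_API_API_KEY` is set in your shell and the model name matches the one configured above:

```shell
# Send a one-off chat completion request through the gateway.
# A JSON response indicates the key and endpoint are working.
curl https://api.llmapi.ai/v1/chat/completions \
  -H "Authorization: Bearer $LLM_API_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

If this request succeeds but Zed still shows errors, the problem is in the Zed settings rather than the key or endpoint.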
The environment variable name is derived from the provider name: spaces become underscores, the result is uppercased, and `_API_KEY` is appended.
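The derivation rule can be sketched in shell, using the provider name `"LLM API"` from the configuration above:

```shell
# Derive the env var name: spaces -> underscores, uppercase, append _API_KEY.
provider="LLM API"
var_name="$(echo "$provider" | tr ' ' '_' | tr '[:lower:]' '[:upper:]')_API_KEY"
echo "$var_name"  # -> LLM_API_API_KEY
```

So a provider named `My Gateway` would read its key from `MY_GATEWAY_API_KEY`.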
Benefits of Using LLM API with Zed
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.