OpenHands Integration
Connect OpenHands to LLM API for AI-powered capabilities
OpenHands (formerly OpenDevin) is an open-source AI software development agent that can write code, run commands, browse the web, and interact with APIs. It provides a powerful coding assistant that can tackle complex multi-step development tasks autonomously.
OpenHands supports custom LLM providers through its settings, allowing you to connect LLM API as your AI backend.
Prerequisites
- An LLM API account with an API key
- OpenHands installed or accessible
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Go to the API Keys page and click Create Key
- Copy your new API key immediately — it will only be shown once
- Store the key securely (e.g., in a password manager or `.env` file)
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
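Because the gateway is OpenAI-compatible, any client that speaks the Chat Completions wire format can talk to it. The sketch below shows what such a request looks like when assembled by hand; the `/chat/completions` path is the standard OpenAI-compatible route and is assumed here, and the helper function is illustrative rather than part of any SDK:

```python
import json

BASE_URL = "https://api.llmapi.ai/v1"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Assemble the URL, headers, and body for an OpenAI-style
    chat-completions call through the gateway. (Illustrative helper,
    not part of an SDK.)"""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, headers, body

url, headers, body = build_chat_request("your-llm-api-key-here", "openai/gpt-4o", "Hello!")
print(url)            # https://api.llmapi.ai/v1/chat/completions
print(body["model"])  # openai/gpt-4o
```

Switching providers is just a matter of changing the model string (e.g., to an Anthropic or Google model ID); the key, endpoint, and request shape stay the same.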
Configure LLM API in OpenHands
- Open OpenHands in your browser.
- Click the Settings icon (gear) in the interface.
- Navigate to the "LLM Settings" section.
- In the "LLM Provider" or "Model" dropdown, select "Custom" or enter a custom model string.
- Fill in the following fields:
- API Key: paste the key you copied from app.llmapi.ai/api-keys
- Base URL: https://api.llmapi.ai/v1
- Model: the model ID you wish to use (e.g., openai/gpt-4o)
- Click "Save" to apply the settings.
Alternatively, configure OpenHands through environment variables before launching it:

export LLM_API_KEY="your-llm-api-key-here"
export LLM_BASE_URL="https://api.llmapi.ai/v1"
export LLM_MODEL="openai/gpt-4o"

Test the Integration
Verify that OpenHands can reach LLM API by sending a short test prompt. Once configured, all of OpenHands' model requests are routed through LLM API.
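You can also check connectivity outside of OpenHands with a small standalone script. This sketch reads the same environment variables shown above; it only sends a request when `LLM_API_KEY` is actually set, so it is safe to run before credentials are configured:

```python
import json
import os
import urllib.request

# Read the same variables OpenHands uses; fall back to this guide's values.
base_url = os.environ.get("LLM_BASE_URL", "https://api.llmapi.ai/v1")
api_key = os.environ.get("LLM_API_KEY")
model = os.environ.get("LLM_MODEL", "openai/gpt-4o")

payload = json.dumps({
    "model": model,
    "messages": [{"role": "user", "content": "Reply with one word: ready"}],
}).encode()

request = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=payload,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
)

if api_key:
    # Send the test request and print the model's reply.
    with urllib.request.urlopen(request) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print("Gateway reachable, model replied:", reply)
else:
    # No key configured yet: just report what would be sent.
    print("LLM_API_KEY not set; would POST to", request.full_url)
```

A successful run prints a short reply from the configured model; an authentication or connectivity problem surfaces as an HTTP error from the gateway.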
OpenHands reads LLM configuration from both its UI settings and environment variables. Environment variables take precedence if both are set.
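That precedence is effectively a two-level lookup: an environment variable, when present, overrides the value saved in the UI. A minimal sketch of the idea (the settings dict and helper here are illustrative, not OpenHands' actual internals):

```python
import os

def resolve_llm_setting(env_var: str, ui_settings: dict, key: str, default=None):
    """Return the environment variable if set, else the saved UI
    setting, else a default. (Illustrative only -- this is not
    OpenHands' internal code.)"""
    return os.environ.get(env_var) or ui_settings.get(key) or default

# Example: UI has a model saved, but LLM_MODEL in the environment wins.
ui_settings = {"model": "openai/gpt-4o", "base_url": "https://api.llmapi.ai/v1"}
model = resolve_llm_setting("LLM_MODEL", ui_settings, "model")
print(model)
```

In practice this means a stale `export LLM_MODEL=...` in your shell profile can silently override what you pick in the UI, so clear the environment variables if settings changes don't seem to take effect.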
Benefits of Using LLM API with OpenHands
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.