Dify Integration
Connect Dify to LLM API for AI-powered capabilities
Dify is a low-code platform for building AI-powered applications with workflow automation, RAG pipelines, and agent capabilities. It supports a wide range of model providers and allows workspace administrators to manage API keys and model access centrally.
Dify's Model Providers feature supports OpenAI-compatible APIs, allowing you to add LLM API as a custom provider for all your Dify applications.
Prerequisites
- An LLM API account with an API key
- Dify installed or accessible
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Navigate to the API Keys page and click Create Key
- Copy your new API key immediately; it will only be shown once
- Store the key securely (e.g., in a password manager or a .env file)
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
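Because the gateway speaks the OpenAI wire format, every request is a standard chat-completions call against the endpoint above. A minimal sketch of what such a request looks like, using the endpoint and example model ID from this guide (the API key is a placeholder):

```python
import json

BASE_URL = "https://api.llmapi.ai/v1"  # gateway endpoint from this guide
API_KEY = "sk-your-key"                # placeholder: paste your real key

# An OpenAI-compatible chat completion payload; the gateway routes it
# to the upstream provider encoded in the model ID prefix.
payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello from Dify!"}],
}

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# The request is sent as POST {BASE_URL}/chat/completions, e.g. with
# urllib.request or the official openai Python client.
url = f"{BASE_URL}/chat/completions"
print(url)
print(json.dumps(payload, indent=2))
```

Dify fills in the same three values (model ID, API key, endpoint URL) for you once the provider is configured.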
Add LLM API as a Model Provider in Dify
- Log in to your Dify workspace as an admin or owner.
- Navigate to Settings → Model Providers.
- Find "OpenAI-API-compatible" in the provider list and click to configure.
- Enter the following details:
- Model Name: the model ID you wish to use (e.g., openai/gpt-4o)
- API Key: paste the key you copied from app.llmapi.ai/api-keys
- API Endpoint URL: https://api.llmapi.ai/v1
- Click "Save"; Dify validates the credentials before enabling the provider.
- The model will now be available for selection in your Dify applications.
Test the Integration
Verify that Dify can successfully communicate with LLM API by sending a test request. All requests will now be routed through LLM API.
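If you prefer to verify the credentials outside Dify first, a quick stdlib-only sketch (the `chat` helper name and the placeholder key are illustrative, not part of any SDK):

```python
import json
import urllib.request

BASE_URL = "https://api.llmapi.ai/v1"
API_KEY = "sk-your-key"  # placeholder: paste your real key

def chat(model: str, prompt: str) -> str:
    """Send one chat completion through the LLM API gateway."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Only attempt the request once a real key is configured.
if API_KEY != "sk-your-key":
    print(chat("openai/gpt-4o", "Say hello in one word."))
```

A successful response confirms the key and endpoint are valid; the same values will then work inside Dify.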
You can add multiple model configurations under the same provider. Each can use a different model ID while sharing the same LLM API key and endpoint.
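Conceptually, each additional model entry reuses the same credentials and differs only in its model ID. A sketch of that shape (the non-OpenAI model IDs below are illustrative examples, not a list of guaranteed available models):

```python
# One shared credential pair for every model entry.
SHARED = {
    "api_key": "sk-your-key",               # placeholder
    "endpoint": "https://api.llmapi.ai/v1",  # same gateway for all
}

# Hypothetical model IDs; check the models page for what is available.
model_ids = ["openai/gpt-4o", "anthropic/claude-3-5-sonnet"]

# Each Dify model configuration = a model ID plus the shared values.
entries = [{"model": m, **SHARED} for m in model_ids]
for entry in entries:
    print(entry["model"], "->", entry["endpoint"])
```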
Benefits of Using LLM API with Dify
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.