PrivateGPT Integration
Connect PrivateGPT to LLM API for AI-powered capabilities
PrivateGPT is a production-ready AI project that allows you to interact with your documents using LLMs while keeping everything 100% private. It supports both local models and cloud API providers.
PrivateGPT's settings.yaml supports OpenAI-compatible endpoints for cloud model access.
Prerequisites
- An LLM API account with an API key
- PrivateGPT installed or accessible
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Click Create Key
- Copy your new API key immediately — it will only be shown once
- Store the key securely (e.g., in a password manager or a .env file)
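The last step can be sketched in a few lines of Python: write the key to a .env file, then load it into the environment before starting PrivateGPT. The key value and file handling here are illustrative placeholders; many projects use the python-dotenv package for this instead.

```python
import os

# Placeholder key for illustration; substitute the key you copied above.
KEY = "sk-example-key"

# Persist the key to a local .env file (keep this file out of version control).
with open(".env", "a") as f:
    f.write(f"LLM_API_KEY={KEY}\n")

# Minimal loader for KEY=value lines; python-dotenv does this more robustly.
with open(".env") as f:
    for line in f:
        name, _, value = line.strip().partition("=")
        if name:
            os.environ.setdefault(name, value)

print(os.environ["LLM_API_KEY"])
```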
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
Configure LLM API in PrivateGPT
- Open the settings.yaml file in your PrivateGPT installation.
- Configure the OpenAI provider:
```yaml
llm:
  mode: openai

openai:
  api_key: your-llm-api-key-here
  api_base: https://api.llmapi.ai/v1
  model: openai/gpt-4o
```

- Save the file and restart PrivateGPT.
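Before restarting, it can help to sanity-check that the expected fields made it into settings.yaml. The snippet below is a rough sketch, not PrivateGPT's own loader: it writes a sample config matching the one above (with a placeholder key) and does a naive presence check on the field names.

```python
from pathlib import Path

# Sample settings.yaml mirroring the configuration above (placeholder key).
SAMPLE = """\
llm:
  mode: openai

openai:
  api_key: your-llm-api-key-here
  api_base: https://api.llmapi.ai/v1
  model: openai/gpt-4o
"""
Path("settings.yaml").write_text(SAMPLE)

# Naive check that each field PrivateGPT's openai mode relies on is present.
REQUIRED = ("mode: openai", "api_key:", "api_base:", "model:")
text = Path("settings.yaml").read_text()
missing = [field for field in REQUIRED if field not in text]
print("missing:", missing)  # → missing: []
```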
Test the Integration
Verify that PrivateGPT can successfully communicate with LLM API by sending a test request. All requests will now be routed through LLM API.
PrivateGPT processes documents locally while using LLM API for generation; your documents stay on your machine.
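To see what such a test request looks like, the sketch below builds the OpenAI-style chat completions call that gets routed through the gateway, using the endpoint and model from the configuration above. It only constructs and prints the request (the key is a placeholder); send it with curl or urllib once your real key is in place.

```python
import json

# Endpoint and model from settings.yaml above; the key is a placeholder.
API_BASE = "https://api.llmapi.ai/v1"
API_KEY = "your-llm-api-key-here"

url = f"{API_BASE}/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Reply with the word OK."}],
}

# Inspect the request before sending it for real.
print(url)
print(json.dumps(payload, indent=2))
```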
Benefits of Using LLM API with PrivateGPT
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.