Pathway Integration
Connect Pathway to LLM API for AI-powered capabilities
Pathway is a real-time data processing framework with built-in LLM integration for building live AI pipelines. It processes streaming data and connects to vector stores and LLMs for real-time RAG.
Pathway's LLM xpack supports OpenAI-compatible endpoints.
Prerequisites
- An LLM API account with an API key
- Pathway installed or accessible
Setup
Get Your LLM API Key
- Log in to your LLM API dashboard
- Click Create Key
- Copy your new API key immediately; it is only shown once
- Store the key securely (e.g., in a password manager or a .env file)
LLM API is an OpenAI-compatible gateway that gives you access to dozens of AI models through a single API key and endpoint.
Configure LLM API in Pathway
- Install Pathway:

```shell
pip install "pathway[all]"
```

- Configure in code:

```python
import pathway as pw
from pathway.xpacks.llm import llms

model = llms.OpenAIChat(
    model="openai/gpt-4o",
    api_key="your-llm-api-key-here",  # replace with your LLM API key
    api_base="https://api.llmapi.ai/v1",
)
```

- Use the model in your Pathway pipeline.
Test the Integration
Verify that Pathway can successfully communicate with LLM API by sending a test request. All requests will now be routed through LLM API.
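One way to sanity-check the gateway itself, independent of Pathway, is to build a chat-completions request against the same endpoint used in the configuration above. This sketch only constructs the request; sending it (and the exact response shape) depends on your key being valid:

```python
import json
import urllib.request

# Endpoint and placeholder key from the configuration step above.
API_BASE = "https://api.llmapi.ai/v1"
API_KEY = "your-llm-api-key-here"

payload = json.dumps({
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Say hello."}],
}).encode()

request = urllib.request.Request(
    f"{API_BASE}/chat/completions",
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# With a valid key, urllib.request.urlopen(request) returns a JSON
# chat completion; an HTTP 401 means the key was not accepted.
```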
Pathway's real-time processing combined with LLM API creates live-updating RAG applications.
Benefits of Using LLM API with Pathway
- Multi-Provider Access: Use models from OpenAI, Anthropic, Google, and more through a single API
- Cost Control: Track and limit your AI spending with detailed usage analytics
- Unified Billing: One account for all providers instead of managing multiple API keys
- Caching: Reduce costs with response caching for repeated requests
View all available models on the models page.