LangChain
Use Promptly as your LLM backend in LangChain
Integrate Promptly with LangChain to automatically optimize costs on every LLM call. Smart routing, caching, and prompt compression work transparently.
Setup Guide
Install LangChain
Install the LangChain OpenAI integration package.
pip install langchain-openai
Configure the Client
Point LangChain's OpenAI client to Promptly's endpoint. All chains, agents, and tools work automatically.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-promptly-...",
    base_url="https://api.getpromptly.in/v1",
)
response = llm.invoke("Explain quantum computing simply.")
print(response.content)
Use with Chains & Agents
Every chain and agent call is automatically optimized: prompts are compressed, similar queries are served from the cache, and simple requests are routed to cheaper models.
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm
result = chain.invoke({"text": "Your long document here..."})
print(result.content)
Why Use Promptly with LangChain?
- Zero code changes to existing LangChain pipelines
- Automatic prompt compression before every LLM call
- Semantic caching across chain executions
- Smart routing works with LangChain's model parameter
- Full analytics for every chain step in Promptly dashboard
Start optimizing LangChain costs
Sign up, grab your API key, and change your base URL. Under 2 minutes.
Other Integrations
Promptly Python SDK
Official Python SDK - the fastest way to get started
Promptly Node.js SDK
Official Node.js SDK - TypeScript-first, zero config
Vercel AI SDK
Optimize AI SDK streaming responses with Promptly
LlamaIndex
Optimize your RAG pipeline costs with Promptly
OpenAI Python SDK
Drop-in optimization for the official OpenAI Python library