🦜 LangChain

Use Promptly as your LLM backend in LangChain

Integrate Promptly with LangChain to automatically optimize costs on every LLM call. Smart routing, caching, and prompt compression work transparently.


Setup Guide

1. Install LangChain

Install the LangChain OpenAI integration package.

```bash
pip install langchain-openai
```
2. Configure the Client

Point LangChain's OpenAI client to Promptly's endpoint. All chains, agents, and tools work automatically.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="sk-promptly-...",
    base_url="https://api.getpromptly.in/v1",
)

response = llm.invoke("Explain quantum computing simply.")
print(response.content)
```
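The swap works because Promptly's endpoint is OpenAI-compatible: `ChatOpenAI` sends a standard chat-completions request to whatever `base_url` it is given. Purely as an illustration (LangChain builds and sends this for you), the equivalent raw request can be sketched with the standard library; the `/chat/completions` path and payload shape follow the OpenAI API convention:

```python
import json
import urllib.request

# The same request ChatOpenAI constructs under the hood (illustrative sketch).
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Explain quantum computing simply."}
    ],
}
req = urllib.request.Request(
    "https://api.getpromptly.in/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer sk-promptly-...",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; with LangChain you never write this.
```

Because the request shape is unchanged, Promptly can sit between your code and the model without any modification to your chains.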
3. Use with Chains & Agents

Every chain and agent call is automatically optimized: prompts are compressed, similar queries are cached, and simple requests are routed to cheaper models.

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm

result = chain.invoke({"text": "Your long document here..."})
print(result.content)
```
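The `|` in `prompt | llm` is LangChain's LCEL composition operator: the template's output becomes the model's input, and Promptly intercepts the resulting call like any other. Conceptually it behaves like function composition, which this toy sketch illustrates (it is not LangChain's actual Runnable implementation):

```python
class Step:
    """Toy stand-in for a LangChain runnable: wraps a function, composes with |."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (a | b).invoke(x) == other.invoke(a.invoke(x))
        return Step(lambda x: other.invoke(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A fake "prompt template" and a fake "llm" to show data flowing through the pipe.
prompt = Step(lambda d: f"Summarize: {d['text']}")
llm = Step(lambda s: f"[summary of: {s}]")

chain = prompt | llm
print(chain.invoke({"text": "Your long document here..."}))
# → [summary of: Summarize: Your long document here...]
```

Because optimization happens at the HTTP layer, every `llm` step in a composed chain passes through Promptly with no changes to the chain itself.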

Why Use Promptly with LangChain?

  • Zero code changes to existing LangChain pipelines
  • Automatic prompt compression before every LLM call
  • Semantic caching across chain executions
  • Smart routing works with LangChain's model parameter
  • Full analytics for every chain step in Promptly dashboard
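The semantic-caching idea above can be illustrated with a toy sketch: reuse a stored answer when a new query is sufficiently similar to an earlier one. This uses stdlib `difflib` string similarity as a crude stand-in for the embedding-based similarity a real semantic cache would use; it is not Promptly's implementation.

```python
from difflib import SequenceMatcher


class ToySemanticCache:
    """Illustrative only: a real semantic cache compares embeddings, not strings."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (query, answer) pairs

    def get(self, query):
        for cached_query, answer in self.entries:
            similarity = SequenceMatcher(
                None, query.lower(), cached_query.lower()
            ).ratio()
            if similarity >= self.threshold:
                return answer  # cache hit: the LLM call is skipped entirely
        return None

    def put(self, query, answer):
        self.entries.append((query, answer))


cache = ToySemanticCache()
cache.put("What is the capital of France?", "Paris")

# A near-duplicate query is served from the cache instead of hitting the model.
print(cache.get("what is the capital of france"))  # → Paris
print(cache.get("Explain quantum computing"))      # → None (miss)
```

Each cache hit in a chain execution is an LLM call you never pay for, which is where the bulk of the savings across repeated or similar queries comes from.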

Start optimizing LangChain costs

Sign up, grab your API key, and change your base URL. Under 2 minutes.