Promptly Python SDK
Official Python SDK - the fastest way to get started
Install the Promptly SDK and start saving on LLM costs in 30 seconds. Extends the OpenAI client with smart defaults - no base_url needed.
Promptly Node.js SDK
Official Node.js SDK - TypeScript-first, zero config
Install the Promptly SDK for Node.js and start saving on LLM costs immediately. Extends the OpenAI client with Promptly defaults.
LangChain
Use Promptly as your LLM backend in LangChain
Integrate Promptly with LangChain to automatically optimize costs on every LLM call. Smart routing, caching, and prompt compression work transparently.
Vercel AI SDK
Optimize AI SDK streaming responses with Promptly
Use Promptly with Vercel AI SDK to reduce costs on streaming LLM responses in Next.js, SvelteKit, and other frameworks. Drop-in replacement with full streaming support.
LlamaIndex
Optimize your RAG pipeline costs with Promptly
Integrate Promptly with LlamaIndex to reduce LLM costs in RAG applications. Especially effective for long-context queries where context pruning and caching deliver maximum savings.
OpenAI Python SDK
Drop-in optimization for the official OpenAI Python library
Add Promptly to any OpenAI Python SDK project in 2 lines. Change base_url and api_key - everything else stays the same.