Vercel AI SDK

Optimize AI SDK streaming responses with Promptly

Use Promptly with the Vercel AI SDK to reduce costs on streaming LLM responses in Next.js, SvelteKit, and other frameworks. It is a drop-in replacement with full streaming support.


Setup Guide

1. Install Dependencies

Install the Vercel AI SDK and the OpenAI provider:

```bash
npm install ai @ai-sdk/openai
```
2. Configure the Provider

Create a custom OpenAI provider that points to Promptly, and reuse it across your app:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

// Read the key from the environment rather than hardcoding it.
const promptly = createOpenAI({
  apiKey: process.env.PROMPTLY_API_KEY, // sk-promptly-...
  baseURL: "https://api.getpromptly.in/v1",
});

export default promptly;
```
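As a quick sanity check, you can call the provider outside a route handler, for example with `generateText` from the AI SDK. A minimal sketch, assuming your key is exported as `PROMPTLY_API_KEY`:

```typescript
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

// Standalone script to verify the Promptly base URL is reachable.
const promptly = createOpenAI({
  apiKey: process.env.PROMPTLY_API_KEY,
  baseURL: "https://api.getpromptly.in/v1",
});

const { text } = await generateText({
  model: promptly("gpt-4o"),
  prompt: "Reply with a single word.",
});

console.log(text);
```

If this prints a response, the provider is wired up correctly and you can move on to your API routes.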
3. Use in API Routes

Streaming works out of the box. Every streamed response is optimized: prompts are compressed, long conversations are pruned, and simple queries are routed to cheaper models.

```typescript
import { streamText } from "ai";
import promptly from "@/lib/promptly";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: promptly("gpt-4o"),
    messages,
  });

  // Stream tokens back to the client as they arrive.
  return result.toDataStreamResponse();
}
```

Why Use Promptly with Vercel AI SDK?

  • Full streaming support with zero overhead
  • Works with useChat, useCompletion, and all AI SDK hooks
  • Automatic context pruning for chat conversations
  • Cost analytics per API route in Promptly dashboard
  • Compatible with Next.js, SvelteKit, Nuxt, and SolidStart
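Because the route handler above returns a standard data stream, the stock `useChat` hook consumes it unchanged. A minimal client-component sketch, assuming the route is mounted at `/api/chat` (depending on your AI SDK version, the hook lives in `@ai-sdk/react` or `ai/react`):

```typescript
"use client";

import { useChat } from "@ai-sdk/react";

// Minimal chat UI; useChat streams tokens from the /api/chat route.
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat",
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```

No client-side changes are needed when you switch the base URL to Promptly; optimization happens server-side before the stream reaches the browser.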

Start optimizing Vercel AI SDK costs

Sign up, grab your API key, and change your base URL. Setup takes under two minutes.