Vercel AI SDK: Build AI Applications with Next.js
Master the Vercel AI SDK to build streaming AI chatbots, generative UIs, and AI-powered features in Next.js. Complete guide with OpenAI, Anthropic, and tool use examples.
Moshiour Rahman
What is Vercel AI SDK?
Vercel AI SDK is a TypeScript library for building AI-powered applications with React and Next.js. It provides streaming responses, unified provider APIs, and React hooks that make building chat interfaces and generative UIs remarkably simple.
Why Vercel AI SDK?
| Feature | Benefit |
|---|---|
| Streaming | Real-time token-by-token responses |
| Provider Agnostic | OpenAI, Anthropic, Google, Mistral, Ollama |
| React Hooks | useChat, useCompletion, useObject |
| Edge Runtime | Deploy on Vercel Edge for low latency |
| Tool Calling | Function calling with type safety |
| Generative UI | Stream React components |
Architecture
┌─────────────────────────────────────────────────────────────┐
│ Next.js Application │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ React Frontend │ │
│ │ useChat() ──→ StreamableValue ──→ UI Updates │ │
│ └──────────────────────────┬──────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼──────────────────────────┐ │
│ │ API Route Handler │ │
│ │ streamText() / generateText() / streamObject() │ │
│ └──────────────────────────┬──────────────────────────┘ │
└─────────────────────────────┼───────────────────────────────┘
│
┌───────────────────┼───────────────────┐
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ OpenAI │ │ Anthropic│ │ Ollama │
└──────────┘ └──────────┘ └──────────┘
Get the Code
Clone the working example from GitHub:
git clone https://github.com/Moshiour027/techyowls-io-blog-public.git
cd techyowls-io-blog-public/vercel-ai-sdk-nextjs-guide
npm install && npm run dev
Getting Started
Installation
# Create Next.js app
npx create-next-app@latest ai-app --typescript --tailwind --app
cd ai-app
# Install AI SDK
npm install ai @ai-sdk/openai @ai-sdk/anthropic
# Optional providers
npm install @ai-sdk/google @ai-sdk/mistral @ai-sdk/cohere
Environment Setup
# .env.local
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENERATIVE_AI_API_KEY=...
Basic Text Generation
Simple Generation
// app/api/generate/route.ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-4-turbo'),
    prompt,
  });

  return Response.json({ text });
}
Streaming Text
// app/api/stream/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    prompt,
  });

  return result.toDataStreamResponse();
}
Frontend with useCompletion
// app/page.tsx
'use client';
import { useCompletion } from 'ai/react';
export default function CompletionPage() {
const { completion, input, handleInputChange, handleSubmit, isLoading } =
useCompletion({
api: '/api/stream',
});
return (
<div className="max-w-2xl mx-auto p-4">
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Enter a prompt..."
className="flex-1 border rounded px-3 py-2"
/>
<button
type="submit"
disabled={isLoading}
className="bg-blue-500 text-white px-4 py-2 rounded"
>
{isLoading ? 'Generating...' : 'Generate'}
</button>
</form>
{completion && (
<div className="mt-4 p-4 bg-gray-100 rounded whitespace-pre-wrap">
{completion}
</div>
)}
</div>
);
}
Building a Chat Interface
Chat API Route
// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
    system: `You are a helpful AI assistant. Be concise and friendly.`,
  });

  return result.toDataStreamResponse();
}
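Note that `useChat` sends the full message history on every turn, so long conversations grow the prompt without bound. A hedged sketch of one mitigation, trimming to the most recent turns before calling `streamText` (the helper name and the cutoff of 20 are illustrative, not an SDK API):

```typescript
// Hypothetical helper: bound the history passed to the model. The shape
// mirrors the { role, content } messages useChat sends. Keeps a leading
// system message, if present, plus the most recent turns.
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

function trimMessages(messages: ChatMessage[], maxMessages = 20): ChatMessage[] {
  if (messages.length <= maxMessages) return messages;
  const [first, ...rest] = messages;
  const tail = rest.slice(rest.length - (maxMessages - 1));
  return first.role === 'system' ? [first, ...tail] : messages.slice(-maxMessages);
}
```

In the route above you would call `streamText({ ..., messages: trimMessages(messages) })`. Token-based trimming is more precise but needs a tokenizer.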
Chat Frontend with useChat
// app/chat/page.tsx
'use client';
import { useChat } from 'ai/react';
import { useRef, useEffect } from 'react';
export default function ChatPage() {
const { messages, input, handleInputChange, handleSubmit, isLoading, error } =
useChat({
api: '/api/chat',
});
const messagesEndRef = useRef<HTMLDivElement>(null);
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
return (
<div className="flex flex-col h-screen max-w-3xl mx-auto">
{/* Header */}
<header className="border-b p-4">
<h1 className="text-xl font-bold">AI Chat</h1>
</header>
{/* Messages */}
<div className="flex-1 overflow-y-auto p-4 space-y-4">
{messages.length === 0 && (
<p className="text-gray-500 text-center mt-8">
Start a conversation by typing a message below.
</p>
)}
{messages.map((message) => (
<div
key={message.id}
className={`flex ${
message.role === 'user' ? 'justify-end' : 'justify-start'
}`}
>
<div
className={`max-w-[80%] rounded-lg px-4 py-2 ${
message.role === 'user'
? 'bg-blue-500 text-white'
: 'bg-gray-100 text-gray-900'
}`}
>
<p className="whitespace-pre-wrap">{message.content}</p>
</div>
</div>
))}
{isLoading && (
<div className="flex justify-start">
<div className="bg-gray-100 rounded-lg px-4 py-2">
<span className="animate-pulse">Thinking...</span>
</div>
</div>
)}
{error && (
<div className="text-red-500 text-center">
Error: {error.message}
</div>
)}
<div ref={messagesEndRef} />
</div>
{/* Input */}
<form onSubmit={handleSubmit} className="border-t p-4">
<div className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Type your message..."
className="flex-1 border rounded-lg px-4 py-2 focus:outline-none focus:ring-2 focus:ring-blue-500"
/>
<button
type="submit"
disabled={isLoading || !input.trim()}
className="bg-blue-500 text-white px-6 py-2 rounded-lg disabled:opacity-50"
>
Send
</button>
</div>
</form>
</div>
);
}
Multi-Provider Support
Provider Configuration
// lib/ai-providers.ts
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama();

export const providers = {
  openai: {
    gpt4: openai('gpt-4-turbo'),
    gpt35: openai('gpt-3.5-turbo'),
    gpt4o: openai('gpt-4o'),
  },
  anthropic: {
    claude3: anthropic('claude-3-5-sonnet-20241022'),
    claudeHaiku: anthropic('claude-3-haiku-20240307'),
  },
  google: {
    geminiPro: google('gemini-1.5-pro'),
    geminiFlash: google('gemini-1.5-flash'),
  },
  mistral: {
    large: mistral('mistral-large-latest'),
    medium: mistral('mistral-medium-latest'),
  },
  ollama: {
    llama: ollama('llama3.2'),
    codellama: ollama('codellama'),
  },
};
Dynamic Provider Selection
// app/api/chat/route.ts
import { streamText } from 'ai';
import { providers } from '@/lib/ai-providers';

export async function POST(req: Request) {
  const { messages, provider = 'openai', model = 'gpt4' } = await req.json();

  // provider/model arrive as plain strings, so index with a loose cast
  const selectedModel = (providers as Record<string, Record<string, any>>)[
    provider
  ]?.[model];

  if (!selectedModel) {
    return Response.json({ error: 'Invalid provider or model' }, { status: 400 });
  }

  const result = streamText({
    model: selectedModel,
    messages,
  });

  return result.toDataStreamResponse();
}
Provider Selector Component
// components/ProviderSelector.tsx
'use client';
interface ProviderSelectorProps {
provider: string;
model: string;
onProviderChange: (provider: string) => void;
onModelChange: (model: string) => void;
}
const PROVIDERS: Record<string, string[]> = {
  openai: ['gpt4', 'gpt35', 'gpt4o'],
  anthropic: ['claude3', 'claudeHaiku'],
  google: ['geminiPro', 'geminiFlash'],
};
export function ProviderSelector({
provider,
model,
onProviderChange,
onModelChange,
}: ProviderSelectorProps) {
return (
<div className="flex gap-4 p-4 border-b">
<select
value={provider}
onChange={(e) => {
onProviderChange(e.target.value);
onModelChange(PROVIDERS[e.target.value][0]);
}}
className="border rounded px-3 py-1"
>
{Object.keys(PROVIDERS).map((p) => (
<option key={p} value={p}>
{p.charAt(0).toUpperCase() + p.slice(1)}
</option>
))}
</select>
<select
value={model}
onChange={(e) => onModelChange(e.target.value)}
className="border rounded px-3 py-1"
>
{PROVIDERS[provider]?.map((m) => (
<option key={m} value={m}>
{m}
</option>
))}
</select>
</div>
);
}
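When the provider changes, the selector resets the model to the first entry for that provider. Pulling that lookup into small helpers (hypothetical names, mirroring the `PROVIDERS` map above) keeps the reset logic testable and guards against unknown providers:

```typescript
// Hypothetical helpers mirroring the selector's PROVIDERS map.
const PROVIDERS: Record<string, string[]> = {
  openai: ['gpt4', 'gpt35', 'gpt4o'],
  anthropic: ['claude3', 'claudeHaiku'],
  google: ['geminiPro', 'geminiFlash'],
};

// Models available for a provider; empty list for unknown providers
function modelsFor(provider: string): string[] {
  return PROVIDERS[provider] ?? [];
}

// First listed model becomes the default when the provider changes
function defaultModel(provider: string): string | undefined {
  return modelsFor(provider)[0];
}
```

To actually send the selection to the API, pass it through `useChat`'s `body` option, e.g. `useChat({ api: '/api/chat', body: { provider, model } })`.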
Tool Calling (Function Calling)
Defining Tools
// app/api/chat-with-tools/route.ts
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
getWeather: tool({
description: 'Get current weather for a location',
parameters: z.object({
location: z.string().describe('City name'),
unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
}),
execute: async ({ location, unit }) => {
// Simulate weather API call
const temp = Math.round(Math.random() * 30 + 5);
return {
location,
temperature: temp,
unit,
condition: ['sunny', 'cloudy', 'rainy'][Math.floor(Math.random() * 3)],
};
},
}),
searchProducts: tool({
description: 'Search for products in the catalog',
parameters: z.object({
query: z.string().describe('Search query'),
maxResults: z.number().default(5),
}),
execute: async ({ query, maxResults }) => {
// Simulate product search
return {
query,
results: [
{ id: 1, name: `${query} Pro`, price: 99.99 },
{ id: 2, name: `${query} Basic`, price: 49.99 },
].slice(0, maxResults),
};
},
}),
calculateTotal: tool({
description: 'Calculate total with tax',
parameters: z.object({
amount: z.number().describe('Base amount'),
taxRate: z.number().default(0.1).describe('Tax rate (0.1 = 10%)'),
}),
execute: async ({ amount, taxRate }) => {
const tax = amount * taxRate;
return {
subtotal: amount,
tax: tax.toFixed(2),
total: (amount + tax).toFixed(2),
};
},
}),
},
});
return result.toDataStreamResponse();
}
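Under the hood, when the model emits a tool call the SDK looks up the tool by name, validates the arguments against the Zod schema, runs `execute`, and feeds the result back for the next step (up to `maxSteps`). A toy, synchronous sketch of that dispatch (the real SDK handles this for you, with async execution and schema validation):

```typescript
// Toy dispatcher illustrating the SDK's internal tool-call handling:
// look up the tool by name and run execute with the parsed arguments.
type ToolDef = { execute: (args: Record<string, unknown>) => unknown };

function dispatchToolCall(
  tools: Record<string, ToolDef>,
  name: string,
  args: Record<string, unknown>
): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(args);
}
```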
Handling Tool Results in UI
// app/chat-tools/page.tsx
'use client';
import { useChat } from 'ai/react';
export default function ChatWithTools() {
const { messages, input, handleInputChange, handleSubmit, isLoading } =
useChat({
api: '/api/chat-with-tools',
maxSteps: 5, // Allow multiple tool calls
});
return (
<div className="max-w-3xl mx-auto p-4">
<div className="space-y-4 mb-4">
{messages.map((message) => (
<div key={message.id}>
<div
className={`rounded-lg p-4 ${
message.role === 'user' ? 'bg-blue-100' : 'bg-gray-100'
}`}
>
<p className="font-semibold mb-1">
{message.role === 'user' ? 'You' : 'Assistant'}
</p>
<p>{message.content}</p>
{/* Display tool invocations */}
{message.toolInvocations?.map((tool, index) => (
<div key={index} className="mt-2 p-2 bg-white rounded border">
<p className="text-sm font-mono text-gray-600">
Tool: {tool.toolName}
</p>
{tool.state === 'result' && (
<pre className="text-xs mt-1 overflow-x-auto">
{JSON.stringify(tool.result, null, 2)}
</pre>
)}
</div>
))}
</div>
</div>
))}
</div>
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Ask about weather, products, or calculations..."
className="flex-1 border rounded px-3 py-2"
/>
<button
type="submit"
disabled={isLoading}
className="bg-blue-500 text-white px-4 py-2 rounded"
>
Send
</button>
</form>
</div>
);
}
Structured Output with useObject
Schema Definition
// app/api/generate-recipe/route.ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const recipeSchema = z.object({
name: z.string().describe('Recipe name'),
description: z.string().describe('Brief description'),
prepTime: z.number().describe('Preparation time in minutes'),
cookTime: z.number().describe('Cooking time in minutes'),
servings: z.number().describe('Number of servings'),
ingredients: z.array(
z.object({
item: z.string(),
amount: z.string(),
unit: z.string(),
})
),
instructions: z.array(z.string()),
nutritionInfo: z.object({
calories: z.number(),
protein: z.number(),
carbs: z.number(),
fat: z.number(),
}),
});
export async function POST(req: Request) {
const { prompt } = await req.json();
const result = streamObject({
model: openai('gpt-4-turbo'),
schema: recipeSchema,
prompt: `Generate a detailed recipe for: ${prompt}`,
});
return result.toTextStreamResponse();
}
useObject Frontend
// app/recipe/page.tsx
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';
import { useState } from 'react';
const recipeSchema = z.object({
name: z.string(),
description: z.string(),
prepTime: z.number(),
cookTime: z.number(),
servings: z.number(),
ingredients: z.array(
z.object({
item: z.string(),
amount: z.string(),
unit: z.string(),
})
),
instructions: z.array(z.string()),
nutritionInfo: z.object({
calories: z.number(),
protein: z.number(),
carbs: z.number(),
fat: z.number(),
}),
});
export default function RecipeGenerator() {
const [prompt, setPrompt] = useState('');
const { object, submit, isLoading, error } = useObject({
api: '/api/generate-recipe',
schema: recipeSchema,
});
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
submit({ prompt });
};
return (
<div className="max-w-3xl mx-auto p-4">
<h1 className="text-2xl font-bold mb-4">AI Recipe Generator</h1>
<form onSubmit={handleSubmit} className="flex gap-2 mb-6">
<input
value={prompt}
onChange={(e) => setPrompt(e.target.value)}
placeholder="Enter a dish (e.g., 'chocolate chip cookies')"
className="flex-1 border rounded px-3 py-2"
/>
<button
type="submit"
disabled={isLoading}
className="bg-green-500 text-white px-4 py-2 rounded"
>
{isLoading ? 'Generating...' : 'Generate Recipe'}
</button>
</form>
{error && <p className="text-red-500 mb-4">{error.message}</p>}
{object && (
<div className="space-y-6">
{/* Recipe Header */}
<div>
<h2 className="text-xl font-bold">{object.name || 'Loading...'}</h2>
<p className="text-gray-600">{object.description}</p>
</div>
{/* Time Info */}
<div className="flex gap-4 text-sm">
<span>Prep: {object.prepTime} min</span>
<span>Cook: {object.cookTime} min</span>
<span>Servings: {object.servings}</span>
</div>
{/* Ingredients */}
{object.ingredients && (
<div>
<h3 className="font-semibold mb-2">Ingredients</h3>
<ul className="list-disc list-inside space-y-1">
{object.ingredients.map((ing, i) => (
  <li key={i}>
    {/* Streamed objects are partial, so fields may be undefined mid-stream */}
    {ing?.amount} {ing?.unit} {ing?.item}
  </li>
))}
))}
</ul>
</div>
)}
{/* Instructions */}
{object.instructions && (
<div>
<h3 className="font-semibold mb-2">Instructions</h3>
<ol className="list-decimal list-inside space-y-2">
{object.instructions.map((step, i) => (
<li key={i}>{step}</li>
))}
</ol>
</div>
)}
{/* Nutrition */}
{object.nutritionInfo && (
<div className="bg-gray-100 p-4 rounded">
<h3 className="font-semibold mb-2">Nutrition (per serving)</h3>
<div className="grid grid-cols-4 gap-2 text-center">
<div>
<p className="text-2xl font-bold">{object.nutritionInfo.calories}</p>
<p className="text-sm text-gray-600">Calories</p>
</div>
<div>
<p className="text-2xl font-bold">{object.nutritionInfo.protein}g</p>
<p className="text-sm text-gray-600">Protein</p>
</div>
<div>
<p className="text-2xl font-bold">{object.nutritionInfo.carbs}g</p>
<p className="text-sm text-gray-600">Carbs</p>
</div>
<div>
<p className="text-2xl font-bold">{object.nutritionInfo.fat}g</p>
<p className="text-sm text-gray-600">Fat</p>
</div>
</div>
</div>
)}
</div>
)}
</div>
);
}
Generative UI with RSC
Server Actions for Streaming UI
// app/actions.tsx
'use server';
import { createStreamableUI, createStreamableValue } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { generateText, streamText } from 'ai';
export async function streamMessage(prompt: string) {
const stream = createStreamableValue('');
(async () => {
const { textStream } = streamText({
model: openai('gpt-4-turbo'),
prompt,
});
for await (const text of textStream) {
stream.update(text);
}
stream.done();
})();
return stream.value;
}
export async function streamUIComponent(query: string) {
const ui = createStreamableUI(<LoadingSpinner />);
(async () => {
// Simulate processing
ui.update(<ProcessingMessage query={query} />);
const { text } = await generateText({
model: openai('gpt-4-turbo'),
prompt: `Analyze: ${query}`,
});
ui.done(<ResultCard result={text} />);
})();
return ui.value;
}
// Components
function LoadingSpinner() {
return (
<div className="animate-spin h-8 w-8 border-4 border-blue-500 rounded-full border-t-transparent" />
);
}
function ProcessingMessage({ query }: { query: string }) {
return (
<div className="p-4 bg-yellow-100 rounded">
Processing: {query}...
</div>
);
}
function ResultCard({ result }: { result: string }) {
return (
<div className="p-4 bg-green-100 rounded">
<h3 className="font-bold">Result</h3>
<p>{result}</p>
</div>
);
}
Client Component with RSC
// app/generative-ui/page.tsx
'use client';
import { useState, useTransition } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { streamMessage, streamUIComponent } from '../actions';
export default function GenerativeUIPage() {
const [isPending, startTransition] = useTransition();
const [streamedText, setStreamedText] = useState('');
const [ui, setUI] = useState<React.ReactNode>(null);
const handleTextStream = async () => {
startTransition(async () => {
const stream = await streamMessage('Tell me a short story about AI');
for await (const text of readStreamableValue(stream)) {
setStreamedText(text || '');
}
});
};
const handleUIStream = async () => {
startTransition(async () => {
const component = await streamUIComponent('Analyze market trends');
setUI(component);
});
};
return (
<div className="max-w-2xl mx-auto p-4 space-y-6">
<div>
<button
onClick={handleTextStream}
disabled={isPending}
className="bg-blue-500 text-white px-4 py-2 rounded"
>
Stream Text
</button>
<div className="mt-4 p-4 bg-gray-100 rounded min-h-[100px]">
{streamedText || 'Click to start streaming...'}
</div>
</div>
<div>
<button
onClick={handleUIStream}
disabled={isPending}
className="bg-green-500 text-white px-4 py-2 rounded"
>
Stream UI Component
</button>
<div className="mt-4">{ui}</div>
</div>
</div>
);
}
RAG with AI SDK
Document Retrieval Integration
// app/api/chat-rag/route.ts
import { embed, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_KEY!
);

async function getRelevantDocuments(query: string) {
  // Generate an embedding for the query with the AI SDK's embed() helper
  const { embedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: query,
  });

  // Search for similar documents via a pgvector-backed Supabase RPC
  const { data: documents } = await supabase.rpc('match_documents', {
    query_embedding: embedding,
    match_threshold: 0.7,
    match_count: 5,
  });

  return documents?.map((doc: { content: string }) => doc.content).join('\n\n') || '';
}

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1].content;

  // Retrieve relevant context
  const context = await getRelevantDocuments(lastMessage);

  const result = streamText({
    model: openai('gpt-4-turbo'),
    system: `You are a helpful assistant. Use the following context to answer questions:

${context}

If the context doesn't contain relevant information, say so.`,
    messages,
  });

  return result.toDataStreamResponse();
}
Image Generation
DALL-E Integration
// app/api/generate-image/route.ts
import { openai } from '@ai-sdk/openai';
import { experimental_generateImage as generateImage } from 'ai';
export async function POST(req: Request) {
const { prompt } = await req.json();
const { image } = await generateImage({
model: openai.image('dall-e-3'),
prompt,
size: '1024x1024',
});
return Response.json({ imageUrl: image.base64 });
}
Image Generation Component
// app/image-gen/page.tsx
'use client';
import { useState } from 'react';

export default function ImageGenerator() {
  const [prompt, setPrompt] = useState('');
  const [imageUrl, setImageUrl] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const handleGenerate = async (e: React.FormEvent) => {
    e.preventDefault();
    setIsLoading(true);

    try {
      const response = await fetch('/api/generate-image', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt }),
      });
      const { imageUrl } = await response.json();
      setImageUrl(`data:image/png;base64,${imageUrl}`);
    } finally {
      // Reset loading state even if the request fails
      setIsLoading(false);
    }
  };

  return (
    <div className="max-w-2xl mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">AI Image Generator</h1>
      <form onSubmit={handleGenerate} className="flex gap-2 mb-6">
        <input
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          placeholder="Describe the image..."
          className="flex-1 border rounded px-3 py-2"
        />
        <button
          type="submit"
          disabled={isLoading}
          className="bg-purple-500 text-white px-4 py-2 rounded"
        >
          {isLoading ? 'Generating...' : 'Generate'}
        </button>
      </form>
      {imageUrl && (
        <img
          src={imageUrl}
          alt={prompt}
          className="w-full rounded-lg shadow-lg"
        />
      )}
    </div>
  );
}
Middleware and Error Handling
Custom Middleware
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

// Minimal in-memory fixed-window rate limiter (per server instance).
// For production use a shared store such as Redis or Vercel KV, since
// serverless instances don't share memory.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 100;
const hits = new Map<string, { count: number; windowStart: number }>();

function checkRateLimit(ip: string): { allowed: boolean } {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return { allowed: true };
  }
  entry.count += 1;
  return { allowed: entry.count <= MAX_REQUESTS };
}

export function middleware(request: NextRequest) {
  // request.ip is only populated on Vercel; fall back to the proxy header
  const ip = request.ip ?? request.headers.get('x-forwarded-for') ?? '127.0.0.1';
  const { allowed } = checkRateLimit(ip);

  if (!allowed) {
    return NextResponse.json(
      { error: 'Rate limit exceeded' },
      { status: 429 }
    );
  }

  return NextResponse.next();
}

export const config = {
  matcher: '/api/:path*',
};
Error Handling in Routes
// app/api/chat/route.ts
import { streamText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  try {
    const { messages } = await req.json();

    const result = streamText({
      model: openai('gpt-4-turbo'),
      messages,
      onFinish: ({ usage }) => {
        console.log('Tokens used:', usage);
      },
    });

    // Note: streamText returns immediately, so errors that occur
    // mid-stream surface in the stream itself rather than in this
    // try/catch, which mainly covers request parsing.
    return result.toDataStreamResponse();
  } catch (error) {
    if (error instanceof APICallError) {
      console.error('API Error:', error.message);
      return Response.json(
        { error: 'AI service unavailable' },
        { status: 503 }
      );
    }
    return Response.json(
      { error: 'Internal server error' },
      { status: 500 }
    );
  }
}
Production Best Practices
Environment Configuration
// lib/config.ts
export const config = {
openai: {
apiKey: process.env.OPENAI_API_KEY!,
defaultModel: 'gpt-4-turbo',
maxTokens: 4096,
},
anthropic: {
apiKey: process.env.ANTHROPIC_API_KEY!,
defaultModel: 'claude-3-5-sonnet-20241022',
},
rateLimit: {
maxRequests: 100,
windowMs: 60 * 1000, // 1 minute
},
};
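Since the config dereferences keys with `!`, a missing variable only surfaces as a confusing provider error on the first request. A small sketch of failing fast instead (helper name is illustrative):

```typescript
// Hypothetical helper: fail fast at startup when a required environment
// variable is missing, instead of failing on the first AI request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Then write `apiKey: requireEnv('OPENAI_API_KEY')` instead of `process.env.OPENAI_API_KEY!`.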
Token Usage Tracking
// lib/usage-tracker.ts
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.SUPABASE_URL!,
process.env.SUPABASE_KEY!
);
export async function trackUsage(userId: string, tokens: number, model: string) {
await supabase.from('usage_logs').insert({
user_id: userId,
tokens_used: tokens,
model,
timestamp: new Date().toISOString(),
});
}
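The token counts logged by `streamText`'s `onFinish` can also drive cost estimates. A hedged sketch, converting usage into dollars; the per-million-token prices below are placeholders, so check your provider's current pricing:

```typescript
// Placeholder prices in USD per million tokens; not authoritative.
const PRICES_PER_MILLION: Record<string, { input: number; output: number }> = {
  'gpt-4-turbo': { input: 10, output: 30 },
};

// Estimate request cost from the usage object reported in onFinish.
// Unknown models return 0 rather than guessing.
function estimateCostUSD(
  model: string,
  promptTokens: number,
  completionTokens: number
): number {
  const price = PRICES_PER_MILLION[model];
  if (!price) return 0;
  return (promptTokens * price.input + completionTokens * price.output) / 1_000_000;
}
```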
Summary
| Feature | Hook/Function | Use Case |
|---|---|---|
| Chat | useChat | Conversational interfaces |
| Completion | useCompletion | Single-turn generation |
| Structured | useObject | Type-safe JSON output |
| Tools | tool() | Function calling |
| Streaming | streamText | Real-time responses |
| RSC | createStreamableUI | Generative React components |
Vercel AI SDK provides a powerful, type-safe foundation for building AI applications with React and Next.js, supporting multiple providers and advanced features like tool calling and generative UI.