The Vercel AI SDK is the go-to AI integration library in the Next.js ecosystem, with native OpenRouter support via the @openrouter/ai-sdk-provider package. Core usage: initialize the provider with createOpenRouter({ apiKey }), then pass a model ID and call streamText() for a streaming response. Beyond text generation, it also supports videoModel() for calling video generation models (e.g. google/veo-3.1); experimental_generateVideo() automatically handles the asynchronous submit-poll-download flow and supports image-to-video, generateAudio, extraBody passthrough, and other advanced parameters.
The Vercel AI SDK is the most widely used AI integration library in the Next.js ecosystem; through @openrouter/ai-sdk-provider it connects to OpenRouter and its 300+ models.
## Installation

```bash
npm install @openrouter/ai-sdk-provider ai
```

## Text Generation (Streaming)
```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { streamText } from 'ai';

export const getLasagnaRecipe = async (modelName: string) => {
  const openrouter = createOpenRouter({
    apiKey: '<OPENROUTER_API_KEY>',
  });

  const response = streamText({
    model: openrouter(modelName),
    prompt: 'Write a vegetarian lasagna recipe for 4 people.',
  });

  await response.consumeStream();
  return response.text;
};
```

## Tool Calling
```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { streamText } from 'ai';
import { z } from 'zod';

export const getWeather = async (modelName: string) => {
  const openrouter = createOpenRouter({
    apiKey: '<OPENROUTER_API_KEY>',
  });

  const response = streamText({
    model: openrouter(modelName),
    prompt: 'What is the weather in San Francisco, CA in Fahrenheit?',
    tools: {
      getCurrentWeather: {
        description: 'Get the current weather in a given location',
        parameters: z.object({
          location: z
            .string()
            .describe('The city and state, e.g. San Francisco, CA'),
          unit: z.enum(['celsius', 'fahrenheit']).optional(),
        }),
        execute: async ({ location, unit = 'celsius' }) => {
          const weatherData: Record<string, Record<string, string>> = {
            'San Francisco, CA': { celsius: '18°C', fahrenheit: '64°F' },
          };
          const weather = weatherData[location];
          if (!weather) return `Weather data for ${location} is not available.`;
          return `The current weather in ${location} is ${weather[unit]}.`;
        },
      },
    },
  });

  await response.consumeStream();
  return response.text;
};
```

## Video Generation
OpenRouter supports video generation through the AI SDK's experimental_generateVideo API; the provider automatically handles the asynchronous submit-poll-download flow:
```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { experimental_generateVideo as generateVideo } from 'ai';

const openrouter = createOpenRouter({
  apiKey: '<OPENROUTER_API_KEY>',
});

const { video } = await generateVideo({
  model: openrouter.videoModel('google/veo-3.1'),
  prompt: 'A golden retriever playing fetch on a sunny beach with waves crashing in the background',
  aspectRatio: '16:9',
  duration: 4,
});

console.log(video.mediaType); // 'video/mp4'
console.log(video.uint8Array.byteLength); // video size in bytes
```

## Video Model Settings
| Parameter | Type | Default | Description |
|---|---|---|---|
| `generateAudio` | boolean | false | Whether to also generate audio |
| `pollIntervalMs` | number | 2000 | Polling interval while generation is in progress (ms) |
| `maxPollTimeMs` | number | 600000 | Maximum time to wait (ms) |
| `extraBody` | object | — | Default parameters merged into every request |
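These options are passed as the second argument of videoModel() — the same place the extraBody example in the next section puts them. A sketch reusing the openrouter instance from above; the values are illustrative, not recommendations:

```typescript
// Poll every 5 s and give up after 5 minutes instead of the defaults.
const impatientModel = openrouter.videoModel('google/veo-3.1', {
  generateAudio: true,
  pollIntervalMs: 5_000,
  maxPollTimeMs: 300_000,
});
```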
## Passthrough Parameters

Pass model-specific parameters via extraBody:

```typescript
const { video } = await generateVideo({
  model: openrouter.videoModel('google/veo-3.1', {
    generateAudio: true,
    extraBody: {
      provider: {
        options: {
          'google-vertex': {
            parameters: {
              personGeneration: 'allow_all',
              enhancePrompt: true,
            },
          },
        },
      },
    },
  }),
  prompt: 'A timelapse of a flower blooming in a sunlit garden',
  aspectRatio: '16:9',
});
```

## Image-to-Video
Pass a reference image to guide the generation:

```typescript
const { video } = await generateVideo({
  model: openrouter.videoModel('alibaba/wan-2.7'),
  prompt: 'A character walking through a forest',
  image: new URL('https://example.com/first-frame.png'),
  resolution: '1920x1080',
});
```

## Response Metadata
```typescript
const result = await generateVideo({
  model: openrouter.videoModel('google/veo-3.1'),
  prompt: 'A slow pan across a calm mountain lake at sunrise',
  aspectRatio: '16:9',
});

console.log(result.providerMetadata?.openrouter);
// { generationId: 'gen-...', cost: 0.25 }
```

## FAQ
Q: Which is better, @openrouter/ai-sdk-provider or @ai-sdk/openai with a custom base URL?
A: Use @openrouter/ai-sdk-provider. It is the provider package officially maintained by OpenRouter and supports videoModel(), the OpenRouter-specific extraBody parameter, and providerMetadata response metadata. @ai-sdk/openai with a base URL only supports basic calls to text models.
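For comparison, the base-URL approach looks roughly like this (a sketch assuming the @ai-sdk/openai package; text models only, no videoModel()):

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Point the OpenAI-compatible provider at OpenRouter's endpoint.
const openrouterCompat = createOpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: '<OPENROUTER_API_KEY>',
});

const result = streamText({
  model: openrouterCompat('anthropic/claude-sonnet-4.6'),
  prompt: 'Say hello.',
});
```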
Q: How long does video generation take?
A: It depends on the model and the video length; typically 30 seconds to several minutes. By default the SDK polls every 2 seconds (pollIntervalMs) and waits at most 10 minutes (maxPollTimeMs). Adjust these two parameters if you need faster feedback or a different timeout.
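The submit-poll-download loop the SDK automates can be sketched as a plain helper. All names here (JobStatus, pollUntilDone, fakeStatus) are hypothetical stand-ins, not OpenRouter APIs:

```typescript
type JobStatus = { done: boolean; resultUrl?: string };

// Poll getStatus() until the job reports done or the deadline passes.
async function pollUntilDone(
  getStatus: () => Promise<JobStatus>,
  pollIntervalMs = 2000,
  maxPollTimeMs = 600_000,
): Promise<string> {
  const deadline = Date.now() + maxPollTimeMs;
  while (Date.now() < deadline) {
    const status = await getStatus();
    if (status.done && status.resultUrl) return status.resultUrl;
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
  throw new Error('Video generation timed out');
}

// Simulated job that finishes on the third poll.
let polls = 0;
const fakeStatus = async (): Promise<JobStatus> =>
  ++polls >= 3
    ? { done: true, resultUrl: 'https://example.com/video.mp4' }
    : { done: false };

const url = await pollUntilDone(fakeStatus, 10, 1_000);
console.log(url); // https://example.com/video.mp4
```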
Q: How do I use streamText in a Next.js API Route?
A: Use the AI SDK's toDataStreamResponse() method to convert the streaming result into a Next.js Response:

```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { streamText } from 'ai';

const openrouter = createOpenRouter({ apiKey: '<OPENROUTER_API_KEY>' });

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openrouter('anthropic/claude-sonnet-4.6'),
    messages,
  });
  return result.toDataStreamResponse();
}
```