Fastify

You can use the AI SDK in a Fastify server to generate and stream text and objects to the client.

Examples

The examples start a simple HTTP server that listens on port 8080. You can test it, for example, using curl:

curl -X POST http://localhost:8080

The examples use the Vercel AI Gateway. Ensure that your AI Gateway API key is set in the AI_GATEWAY_API_KEY environment variable.
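
For example, in a POSIX shell you can export the variable before starting the server (replace the placeholder with your own key):

export AI_GATEWAY_API_KEY="your-api-key"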

Full example: github.com/vercel/ai/examples/fastify

UI Message Stream

You can use the toUIMessageStream method to get a UI message stream from the result and then pipe it to the response.

index.ts
import { streamText } from 'ai';
import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.post('/', async function (request, reply) {
  const result = streamText({
    model: 'openai/gpt-4o',
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  reply.header('Content-Type', 'text/plain; charset=utf-8');

  return reply.send(result.toUIMessageStream());
});

fastify.listen({ port: 8080 });

Sending Custom Data

You can use createUIMessageStream to send custom data to the client alongside the model output.

index.ts
import { createUIMessageStream, streamText } from 'ai';
import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.post('/stream-data', async function (request, reply) {
  // immediately start streaming the response
  const stream = createUIMessageStream({
    execute: async ({ writer }) => {
      writer.write({ type: 'start' });

      writer.write({
        type: 'data-custom',
        data: {
          custom: 'initialized call',
        },
      });

      const result = streamText({
        model: 'openai/gpt-4o',
        prompt: 'Invent a new holiday and describe its traditions.',
      });

      writer.merge(result.toUIMessageStream({ sendStart: false }));
    },
    onError: error => {
      // Error messages are masked by default for security reasons.
      // If you want to expose the error message to the client, you can do so here:
      return error instanceof Error ? error.message : String(error);
    },
  });

  reply.header('Content-Type', 'text/plain; charset=utf-8');

  return reply.send(stream);
});

fastify.listen({ port: 8080 });
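
As with the other examples, you can test this route with curl; note that the handler is registered at /stream-data rather than /:

curl -X POST http://localhost:8080/stream-data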

Text Stream

You can use the textStream property to get a text stream from the result and then pipe it to the response.

index.ts
import { streamText } from 'ai';
import Fastify from 'fastify';

const fastify = Fastify({ logger: true });

fastify.post('/', async function (request, reply) {
  const result = streamText({
    model: 'openai/gpt-4o',
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  reply.header('Content-Type', 'text/plain; charset=utf-8');

  return reply.send(result.textStream);
});

fastify.listen({ port: 8080 });
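
To watch the text arrive incrementally rather than all at once, you can pass curl's -N (--no-buffer) flag, a small variation on the earlier command:

curl -N -X POST http://localhost:8080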

Troubleshooting

  • Streaming not working when proxied