Hugging Face Provider

The Hugging Face provider offers access to thousands of language models through Hugging Face Inference Providers, including models from Meta, DeepSeek, Qwen, and more.

API keys can be obtained from Hugging Face Settings (https://huggingface.co/settings/tokens).

Setup

The Hugging Face provider is available via the @ai-sdk/huggingface module. You can install it with:

pnpm add @ai-sdk/huggingface
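
If you use npm or yarn instead of pnpm, the package installs the same way:

npm install @ai-sdk/huggingface
yarn add @ai-sdk/huggingface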

Provider Instance

You can import the default provider instance huggingface from @ai-sdk/huggingface:

import { huggingface } from '@ai-sdk/huggingface';

For custom configuration, you can import createHuggingFace and create a provider instance with your settings:

import { createHuggingFace } from '@ai-sdk/huggingface';
const huggingface = createHuggingFace({
  apiKey: process.env.HUGGINGFACE_API_KEY ?? '',
});

You can use the following optional settings to customize the Hugging Face provider instance (a combined example follows the list):

  • baseURL string

    Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://router.huggingface.co/v1.

  • apiKey string

    API key that is being sent using the Authorization header. It defaults to the HUGGINGFACE_API_KEY environment variable. You can get your API key from Hugging Face Settings.

  • headers Record<string, string>

    Custom headers to include in the requests.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

    Custom fetch implementation.
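
As an illustration, here is a minimal sketch combining several of these settings; the proxy URL and the custom header below are hypothetical placeholders, not values the provider requires:

import { createHuggingFace } from '@ai-sdk/huggingface';

const huggingface = createHuggingFace({
  // Hypothetical proxy in front of the Hugging Face router:
  baseURL: 'https://my-proxy.example.com/v1',
  apiKey: process.env.HUGGINGFACE_API_KEY ?? '',
  // Hypothetical custom header for request tracing:
  headers: { 'x-request-source': 'docs-example' },
  // Custom fetch that logs each request before delegating to the global fetch:
  fetch: async (input, init) => {
    console.log('Hugging Face request:', input);
    return fetch(input, init);
  },
});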

Language Models

You can create language models using a provider instance:

import { huggingface } from '@ai-sdk/huggingface';
import { generateText } from 'ai';
const { text } = await generateText({
  model: huggingface('deepseek-ai/DeepSeek-V3-0324'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

You can also use the .responses() or .languageModel() factory methods:

const model = huggingface.responses('deepseek-ai/DeepSeek-V3-0324');
// or
const model = huggingface.languageModel('moonshotai/Kimi-K2-Instruct');

Hugging Face language models can be used in the streamText function (see AI SDK Core).
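
For example (a minimal streaming sketch; the model choice is illustrative):

import { huggingface } from '@ai-sdk/huggingface';
import { streamText } from 'ai';

const result = streamText({
  model: huggingface('meta-llama/Llama-3.3-70B-Instruct'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

// Print the text as it streams in.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}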

You can explore the latest and trending models with their capabilities, context size, throughput and pricing on the Hugging Face Inference Models page.

Provider Options

Hugging Face language models support provider-specific options that you can pass via providerOptions.huggingface:

import { huggingface } from '@ai-sdk/huggingface';
import { generateText } from 'ai';
const { text } = await generateText({
  model: huggingface('deepseek-ai/DeepSeek-R1'),
  prompt: 'Explain the theory of relativity.',
  providerOptions: {
    huggingface: {
      reasoningEffort: 'high',
      instructions: 'Respond in a clear and educational manner.',
    },
  },
});

The following provider options are available:

  • metadata Record<string, string>

    Additional metadata to include with the request.

  • instructions string

    Instructions for the model. Can be used to provide additional context or guidance.

  • strictJsonSchema boolean

    Whether to use strict JSON schema validation for structured outputs. Defaults to false. See the sketch after this list for a usage example.

  • reasoningEffort string

    Controls the reasoning effort for reasoning models such as DeepSeek-R1. Higher values result in more thorough reasoning; the examples below use 'medium' and 'high'.
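
As a sketch of how strictJsonSchema fits into a structured-output call (the schema and prompt are illustrative, and the model choice assumes a model with object-generation support):

import { huggingface } from '@ai-sdk/huggingface';
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: huggingface('deepseek-ai/DeepSeek-V3-0324'),
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
  }),
  prompt: 'Generate a simple pasta recipe.',
  providerOptions: {
    huggingface: {
      // Opt in to strict JSON schema validation (defaults to false):
      strictJsonSchema: true,
    },
  },
});

console.log(object.name);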

Reasoning Output

For reasoning models like deepseek-ai/DeepSeek-R1, you can control the reasoning effort and access the model's reasoning process in the response:

import { huggingface } from '@ai-sdk/huggingface';
import { streamText } from 'ai';
const result = streamText({
  model: huggingface('deepseek-ai/DeepSeek-R1'),
  prompt: 'How many r letters are in the word strawberry?',
  providerOptions: {
    huggingface: {
      reasoningEffort: 'high',
    },
  },
});

for await (const part of result.fullStream) {
  if (part.type === 'reasoning') {
    console.log(`Reasoning: ${part.textDelta}`);
  } else if (part.type === 'text-delta') {
    process.stdout.write(part.textDelta);
  }
}

For non-streaming calls with generateText, the reasoning content is available in the reasoning field of the response:

import { huggingface } from '@ai-sdk/huggingface';
import { generateText } from 'ai';
const result = await generateText({
  model: huggingface('deepseek-ai/DeepSeek-R1'),
  prompt: 'What is 25 * 37?',
  providerOptions: {
    huggingface: {
      reasoningEffort: 'medium',
    },
  },
});

console.log('Reasoning:', result.reasoning);
console.log('Answer:', result.text);

Image Input

For vision-capable models like Qwen/Qwen2.5-VL-7B-Instruct, you can pass images as part of the message content:

import { huggingface } from '@ai-sdk/huggingface';
import { generateText } from 'ai';
import { readFileSync } from 'fs';
const result = await generateText({
  model: huggingface('Qwen/Qwen2.5-VL-7B-Instruct'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image in detail.' },
        {
          type: 'image',
          image: readFileSync('./image.png'),
        },
      ],
    },
  ],
});

You can also pass image URLs:

{
  type: 'image',
  image: 'https://example.com/image.png',
}

Model Capabilities

Popular models available through the provider include:

  • meta-llama/Llama-3.1-8B-Instruct
  • meta-llama/Llama-3.1-70B-Instruct
  • meta-llama/Llama-3.3-70B-Instruct
  • meta-llama/Llama-4-Maverick-17B-128E-Instruct
  • deepseek-ai/DeepSeek-V3.1
  • deepseek-ai/DeepSeek-V3-0324
  • deepseek-ai/DeepSeek-R1
  • deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  • Qwen/Qwen3-32B
  • Qwen/Qwen3-Coder-480B-A35B-Instruct
  • Qwen/Qwen2.5-VL-7B-Instruct
  • google/gemma-3-27b-it
  • moonshotai/Kimi-K2-Instruct

Support for image input, object generation, tool usage, and tool streaming varies by model. You can explore all available models on the Hugging Face Inference Models page, and check the model documentation on Hugging Face Hub for detailed information about each model's features.
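
Since tool usage is among the capabilities listed above, here is a sketch of a tool call using the AI SDK tool helper; the weather tool and its stubbed result are hypothetical, and how reliably tools are called depends on the model:

import { huggingface } from '@ai-sdk/huggingface';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: huggingface('meta-llama/Llama-3.3-70B-Instruct'),
  tools: {
    // Hypothetical tool with a stubbed result:
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 22 }),
    }),
  },
  // Allow a follow-up step so the model can turn the tool result into an answer:
  maxSteps: 2,
  prompt: 'What is the weather in Paris?',
});

console.log(text);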