How to call your AI Agent using the Chat API
Learn the details of the Vercel AI SDK data stream protocol that powers the `/run/api/chat` API endpoint.
Overview
This guide shows how to call your agent directly over HTTP and stream responses using the Vercel AI SDK data stream format. It covers the exact endpoint, headers, request body, and the event stream response you should expect.
If you are building a React UI, consider our prebuilt components under React UI Components or Vercel AI Elements headless primitives. This page is for the low-level streaming API.
Endpoint
- Path (mounted by the Run API): `/run/api/chat`
- Method: `POST`
- Protocol: Server-Sent Events (SSE) encoded JSON, using Vercel AI SDK data-stream v2
- Content-Type (response): `text/event-stream`
- Response header: `x-vercel-ai-data-stream: v2`
- Response header (durable mode): `x-workflow-run-id` — returned when the agent uses durable execution mode. Use this ID to reconnect to the stream or check execution status.
If your agent is configured with durable execution mode, the `/run/api/chat` endpoint automatically uses workflow-backed execution. The response includes an `x-workflow-run-id` header you can use to reconnect to the stream via `GET /run/api/executions/:executionId/stream` if the connection drops. You can also start durable executions explicitly via `POST /run/api/executions`. See Durable Executions below.
Authentication
Choose an authentication method:
See Authentication → Run API for more details.
Request Body Schema
Field Notes:
- `messages` — Must include at least one `user` message
- `content` — Optional string message text (legacy-compatible)
- `parts` — Optional multi-part content array (text + images)
- `content` + `parts` together — Both are accepted. `content` is prepended as a text part unless an identical text part already exists.
- `conversationId` — Optional; server generates one if omitted
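A minimal request body satisfying these rules might look like the following (the `conversationId` value is illustrative; omit it to have the server generate one):

```json
{
  "conversationId": "conv_123",
  "messages": [
    { "role": "user", "content": "Summarize the latest deployment logs" }
  ]
}
```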
File inputs (images and PDFs)
The chat endpoint supports files in message content parts with different rules by type:
- Images (`image_url`): support either remote URLs (`https://...`) or base64 `data:` URIs
- Files (`file`): currently support PDF only
- PDF file inputs: support either remote URLs (`https://...`) or base64 `data:` payloads
Request shape for file parts
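As a sketch based on the rules on this page, a message mixing text, an inline PDF, and a remote image could look like the following. The `type` values and the nesting of the image part are assumptions; confirm the exact part shapes against your SDK version. Only `url` and the required `mediaType` for file parts are stated in this guide.

```json
{
  "messages": [
    {
      "role": "user",
      "parts": [
        { "type": "text", "text": "What does this document say?" },
        {
          "type": "file",
          "url": "data:application/pdf;base64,JVBERi0xLjQK...",
          "mediaType": "application/pdf"
        },
        {
          "type": "image_url",
          "image_url": { "url": "https://example.com/chart.png" }
        }
      ]
    }
  ]
}
```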
Base64 and size requirements
- Encoding: data URIs must include a valid base64 payload
- Maximum size: each inline file payload is limited to 10 MB
- Vercel `parts` file typing: for `/run/api/chat`, file parts use the Vercel-compatible shape with `url` (a data URI, or `https://` for PDFs) and a required `mediaType`
- Content validation:
  - Images are validated against supported image bytes
  - PDFs must start with the `%PDF-` signature bytes
- Remote PDF URLs: must be reachable over `http` or `https` from the Run API. Unreachable hosts, non-PDF responses, or oversized bodies result in HTTP 400 with an error message (query strings and fragments on the URL are not preserved in stored metadata).
Optional Headers
- `x-emit-operations` — Set to `true` to include detailed data operations in the response stream. Useful for debugging and monitoring agent behavior. See Data Operations for details.
Example cURL
When using an API key for auth:
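A sketch assuming API-key auth uses a bearer token (confirm the header scheme on the Authentication page) and the default local base URL from Development Notes:

```shell
curl -N -X POST http://localhost:3002/run/api/chat \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'
```

The `-N` flag disables curl's output buffering so SSE events print as they arrive.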
Response: Vercel AI SDK Data Stream (v2)
The response is an SSE stream of JSON events compatible with the Vercel AI SDK UI message stream. The server sets `x-vercel-ai-data-stream: v2`.
Event Types
Text Streaming Events
- `text-start` — Indicates a new text segment is starting
- `text-delta` — Carries the text content delta for the current segment
- `text-end` — Marks the end of the current text segment
Data Events
- `data-component` — Structured UI data emitted by the agent (for rich UIs)
- `data-artifact` — Artifact data emitted by tools/agents (documents, files, saved results)
- `data-operation` — Low-level operational events (agent lifecycle, completion, errors)
- `data-summary` — AI-generated status updates with user-friendly labels and contextual details
Tool Events
- `tool-input-start`, `tool-input-delta`, `tool-input-available` — Tool input streaming
- `tool-approval-request` — Emitted when a tool requires user approval before execution
- `tool-output-available`, `tool-output-denied` — Tool output (including approval denial)
Tool approval
If a tool is configured with `needsApproval: true`, the run pauses and the stream includes a `tool-approval-request` event:
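A sketch of the event payload. Only the `type` value is documented on this page; the other field names (`toolCallId`, `toolName`, `input`) are assumptions for illustration:

```json
{
  "type": "tool-approval-request",
  "toolCallId": "call_abc123",
  "toolName": "send_email",
  "input": { "to": "ops@example.com", "subject": "Weekly report" }
}
```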
To continue the run, your client must approve/deny the pending tool call. See Tool approvals.
Durable execution mode
When an agent is configured with durable execution mode (`executionMode: 'durable'`), tool approvals work differently. Instead of holding the connection open, the workflow suspends its state and frees server resources until the approval is received.
Approve or deny a pending tool call via a dedicated REST endpoint:
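For example, with curl. The `{"approved": true}` request-body shape and the bearer-token header are assumptions; `$EXECUTION_ID` is the value of the `x-workflow-run-id` response header:

```shell
curl -X POST "http://localhost:3002/run/api/executions/$EXECUTION_ID/approvals/$TOOL_CALL_ID" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"approved": true}'
```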
After submitting the approval, reconnect to the execution stream to receive the remaining events:
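Using the stream endpoint listed under Durable Executions (auth header scheme assumed):

```shell
curl -N "http://localhost:3002/run/api/executions/$EXECUTION_ID/stream" \
  -H "Authorization: Bearer $API_KEY"
```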
You can also submit tool approvals through the existing `/run/api/chat` endpoint by sending an approval response message part with the same `conversationId`. The server detects the suspended durable execution and resumes it automatically.
Tool input & output events
Depending on UI needs, the stream may include tool input and tool output events.
Example Stream (abbreviated)
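A hypothetical stream for a short text reply. The event names come from this page; the exact payload fields (`id`, `delta`, the nested `data` object) are assumptions:

```text
data: {"type":"data-operation","data":{"type":"agent_ready"}}

data: {"type":"text-start","id":"t1"}

data: {"type":"text-delta","id":"t1","delta":"Hello, "}

data: {"type":"text-delta","id":"t1","delta":"world."}

data: {"type":"text-end","id":"t1"}

data: {"type":"data-operation","data":{"type":"completion"}}
```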
Data Event Details
data-operation Events
Low-level operational events with technical context. Common types include:
- `agent_initializing` — The agent runtime is starting
- `agent_ready` — Agent is ready and processing
- `completion` — The agent completed the task (includes agent ID and iteration count)
- `error` — Error information (also emitted as a top-level `error` event)
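As a sketch, a `completion` operation might look like this (the `agentId` and `iterations` field names are assumptions; the page only states that agent ID and iteration count are included):

```json
{
  "type": "data-operation",
  "data": { "type": "completion", "agentId": "agent_123", "iterations": 3 }
}
```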
data-summary Events
AI-generated status updates designed for end-user consumption. Structure:
- `label` — User-friendly description (required)
- `details` — Optional structured/unstructured context data
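For example (the outer event wrapper is an assumption; `label` and `details` follow the structure above):

```json
{
  "type": "data-summary",
  "data": {
    "label": "Searching the knowledge base",
    "details": { "query": "Q3 revenue figures" }
  }
}
```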
data-artifact Events
Emitted when an agent saves an artifact from a tool result. The event carries the artifact's preview fields in `summary` — these are the fields marked `inPreview: true` in the artifact schema and are available immediately in the agent's context.
The complete artifact data (including non-preview fields) is persisted and can be fetched on demand. See Artifact Components for details on preview vs. non-preview fields.
data-component Events
Structured UI data for rich interface components:
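A hypothetical example; every field name inside `data` here is an assumption chosen for illustration, not a documented shape:

```json
{
  "type": "data-component",
  "data": {
    "component": "OrderCard",
    "props": { "orderId": "ord_42", "status": "shipped" }
  }
}
```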
Text Streaming Behavior
- For each text segment, the server emits `text-start` → `text-delta` → `text-end`
- The server avoids splitting content word-by-word; a segment is usually a coherent chunk
- Operational events are queued during active text emission and flushed shortly after to preserve ordering and readability
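On the client side, a text segment can be reassembled by parsing the SSE body and concatenating `text-delta` payloads. A minimal sketch in Python: only the event names come from this page; the `delta` field name and the sample payloads are assumptions.

```python
import json

def parse_data_stream(raw: str) -> list[dict]:
    """Decode JSON events from an SSE body (one `data:` line per event)."""
    events = []
    for block in raw.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                payload = line[len("data:"):].strip()
                if payload:
                    events.append(json.loads(payload))
    return events

def collect_text(events: list[dict]) -> str:
    """Concatenate `text-delta` payloads into the full reply text."""
    return "".join(e.get("delta", "") for e in events if e.get("type") == "text-delta")

# Assumed sample stream for one text segment.
sample = (
    'data: {"type": "text-start", "id": "t1"}\n\n'
    'data: {"type": "text-delta", "id": "t1", "delta": "Hello, "}\n\n'
    'data: {"type": "text-delta", "id": "t1", "delta": "world."}\n\n'
    'data: {"type": "text-end", "id": "t1"}\n\n'
)
events = parse_data_stream(sample)
print(collect_text(events))  # Hello, world.
```

In a real client you would feed chunks from the HTTP response into this parser incrementally rather than buffering the whole body.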
Durable Executions
Agents configured with `executionMode: 'durable'` use workflow-backed execution that persists state across connection drops and long-running tool approvals. Durable executions are available through both the standard `/run/api/chat` endpoint (automatically when the agent is in durable mode) and the dedicated `/run/api/executions` endpoint.
Executions endpoint
- Create execution: `POST /run/api/executions`
- Get execution status: `GET /run/api/executions/:executionId`
- Reconnect to stream: `GET /run/api/executions/:executionId/stream`
- Approve/deny tool call: `POST /run/api/executions/:executionId/approvals/:toolCallId`
Creating a durable execution
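A sketch, assuming the request body matches `/run/api/chat` and bearer-token auth:

```shell
curl -N -X POST http://localhost:3002/run/api/executions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Generate the monthly report"}]}'
```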
The response streams SSE events identical to `/run/api/chat`. The `x-workflow-run-id` response header contains the execution ID for reconnection and status checks.
Execution status
`GET /run/api/executions/:executionId` returns the current state of the execution:
| Status | Description |
|---|---|
| `running` | Execution is actively processing |
| `suspended` | Execution is paused, typically waiting for a tool approval |
| `completed` | Execution finished successfully |
| `failed` | Execution encountered an error |
Stream reconnection
If the client disconnects, reconnect to an in-progress or suspended execution:
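For example, using the execution ID from the `x-workflow-run-id` header (auth header scheme assumed):

```shell
curl -N "http://localhost:3002/run/api/executions/$EXECUTION_ID/stream" \
  -H "Authorization: Bearer $API_KEY"
```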
Error Responses
Streamed Errors
Errors are delivered as `data-operation` events with a unified structure:
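A sketch of a streamed error event (the `message` field name and nesting are assumptions):

```json
{
  "type": "data-operation",
  "data": { "type": "error", "message": "Model provider request failed" }
}
```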
Non-Streaming Errors
Validation failures and other errors return JSON with an appropriate HTTP status code.
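For example, an invalid body might return HTTP 400 with a payload along these lines (the exact error shape is an assumption; the message reflects the `messages` rule above):

```json
{ "error": "Invalid request body: messages must include at least one user message" }
```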
HTTP Status Codes
- `200` — Stream opened successfully
- `400` — Invalid request body/context
- `401` — Missing/invalid authentication
- `404` — Agent not found
- `500` — Internal server error
Development Notes
- Default local base URL: `http://localhost:3002`
- Endpoint mounting in the server:
  - `/run/api/chat` → Vercel data stream (this page)
  - `/run/api/executions` → Durable execution endpoints
  - `/run/v1/mcp` → MCP JSON-RPC endpoint

To test quickly without a UI, use `curl -N` or a tool that supports Server-Sent Events.