Webhook API
API reference for passthrough mode webhooks
Overview
In passthrough mode, chans sends transcribed speech to your webhook and expects a text response.
Request Format
```
POST /your-webhook-url
```
chans sends a POST request with the following JSON body:
```json
{
  "type": "transcription_complete",
  "session_id": "room-abc123",
  "user_id": "end-user-123",
  "timestamp": "2024-01-15T10:30:00.000000",
  "data": {
    "transcript": "What's my order status?",
    "context": null
  }
}
```
Fields
| Field | Type | Description |
|---|---|---|
| type | string | Always "transcription_complete" for passthrough calls |
| session_id | string | Unique session/room identifier |
| user_id | string \| null | End user identifier (from client connection) |
| timestamp | string | ISO 8601 timestamp |
| data.transcript | string | Transcribed user speech |
| data.context | string \| null | Optional RAG context (if enabled) |
Response Format
Your webhook must respond with JSON:
```json
{
  "response": "Your order #1234 is out for delivery!"
}
```
Response Fields
| Field | Type | Description |
|---|---|---|
| response | string | Text to be spoken back to the user |
Timeout
Your webhook has 30 seconds to respond by default. This is configurable per-agent.
If your webhook times out, the user will hear nothing for that turn, and the agent will continue listening.
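Because a timed-out webhook means a silent turn, it is often worth bounding your own LLM call at something below the 30-second window and returning fallback text instead. A minimal sketch using `asyncio.wait_for` (the `answer_within_budget` helper, the `call_llm` callable, and the fallback wording are all hypothetical):

```python
import asyncio

FALLBACK = "Sorry, I didn't catch that. Could you say that again?"

async def answer_within_budget(call_llm, transcript: str, budget_s: float = 25.0) -> str:
    """Bound the LLM call so the webhook replies inside chans's timeout.

    `call_llm` stands in for your own async LLM client. Answering with
    fallback text beats letting chans time out, which would leave the
    user hearing nothing for the turn.
    """
    try:
        return await asyncio.wait_for(call_llm(transcript), timeout=budget_s)
    except asyncio.TimeoutError:
        return FALLBACK
```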
Authentication
If you configure an API key in your agent settings, chans includes it in the request:
```
Authorization: Bearer your-api-key
```
Error Handling
If your webhook returns a non-2xx status code:
- The error is logged
- The user hears no response for that turn
- The agent continues listening for the next utterance
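Since a non-2xx response (or an unhandled exception) means the user hears silence, it is usually friendlier to catch failures and still return 200 with spoken fallback text. A sketch of that pattern; `safe_response`, the `handler` callable, and the apology wording are assumptions, not part of the API:

```python
def safe_response(handler, payload: dict) -> dict:
    """Wrap your handler so a transient failure still yields a spoken reply.

    `handler` is your own function mapping a request payload to reply
    text. Any exception becomes a polite fallback instead of a 500,
    so the turn is not silently dropped.
    """
    try:
        return {"response": handler(payload)}
    except Exception:
        # Log the error here as well, so failures remain visible.
        return {"response": "Sorry, something went wrong. Please try again."}
```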
Example Implementation
Node.js / Express
```javascript
app.post('/webhook', async (req, res) => {
  const { data, user_id, session_id } = req.body
  const { transcript } = data

  // Call your LLM
  const response = await yourLLM.chat(transcript, {
    userId: user_id,
    sessionId: session_id
  })

  res.json({ response: response.text })
})
```
Python / FastAPI
```python
@app.post("/webhook")
async def webhook(payload: dict):
    transcript = payload["data"]["transcript"]
    user_id = payload.get("user_id")

    # Call your LLM
    response = await your_llm.chat(transcript, user_id=user_id)

    return {"response": response}
```
Event Types
While the main passthrough call uses transcription_complete, chans can also send async events if you configure an events URL:
| Event Type | Description |
|---|---|
| transcription_complete | User speech transcribed (sync, expects response) |
| llm_response | Agent response generated |
| session_start | Voice session started |
| session_end | Voice session ended |