Introduction
Voice infrastructure for AI applications
Why chans?
You focus on your AI. We handle the voice.
Building voice into AI apps is hard – WebRTC, speech recognition, text-to-speech, latency optimization. chans handles all of it so you can ship faster.
- Simple – One SDK, voice AI in 5 minutes. No WebRTC expertise needed.
- Flexible – Use our built-in AI or bring your own. Swap providers via config, not code.
- Your data – Export anytime. Self-host when you're ready. No lock-in.
- Open source – Transparent and auditable. See exactly what runs your voice stack.
- Production-ready – Memory, error handling, MCP tools included.
Start Fast, Graduate to Self-Hosted
Managed Service (zero setup) → Self-Hosted (full control). Same SDK, same code.
Prototype quickly on our managed service. When you need full control, self-host on your infrastructure – no rewrites, no migration headaches.
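In practice, graduating can be as small as pointing the SDK at your own deployment. A minimal sketch of that idea follows; the option names (`baseUrl`) are illustrative assumptions, not the documented API:

```typescript
// Hypothetical config sketch: the only change when graduating to
// self-hosted is where the SDK points. "baseUrl" is an assumed
// option name, shown only to illustrate the "same code" claim.
const managedConfig = {
  agentToken: "agt_xxx", // same token in both setups
};

const selfHostedConfig = {
  ...managedConfig, // everything else stays identical
  baseUrl: "https://voice.internal.example.com", // your own deployment
};
```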
How It Works
```tsx
import { useVoiceAgent } from "@ai-chans/sdk-react"

function App() {
  const { state, connect } = useVoiceAgent({ agentToken: "agt_xxx" })
  return (
    <button onClick={connect}>
      {state === "idle" ? "Talk" : "Listening..."}
    </button>
  )
}
```
That's it. Your users can now talk to your AI.
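For richer UI feedback you might map the hook's state to button copy in one place. A small sketch, assuming the SDK exposes extra states such as `"connecting"` and `"listening"` (only `"idle"` appears in the example above, so the rest are assumptions):

```typescript
// Sketch: map an assumed agent-state union to UI labels.
// Only "idle" is confirmed by the example above; the other
// state names are hypothetical.
type AgentState = "idle" | "connecting" | "listening";

function buttonLabel(state: AgentState): string {
  switch (state) {
    case "idle":
      return "Talk";
    case "connecting":
      return "Connecting...";
    case "listening":
      return "Listening...";
  }
}
```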
Two Ways to Build
Enhanced Mode
Get started instantly with built-in AI:
- Create an agent in the dashboard
- Set a system prompt
- Connect with the SDK
No backend required. Conversation memory and MCP tools included.
Passthrough Mode
Keep your existing LLM pipeline:
- We transcribe speech → send to your webhook
- You process with your AI
- Return text → we speak it back
Full control over the AI. We just handle voice transport.
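The three steps above can be sketched as a single webhook handler: transcript in, spoken text out. The payload and response shapes here are assumptions inferred from the flow, not the documented wire format, and `runMyLLM` is a stand-in for your pipeline:

```typescript
// Hypothetical Passthrough Mode webhook. chans POSTs a transcript,
// you return the text it should speak back. Field names are assumed.
interface WebhookRequest {
  transcript: string; // what the user said, transcribed by chans
  sessionId: string;  // hypothetical: lets you keep per-caller context
}

interface WebhookResponse {
  text: string; // what chans speaks back to the user
}

// Stand-in for your existing LLM pipeline.
async function runMyLLM(prompt: string): Promise<string> {
  return `You said: ${prompt}`;
}

async function handleVoiceWebhook(
  req: WebhookRequest
): Promise<WebhookResponse> {
  const reply = await runMyLLM(req.transcript);
  return { text: reply };
}
```

Wire this into whatever HTTP framework you already run; the handler itself stays framework-agnostic.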