Voxta Server

Run Voxta locally on your machine with full control over models, voices, and integrations.

Voxta Server is the desktop runtime that orchestrates everything: speech recognition, language models, text-to-speech, characters, scenarios, and the integrations that tie them together. It runs locally on your machine and exposes:

  • A web UI at http://127.0.0.1:5384/ for chatting and authoring.
  • A WebSocket / SignalR API that Voxy, the VAM plugin, and your own integrations connect to.
  • A plugin system that loads each AI service (OpenAI, Anthropic, ElevenLabs, llama.cpp, Vosk, and dozens more) as its own module.

You choose every part of the stack: the LLM, the voice, the transcription service, and the integrations. Run everything locally, everything in the cloud, or any mix of the two.
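Since the server exposes its web UI and API on a local port, an integration's first step is usually checking that a Voxta Server instance is reachable. The sketch below is not part of Voxta; it is a minimal, hypothetical helper that probes the default address (http://127.0.0.1:5384/) mentioned above using only the Python standard library.

```python
import urllib.request
import urllib.error

def voxta_is_running(base_url="http://127.0.0.1:5384/", timeout=2.0):
    """Return True if something answers HTTP at base_url.

    This only confirms a server responds on the default Voxta port;
    it does not verify the Voxta API itself. The URL and timeout are
    illustrative defaults, not part of any official Voxta client.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # Treat any 2xx/3xx response as "up".
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, etc.
        return False
```

A custom integration might call this once at startup and prompt the user to launch Voxta Server if it returns `False`, before opening a WebSocket / SignalR connection.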

Get oriented

Use the app

Pick your AI

Reference & support
