Voxta Server
Run Voxta locally on your machine with full control over models, voices, and integrations.
Voxta Server is the desktop runtime that orchestrates everything: speech recognition, language models, text-to-speech, characters, scenarios, and the integrations that tie them together. It runs locally on your machine and exposes:
- A web UI at http://127.0.0.1:5384/ for chatting and authoring.
- A WebSocket / SignalR API that Voxy, the VAM plugin, and your own integrations connect to.
- A plugin system that loads each AI service (OpenAI, Anthropic, ElevenLabs, llama.cpp, Vosk, and dozens more) as its own module.
You choose every part of the stack — the LLM, the voice, the transcription service, the integrations. Local, cloud, or a mix.
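Before wiring up your own integration, it helps to confirm the server is actually listening on the default address above. A minimal sketch in Python (the probe itself is a generic HTTP check, not an official Voxta API):

```python
import urllib.request
import urllib.error

# Default local address from the docs above.
VOXTA_URL = "http://127.0.0.1:5384/"

def is_voxta_running(url: str = VOXTA_URL, timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP at the Voxta address."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: server not running (or firewalled).
        return False

if __name__ == "__main__":
    print("Voxta Server reachable:", is_voxta_running())
```

If this prints `False`, start Voxta Server first; once it is up, the same address serves both the web UI and the API your integrations connect to.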
Get oriented
What is Voxta
One-page orientation for newcomers.
Install Voxta Server
Five minutes from download to a running server.
Your first conversation
From fresh install to talking with a character.
Use the app
Interface Guide
Every screen in the Voxta UI — menu, wizard, services, chat, diagnostics.
Studio
Build characters, scenarios, events, actions, memory books, scripts.
Pick your AI
Services catalog
Every LLM, voice, transcription, vision, search, and integration that plugs into Voxta.
Voxta Cloud (managed)
Skip the local setup — use our hosted backend instead.