Add Voxta to a scene

Wire the Voxta plugin into a VAM scene from scratch, with lipsync, eyes, and AI-driven animations.

This guide walks through adding Voxta to a VAM scene you're building yourself — not loading one that already has Voxta wired in.

Prerequisites

Before you start, make sure:

  • Your Voxta server is installed and running.
  • You've completed the security permissions step so VAM allows the plugin to connect.

Add the plugin to a Person atom

Open or create a scene

Start VAM. Load whatever scene you want to build on, or start from a blank scene.

Add the Voxta plugin

  1. Open the menu (U).
  2. Turn on Edit Mode (E).
  3. Select the Person atom (N).
  4. Go to the Plugins tab.
  5. Click Add Plugin.
  6. Browse to AcidBubbles.Voxta.VERSION → select Voxta.cslist.

Open the plugin's Custom UI

In the same Plugins tab, click Open Custom UI next to the Voxta plugin.

Verify the connection

The Connection tab should show Connected. If it doesn't, check that your Voxta server is running and that you completed the security permissions step.

Pick a character and scenario

In the Chat tab of the plugin UI, select the character you want this Person atom to be. Optionally select a scenario.

Save the scene

Save the scene normally — the plugin and its character/scenario selection are persisted with the scene.

Make the character feel alive

Out of the box, the character can speak but won't move its mouth, eyes, or body. A few additions go a long way:

Lipsync

Enable VAM's built-in lipsync on the Person atom:

Person atom → Control → Auto Behaviors → Lip Sync → Enabled

Eye movement

Add VAM's Glance plugin (or any similar look-at plugin) to the Person atom for naturalistic eye movement. This is what makes the character feel like it's actually looking at you instead of through you.

Idle animations

The character is more believable when it idles, breathes, and shifts weight. Use Timeline to author idle animations and trigger them via Voxta's State storable (it has values like idle, thinking, listening, speaking — you can play different animations for each).
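The state-to-animation idea above is a plain lookup. The sketch below is illustrative only: the state names (idle, thinking, listening, speaking) come from Voxta's State storable, but the animation names and the helper function are hypothetical, not part of the plugin.

```python
# Hypothetical mapping from Voxta State values to Timeline animation names.
# Only the state names come from the Voxta docs; the animation names are
# placeholders you would replace with your own Timeline animations.
STATE_ANIMATIONS = {
    "idle": "Idle Loop",
    "thinking": "Thinking Fidget",
    "listening": "Attentive Pose",
    "speaking": "Talk Gestures",
}

def animation_for_state(state: str) -> str:
    # Unknown or missing states fall back to the idle animation,
    # so the character never freezes on an unexpected state value.
    return STATE_ANIMATIONS.get(state, STATE_ANIMATIONS["idle"])
```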

Driving the scene from Voxta

There are two routes for "when the AI does something, make the scene react":

Route 1 — App Triggers

Use Voxta Actions + App Triggers to invoke VAM storables directly. Simple, no extra plugins, runs entirely from your Voxta scenario.

See the App Triggers page for the patterns.

Route 2 — Action List Manager

If you'd rather wire actions inside VAM with no Voxta-side scripting at all, use the Action List Manager companion plugin. You define a list of named actions in VAM, and the Voxta plugin can trigger them by name from your scenario.
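The pattern behind a named-action list can be sketched in a few lines. This is not Action List Manager's actual API, only an illustration of the idea: actions are registered under names inside the scene, and a scenario triggers them by name.

```python
from typing import Callable, Dict

# Illustrative sketch of the named-action pattern; the class and method
# names are made up and do not reflect Action List Manager's real API.
class ActionList:
    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[], None]] = {}

    def register(self, name: str, action: Callable[[], None]) -> None:
        # Define an action in the scene under a name the scenario can use.
        self._actions[name] = action

    def trigger(self, name: str) -> bool:
        # Triggering an unknown name is a no-op rather than an error.
        action = self._actions.get(name)
        if action is None:
            return False
        action()
        return True
```

Treating an unregistered name as a no-op keeps the scene robust when a scenario references an action you haven't wired up yet.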

For complex multi-animation flows where one action transitions through many intermediate states (sitting → standing → walking → bathroom), use Routimator — it does the pathfinding for you.

Telling the AI what's happening in the scene

You can flow information back into Voxta so the AI knows the scene state:

  • Contexts — durable scene facts ("the character is standing", "the lights are off"). Voxta injects these into the prompt.
  • Flags — boolean state bits the AI can read and write. Useful for "is the character sitting?" style logic.
  • TriggerMessage — the plugin can send /event {{ user }} touched {{ char }}'s shoulder directly to the chat, so the AI reacts to scene events.

See Studio → Contexts and Studio → Flags.
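The TriggerMessage template above has its placeholders filled in before the line reaches the chat. A minimal sketch of that substitution, assuming a hypothetical render helper (the {{ user }} and {{ char }} placeholders are from the docs; the function and names are illustrative):

```python
# Hypothetical helper: fills the {{ user }} / {{ char }} placeholders
# in a TriggerMessage template before it is sent to the chat.
def render_event(template: str, user: str, char: str) -> str:
    return template.replace("{{ user }}", user).replace("{{ char }}", char)

message = render_event("/event {{ user }} touched {{ char }}'s shoulder",
                       user="Alex", char="Mia")
# → "/event Alex touched Mia's shoulder"
```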
