HyMotion
Tencent's HY-Motion 1.0 text-to-motion model: generates 3D character animations from natural-language prompts.
The service uses HY-Motion 1.0 to generate SMPL-H character animations from text prompts. The generated motion data is consumed by the Voxta VAM plugin and replayed on the character in real time.
Marked experimental in the registry. Behavior and config may change.
Setup
Add the service
Manage Services → + Add Services → HyMotion → Add. Voxta installs the Python runtime and model weights automatically on first use; watch the Terminal for progress, as the initial download is sizeable.
Pick output format
The default output is SMPL-H motion data that the Voxta VAM plugin knows how to consume. No additional configuration is usually needed.
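For context on what SMPL-H motion data looks like, here is a minimal sketch of unpacking one frame. The function and variable names are assumptions for illustration, not the actual HyMotion output schema; the fixed fact is that SMPL-H parameterizes a frame as axis-angle rotations for 52 joints (one global orientation, 21 body joints, and 2 × 15 hand joints) plus a root translation.

```python
import numpy as np

# SMPL-H per-frame layout: 52 joints x 3 axis-angle values = 156 pose
# parameters, followed by a 3-value root translation (159 values total).
NUM_JOINTS = 52  # 1 global orient + 21 body + 2 x 15 hand joints

def split_frame(frame: np.ndarray):
    """Split a flat per-frame vector into pose and translation parts.

    Hypothetical helper for illustration; HyMotion's real output format
    may differ in ordering or packaging.
    """
    assert frame.shape == (NUM_JOINTS * 3 + 3,)
    pose = frame[: NUM_JOINTS * 3].reshape(NUM_JOINTS, 3)  # axis-angle per joint
    trans = frame[NUM_JOINTS * 3 :]                        # root translation (x, y, z)
    return pose, trans

pose, trans = split_frame(np.zeros(159))
print(pose.shape, trans.shape)  # (52, 3) (3,)
```

A motion clip is then just a sequence of such frames at a fixed frame rate, which is what the VAM plugin replays on the character.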
What it's for
- Driving full-body animation in VAM scenes from AI-generated prompts.
- Letting a scenario say "have the character walk to the window and look out" and getting actual motion instead of canned animations.
For routing scripted Timeline animations (rather than generative motion), see Routimator on the VAM side.