Glossary

VOXTA: A platform designed for creating and interacting with AI characters using natural language.

AI (Artificial Intelligence): Technology designed to simulate human intelligence. AI powers applications and services to perform tasks like learning, reasoning, problem-solving, and understanding language.

LLM (Large Language Model): AI models that process, understand, and generate human-like text. LLMs, such as OpenAI’s GPT series, are trained on vast datasets of human language.

GPT (Generative Pre-trained Transformer): A deep learning model designed for natural language understanding and generation. GPT models, developed by OpenAI, are pre-trained on large collections of text and then applied to tasks such as conversation and writing.

Llama/Llama v2: A Large Language Model (LLM) developed by Meta (formerly Facebook). It’s designed for understanding and generating human-like text, making it highly effective for applications in conversational AI, content creation, and more.

TTS (Text-to-Speech): Technology that converts written text into spoken words. In Voxta, TTS services provide the voice for AI characters, enabling them to “speak” responses generated by the language models.

STT (Speech-to-Text): Technology that converts spoken words into written text. Voxta utilizes STT services to interpret user speech, allowing for natural voice interactions with AI characters.

SFW (Safe For Work): Content that is appropriate to be viewed in professional or public settings. It doesn’t contain any offensive or adult material, so you wouldn’t have to worry about someone looking over your shoulder.

NSFW (Not Safe For Work): Content that isn’t suitable for viewing in professional or public settings due to its adult or offensive nature. It’s the stuff you wouldn’t want on your screen when there are people around who might be offended or when you’re at work.

ERP (Erotic Role-Play): Describes adult role-playing with Voxta.

VR (Virtual Reality): A simulated experience that can be similar to or completely different from the real world. VR typically requires a headset device and is used for gaming, training, education, and more.

AR (Augmented Reality): An enhanced view of the real world created by overlaying digital information on what you see. AR is used in smartphone and tablet apps to blend digital elements into real-world surroundings.

VAM (Virt-A-Mate): A 3D character creation and interaction application designed primarily for VR environments but also available on desktop. It allows users to create, customize, and interact with virtual characters in highly detailed and immersive 3D worlds, and is known for its advanced physics and lifelike animations.

Mocap (Motion Capture): A technology used to record the movement of objects or people. In Virt-A-Mate (VAM) and similar applications, mocaps create highly realistic animations of 3D characters by capturing the motion of real actors. Translating human movements into digital form allows virtual characters to move and behave in ways that closely mimic real-life actions.

Action Inference: This term refers to the process of predicting or deciding the next action based on the current context or data. In the realm of AI and software like Voxta, action inference involves the AI analyzing the conversation or situation and then determining the most appropriate response or action to take next. This can range from generating relevant text responses to executing specific commands or behaviors.
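As a tiny illustration (not Voxta’s actual mechanism), action inference can be sketched as rules that map the latest message to a next action; a real system would let the LLM make this decision from the full context:

```python
# Toy action-inference sketch: pick the next action from the latest
# message using simple keyword rules. The action names are made up
# for illustration; a real system would query an LLM instead.
def infer_action(message: str) -> str:
    text = message.lower()
    if "wave" in text:
        return "play_wave_animation"   # execute a specific behavior
    if "?" in text:
        return "answer_question"       # generate an informative reply
    return "reply_text"                # default: plain text response

print(infer_action("Can you wave at me?"))  # play_wave_animation
```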

Context: The specific information that LLMs use to generate responses, including conversation history and any provided background details. It helps ensure relevance and coherence in the AI’s replies. Due to size limitations, LLMs may “forget” older parts of the conversation beyond their immediate context window.
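Why older turns get “forgotten” can be sketched as trimming the history to fit a budget. This toy version counts words instead of real tokens, and the function name and limit are illustrative:

```python
# Sliding context window sketch: keep only the most recent messages
# that fit within a token budget, so older turns drop out.
def fit_context(messages: list[str], max_tokens: int) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(msg.split())         # crude token count: words
        if used + cost > max_tokens:
            break                       # this and older turns are "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

history = ["hi there", "how are you today", "tell me a story"]
print(fit_context(history, max_tokens=8))
```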

Token: A basic unit of text, such as a word, part of a word, or punctuation mark, used by LLMs to process and generate language. Tokens represent the input and output elements during model training and interaction. Example: In the word “unbelievable”, it might be split into three tokens as “un”, “believ”, “able”.
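The split above can be reproduced with a toy greedy tokenizer over a hand-picked vocabulary. Real tokenizers (e.g. byte-pair encoding) learn their vocabulary from data; this one exists only to show the idea:

```python
# Toy greedy tokenizer: match the longest known sub-word at each
# position. The vocabulary is hand-picked for this example.
VOCAB = ["un", "believ", "able", "cat", "s"]

def tokenize(word: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(word):
        match = None
        for piece in sorted(VOCAB, key=len, reverse=True):
            if word.startswith(piece, i):
                match = piece
                break
        if match is None:        # unknown character: emit it on its own
            match = word[i]
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
```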

SDK (Software Development Kit): A collection of software tools and libraries provided by hardware or software vendors to enable developers to create applications for specific platforms, devices, or frameworks.

HF (Hugging Face): A platform that provides tools for working with text-based AI, making it easier for people to use advanced technology without needing to start from scratch. It’s known for a big library of ready-to-use AI models that understand and generate language.

API (Application Programming Interface): A toolbox for software development. It allows different software programs to communicate with each other, giving developers access to a set of tools and data without needing to build everything from scratch. An API key acts as a digital keychain that identifies and authenticates the software making requests, helping to keep the service secure by controlling who can access it.

RAG (Retrieval-Augmented Generation): A method in AI that improves text creation by first finding related information from a database, then using this information to generate detailed and relevant text. It combines searching for facts with creative writing, making responses in chatbots and search engines more informative and precise.
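A minimal sketch of the two RAG steps, retrieve then generate: find the most relevant snippet in a small “database” by word overlap, then build a prompt grounded in it. The documents and the final LLM call are placeholders; real systems use vector search and an actual model:

```python
import re

# Tiny stand-in knowledge base (illustrative content only).
DOCS = [
    "Voxta uses text-to-speech so characters can speak.",
    "Tokens are the basic text units an LLM processes.",
    "Llama is an LLM developed by Meta.",
]

def words(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str) -> str:
    """Step 1: pick the document sharing the most words with the question."""
    return max(DOCS, key=lambda d: len(words(question) & words(d)))

def build_prompt(question: str) -> str:
    """Step 2: ground the generation step in the retrieved context."""
    return f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"

print(build_prompt("Who developed Llama?"))
```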

AGI (Artificial General Intelligence): A type of AI that can perform any intellectual task that a human can do. Unlike most AI, which handles specific tasks, AGI can learn, understand, and apply information across different areas, mimicking human intelligence. AGI remains a goal for future AI research.

RLHF (Reinforcement Learning from Human Feedback): A technique for fine-tuning LLMs in which human ratings or comparisons of model outputs guide further training, aligning the model’s responses with human preferences.

GPTQ (GPT-Quantization): A post-training quantization technique used to compress and optimize large language models like GPT, Llama or Llama v2 for efficient deployment on devices with limited computational resources.
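The core idea behind quantization, much simpler than GPTQ itself (which additionally corrects rounding error weight by weight), can be sketched as mapping float weights onto a small number of integer levels:

```python
# Minimal uniform-quantization sketch: store weights as 4-bit integers
# plus a scale/offset, then reconstruct approximate floats on load.
def quantize(weights: list[float], bits: int = 4):
    levels = 2 ** bits - 1                      # 15 levels for 4 bits
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo                          # compact representation

def dequantize(q: list[int], scale: float, lo: float) -> list[float]:
    return [v * scale + lo for v in q]           # approximate originals

q, scale, lo = quantize([-0.5, 0.1, 0.7])
print(dequantize(q, scale, lo))                  # close to the inputs
```

Each reconstructed weight is off by at most half a quantization step, which is the accuracy/size trade-off these methods manage.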

MoE (Mixture of Experts): A machine learning model architecture that combines multiple smaller sub-models or “experts”, each specializing in different tasks or data distributions. A gating network routes inputs to the most relevant expert(s), whose outputs are then combined to produce the final prediction. Because only a subset of experts runs for each input, this improves efficiency and performance.
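A toy top-1 routing sketch: a gating function scores each expert for the input and only the best-scoring one runs. Real MoE layers use learned gating networks inside a neural network; the expert names and keyword scores here are made up for illustration:

```python
# Toy Mixture-of-Experts: two "experts" (plain functions) and a
# hand-written gate standing in for a learned gating network.
EXPERTS = {
    "math": lambda x: f"math expert handles {x!r}",
    "code": lambda x: f"code expert handles {x!r}",
}

KEYWORDS = {"math": ["sum", "integral"], "code": ["def", "bug"]}

def gate(x: str) -> str:
    """Score each expert by keyword hits and pick the best (top-1)."""
    scores = {name: sum(kw in x for kw in kws)
              for name, kws in KEYWORDS.items()}
    return max(scores, key=scores.get)

def moe(x: str) -> str:
    return EXPERTS[gate(x)](x)   # only the chosen expert executes

print(moe("fix this bug in my def"))
```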