Glossary
Animaze Apps
The Animaze platform is designed as an interconnected set of multi-platform apps. At the moment, the Animaze Apps are the iOS Animaze Avatar app and the upcoming Windows desktop app.
Animaze Editor
The Editor is the companion tool that converts/transcodes your assets into an Animaze-friendly format (the converted files are also called Animaze resources). In other words, the avatar is modeled outside the Animaze Editor, in a separate content-creation pipeline; once the avatar content is modeled, the Editor is used to bring it into the Animaze Apps. The Editor also provides the means to configure and customize avatars and related content, such as icons, names, descriptions, materials, physics, particle systems, sounds, backgrounds, lights, etc.
Animaze Engine
The core functionality shared by the Animaze Editor and the Animaze Apps.
Assets
The source files of a model (whether 2D or 3D). These source files are imported into the Animaze Editor and converted to an Animaze-friendly format that can be stored and loaded efficiently at runtime. The Animaze Editor creates an Assets folder where it keeps all imported source files for various other operations.
Blendshape
Blendshape animation (also called morph target animation, per-vertex animation, shape interpolation, or shape keys) is a 3D computer animation method used together with skeletal animation. In morph target animation, a "deformed" version of a mesh is stored as a series of vertex positions; in each keyframe of an animation, the vertices are interpolated between these stored positions. The "morph target" is a deformed version of a shape. When applied to a human face, for example, the head is first modeled with a neutral expression, and a "target deformation" is then created for each other expression. When the face is animated, the animator can smoothly morph (or "blend") between the base shape and one or several morph targets. Blendshapes give finer control over expressions, but because they are linear interpolations they are limited in terms of movement; animations that cannot be achieved with blendshapes use skeletal animation instead.
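The per-vertex interpolation described above can be sketched in a few lines. This is a minimal illustration only, not Animaze's actual implementation, and the vertex data is made up:

```python
# Minimal blendshape (morph target) interpolation sketch.
# Each shape is a list of (x, y, z) vertex positions; blending is a
# linear interpolation from the neutral shape toward each target.

def blend(neutral, targets, weights):
    """Return vertices = neutral + sum(w * (target - neutral))."""
    result = [list(v) for v in neutral]
    for target, w in zip(targets, weights):
        for i, (base, morph) in enumerate(zip(neutral, target)):
            for axis in range(3):
                result[i][axis] += w * (morph[axis] - base[axis])
    return result

# Toy data: a single vertex, neutral at the origin, a "smile" target at x=1.
neutral = [(0.0, 0.0, 0.0)]
smile = [(1.0, 0.0, 0.0)]
print(blend(neutral, [smile], [0.5]))  # vertex moves halfway toward the target
```

Because the interpolation is linear per vertex, intermediate shapes always lie on a straight line between the stored positions, which is exactly the movement limitation mentioned above.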
Cubism
Cubism is the technology that drives Live2D models. It is integrated into the Animaze Editor and the Animaze Apps, enabling both to render and animate 2D characters.
Cubism Editor
Cubism Editor is the software in which you create Live2D models; the Animaze Editor then converts/translates those models into Animaze-friendly avatars and sends them to the Animaze Apps.
Resources
Animaze-friendly runtime files specifically created to be stored and loaded efficiently. Resources include avatars, props, backgrounds, etc.
Retargeting
Animaze Apps use facial-tracking technology to detect and track the user's facial expressions. Retargeting is the mapping system that takes the raw tracking data, processes and interprets it, and then maps it to the avatar's animation inputs.
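One step of such a mapping can be sketched as follows. The signal names, calibration ranges, and smoothing scheme here are hypothetical, chosen only to illustrate the raw-data-to-animation-input idea, not Animaze's actual pipeline:

```python
# Sketch of a retargeting step: raw tracker values (signal names are
# made up) are normalized against a per-user calibration range, clamped,
# smoothed to reduce jitter, and emitted as avatar animation inputs.

def retarget(raw, calibration, smoothing=0.5, previous=None):
    """Map raw tracking data to avatar animation inputs in [0, 1]."""
    previous = previous or {}
    outputs = {}
    for signal, value in raw.items():
        lo, hi = calibration.get(signal, (0.0, 1.0))
        # Normalize into the calibrated range, then clamp to [0, 1].
        normalized = (value - lo) / (hi - lo) if hi != lo else 0.0
        normalized = max(0.0, min(1.0, normalized))
        # Exponential smoothing toward the new value.
        prev = previous.get(signal, normalized)
        outputs[signal] = prev + smoothing * (normalized - prev)
    return outputs

raw = {"jaw_open": 0.6, "brow_up": 0.1}
calibration = {"jaw_open": (0.2, 0.8), "brow_up": (0.0, 0.5)}
print(retarget(raw, calibration))
```

The calibration step is what lets the same avatar work for different faces: each user's tracker range is remapped onto the same normalized animation inputs.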
Shader
In Animaze terminology, the Shader is the GPU program that renders an item, while the Material is the setup configuration for that Shader, containing information such as textures and the various factors or float values used by the Animaze Shader.
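The shader/material split can be sketched as data versus program. The class names, uniform names, and texture paths below are illustrative, not Animaze's actual API:

```python
# Sketch of the shader/material relationship: the shader is the GPU
# program; the material is just data (textures and float parameters)
# that is bound to that program before drawing.

from dataclasses import dataclass, field

@dataclass
class Shader:
    name: str       # e.g. a toon-style shader program on the GPU
    uniforms: list  # float parameter names the program expects

@dataclass
class Material:
    shader: Shader
    textures: dict = field(default_factory=dict)
    floats: dict = field(default_factory=dict)

    def bind(self):
        """Return the uniform values to upload before drawing;
        unspecified parameters fall back to 0.0."""
        return {name: self.floats.get(name, 0.0)
                for name in self.shader.uniforms}

toon = Shader("toon", uniforms=["rim_strength", "outline_width"])
skin = Material(toon,
                textures={"albedo": "skin_albedo.png"},
                floats={"rim_strength": 0.35})
print(skin.bind())  # {'rim_strength': 0.35, 'outline_width': 0.0}
```

This is why many materials can share one shader: each material only supplies different textures and float values to the same GPU program.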
Skeletal animation (joint-based)
Skeletal animation is a technique in computer animation in which a character (or other articulated object) is represented in two parts: a surface representation used to draw the character (the skin or mesh) and a hierarchical set of interconnected bones (the skeleton or rig) used to animate (pose and keyframe) the mesh. While this technique is often used to animate humans, or more generally for organic modeling, it only serves to make the animation process more intuitive; the same technique can be used to control the deformation of any object, such as a door, a spoon, a building, or a galaxy. When the animated object is more general than a humanoid character, the set of bones may not be hierarchical or interconnected, but simply represents a higher-level description of the motion of the part of the mesh or skin it influences. The technique works by constructing a series of "bones", a process sometimes referred to as rigging. Each bone has a three-dimensional transformation from the default bind pose (which includes its position, scale, and orientation) and an optional parent bone; the bones therefore form a hierarchy. The full transform of a child bone is the product of its parent's transform and its own transform.
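The parent-child transform rule stated above can be sketched with a tiny hierarchy. To keep the example short it composes translations only; real engines use full 4x4 matrices, and the bone names here are made up:

```python
# Sketch of the bone-hierarchy rule: a child's full transform is the
# composition of its parent's full transform and its own local one.
# Translation-only (x, y) offsets stand in for full 4x4 matrices.

class Bone:
    def __init__(self, name, local, parent=None):
        self.name = name
        self.local = local    # local offset (x, y) relative to the parent
        self.parent = parent  # None for the root bone

    def world(self):
        """Compose transforms up the hierarchy (translation-only here)."""
        if self.parent is None:
            return self.local
        px, py = self.parent.world()
        lx, ly = self.local
        return (px + lx, py + ly)

root = Bone("hips", (0.0, 1.0))
spine = Bone("spine", (0.0, 0.5), parent=root)
head = Bone("head", (0.0, 0.6), parent=spine)
print(head.world())  # (0.0, 2.1): offsets accumulate down the chain
```

Moving a parent bone (e.g. the hips) automatically moves every descendant, which is what makes posing a rigged character intuitive.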
Special Actions
Special Actions are animations (for Live2D models, *.motion3.json files) that override the avatar's current idle motion. Because these animations run only once per trigger, the recommendation is to start from the idle pose and end in the idle pose. Special Actions aren't triggered by the retargeting system; rather, the app user triggers them via a keybind or button input, depending on the Animaze App implementation. Think of Special Actions as dance moves.
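The run-once override behavior can be sketched as a small state machine. The class, state names, and timing are hypothetical, for illustration only:

```python
# Sketch of a one-shot Special Action: it interrupts the idle motion,
# plays for its duration, then control returns to idle automatically.

class AvatarAnimator:
    def __init__(self):
        self.state = "idle"
        self.remaining = 0.0

    def trigger_special_action(self, duration):
        """Start a one-shot action (e.g. a dance move) over idle."""
        self.state = "special_action"
        self.remaining = duration

    def update(self, dt):
        """Advance the clock by dt seconds; fall back to idle when done."""
        if self.state == "special_action":
            self.remaining -= dt
            if self.remaining <= 0.0:
                self.state = "idle"
        return self.state

animator = AvatarAnimator()
animator.trigger_special_action(duration=1.0)
print(animator.update(0.5))  # still playing: special_action
print(animator.update(0.6))  # duration elapsed: back to idle
```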
Special Poses
Special Poses are single-frame animation poses (for Live2D models, *.exp3.json files) that are added to and/or override the current idle pose. Special Poses are toggleable, and more than one can be active at a time. Special Poses aren't triggered by the retargeting system; rather, the app user toggles them on or off through keybinds or button inputs, depending on the Animaze App implementation.
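The toggle-and-layer behavior can be sketched as a set of active poses on top of idle. The pose names are made up for illustration:

```python
# Sketch of toggleable Special Poses: several can be active at once,
# and each is layered on top of the always-present idle pose.

class PoseStack:
    def __init__(self):
        self.active = set()

    def toggle(self, pose):
        """Flip a pose on or off; multiple poses may be active together."""
        if pose in self.active:
            self.active.remove(pose)
        else:
            self.active.add(pose)

    def current(self):
        """The idle pose plus every active special pose."""
        return ["idle"] + sorted(self.active)

poses = PoseStack()
poses.toggle("blush")
poses.toggle("tears")
print(poses.current())  # ['idle', 'blush', 'tears']
poses.toggle("blush")
print(poses.current())  # ['idle', 'tears']
```

Contrast this with Special Actions above: poses persist until toggled off, whereas actions run once and end on their own.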
Subsurface scattering - SSS
Subsurface scattering simulates how light penetrates a translucent surface (a grape, for instance), is absorbed and scattered inside it, and exits the surface at a different location. When light rays hit a surface, several things occur at once. Some of the light is reflected off the surface, producing specular light. With materials that have some degree of translucency, however, some of the light rays are absorbed into the surface. Once inside, they scatter around and exit the surface at different locations, producing subsurface scattering.