These options are for developers only and might help debug animation-related issues.
* Enable Inverse Kinematics: this can be toggled to disable IK for the avatar.
* Enable Anim Pre and Post Rotations: this option causes FBX pre-rotations to be taken from the source avatar animations, instead of the current default, which is to take them from the source model.
This only affects FBX files loaded by the animation system; it does not affect changing model orientations via JavaScript.
Hand animations now have 5 states:
* idle
* open
* grasp
* point
* farGrasp
The handControllerGrab.js script now chooses one of these five animations, based on the state of the HandController object.
Also, removed the hand trigger AnimVar setting from the C++ Rig class.
This makes them accessible for controller mapping and to JavaScript.
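For illustration only, here is a minimal sketch of how a script in the style of handControllerGrab.js might choose between the five states from controller input. The helper name, thresholds, and the distant-target flag are hypothetical; only the state names come from this change, and `Controller.getValue(Controller.Standard.RT)` is assumed to be available for reading the right trigger.

```javascript
// Hypothetical sketch: map trigger input to one of the five hand animation
// states. Thresholds and the hasDistantTarget flag are illustrative only.
var TRIGGER_GRASP_THRESHOLD = 0.9;
var TRIGGER_POINT_THRESHOLD = 0.1;

function chooseHandAnimState(triggerValue, hasDistantTarget) {
    if (triggerValue >= TRIGGER_GRASP_THRESHOLD) {
        // fully squeezed: grasp nearby, farGrasp when the target is distant
        return hasDistantTarget ? "farGrasp" : "grasp";
    } else if (triggerValue > TRIGGER_POINT_THRESHOLD) {
        // partially squeezed: extend the index finger
        return "point";
    } else if (hasDistantTarget) {
        // hovering over something without squeezing
        return "open";
    }
    return "idle";
}

// Example: sample the right trigger once and print the chosen state.
var trigger = Controller.getValue(Controller.Standard.RT);
print(chooseHandAnimState(trigger, false));
```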
Added 'neuronAvatar.js' as an example of reading joints from the Neuron and setting them
on the avatar. NOTE: the rotations are currently in the wrong coordinate frame.
This new class is a subclass of Model; it overrides updateClusterMatrices so it will pull
the actual joint matrices from a different rig override.
For the avatar soft-attachment system, this override will be the Avatar::_skeletonModel rig.
This gives an avatar the ability to "wear" non-rigid attachments, such as clothing.
New JavaScript API to get the avatar's default pose.
MyAvatar.getDefaultJointRotation(index);
MyAvatar.getDefaultJointTranslation(index);
See `examples/tPose.js` for example usage
These can be called from script with minimal blocking,
because they inspect a copy of the joint values from the Rig, which is updated atomically.
This copy occurs in Rig::updateAnimations().
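Beyond `examples/tPose.js`, here is a small sketch of reading the default pose from script. `getDefaultJointRotation`/`getDefaultJointTranslation` come from this change, while `MyAvatar.getJointIndex` and `MyAvatar.setJointRotation` are assumed to behave like the existing joint accessors; the joint name is just an example.

```javascript
// Sketch: snap one joint back to its default (bind-pose) orientation.
var jointIndex = MyAvatar.getJointIndex("LeftForeArm"); // example joint name
if (jointIndex >= 0) {
    var defaultRotation = MyAvatar.getDefaultJointRotation(jointIndex);
    var defaultTranslation = MyAvatar.getDefaultJointTranslation(jointIndex);
    MyAvatar.setJointRotation(jointIndex, defaultRotation);
    print("default translation (meters): " + JSON.stringify(defaultTranslation));
}
```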
This Y_180 flip is defined in skeletonModel, not in the Rig.
This is important if we wish to use the Rig for both Avatars (180 flip) and Entity models (no 180 flip).
We can hide this 180 flip from script, if we wish, by including it in all the accessors to and from
MyAvatar -> skeletonModel -> Rig.
Added Quaternions::Y_180 to GLMHelpers.
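For reference, the flip is just a 180 degree rotation about the avatar's Y (up) axis. Here is a sketch of what it looks like from script, using the standard Quat helpers; which side of the multiply it lands on depends on the convention chosen, so treat this as illustrative.

```javascript
// Sketch: the Y_180 flip as a script-side quaternion.
var Y_180 = Quat.fromPitchYawRollDegrees(0, 180, 0);

// Converting a rig-frame rotation into the avatar frame (or back) is just a
// multiply by the flip; hiding this inside the MyAvatar accessors means
// scripts never have to apply it themselves.
function rigToAvatarFrame(rigRotation) {
    return Quat.multiply(Y_180, rigRotation);
}
```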
This makes it much simpler for code outside of the Rig to manipulate AnimVars.
* Removed mat4 type from AnimVars.
* AnimVariantMap now has a _rigToGeometryTransform matrix used to transform positions and rotations into the correct coordinate frame.
* Moved the AnimPose code that extracts a quat from a scaled matrix into GLMHelpers.
* Moved JointData into the shared library.
* Added methods to the Rig to copy into and out of JointData.
* JointData translations must be in meters so the fixed-point compression won't overflow; it also keeps the wire format consistent (see the sketch after this list).
* No longer normalizing scale in AnimSkeleton and AnimClip. This means the graph is animating in 'geometry' coordinates before unit scale is even applied, which is necessary to work properly with both Avatar-based models and ModelEntity-based models.
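To make the overflow concern concrete, here is a toy sketch of 16-bit signed fixed-point packing. The 1/4096 m resolution is an assumption for illustration; it is not the actual JointData wire format.

```javascript
// Toy sketch only: 16-bit signed fixed point with a hypothetical 1/4096 m
// resolution. The real JointData compression format is not reproduced here.
var FIXED_POINT_SCALE = 4096;

function packFixed16(meters) {
    var raw = Math.round(meters * FIXED_POINT_SCALE);
    // The 16-bit signed range is -32768..32767, roughly +/- 8 m at this
    // resolution -- fine for joint translations expressed in meters, but a
    // value mistakenly left in centimeters (x100) overflows immediately.
    if (raw > 32767 || raw < -32768) {
        print("fixed-point overflow for " + meters);
    }
    return raw;
}

packFixed16(0.35);  // a typical joint translation in meters: fits easily
packFixed16(35.0);  // the same translation left in centimeters: overflows
```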
Many things are broken.
* debug rendering (translations are x100)
* IK hand targets
* follow cam
* I did not even dare to try HMD mode
Changed default eye position to 1.9 meters because
the hifi_team avatars are 2.0 meters tall.
Also, prevent array access with negative indices when eye bones are missing.
ಠ_ಠ
Except for SkeletalModel::computeBounds(), JointStates are now completely
encapsulated by the Rig. Now we can start using AnimPoses instead, in
parallel with the JointState implementation. Then we can assert that
they are identical, before removing JointStates.
This check-in has many comments with the AJT tag.
Each one of these cases will need to be revisited and fixed.
In particular, // AJT: LEGACY will be used to enclose all code
in the Rig which manipulates the _jointState QVector.
* Deleted the AnimationHandle class.
* Removed enableAnimGraph and enableRigAnimations from Menu.
* Removed *some* references to the old IK system, but it is still used when computing collision bounding volumes.
Also, Rig::updateAnimations() now occurs after
Rig::updateFromHeadParameters() and Rig::updateFromHandParameters().
This should remove a frame of lag for head and hand IK targets.
Rig::updateFromEyeParameters() still occurs after Rig::updateAnimations(),
but now the eye JointStates are re-computed; this is the actual
fix for the local eye tracking issue.
While in the HMD, updates can occur with very small deltaTime values.
This makes the position-delta method of computing a velocity very
susceptible to noise and precision errors.
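A small numeric sketch of that fragility (the numbers are made up for illustration):

```javascript
// Sketch: dividing a tiny, noisy position delta by a tiny deltaTime amplifies
// the noise. All numbers below are illustrative only.
function velocityFromDelta(previousPosition, currentPosition, deltaTime) {
    return {
        x: (currentPosition.x - previousPosition.x) / deltaTime,
        y: (currentPosition.y - previousPosition.y) / deltaTime,
        z: (currentPosition.z - previousPosition.z) / deltaTime
    };
}

// Half a millimeter of tracking jitter over a 1 ms frame reads as 0.5 m/s of
// "motion", even though the avatar has not really moved.
var v = velocityFromDelta({ x: 0, y: 0, z: 0 },
                          { x: 0.0005, y: 0, z: 0 },
                          0.001);
print(v.x); // 0.5
```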
MyAvatar: refactored updateFromHMDSensorMatrix() a bit by splitting it into several methods, because
it was getting quite large and becoming hard to follow.
* beginStraighteningLean() - can be called when we would like to trigger a re-centering action.
* shouldBeginStraighteningLean() - contains some of the logic to decide if we should begin a re-centering action;
for now it encapsulates the capsule check.
* processStraighteningLean() - performs the actual re-centering calculation.
New code was added to MyAvatar::updateFromHMDSensorMatrix() to trigger re-centering when the avatar speed rises
above a threshold.
Secondly, the Rig::computeMotionAnimationState() state machine for animGraph now has a state-change hysteresis
of 100 ms. This hysteresis should help smooth over two issues:
1) when the delta position is 0 because the physics timestep was not evaluated, and
2) during re-centering due to desired motion, when the avatar velocity can fluctuate, causing undesired animation state fluctuation.
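A minimal sketch of that kind of state-change hysteresis: a new desired state only takes effect after it has persisted for 100 ms. The names and structure here are illustrative, not the actual Rig::computeMotionAnimationState() code.

```javascript
// Illustrative hysteresis: only commit to a new animation state after the
// desired state has been stable for STATE_CHANGE_HYSTERESIS seconds.
var STATE_CHANGE_HYSTERESIS = 0.1; // 100 ms

var currentState = "idle";
var candidateState = "idle";
var candidateTimer = 0;

function updateMotionState(desiredState, deltaTime) {
    if (desiredState === currentState) {
        // brief flickers back to the current state reset the candidate
        candidateState = currentState;
        candidateTimer = 0;
    } else if (desiredState === candidateState) {
        candidateTimer += deltaTime;
        if (candidateTimer >= STATE_CHANGE_HYSTERESIS) {
            currentState = candidateState;
            candidateTimer = 0;
        }
    } else {
        // a new candidate state: start timing it from scratch
        candidateState = desiredState;
        candidateTimer = deltaTime;
    }
    return currentState;
}
```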