* Removed MyAvatar.reset() access from JavaScript.
* Added HMD.centerUI() to JavaScript, which can be used to reset the 3D UI sphere around the current HMD orientation.
* Added MyAvatar.clearIKJointLimitHistory(), which can be used to reset any remembered IK joint limit history.
* Added MyAvatar.centerBody(), which can be used to instantly re-orient the avatar's body so that the hips and toes
face the same direction as the current HMD orientation.
away.js now uses the new APIs above instead of MyAvatar.reset(); a sketch of that usage follows below.
Also added a way to disable the 'room-scale' avatar re-centering code.
Disabling this can prevent sliding when the avatar is supposed to be sitting or mounted on a stationary object.
Also, removed a bunch of old, unused leaning and torso-twisting code.
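A minimal sketch of how a script like away.js might combine the new APIs to replace the removed MyAvatar.reset(); the helper function name here is illustrative, not from the source:

```javascript
// Illustrative helper combining the new APIs in place of MyAvatar.reset().
function recenterAvatarAndUI() {
    MyAvatar.clearIKJointLimitHistory(); // forget any remembered IK joint limits
    MyAvatar.centerBody();               // snap hips and toes to the current HMD facing
    HMD.centerUI();                      // re-center the 3D UI sphere on the HMD orientation
}
```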
* Bug fix for eye tracking in HMD: the "up" orientation of your eyes now matches your head.
* DebugDraw: added drawRay method (sketched from script after this list).
* Application: Renamed preRender to postUpdate
* AvatarManager: added postUpdate method that iterates over all avatars.
* MyAvatar: Renamed preRender to preDisplaySide
* MyAvatar: split preRender code into postUpdate and preDisplaySide.
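If drawRay is exposed to script the way other DebugDraw methods are, usage might look like the following; the exact binding and color format are assumptions, not confirmed by this change:

```javascript
// Hypothetical script usage: draw a red ray one meter up from the avatar.
var start = MyAvatar.position;
var end = Vec3.sum(start, { x: 0, y: 1, z: 0 });
DebugDraw.drawRay(start, end, { x: 1, y: 0, z: 0, w: 1 }); // color format assumed
```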
* Removed "Show who is looking at me", "Render focus indicator" and "Render lookat target" debug draw.
* Split "Show Look At Vectors" into "Show My Look At Vectors" and "Show Other Look At Vectors", to make it easier to debug eye tracking.
* "Show Look at Vectors" now draws the right eye red and the left eye blue.
* Removed Avatar and MyAvatar renderBody
* Removed look at rendering from head.
* GLMHelpers: Bugfix for generateBasisVectors when the primary and secondary axes were orthogonal.
Calling glm::axis() on an identity quaternion does not result in a normalized vector.
This vector was used within Rig::updateEyeJoint() to limit the rotation of the eyeballs,
to prevent the eyes from rolling back into the avatar's head.
If the avatar was looking straight ahead, this could result in bad quaternions in the eyeball
joint matrices, which in turn would cause the eyeball mesh, or any mesh influenced by the eyeball joints,
not to render.
Previously we were using an infinitely tall vertical cylinder
to push hand IK targets out of the body. This had the
side effect of preventing the hands from being raised over
the head. Now we collide against the same 3D capsule
used by the physics system for avatar collisions; the idea is sketched below.
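A minimal, self-contained sketch of the point-versus-capsule rejection idea (not the engine's actual C++ collision code; all names here are illustrative):

```javascript
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function add(a, b) { return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z }; }
function scale(v, s) { return { x: v.x * s, y: v.y * s, z: v.z * s }; }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Push an IK target out of a capsule defined by segment [base, tip] and radius.
function pushPointOutOfCapsule(point, base, tip, radius) {
    var axis = sub(tip, base);
    var t = dot(sub(point, base), axis) / dot(axis, axis);
    t = Math.max(0.0, Math.min(1.0, t));      // clamp to the capsule's segment
    var closest = add(base, scale(axis, t));  // nearest point on the segment
    var offset = sub(point, closest);
    var dist = Math.sqrt(dot(offset, offset));
    if (dist >= radius || dist === 0.0) {
        return point;                         // already outside (or degenerate)
    }
    return add(closest, scale(offset, radius / dist)); // project onto the surface
}
```

Unlike an infinite cylinder, the clamped segment means targets above the capsule's tip (over the head) fall outside it and are left alone.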
There were three things causing issues with eye look-at vectors while wearing an HMD:
1) The matrix returned by AvatarUpdate->getHeadPose() was in the wrong space: it should have been in avatar space,
but it was actually in sensor/room space.
2) The lookAtPosition was incorrect while wearing an HMD with no avatars to look at.
3) The eye rotation limits in Rig::updateEyeJoint were relative to the model's zero orientation, NOT relative to the head.
This was causing the eyes to hit limits when the avatar's head turned.
Now there are two sets of jump takeoff and in-air animations:
* Run - Used when the character jumps or falls with a small forward velocity.
* Standing - Used when the character jumps or falls in-place or backward.
CharacterController
* increased takeoff duration to 250 ms
* increased takeoff to fly duration to 1100 ms
* added standing jump and in-air animations
* added a 250 millisecond delay between ground and hover, to prevent going into hover when walking over cracks.
* takeoff to in-air transitions now use the new snapshotPrev interp type for smoother tweening.
These options are for developers only and might help debug animation-related issues.
* Enable Inverse Kinematics: this can be toggled to disable IK for the avatar.
* Enable Anim Pre and Post Rotations: this option can be used to take FBX pre-rotations from the source avatar animations, instead of the current default, which is to take them from the source model.
This only affects FBX files loaded by the animation system; it does not affect changing model orientations via JavaScript.
Hand animations now have 5 states:
* idle
* open
* grasp
* point
* farGrasp
The handControllerGrab.js script now chooses one of these five animations, based on the state of the HandController object; a sketch of the mapping follows below.
Also, removed hand trigger AnimVar setting from the C++ Rig class.
This makes these AnimVars accessible for controller mapping and to JavaScript.
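A hedged sketch of what that mapping might look like inside handControllerGrab.js; the specific grab-state names are assumptions, not taken from the source:

```javascript
// Map a hypothetical hand-controller state to one of the five hand animations.
function handAnimationForState(state) {
    switch (state) {
        case "distanceHolding": return "farGrasp"; // holding something at a distance
        case "nearGrabbing":    return "grasp";    // holding something in-hand
        case "pointing":        return "point";
        case "triggerTouched":  return "open";
        default:                return "idle";
    }
}
```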
Added 'neuronAvatar.js' as an example of reading joints from the neuron and setting them
on the avatar. NOTE: the rotations are currently in the wrong coordinate frame.
This is a subclass of Model; it overrides updateClusterMatrices so it will pull
the actual joint matrices from a different rig override.
For the avatar soft attachment system, this override will be the Avatar::_skeletonModel rig.
This will give us the ability for an avatar to "wear" non-rigid attachments, such as clothing.
New JavaScript API to get the avatar's default pose.
MyAvatar.getDefaultJointRotation(index);
MyAvatar.getDefaultJointTranslation(index);
See `examples/tPose.js` for example usage
It can be called from script with minimal blocking,
because it inspects a copy of the joint values from the Rig, which is updated atomically.
This copy occurs in Rig::updateAnimations()
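In the spirit of tPose.js, a small usage sketch; MyAvatar.getJointNames() is assumed to be available alongside the new accessors:

```javascript
// Walk every joint and print its default pose.
var jointNames = MyAvatar.getJointNames();
for (var i = 0; i < jointNames.length; i++) {
    var rot = MyAvatar.getDefaultJointRotation(i);
    var trans = MyAvatar.getDefaultJointTranslation(i);
    print(jointNames[i] + ": rot=" + JSON.stringify(rot) + " trans=" + JSON.stringify(trans));
}
```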
This Y_180 flip is defined in SkeletonModel, not in the Rig.
This is important if we wish to use the Rig for both Avatars (180 flip) and Entity models (no 180 flip).
We can hide this 180 flip from script, if we wish, by including it in all the accessors to and from
MyAvatar -> SkeletonModel -> Rig.
Added Quaternions::Y_180 to GLMHelpers.
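To make the flip concrete, here is roughly what it would mean at the scripting level; this is a hedged illustration, and the Quat helper usage is an assumption about the scripting API (the C++ constant is Quaternions::Y_180):

```javascript
// The 180-degree yaw flip between the rig/geometry frame and the avatar frame.
var Y_180 = Quat.fromPitchYawRollDegrees(0, 180, 0);
// Hiding the flip in an accessor would look roughly like this:
function rigToAvatarFrame(rigRotation) {
    return Quat.multiply(Y_180, rigRotation);
}
```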
This makes it much simpler for code outside of the Rig to manipulate AnimVars:
* Removed mat4 type from AnimVars
* AnimVariantMap now has a _rigToGeometryTransform matrix
used to transform positions and rotations into the correct coordinate frame.
* Moved AnimPose code to extract a quat from a scaled matrix into GLMHelpers
* Moved JointData into shared library
* added methods to the rig to copy into and out of JointData
* JointData translations must be in meters; this is so the
fixed-point compression won't overflow, and it also gives a
consistent wire format.
* No longer normalizing scale in AnimSkeleton and AnimClip.
This means the graph is animating in 'geometry' coordinates,
before unit scale is even applied. This is necessary to
work properly with both Avatar-based models and ModelEntity-based
models.
Many things are broken.
* debug rendering (translations are x100)
* IK hand targets
* follow cam
* I did not even dare to try HMD mode
Changed default eye position to 1.9 meters because
the hifi_team avatars are 2.0 meters tall.
Also, prevent array access with negative indices when eye bones are missing.
ಠ_ಠ
Except for SkeletonModel::computeBounds(), JointStates are now completely
encapsulated by the Rig. Now we can start using AnimPoses instead, and
in parallel with the JointState implementation. Then we can assert that
they are identical, before removing JointStates.
This check-in has many comments with the AJT tag.
Each one of these cases will need to be revisited and fixed.
In particular, // AJT: LEGACY will be used to enclose all code
in the Rig which manipulates the _jointState QVector.
* Deleted AnimationHandle class
* Removed enableAnimGraph and enableRigAnimations from Menu.
* Removed *some* references to the old IK system,
but it is still used when computing collision bounding volumes.
Also, Rig::updateAnimations() now occurs after
Rig::updateFromHeadParameters() and Rig::updateFromHandParameters().
This should remove a frame of lag for head and hand IK targets.
Rig::updateFromEyeParameters() still occurs after Rig::updateAnimations(),
but now the eye JointStates are re-computed; this is the actual
fix for the local eye tracking issue.
While in the HMD, updates can occur with very small deltaTime values.
This makes the position-delta method of computing a velocity very
susceptible to noise and precision errors; see the sketch below.
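A minimal sketch of the failure mode and one common mitigation, clamping the timestep; the constant and function are illustrative, not the engine's actual fix:

```javascript
// Position-delta velocity: dividing by a tiny dt amplifies positional noise.
var MIN_DELTA_TIME = 0.001; // seconds; illustrative floor on the timestep
function estimateVelocity(prevPosition, position, deltaTime) {
    var dt = Math.max(deltaTime, MIN_DELTA_TIME); // avoid dividing by a near-zero dt
    return {
        x: (position.x - prevPosition.x) / dt,
        y: (position.y - prevPosition.y) / dt,
        z: (position.z - prevPosition.z) / dt
    };
}
```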
MyAvatar: refactored updateFromHMDSensorMatrix() a bit by splitting it into several methods, because
it was getting quite large and becoming hard to follow:
* beginStraighteningLean() - can be called when we would like to trigger a re-centering action.
* shouldBeginStraighteningLean() - contains some of the logic to decide if we should begin a re-centering action;
for now it encapsulates the capsule check.
* processStraighteningLean() - performs the actual re-centering calculation.
New code was added to MyAvatar::updateFromHMDSensorMatrix() to trigger re-centering when the avatar speed rises
over a threshold.
Second, a 100 ms state-change hysteresis was added to the Rig::computeMotionAnimationState() state machine
for the animGraph (sketched below). This hysteresis should help smooth over two issues:
1) When the delta position is 0, because the physics timestep was not evaluated.
2) During re-centering due to desired motion, the avatar velocity can fluctuate, causing undesired animation state fluctuation.
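An illustrative sketch of the hysteresis idea (not the actual C++; the state names and update shape are assumptions):

```javascript
var HYSTERESIS_SECONDS = 0.1; // 100 ms
var currentState = "idle";
var candidateState = "idle";
var candidateAge = 0;
// Only commit to a new motion state after it has been stable for the window.
function updateMotionState(desiredState, deltaTime) {
    if (desiredState !== candidateState) {
        candidateState = desiredState;
        candidateAge = 0; // restart the timer whenever the desired state changes
    } else {
        candidateAge += deltaTime;
    }
    if (candidateState !== currentState && candidateAge >= HYSTERESIS_SECONDS) {
        currentState = candidateState;
    }
    return currentState;
}
```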
The calculation that determined the body position relative to the HMD
was incorrect; one of the components was in the wrong coordinate frame.
Now we use the skeleton's bind pose to compute the proper avatar offsets for the eyes, neck and hips,
and use these offsets to calculate where the hips should be for a given HMD position and orientation
(sketched below).
Also, added a bug fix for the Rig, which would cause a -1 index dereference when an avatar was missing
certain named joints, such as "LeftEye", "Neck" and "Head".
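A hedged sketch of the hips calculation; the offset itself would come from the skeleton's bind pose in C++, and the Vec3 helper usage is an assumption about the scripting API:

```javascript
// Rotate the bind-pose eye-to-hips offset by the HMD's orientation,
// then apply it to the HMD (eye) position to get the hips position.
function hipsFromHMD(hmdPosition, hmdOrientation, eyeToHipsOffset) {
    return Vec3.sum(hmdPosition, Vec3.multiplyQbyV(hmdOrientation, eyeToHipsOffset));
}
```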
The main fix for this was to set the JointState animation priority to 3.0.
The secondary fix was only noticed when we changed the animation priority:
basically, the debug rendering was using the JointStates after they were
manipulated by SkeletonModel to 'relax' them toward their default pose for
IK purposes.
Because the current IK system doesn't quite handle what we need
for the head and neck IK, we do it procedurally in the rig, and
manually set both neck and head IK targets.
Restore the HMD conditional on setting head position. This had been removed when failing to pin it caused lean.
I believe that lean was being caused by coordinate system issues that are now addressed by the above and Andrew's big cleanup.
The hand state machine has the following features:
* There's an idle-to-point animation, followed by a looping point hold state.
* There's a point-to-idle animation.
* The grab state is composed of a linear blend between an open and a closed pose.
Additionally, the C++ code will ramp the left and right hand overlays on and off;
this allows the fingers to be animated normally when the user is not actively
pointing or grabbing.
* Application: Forward trigger values to MyAvatar's PalmData.
* SkeletonModel: Pass PalmData to Rig via updateRigFromHandData(); this is more explicit than
the Rig::inverseKinematics methods.
* AnimNodeLoader & AnimOverlay: add support for LeftHand and RightHand bone sets
* Rig::updateRigFromHandData() read the triggers and set stateMachine trigger vars
* avatar.json - updated with new hand state machine with temporary animations.
* In HMD mode, both head orientation and position are set.
* When not in HMD mode, only orientation is set; position
defaults to the underlying pose position.
This should help improve our idle and walk animations, because
animation on the "lean" joint was being lost, even when we did
not require procedural leaning.
* Now always works, regardless of whether or not Rig or AnimGraph animations
are enabled.
* Changed joint radius to 1 cm.
* Changed xyz axis length to 4 cm.
Now triggers 7 states:
Idle, WalkFwd, WalkBwd, StrafeLeft, StrafeRight, TurnLeft & TurnRight,
as well as variable-speed walking to match the current velocity (see the sketch after this note).
Default animations are the new ones for our standard T-pose. (Had been for the double-A pose fightbot.)
Rig state machine now does "backup", and doesn't apply strafe while turning.
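A hedged sketch of how velocity might select among these states; the thresholds, axis conventions, and the turn check are all assumptions, not taken from the source:

```javascript
var IDLE_SPEED = 0.1; // m/s; assumed threshold
var TURN_SPEED = 0.5; // rad/s; assumed threshold
function motionState(localVelocity, angularSpeedY) {
    var planarSpeed = Math.sqrt(localVelocity.x * localVelocity.x + localVelocity.z * localVelocity.z);
    if (planarSpeed < IDLE_SPEED) {
        // turning in place
        if (angularSpeedY > TURN_SPEED) { return "TurnLeft"; }
        if (angularSpeedY < -TURN_SPEED) { return "TurnRight"; }
        return "Idle";
    }
    // favor forward/backward over strafe when both components are present
    if (Math.abs(localVelocity.z) >= Math.abs(localVelocity.x)) {
        return localVelocity.z < 0 ? "WalkFwd" : "WalkBwd"; // -z forward is an assumption
    }
    return localVelocity.x < 0 ? "StrafeLeft" : "StrafeRight";
}
```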
* Updated SkeletonModel::updateRig to explicitly pass a set of HeadParameters
to the rig to do procedural animation.
* Moved torso lean procedural animation from SkeletonModel into Rig.
* Moved eye tracking procedural animation from HeadModel into Rig.
* Moved neck procedural animation from HeadModel into Rig.
* Cauterize code is used at render time and is not dependent on
the jointStates.
* MyAvatar now initializes the bone set used for cauterization and
makes the decision whether to perform cauterization in preRender.