Pass an InputCalibrationData to each inputPlugin and inputDevice.
This contains the most up-to-date sensorToWorldMatrix, avatarMat, and hmdSensorMatrix.
Each input plugin can use this data to transform its poses into avatar space
before sending them up the chain.
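A minimal sketch of what that payload and transform could look like, assuming glm types and the field names mentioned above (the actual struct may differ):

    // Hypothetical sketch; field names are taken from the description above.
    #include <glm/glm.hpp>

    struct InputCalibrationData {
        glm::mat4 sensorToWorldMatrix; // sensor space -> world space
        glm::mat4 avatarMat;           // avatar space -> world space
        glm::mat4 hmdSensorMatrix;     // HMD pose in sensor space
    };

    // A plugin can compose these to move a sensor-space pose into avatar
    // space: avatarFromSensor = inverse(avatarMat) * sensorToWorldMatrix.
    glm::mat4 sensorToAvatarMatrix(const InputCalibrationData& calibration) {
        return glm::inverse(calibration.avatarMat) * calibration.sensorToWorldMatrix;
    }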
This fixes a bug in the handControllerGrab.js script, which relied on the hand controller
rotations/positions being in the avatar frame.
The rotations from the Neuron are effectively in world space and are deltas
from the default pose.
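Assuming that convention, composing a Neuron delta with the joint's default world rotation might look like the sketch below (names and multiplication order are illustrative, not confirmed):

    #include <glm/gtc/quaternion.hpp>

    // If the Neuron rotation is a world-space delta from the default pose,
    // the absolute world rotation is the delta applied on top of the
    // default rotation. The operand order here is an assumption.
    glm::quat absoluteWorldRotation(const glm::quat& neuronDelta,
                                    const glm::quat& defaultWorldRotation) {
        return neuronDelta * defaultWorldRotation;
    }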
There is still an issue with the thumb, due to a missing joint from the Neuron:
the Neuron only has 3 thumb joints, not 4.
Changed Euler angle composition based on experiments in Maya.
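For reference, per-axis composition in glm looks like the sketch below; the rotation order actually settled on after the Maya experiments is not recorded here, so ZYX is only an example:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // Build a quaternion from Euler angles by composing one rotation per
    // axis. Changing the multiplication order changes the result, which is
    // what the Maya experiments were probing.
    glm::quat quatFromEulerZYX(float xDegrees, float yDegrees, float zDegrees) {
        glm::quat qx = glm::angleAxis(glm::radians(xDegrees), glm::vec3(1.0f, 0.0f, 0.0f));
        glm::quat qy = glm::angleAxis(glm::radians(yDegrees), glm::vec3(0.0f, 1.0f, 0.0f));
        glm::quat qz = glm::angleAxis(glm::radians(zDegrees), glm::vec3(0.0f, 0.0f, 1.0f));
        return qz * qy * qx; // x applied first, then y, then z
    }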
Also, neuronAvatar.js attempts to transform the Neuron input quaternions into
a pose relative to the avatar's default pose, but it doesn't work yet.
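The intended math is roughly the following, sketched in C++/glm for consistency with the snippets above (the script's version of this does not work yet):

    #include <glm/gtc/quaternion.hpp>

    // Express an absolute rotation relative to the avatar's default pose:
    // when inputRotation equals the default pose, the result is identity.
    glm::quat relativeToDefaultPose(const glm::quat& inputRotation,
                                    const glm::quat& defaultPoseRotation) {
        return glm::inverse(defaultPoseRotation) * inputRotation;
    }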
This makes the Neuron poses accessible for controller mapping and to JavaScript.
Added 'neuronAvatar.js' as an example of reading joints from the Neuron and setting them
on the avatar. NOTE: the rotations are currently in the wrong coordinate frame.