Merge remote-tracking branch 'refs/remotes/kasen/master'

Conflicts:
	BUILD_WIN.md
	CODING_STANDARD.md
	LICENSE
	cmake/installer/installer-header.bmp
	cmake/installer/installer.ico
	cmake/installer/uninstaller-header.bmp
	interface/resources/images/about-vircadia.png
	interface/resources/images/vircadia-logo.svg
	interface/resources/qml/LoginDialog.qml
	interface/resources/qml/dialogs/TabletLoginDialog.qml
	interface/resources/qml/hifi/dialogs/TabletAboutDialog.qml
	interface/src/Application.cpp
	pkg-scripts/athena-server.spec
	scripts/system/more/app-more.js
	scripts/system/more/css/styles.css
	scripts/system/more/more.html
Committed by motofckr9k on 2020-06-10 02:49:13 +02:00
Commit 3d05cdd61e
211 changed files with 4038 additions and 3908 deletions


@ -70,9 +70,9 @@ jobs:
shell: bash
run: |
echo "${{ steps.buildenv1.outputs.symbols_archive }}"
echo ::set-env name=ARTIFACT_PATTERN::ProjectAthena-Alpha-PR${{ github.event.number }}-*.$INSTALLER_EXT
echo ::set-env name=ARTIFACT_PATTERN::Vircadia-Alpha-PR${{ github.event.number }}-*.$INSTALLER_EXT
# Build type variables
echo ::set-env name=INSTALLER::HighFidelity-Beta-$RELEASE_NUMBER-$GIT_COMMIT_SHORT.$INSTALLER_EXT
echo ::set-env name=INSTALLER::Vircadia-Alpha-$RELEASE_NUMBER-$GIT_COMMIT_SHORT.$INSTALLER_EXT
- name: Clear Working Directory
if: startsWith(matrix.os, 'windows')
shell: bash


@ -1,6 +1,6 @@
# General Build Information
*Last Updated on December 21, 2019*
*Last Updated on May 17, 2020*
### OS Specific Build Guides
@ -22,7 +22,7 @@ These dependencies need not be installed manually. They are automatically downlo
- [Bullet Physics Engine](https://github.com/bulletphysics/bullet3/releases): 2.83
- [glm](https://glm.g-truc.net/0.9.8/index.html): 0.9.8
- [Oculus SDK](https://developer.oculus.com/downloads/): 1.11 (Win32) / 0.5 (Mac)
- [OpenVR](https://github.com/ValveSoftware/openvr): 1.0.6 (Win32 only)
- [OpenVR](https://github.com/ValveSoftware/openvr): 1.11.11 (Win32 only)
- [Polyvox](http://www.volumesoffun.com/): 0.2.1
- [QuaZip](https://sourceforge.net/projects/quazip/files/quazip/): 0.7.3
- [SDL2](https://www.libsdl.org/download-2.0.php): 2.0.3
@ -38,7 +38,7 @@ These are not placed in your normal build tree when doing an out of source build
#### CMake
Athena uses CMake to generate build files and project files for your platform.
Vircadia uses CMake to generate build files and project files for your platform.
#### Qt
CMake will download Qt 5.12.3 using vcpkg.
@ -51,9 +51,9 @@ This can either be entered directly into your shell session before you build or
export QT_CMAKE_PREFIX_PATH=/usr/local/Cellar/qt5/5.12.3/lib/cmake
export QT_CMAKE_PREFIX_PATH=/usr/local/opt/qt5/lib/cmake
#### Vcpkg
#### VCPKG
Athena uses vcpkg to download and build dependencies.
Vircadia uses vcpkg to download and build dependencies.
You do not need to install vcpkg.
Building the dependencies can be lengthy and the resulting files will be stored in your OS temp directory.
@ -63,7 +63,33 @@ export HIFI_VCPKG_BASE=/path/to/directory
Where /path/to/directory is the path to a directory where you wish the build files to get stored.
#### Generating build files
#### Generating Build Files
##### Possible Environment Variables
// The URL to post the dump to.
CMAKE_BACKTRACE_URL
// The identifying tag of the release.
CMAKE_BACKTRACE_TOKEN
// The release version.
RELEASE_NUMBER
// The build commit.
BUILD_NUMBER
// The type of release.
RELEASE_TYPE=PRODUCTION|PR
RELEASE_BUILD=PRODUCTION|PR
// TODO: What do these do?
PRODUCTION_BUILD=0|1
STABLE_BUILD=0|1
// TODO: What do these do?
USE_STABLE_GLOBAL_SERVICES=1
BUILD_GLOBAL_SERVICES=STABLE
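For illustration only, a minimal sketch of exporting a few of these variables in a bash shell before invoking CMake; the names follow the list above (several are renamed by this change) and every value is a placeholder rather than an official default.

```bash
# Placeholder values only -- substitute your own; variable names follow the list above.
export RELEASE_NUMBER=0.86.0                       # the release version
export BUILD_NUMBER=$(git rev-parse --short HEAD)  # the build commit
export RELEASE_TYPE=PR                             # or PRODUCTION (RELEASE_BUILD on newer branches)
export CMAKE_BACKTRACE_URL="https://example.backtrace.invalid/post"   # hypothetical crash-dump endpoint
export CMAKE_BACKTRACE_TOKEN="replace-with-your-token"
```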
##### Generate Files
Create a build directory in the root of your checkout and then run the CMake build from there. This will keep the rest of the directory clean.
@ -71,7 +97,7 @@ Create a build directory in the root of your checkout and then run the CMake bui
cd build
cmake ..
If cmake gives you the same error message repeatedly after the build fails, try removing `CMakeCache.txt`.
If CMake gives you the same error message repeatedly after the build fails, try removing `CMakeCache.txt`.
##### Generating a release/debug only vcpkg build
@ -97,13 +123,13 @@ For example, to pass the QT_CMAKE_PREFIX_PATH variable (if not using the vcpkg'e
The following applies for dependencies we do not grab via CMake ExternalProject (OpenSSL is an example), or for dependencies you have opted not to grab as a CMake ExternalProject (via -DUSE_LOCAL_$NAME=0). The list of dependencies we grab by default as external projects can be found in [the CMake External Project Dependencies section](#cmake-external-project-dependencies).
You can point our [Cmake find modules](cmake/modules/) to the correct version of dependencies by setting one of the three following variables to the location of the correct version of the dependency.
You can point our [CMake find modules](cmake/modules/) to the correct version of dependencies by setting one of the three following variables to the location of the correct version of the dependency.
In the examples below the variable $NAME would be replaced by the name of the dependency in uppercase, and $name would be replaced by the name of the dependency in lowercase (ex: OPENSSL_ROOT_DIR, openssl).
* $NAME_ROOT_DIR - pass this variable to Cmake with the -DNAME_ROOT_DIR= flag when running Cmake to generate build files
* $NAME_ROOT_DIR - set this variable in your ENV
* HIFI_LIB_DIR - set this variable in your ENV to your High Fidelity lib folder, should contain a folder '$name'
* HIFI_LIB_DIR - set this variable in your ENV to your Vircadia lib folder, should contain a folder '$name'
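For illustration, assuming OpenSSL as the dependency (matching the OPENSSL_ROOT_DIR example above) and a placeholder path, the first two options might look like:

```bash
# Option 1: pass the root directory to CMake when generating build files (placeholder path).
cmake .. -DOPENSSL_ROOT_DIR=/opt/deps/openssl

# Option 2: set the same variable in the environment before running CMake.
export OPENSSL_ROOT_DIR=/opt/deps/openssl
cmake ..
```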
### Optional Components


@ -1,4 +1,7 @@
## This guide is specific to Ubuntu 16.04.
THIS DOCUMENT IS OUTDATED.
Deb packages of High Fidelity domain server and assignment client are stored on debian.highfidelity.com
```


@ -6,7 +6,7 @@ Please read the [general build guide](BUILD.md) for information on dependencies
### Homebrew
[Homebrew](https://brew.sh/) is an excellent package manager for macOS. It makes install of some High Fidelity dependencies very simple.
[Homebrew](https://brew.sh/) is an excellent package manager for macOS. It makes install of some Vircadia dependencies very simple.
brew install cmake openssl


@ -1,6 +1,6 @@
# Build Windows
*Last Updated on January 13, 2020*
*Last Updated on May 17, 2020*
This is a stand-alone guide for creating your first Vircadia build for Windows 64-bit.
@ -68,7 +68,7 @@ To create this variable:
### Step 5. Running CMake to Generate Build Files
Run Command Prompt from Start and run the following commands:
`cd "%HIFI_DIR%"`
`cd "%VIRCADIA_DIR%"`
`mkdir build`
`cd build`
@ -78,11 +78,11 @@ Run `cmake .. -G "Visual Studio 15 Win64"`.
#### If you're using Visual Studio 2019,
Run `cmake .. -G "Visual Studio 16 2019" -A x64`.
Where `%HIFI_DIR%` is the directory for the highfidelity repository.
Where `%VIRCADIA_DIR%` is the directory for the Vircadia repository.
### Step 6. Making a Build
Open `%HIFI_DIR%\build\athena.sln` using Visual Studio.
Open `%VIRCADIA_DIR%\build\vircadia.sln` using Visual Studio.
Change the Solution Configuration (menu ribbon under the menu bar, next to the green play button) from "Debug" to "Release" for best performance.
@ -98,22 +98,22 @@ Restart Visual Studio again.
In Visual Studio, right+click "interface" under the Apps folder in Solution Explorer and select "Set as Startup Project". Run from the menu bar `Debug > Start Debugging`.
Now, you should have a full build of Vircadia and be able to run the Interface using Visual Studio. Please check our [Docs](https://docs.vircadia.dev/) for more information regarding the programming workflow.
Now, you should have a full build of Vircadia and be able to run the Interface using Visual Studio.
Note: You can also run Interface by launching it from command line or File Explorer from `%HIFI_DIR%\build\interface\Release\interface.exe`
Note: You can also run Interface by launching it from command line or File Explorer from `%VIRCADIA_DIR%\build\interface\Release\interface.exe`
## Troubleshooting
For any problems after Step #6, first try this:
* Delete your locally cloned copy of the highfidelity repository
* Delete your locally cloned copy of the Vircadia repository
* Restart your computer
* Redownload the [repository](https://github.com/kasenvr/project-athena)
* Restart directions from Step #6
#### CMake gives you the same error message repeatedly after the build fails
Remove `CMakeCache.txt` found in the `%HIFI_DIR%\build` directory.
Remove `CMakeCache.txt` found in the `%VIRCADIA_DIR%\build` directory.
#### CMake can't find OpenSSL
Remove `CMakeCache.txt` found in the `%HIFI_DIR%\build` directory. Verify that your HIFI_VCPKG_BASE environment variable is set and pointing to the correct location. Verify that the file `${HIFI_VCPKG_BASE}/installed/x64-windows/include/openssl/ssl.h` exists.
Remove `CMakeCache.txt` found in the `%VIRCADIA_DIR%\build` directory. Verify that your HIFI_VCPKG_BASE environment variable is set and pointing to the correct location. Verify that the file `${HIFI_VCPKG_BASE}/installed/x64-windows/include/openssl/ssl.h` exists.
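As a quick sanity check, a sketch assuming a Git Bash (or similar POSIX) shell on Windows:

```bash
# Should print the vcpkg base directory you configured, not an empty line.
echo "$HIFI_VCPKG_BASE"

# Should list the OpenSSL header the build expects; if it is missing, the vcpkg dependency build is incomplete.
ls "$HIFI_VCPKG_BASE/installed/x64-windows/include/openssl/ssl.h"
```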


@ -97,7 +97,7 @@ endif()
option(VCPKG_APPLOCAL_DEPS OFF)
project(athena)
project(vircadia)
include("cmake/init.cmake")
include("cmake/compiler.cmake")
option(VCPKG_APPLOCAL_DEPS OFF)
@ -270,7 +270,6 @@ find_package( Threads )
add_definitions(-DGLM_FORCE_RADIANS)
add_definitions(-DGLM_ENABLE_EXPERIMENTAL)
add_definitions(-DGLM_FORCE_CTOR_INIT)
add_definitions(-DGLM_LANG_STL11_FORCED) # Workaround for GLM not detecting support for C++11 templates on Android
if (WIN32)
# Deal with fakakta Visual Studo 2017 bug


@ -976,14 +976,13 @@ while (true) {
#### [4.3.4] Source files (header and implementation) must include a boilerplate.
Boilerplates should include the filename, location, creator, copyright Vircadia contributors, and Apache 2.0 License
information. This should be placed at the top of the file. If editing an existing file that is copyright High Fidelity, add a
second copyright line, copyright Vircadia contributors.
Boilerplates should include the filename, creator, copyright Vircadia contributors, and Apache 2.0 License information.
This should be placed at the top of the file. If editing an existing file that is copyright High Fidelity, add a second
copyright line, copyright Vircadia contributors.
```cpp
//
// NodeList.h
// libraries/shared/src
//
// Created by Stephen Birarda on 15 Feb 2013.
// Copyright 2013 High Fidelity, Inc.


@ -21,7 +21,7 @@ Contributing
```
git remote add upstream https://github.com/kasenvr/project-athena
git pull upstream kasen/core
git pull upstream master
```
Resolve any conflicts that arise with this step.
@ -29,7 +29,7 @@ Contributing
7. Push to your fork
```
git push origin kasen/core
git push origin new_branch_name
```
8. Submit a pull request


@ -15,7 +15,7 @@ To produce an installer, run the `package` target.
To produce an executable installer on Windows, the following are required:
1. [7-zip](<https://www.7-zip.org/download.html>)
1. [7-zip](<https://www.7-zip.org/download.html>)
1. [Nullsoft Scriptable Install System](http://nsis.sourceforge.net/Download) - 3.04
Install using defaults (will install to `C:\Program Files (x86)\NSIS`)
@ -56,22 +56,23 @@ To produce an executable installer on Windows, the following are required:
1. Copy `Release\ApplicationID.dll` to `C:\Program Files (x86)\NSIS\Plugins\x86-ansi\`
1. Copy `ReleaseUnicode\ApplicationID.dll` to `C:\Program Files (x86)\NSIS\Plugins\x86-unicode\`
1. [npm](<https://www.npmjs.com/get-npm>)
1. [Node.JS and NPM](<https://www.npmjs.com/get-npm>)
1. Install version 10.15.0 LTS
1. Perform a clean cmake from a new terminal.
1. Open the `athena.sln` solution and select the Release configuration.
1. Open the `vircadia.sln` solution with elevated (administrator) permissions on Visual Studio and select the **Release** configuration.
1. Build the solution.
1. Build CMakeTargets->INSTALL
1. Build `packaged-server-console-npm-install` (found under **hidden/Server Console**)
1. Build `packaged-server-console` (found under **Server Console**)
This will add 2 folders to `build\server-console\` -
`server-console-win32-x64` and `x64`
1. Build CMakeTargets->PACKAGE
Installer is now available in `build\_CPack_Packages\win64\NSIS`
1. Build CMakeTargets->PACKAGE
The installer is now available in `build\_CPack_Packages\win64\NSIS`
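As an aside, the same INSTALL and PACKAGE targets can also be driven from a shell instead of the Visual Studio UI; this is only a sketch, assuming the current directory is the `build` folder and that the server-console targets listed above have already been built.

```bash
# Build the solution, then the install and packaging targets, all in Release configuration.
cmake --build . --config Release
cmake --build . --config Release --target INSTALL
cmake --build . --config Release --target PACKAGE
```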
#### OS X
1. [npm](<https://www.npmjs.com/get-npm>)
Install version 10.15.0 LTS
Install version 12.16.3 LTS
1. Perform a clean CMake.
1. Perform a Release build of ALL_BUILD
@ -80,3 +81,9 @@ To produce an executable installer on Windows, the following are required:
Sandbox-darwin-x64
1. Perform a Release build of `package`
Installer is now available in `build/_CPack_Packages/Darwin/DragNDrop
### FAQ
1. **Problem:** Failure to open a file. ```File: failed opening file "\FOLDERSHARE\XYZSRelease\...\Credits.rtf" Error in script "C:\TFS\XYZProject\Releases\NullsoftInstaller\XYZWin7Installer.nsi" on line 77 -- aborting creation process```
1. **Cause:** The complete path (current directory + relative path) has to be < 260 characters to any of the relevant files.
1. **Solution:** Move your build and packaging folder as high up in the drive as possible to prevent an overage.


@ -1,7 +1,7 @@
Copyright (c) 2013-2019, High Fidelity, Inc.
Copyright (c) 2019-2020, Vircadia Contributors.
Copyright (c) 2019-2020, Vircadia contributors.
All rights reserved.
https://vircadia.com/
https://vircadia.com
Licensed under the Apache License version 2.0 (the "License");
You may not use this software except in compliance with the License.


@ -12,11 +12,11 @@ Vircadia is a 3D social software project seeking to incrementally bring about a
### How to build the Interface
[For Windows](https://github.com/kasenvr/project-athena/blob/kasen/core/BUILD_WIN.md)
[For Windows](https://github.com/kasenvr/project-athena/blob/master/BUILD_WIN.md)
[For Linux](https://github.com/kasenvr/project-athena/blob/kasen/core/BUILD_LINUX.md)
[For Linux](https://github.com/kasenvr/project-athena/blob/master/BUILD_LINUX.md)
[For Linux - Athena Builder](https://github.com/daleglass/athena-builder)
[For Linux - Athena Builder](https://github.com/kasenvr/vircadia-builder)
### How to deploy a Server
@ -24,7 +24,7 @@ Vircadia is a 3D social software project seeking to incrementally bring about a
### How to build a Server
[For Linux - Athena Builder](https://github.com/daleglass/athena-builder)
[For Linux - Athena Builder](https://github.com/kasenvr/vircadia-builder)
### Boot to Metaverse: The Goal


@ -1,3 +1,5 @@
# THIS DOCUMENT IS OUTDATED
High Fidelity (hifi) is an early-stage technology lab experimenting with Virtual Worlds and VR.
This repository contains the source to many of the components in our
@ -15,7 +17,7 @@ Come chat with us in [our Gitter](https://gitter.im/highfidelity/hifi) if you ha
Documentation
=========
Documentation is available at [docs.highfidelity.com](https://docs.highfidelity.com), if something is missing, please suggest it via a new job on Worklist (add to the hifi-docs project).
Documentation is available at [docs.highfidelity.com](https://docs.highfidelity.com/), if something is missing, please suggest it via a new job on Worklist (add to the hifi-docs project).
There is also detailed [documentation on our coding standards](CODING_STANDARD.md).


@ -27,9 +27,9 @@
<string name="online">Online</string>
<string name="signup">Sign Up</string>
<string name="signup_uppercase">SIGN UP</string>
<string name="creating_account">Creating your High Fidelity account</string>
<string name="creating_account">Creating your Vircadia account</string>
<string name="signup_email_username_or_password_incorrect">Email, username or password incorrect.</string>
<string name="signedin_welcome">You are now signed into High Fidelity</string>
<string name="signedin_welcome">You are now signed into Vircadia</string>
<string name="logged_in_welcome">You are now logged in!</string>
<string name="welcome">Welcome</string>
<string name="cancel">Cancel</string>


@ -144,10 +144,10 @@ void ScriptableAvatar::update(float deltatime) {
}
_animationDetails.currentFrame = currentFrame;
const std::vector<HFMJoint>& modelJoints = _bind->getHFMModel().joints;
const QVector<HFMJoint>& modelJoints = _bind->getHFMModel().joints;
QStringList animationJointNames = _animation->getJointNames();
const auto nJoints = (int)modelJoints.size();
const int nJoints = modelJoints.size();
if (_jointData.size() != nJoints) {
_jointData.resize(nJoints);
}

[Binary files changed (not shown); image sizes: 134 KiB → 100 KiB and 134 KiB → 100 KiB]


@ -6,9 +6,9 @@ vcpkg_from_github(
REPO
xiph/opus
REF
e85ed7726db5d677c9c0677298ea0cb9c65bdd23
72a3a6c13329869000b34a12ba27d8bfdfbc22b3
SHA512
a8c7e5bf383c06f1fdffd44d9b5f658f31eb4800cb59d12da95ddaeb5646f7a7b03025f4663362b888b1374d4cc69154f006ba07b5840ec61ddc1a1af01d6c54
590b852e966a497e33d129b58bc07d1205fe8fea9b158334cd8a3c7f539332ef9702bba4a37bd0be83eb5f04a218cef87645251899f099695d01c1eb8ea6e2fd
HEAD_REF
master)


@ -24,7 +24,7 @@
<div class="row">
<div class="col-md-12">
<span class='step-description'>
<a target='_blank' href='https://docs.highfidelity.com/create-and-explore/start-working-in-your-sandbox/place-names'>Place names</a> are similar to web addresses. Users who want to visit your domain can
<a target='_blank' href='https://docs.vircadia.dev/create-and-explore/start-working-in-your-sandbox/place-names'>Place names</a> are similar to web addresses. Users who want to visit your domain can
enter its Place Name in High Fidelity's Interface. You can choose a Place Name for your domain.</br>
Your domain may also be reachable by <b>IP address</b>.
</span>

[Binary files changed (not shown); image sizes: 281 KiB → 28 KiB and 281 KiB → 37 KiB]


@ -596,7 +596,7 @@
<h2>Want to learn more?</h2>
<p>You can find out much more about the blockchain and about commerce in High Fidelity by visiting our Docs site:</p>
<p><a href="http://docs.highfidelity.com" class="btn">Visit High Fidelity's Docs</a></p>
<p><a href="http://docs.vircadia.dev" class="btn">Visit High Fidelity's Docs</a></p>
<hr>
</div>

[Binary image files changed (not shown): 131 → 130 KiB, 158 → 156 KiB, 159 → 151 KiB, 128 → 122 KiB, 134 → 132 KiB]


@ -77,9 +77,9 @@
var handControllerImageURL = null;
var index = 0;
var count = 3;
var handControllerRefURL = "https://docs.projectathena.dev/explore/get-started/vr-controls.html#vr-controls";
var keyboardRefURL = "https://docs.projectathena.dev/explore/get-started/desktop.html#movement-controls";
var gamepadRefURL = "https://docs.projectathena.dev/explore/get-started/vr-controls.html#gamepad";
var handControllerRefURL = "https://docs.vircadia.dev/explore/get-started/vr-controls.html#vr-controls";
var keyboardRefURL = "https://docs.vircadia.dev/explore/get-started/desktop.html#movement-controls";
var gamepadRefURL = "https://docs.vircadia.dev/explore/get-started/vr-controls.html#gamepad";
function showKbm() {
document.getElementById("main_image").setAttribute("src", "img/tablet-help-keyboard.jpg");
@ -189,7 +189,7 @@
<a href="#" id="right_button" onmousedown="cycleRight()"></a>
<a href="#" id="image_button"></a>
</div>
<a href="mailto:support@projectathena.io" id="report_problem">Report Problem</a>
<a href="mailto:support@vircadia.com" id="report_problem">Report Problem</a>
</body>
</html>


@ -1,95 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
id="svg33"
xml:space="preserve"
enable-background="new 0 0 1880.00 320.00"
viewBox="0 0 1908 641.3"
height="121"
width="360"
version="1.1"><metadata
id="metadata39"><rdf:RDF><cc:Work
rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
id="defs37" />
<radialGradient
gradientTransform="translate(19.976565,140.82379)"
gradientUnits="userSpaceOnUse"
r="3176.3899"
cy="-604.15698"
cx="-571.52899"
id="SVGID_Fill1_">
<stop
id="stop2"
stop-opacity="1"
stop-color="#01BDFF"
offset="0.451163" />
<stop
id="stop4"
stop-opacity="1"
stop-color="#8C1AFF"
offset="0.827907" />
</radialGradient>
<path
style="fill:url(#SVGID_Fill1_);stroke-width:0.2;stroke-linejoin:round"
id="path7"
d="M 48.699565,145.82525 H 1726.6966 c 19.89,0 43.52,16.11774 52.79,36.00004 l 110.98,237.9975 c 9.27,19.882 0.67,36 -19.21,36 H 193.25357 c -19.882,0 -43.516,-16.118 -52.787,-36 L 29.486685,181.82529 c -9.271248,-19.8823 -0.66933,-36.00004 19.21288,-36.00004 z" />
<path
style="fill:#36393f;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path9"
d="m 698.91557,163.83679 1018.36103,-6e-4 c 26.1,-0.08 42.02,12.239 52.55,35.4787 l 94.66,203.0169 c 16.07,28.007 15.01,36.877 -19.45,35.479 H 697.03757" />
<g
transform="translate(19.976565,140.82379)"
id="g15">
<path
style="fill:#fafafa;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path11"
d="m 699.7,159.966 c 0,-14.685 2.332,-28.873 6.998,-42.562 4.665,-13.69 11.365,-25.824 20.098,-36.4024 8.733,-10.5785 19.2,-19.0412 31.403,-25.3883 12.202,-6.347 25.84,-9.5206 40.913,-9.5206 h 107.667 v 47.7896 h -106.59 c -5.503,0 -11.186,1.8668 -17.047,5.6004 -5.862,3.7333 -11.246,8.6493 -16.15,14.7473 -4.905,6.098 -8.973,13.13 -12.203,21.095 -3.23,7.965 -4.845,16.054 -4.845,24.268 0,8.214 1.555,16.303 4.666,24.268 3.11,7.965 7.178,15.059 12.202,21.281 5.025,6.223 10.647,11.201 16.868,14.935 6.221,3.733 12.442,5.6 18.662,5.6 h 104.437 v 47.79 H 798.036 c -15.792,0 -29.788,-3.423 -41.991,-10.268 -12.202,-6.845 -22.49,-15.743 -30.864,-26.695 -8.374,-10.951 -14.715,-23.148 -19.021,-36.589 -4.307,-13.44 -6.46,-26.757 -6.46,-39.949 z" />
<rect
style="fill:#fafafa;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="rect13"
height="227.37399"
width="55.987099"
y="46.092701"
x="1464.8101" />
</g>
<path
style="fill:#ffffff;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path17"
d="m 512.99657,238.43969 v 41.9861 h -0.02 v 60 h 0.02 v 73.865 h -53.897 v -227.3742 h 127.614 c 11.59,0 21.5,2.1779 29.73,6.5337 8.229,4.3558 14.952,10.0806 20.168,17.1744 5.216,7.0937 8.983,15.1209 11.301,24.0815 2.318,8.9606 3.477,18.0456 3.477,27.2546 0,7.467 -0.985,14.872 -2.956,22.215 -1.97,7.343 -4.694,14.188 -8.171,20.535 -3.477,6.347 -7.708,12.009 -12.692,16.987 -4.984,4.978 -10.49,8.836 -16.517,11.574 l 63.286,81.019 h -66.068 l -67.295,-85.179 v -38.776 h 45.737 c 1.622,0 3.245,-0.995 4.868,-2.986 1.622,-1.992 3.013,-4.356 4.172,-7.094 1.16,-2.738 2.145,-5.538 2.956,-8.401 0.811,-2.862 1.217,-5.04 1.217,-6.533 0,-2.241 -0.232,-4.854 -0.695,-7.841 -0.464,-2.987 -1.217,-5.911 -2.26,-8.774 -1.044,-2.862 -2.435,-5.289 -4.173,-7.28 -1.739,-1.9915 -3.767,-2.9871 -6.085,-2.9871 z" />
<g
transform="translate(19.976565,140.82379)"
id="g21">
<rect
style="fill:#ffffff;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="rect19"
height="227.37399"
width="54.9846"
y="46.092701"
x="340.465" />
</g>
<path
style="fill:#ffffff;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path23"
d="m 75.493165,186.91679 106.444405,227.374 h 52.518 l 2.147,-3.778 -101.19,-223.596 z" />
<path
style="fill:#ffffff;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path25"
d="m 251.89657,376.53279 87.593,-189.6164 h -59.919 l -56.637,125.5764 z" />
<path
style="fill:#fafafa;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path27"
d="m 1007.8506,414.29079 h -61.01103 l 107.30703,-227.3743 h 53.83 l 107.67,227.3743 h -61.01 l -27.99,-60.484 h -59.22 l -2.62,-0.065 21.11,-48.471 h 21.71 l -26.92,-57.871" />
<path
style="fill:#fafafa;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path29"
d="m 1625.1066,414.29079 h -61.01 l 107.31,-227.3743 h 53.83 l 107.67,227.3743 h -61.02 l -27.99,-60.484 h -59.22 l -2.7,-0.065 21.24,-48.471 h 21.66 l -26.92,-57.871" />
<path
style="fill:#fafafa;fill-opacity:1;stroke-width:0.2;stroke-linejoin:round"
id="path31"
d="m 1458.9566,300.78979 c 0,13.192 -2.15,26.509 -6.46,39.949 -4.3,13.441 -10.64,25.638 -19.02,36.589 -8.37,10.952 -18.66,19.85 -30.86,26.695 -12.21,6.845 -26.2,10.268 -41.99,10.268 h -122.74 v -151.567 h 54.19 v 103.777 h 63.88 c 6.22,0 12.44,-1.867 18.66,-5.6 6.22,-3.734 11.85,-8.712 16.87,-14.935 5.03,-6.222 9.09,-13.316 12.2,-21.281 3.11,-7.965 4.67,-16.054 4.67,-24.268 0,-8.961 -1.8,-17.423 -5.38,-25.388 -3.59,-7.965 -7.96,-14.934 -13.1,-20.908 -5.15,-5.974 -10.59,-10.703 -16.33,-14.1876 -5.75,-3.4847 -10.77,-5.227 -15.08,-5.227 h -66.39 v 0.0177 h -54.19 v -47.8074 h 114.12 c 18.43,0 34.4,2.9869 47.92,8.9606 13.51,5.9736 24.64,14.063 33.37,24.2682 8.74,10.205 15.2,22.2145 19.38,36.0285 4.19,13.815 6.28,28.687 6.28,44.616 z" />
</svg>



@ -6,7 +6,7 @@ import controlsUit 1.0
WebView {
id: webview
url: "https://projectathena.io/"
url: "https://vircadia.com/"
profile: FileTypeProfile;
property var parentRoot: null


@ -85,7 +85,9 @@ FocusScope {
Image {
id: banner
anchors.centerIn: parent
source: "../images/vircadia-banner.svg"
sourceSize.width: 500
sourceSize.height: 91
source: "../images/vircadia-logo.svg"
horizontalAlignment: Image.AlignHCenter
}
}


@ -402,7 +402,7 @@ Item {
font.pixelSize: linkAccountBody.textFieldFontSize
font.bold: linkAccountBody.fontBold
text: "<a href='metaverse.projectathena.io/users/password/new'> Can't access your account?</a>"
text: "<a href='metaverse.vircadia.com/users/password/new'> Can't access your account?</a>"
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
@ -527,7 +527,7 @@ Item {
leftMargin: hifi.dimensions.contentSpacing.x
}
text: "<a href='metaverse.projectathena.io/users/register'>Sign Up</a>"
text: "<a href='metaverse.vircadia.com/users/register'>Sign Up</a>"
linkColor: hifi.colors.blueAccent
onLinkActivated: {


@ -129,7 +129,9 @@ FocusScope {
Image {
id: banner
anchors.centerIn: parent
source: "../../images/vircadia-banner.svg"
sourceSize.width: 400
sourceSize.height: 73
source: "../../images/vircadia-logo.svg"
horizontalAlignment: Image.AlignHCenter
}
}


@ -229,7 +229,7 @@ Item {
}
function openDocs() {
Qt.openUrlExternally("https://docs.projectathena.dev/create/avatars/package-avatar.html");
Qt.openUrlExternally("https://docs.vircadia.dev/create/avatars/package-avatar.html");
}
function openVideo() {


@ -318,7 +318,7 @@ Item {
text: "This item is not for sale yet, <a href='#'>learn more</a>."
onLinkActivated: {
Qt.openUrlExternally("https://docs.projectathena.dev/sell/add-item/upload-avatar.html");
Qt.openUrlExternally("https://docs.vircadia.dev/sell/add-item/upload-avatar.html");
}
}


@ -7,7 +7,7 @@ MessageBox {
popup.onButton2Clicked = callback;
popup.titleText = 'Specify Avatar URL'
popup.bodyText = 'This will not overwrite your existing favorite if you are wearing one.<br>' +
'<a href="https://docs.vircadia.dev/create/avatars.html">' +
'<a href="https://docs.vircadia.dev/create/avatars/create-avatars.html">' +
'Learn to make a custom avatar by opening this link on your desktop.' +
'</a>'
popup.inputText.visible = true;


@ -778,7 +778,7 @@ Rectangle {
lightboxPopup.bodyText = "Rezzing this content set will replace the existing environment and all of the items in this domain. " +
"If you want to save the state of the content in this domain, create a backup before proceeding.<br><br>" +
"For more information about backing up and restoring content, " +
"<a href='https://docs.projectathena.dev/host/maintain-domain/backup-domain.html'>" +
"<a href='https://docs.vircadia.dev/host/maintain-domain/backup-domain.html'>" +
"click here to open info on your desktop browser.";
lightboxPopup.button1text = "CANCEL";
lightboxPopup.button1method = function() {


@ -602,7 +602,7 @@ Rectangle {
lightboxPopup.bodyText = "Rezzing this content set will replace the existing environment and all of the items in this domain. " +
"If you want to save the state of the content in this domain, create a backup before proceeding.<br><br>" +
"For more information about backing up and restoring content, " +
"<a href='https://docs.projectathena.dev/host/maintain-domain/backup-domain.html'>" +
"<a href='https://docs.vircadia.dev/host/maintain-domain/backup-domain.html'>" +
"click here to open info on your desktop browser.";
lightboxPopup.button1text = "CANCEL";
lightboxPopup.button1method = function() {


@ -207,7 +207,7 @@ At the moment, there is currently no way to convert HFC to other currencies. Sta
if (link === "#privateKeyPath") {
Qt.openUrlExternally("file:///" + root.keyFilePath.substring(0, root.keyFilePath.lastIndexOf('/')));
} else if (link === "#blockchain") {
Qt.openUrlExternally("https://docs.projectathena.dev/explore/shop.html");
Qt.openUrlExternally("https://docs.vircadia.dev/explore/shop.html");
} else if (link === "#bank") {
if ((Account.metaverseServerURL).toString().indexOf("staging") >= 0) {
Qt.openUrlExternally("hifi://hifiqa-master-metaverse-staging"); // So that we can test in staging.


@ -23,9 +23,9 @@ Rectangle {
spacing: 5
Image {
sourceSize.width: 295
sourceSize.height: 75
source: "../../../images/about-vircadia.png"
width: 400; height: 73
fillMode: Image.PreserveAspectFit
source: "../../../images/vircadia-logo.svg"
}
Item { height: 30; width: 1 }
Column {
@ -116,7 +116,7 @@ Rectangle {
Item { height: 20; width: 1 }
RalewayRegular {
color: "white"
text: "© 2019 - 2020 Vircadia Contributors."
text: "© 2019-2020 Vircadia contributors."
size: 14
}
RalewayRegular {


@ -656,8 +656,8 @@ private:
/**jsdoc
* <p>The <code>Controller.Hardware.Application</code> object has properties representing Interface's state. The property
* values are integer IDs, uniquely identifying each output. <em>Read-only.</em></p>
* <p>These states can be mapped to actions or functions or <code>Controller.Standard</code> items in a {@link RouteObject}
* mapping (e.g., using the {@link RouteObject#when} method). Each data value is either <code>1.0</code> for "true" or
* <p>These states can be mapped to actions or functions or <code>Controller.Standard</code> items in a {@link RouteObject}
* mapping (e.g., using the {@link RouteObject#when} method). Each data value is either <code>1.0</code> for "true" or
* <code>0.0</code> for "false".</p>
* <table>
* <thead>
@ -679,7 +679,7 @@ private:
* <tr><td><code>CameraIndependent</code></td><td>number</td><td>number</td><td>The camera is in independent mode.</td></tr>
* <tr><td><code>CameraEntity</code></td><td>number</td><td>number</td><td>The camera is in entity mode.</td></tr>
* <tr><td><code>InHMD</code></td><td>number</td><td>number</td><td>The user is in HMD mode.</td></tr>
* <tr><td><code>AdvancedMovement</code></td><td>number</td><td>number</td><td>Advanced movement (walking) controls are
* <tr><td><code>AdvancedMovement</code></td><td>number</td><td>number</td><td>Advanced movement (walking) controls are
* enabled.</td></tr>
* <tr><td><code>StrafeEnabled</code></td><td>number</td><td>number</td><td>Strafing is enabled</td></tr>
* <tr><td><code>LeftHandDominant</code></td><td>number</td><td>number</td><td>Dominant hand set to left.</td></tr>
@ -829,7 +829,7 @@ bool setupEssentials(int& argc, char** argv, bool runningMarkerExisted) {
audioDLLPath += "/audioWin7";
}
QCoreApplication::addLibraryPath(audioDLLPath);
#endif
#endif
QString defaultScriptsOverrideOption = getCmdOption(argc, constArgv, "--defaultScriptsOverride");
@ -949,7 +949,7 @@ bool setupEssentials(int& argc, char** argv, bool runningMarkerExisted) {
DependencyManager::set<AvatarPackager>();
DependencyManager::set<ScreenshareScriptingInterface>();
PlatformHelper::setup();
QObject::connect(PlatformHelper::instance(), &PlatformHelper::systemWillWake, [] {
QMetaObject::invokeMethod(DependencyManager::get<NodeList>().data(), "noteAwakening", Qt::QueuedConnection);
QMetaObject::invokeMethod(DependencyManager::get<AudioClient>().data(), "noteAwakening", Qt::QueuedConnection);
@ -1092,8 +1092,8 @@ Application::Application(int& argc, char** argv, QElapsedTimer& startupTimer, bo
{
// identify gpu as early as possible to help identify OpenGL initialization errors.
auto gpuIdent = GPUIdent::getInstance();
setCrashAnnotation("gpu_name", gpuIdent->getName().toStdString());
setCrashAnnotation("gpu_driver", gpuIdent->getDriver().toStdString());
setCrashAnnotation("sentry[contexts][gpu][name]", gpuIdent->getName().toStdString());
setCrashAnnotation("sentry[contexts][gpu][version]", gpuIdent->getDriver().toStdString());
setCrashAnnotation("gpu_memory", std::to_string(gpuIdent->getMemory()));
}
@ -1162,7 +1162,7 @@ Application::Application(int& argc, char** argv, QElapsedTimer& startupTimer, bo
deadlockWatchdogThread->setMainThreadID(QThread::currentThreadId());
deadlockWatchdogThread->start();
// Pause the deadlock watchdog when we sleep, or it might
// Pause the deadlock watchdog when we sleep, or it might
// trigger a false positive when we wake back up
auto platformHelper = PlatformHelper::instance();
@ -3177,7 +3177,7 @@ void Application::showLoginScreen() {
#endif
}
static const QUrl AUTHORIZED_EXTERNAL_QML_SOURCE { "https://content.highfidelity.com/Experiences/Releases" };
static const QUrl AUTHORIZED_EXTERNAL_QML_SOURCE { "https://cdn.vircadia.com/community-apps/applications" };
void Application::initializeUi() {
@ -3196,14 +3196,16 @@ void Application::initializeUi() {
safeURLS += settingsSafeURLS;
// END PULL SAFEURLS FROM INTERFACE.JSON Settings
bool isInWhitelist = false; // assume unsafe
for (const auto& str : safeURLS) {
if (!str.isEmpty() && str.endsWith(".qml") && url.toString().endsWith(".qml") &&
url.toString().startsWith(str)) {
qCDebug(interfaceapp) << "Found matching url!" << url.host();
isInWhitelist = true;
return true;
if (AUTHORIZED_EXTERNAL_QML_SOURCE.isParentOf(url)) {
return true;
} else {
for (const auto& str : safeURLS) {
if (!str.isEmpty() && str.endsWith(".qml") && url.toString().endsWith(".qml") &&
url.toString().startsWith(str)) {
qCDebug(interfaceapp) << "Found matching url!" << url.host();
return true;
}
}
}
@ -3788,9 +3790,8 @@ void Application::setPreferredCursor(const QString& cursorName) {
if (_displayPlugin && _displayPlugin->isHmd()) {
_preferredCursor.set(cursorName.isEmpty() ? DEFAULT_CURSOR_NAME : cursorName);
}
else {
_preferredCursor.set(cursorName.isEmpty() ? Cursor::Manager::getIconName(Cursor::Icon::SYSTEM) : cursorName);
} else {
_preferredCursor.set(cursorName.isEmpty() ? Cursor::Manager::getIconName(Cursor::Icon::SYSTEM) : cursorName);
}
showCursor(Cursor::Manager::lookupIcon(_preferredCursor.get()));
@ -3977,7 +3978,7 @@ void Application::handleSandboxStatus(QNetworkReply* reply) {
DependencyManager::get<AddressManager>()->loadSettings(addressLookupString);
sentTo = SENT_TO_PREVIOUS_LOCATION;
}
UserActivityLogger::getInstance().logAction("startup_sent_to", {
{ "sent_to", sentTo },
{ "sandbox_is_running", sandboxIsRunning },
@ -4212,7 +4213,7 @@ bool Application::event(QEvent* event) {
idle();
#ifdef DEBUG_EVENT_QUEUE_DEPTH
// The event queue may very well grow beyond 400, so
// The event queue may very well grow beyond 400, so
// this code should only be enabled on local builds
{
int count = ::hifi::qt::getEventQueueSize(QThread::currentThread());
@ -4251,7 +4252,7 @@ bool Application::event(QEvent* event) {
{ //testing to see if we can set focus when focus is not set to root window.
_glWidget->activateWindow();
_glWidget->setFocus();
return true;
return true;
}
case QEvent::TouchBegin:
@ -5236,7 +5237,7 @@ void Application::idle() {
}
}
#endif
checkChangeCursor();
#if !defined(DISABLE_QML)
@ -5489,7 +5490,7 @@ void Application::loadSettings() {
RenderScriptingInterface::getInstance()->loadSettings();
// Setup the PerformanceManager which will enforce the several settings to match the Preset
// On the first run, the Preset is evaluated from the
// On the first run, the Preset is evaluated from the
getPerformanceManager().setupPerformancePresetSettings(_firstRun.get());
// finish initializing the camera, based on everything we checked above. Third person camera will be used if no settings
@ -5535,7 +5536,7 @@ bool Application::importEntities(const QString& urlOrFilename, const bool isObse
_entityClipboard->withWriteLock([&] {
_entityClipboard->eraseAllOctreeElements();
// FIXME: readFromURL() can take over the main event loop which may cause problems, especially if downloading the JSON
// FIXME: readFromURL() can take over the main event loop which may cause problems, especially if downloading the JSON
// from the Web.
success = _entityClipboard->readFromURL(urlOrFilename, isObservable, callerId);
if (success) {
@ -7063,7 +7064,7 @@ void Application::updateWindowTitle() const {
auto accountManager = DependencyManager::get<AccountManager>();
auto isInErrorState = nodeList->getDomainHandler().isInErrorState();
QString buildVersion = " - Vircadia v0.86.0 K2 - "
QString buildVersion = " - Vircadia - "
+ (BuildInfo::BUILD_TYPE == BuildInfo::BuildType::Stable ? QString("Version") : QString("Build"))
+ " " + applicationVersion();
@ -7073,7 +7074,7 @@ void Application::updateWindowTitle() const {
nodeList->getDomainHandler().isConnected() ? "" : " (NOT CONNECTED)";
QString username = accountManager->getAccountInfo().getUsername();
setCrashAnnotation("username", username.toStdString());
setCrashAnnotation("sentry[user][username]", username.toStdString());
QString currentPlaceName;
if (isServerlessMode()) {
@ -7747,7 +7748,7 @@ bool Application::askToReplaceDomainContent(const QString& url) {
static const QString infoText = simpleWordWrap("Your domain's content will be replaced with a new content set. "
"If you want to save what you have now, create a backup before proceeding. For more information about backing up "
"and restoring content, visit the documentation page at: ", MAX_CHARACTERS_PER_LINE) +
"\nhttps://docs.projectathena.dev/host/maintain-domain/backup-domain.html";
"\nhttps://docs.vircadia.dev/host/maintain-domain/backup-domain.html";
ModalDialogListener* dig = OffscreenUi::asyncQuestion("Are you sure you want to replace this domain's content set?",
infoText, QMessageBox::Yes | QMessageBox::No, QMessageBox::No);
@ -8735,7 +8736,7 @@ bool Application::isThrottleRendering() const {
bool Application::hasFocus() const {
bool result = (QApplication::activeWindow() != nullptr);
#if defined(Q_OS_WIN)
// On Windows, QWidget::activateWindow() - as called in setFocus() - makes the application's taskbar icon flash but doesn't
// take user focus away from their current window. So also check whether the application is the user's current foreground


@ -84,10 +84,9 @@ bool startCrashHandler(std::string appPath) {
std::vector<std::string> arguments;
std::map<std::string, std::string> annotations;
annotations["token"] = BACKTRACE_TOKEN;
annotations["format"] = "minidump";
annotations["version"] = BuildInfo::VERSION.toStdString();
annotations["build_number"] = BuildInfo::BUILD_NUMBER.toStdString();
annotations["sentry[release]"] = BACKTRACE_TOKEN;
annotations["sentry[contexts][app][app_version]"] = BuildInfo::VERSION.toStdString();
annotations["sentry[contexts][app][app_build]"] = BuildInfo::BUILD_NUMBER.toStdString();
annotations["build_type"] = BuildInfo::BUILD_TYPE_STRING.toStdString();
auto machineFingerPrint = uuidStringWithoutCurlyBraces(FingerprintUtils::getMachineFingerprint());


@ -20,7 +20,7 @@ class FancyCamera : public Camera {
/**jsdoc
* The <code>Camera</code> API provides access to the "camera" that defines your view in desktop and HMD display modes.
* The High Fidelity camera has axes <code>x</code> = right, <code>y</code> = up, <code>-z</code> = forward.
* The Vircadia camera has axes <code>x</code> = right, <code>y</code> = up, <code>-z</code> = forward.
*
* @namespace Camera
*


@ -223,9 +223,9 @@ Menu::Menu() {
MenuWrapper* startupLocationMenu = navigateMenu->addMenu(MenuOption::StartUpLocation);
QActionGroup* startupLocatiopnGroup = new QActionGroup(startupLocationMenu);
startupLocatiopnGroup->setExclusive(true);
startupLocatiopnGroup->addAction(addCheckableActionToQMenuAndActionHash(startupLocationMenu, MenuOption::HomeLocation, 0,
startupLocatiopnGroup->addAction(addCheckableActionToQMenuAndActionHash(startupLocationMenu, MenuOption::HomeLocation, 0,
false));
startupLocatiopnGroup->addAction(addCheckableActionToQMenuAndActionHash(startupLocationMenu, MenuOption::LastLocation, 0,
startupLocatiopnGroup->addAction(addCheckableActionToQMenuAndActionHash(startupLocationMenu, MenuOption::LastLocation, 0,
true));
// Settings menu ----------------------------------
@ -288,13 +288,13 @@ Menu::Menu() {
hmd->toggleShouldShowTablet();
}
});
// Settings > Entity Script / QML Whitelist
action = addActionToQMenuAndActionHash(settingsMenu, "Entity Script / QML Whitelist");
connect(action, &QAction::triggered, [] {
auto tablet = DependencyManager::get<TabletScriptingInterface>()->getTablet("com.highfidelity.interface.tablet.system");
auto hmd = DependencyManager::get<HMDScriptingInterface>();
tablet->pushOntoStack("hifi/dialogs/security/EntityScriptQMLWhitelist.qml");
if (!hmd->getShouldShowTablet()) {
@ -310,10 +310,10 @@ Menu::Menu() {
// Developer menu ----------------------------------
MenuWrapper* developerMenu = addMenu("Developer", "Developer");
// Developer > Scripting >>>
MenuWrapper* scriptingOptionsMenu = developerMenu->addMenu("Scripting");
// Developer > Scripting > Console...
addActionToQMenuAndActionHash(scriptingOptionsMenu, MenuOption::Console, Qt::CTRL | Qt::ALT | Qt::Key_J,
DependencyManager::get<StandAloneJSConsole>().data(),
@ -328,7 +328,7 @@ Menu::Menu() {
defaultScriptsLoc.setPath(defaultScriptsLoc.path() + "developer/utilities/tools/currentAPI.js");
DependencyManager::get<ScriptEngines>()->loadScript(defaultScriptsLoc.toString());
});
// Developer > Scripting > Entity Script Server Log
auto essLogAction = addActionToQMenuAndActionHash(scriptingOptionsMenu, MenuOption::EntityScriptServerLog, 0,
qApp, SLOT(toggleEntityScriptServerLogDialog()));
@ -348,7 +348,7 @@ Menu::Menu() {
// Developer > Scripting > Verbose Logging
addCheckableActionToQMenuAndActionHash(scriptingOptionsMenu, MenuOption::VerboseLogging, 0, false,
qApp, SLOT(updateVerboseLogging()));
// Developer > Scripting > Enable Speech Control API
#if defined(Q_OS_MAC) || defined(Q_OS_WIN)
auto speechRecognizer = DependencyManager::get<SpeechRecognizer>();
@ -360,20 +360,20 @@ Menu::Menu() {
UNSPECIFIED_POSITION);
connect(speechRecognizer.data(), SIGNAL(enabledUpdated(bool)), speechRecognizerAction, SLOT(setChecked(bool)));
#endif
// Developer > UI >>>
MenuWrapper* uiOptionsMenu = developerMenu->addMenu("UI");
action = addCheckableActionToQMenuAndActionHash(uiOptionsMenu, MenuOption::DesktopTabletToToolbar, 0,
qApp->getDesktopTabletBecomesToolbarSetting());
// Developer > UI > Show Overlays
addCheckableActionToQMenuAndActionHash(uiOptionsMenu, MenuOption::Overlays, 0, true);
// Developer > UI > Desktop Tablet Becomes Toolbar
connect(action, &QAction::triggered, [action] {
qApp->setDesktopTabletBecomesToolbarSetting(action->isChecked());
});
// Developer > UI > HMD Tablet Becomes Toolbar
action = addCheckableActionToQMenuAndActionHash(uiOptionsMenu, MenuOption::HMDTabletToToolbar, 0,
qApp->getHmdTabletBecomesToolbarSetting());
@ -617,6 +617,12 @@ Menu::Menu() {
false,
&UserActivityLogger::getInstance(),
SLOT(disable(bool)));
addCheckableActionToQMenuAndActionHash(networkMenu,
MenuOption::DisableCrashLogger,
0,
false,
&UserActivityLogger::getInstance(),
SLOT(crashMonitorDisable(bool)));
addActionToQMenuAndActionHash(networkMenu, MenuOption::ShowDSConnectTable, 0,
qApp, SLOT(loadDomainConnectionDialog()));
@ -702,7 +708,7 @@ Menu::Menu() {
result = QProcessEnvironment::systemEnvironment().contains(HIFI_SHOW_DEVELOPER_CRASH_MENU);
if (result) {
MenuWrapper* crashMenu = developerMenu->addMenu("Crash");
// Developer > Crash > Display Crash Options
addCheckableActionToQMenuAndActionHash(crashMenu, MenuOption::DisplayCrashOptions, 0, true);
@ -741,7 +747,7 @@ Menu::Menu() {
addActionToQMenuAndActionHash(crashMenu, MenuOption::CrashOnShutdown, 0, qApp, SLOT(crashOnShutdown()));
}
// Developer > Show Statistics
addCheckableActionToQMenuAndActionHash(developerMenu, MenuOption::Stats, 0, true);


@ -86,6 +86,7 @@ namespace MenuOption {
const QString DeleteAvatarEntitiesBookmark = "Delete Avatar Entities Bookmark";
const QString DeleteBookmark = "Delete Bookmark...";
const QString DisableActivityLogger = "Disable Activity Logger";
const QString DisableCrashLogger = "Disable Crash Logger";
const QString DisableEyelidAdjustment = "Disable Eyelid Adjustment";
const QString DisableLightEntities = "Disable Light Entities";
const QString DisplayCrashOptions = "Display Crash Options";


@ -80,7 +80,7 @@ QVariantHash ModelPropertiesDialog::getMapping() const {
// update the joint indices
QVariantHash jointIndices;
for (size_t i = 0; i < _hfmModel.joints.size(); i++) {
for (int i = 0; i < _hfmModel.joints.size(); i++) {
jointIndices.insert(_hfmModel.joints.at(i).name, QString::number(i));
}
mapping.insert(JOINT_INDEX_FIELD, jointIndices);


@ -55,7 +55,7 @@ static QStringList HAND_MAPPING_SUFFIXES = {
"HandThumb1",
};
const QUrl PACKAGE_AVATAR_DOCS_BASE_URL = QUrl("https://docs.projectathena.dev/create/avatars/package-avatar.html");
const QUrl PACKAGE_AVATAR_DOCS_BASE_URL = QUrl("https://docs.vircadia.dev/create/avatars/package-avatar.html");
AvatarDoctor::AvatarDoctor(const QUrl& avatarFSTFileUrl) :
_avatarFSTFileUrl(avatarFSTFileUrl) {
@ -79,7 +79,7 @@ void AvatarDoctor::startDiagnosing() {
_missingTextureCount = 0;
_unsupportedTextureCount = 0;
const auto resource = DependencyManager::get<ModelCache>()->getModelResource(_avatarFSTFileUrl);
const auto resource = DependencyManager::get<ModelCache>()->getGeometryResource(_avatarFSTFileUrl);
resource->refresh();
const auto resourceLoaded = [this, resource](bool success) {
@ -99,12 +99,12 @@ void AvatarDoctor::startDiagnosing() {
}
// RIG
if (avatarModel.joints.empty()) {
if (avatarModel.joints.isEmpty()) {
addError("Avatar has no rig.", "no-rig");
} else {
auto jointNames = avatarModel.getJointNames();
if (avatarModel.joints.size() > NETWORKED_JOINTS_LIMIT) {
if (avatarModel.joints.length() > NETWORKED_JOINTS_LIMIT) {
addError(tr( "Avatar has over %n bones.", "", NETWORKED_JOINTS_LIMIT), "maximum-bone-limit");
}
// Avatar does not have Hips bone mapped
@ -297,7 +297,7 @@ void AvatarDoctor::startDiagnosing() {
if (resource->isLoaded()) {
resourceLoaded(!resource->isFailed());
} else {
connect(resource.data(), &ModelResource::finished, this, resourceLoaded);
connect(resource.data(), &GeometryResource::finished, this, resourceLoaded);
}
} else {
addError("Model file cannot be opened", "missing-file");


@ -53,7 +53,7 @@ private:
int _materialMappingCount = 0;
int _materialMappingLoadedCount = 0;
ModelResource::Pointer _model;
GeometryResource::Pointer _model;
bool _isDiagnosing = false;
};


@ -972,7 +972,7 @@ void MyAvatar::simulate(float deltaTime, bool inView) {
recorder->recordFrame(FRAME_TYPE, toFrame(*this));
}
locationChanged(true, false);
locationChanged(true, true);
// if a entity-child of this avatar has moved outside of its queryAACube, update the cube and tell the entity server.
auto entityTreeRenderer = qApp->getEntities();
EntityTreePointer entityTree = entityTreeRenderer ? entityTreeRenderer->getTree() : nullptr;
@ -981,16 +981,7 @@ void MyAvatar::simulate(float deltaTime, bool inView) {
entityTree->withWriteLock([&] {
zoneInteractionProperties = entityTreeRenderer->getZoneInteractionProperties();
EntityEditPacketSender* packetSender = qApp->getEntityEditPacketSender();
forEachDescendant([&](SpatiallyNestablePointer object) {
locationChanged(true, false);
// we need to update attached queryAACubes in our own local tree so point-select always works
// however we don't want to flood the update pipeline with AvatarEntity updates, so we assume
// others have all info required to properly update queryAACube of AvatarEntities on their end
EntityItemPointer entity = std::dynamic_pointer_cast<EntityItem>(object);
bool iShouldTellServer = !(entity && entity->isAvatarEntity());
const bool force = false;
entityTree->updateEntityQueryAACube(object, packetSender, force, iShouldTellServer);
});
entityTree->updateEntityQueryAACube(shared_from_this(), packetSender, false, true);
});
bool isPhysicsEnabled = qApp->isPhysicsEnabled();
bool zoneAllowsFlying = zoneInteractionProperties.first;
@ -1988,7 +1979,7 @@ void MyAvatar::loadData() {
// Flying preferences must be loaded before calling setFlyingEnabled()
Setting::Handle<bool> firstRunVal { Settings::firstRun, true };
setFlyingHMDPref(firstRunVal.get() ? false : _flyingHMDSetting.get());
setFlyingHMDPref(firstRunVal.get() ? true : _flyingHMDSetting.get());
setMovementReference(firstRunVal.get() ? false : _movementReferenceSetting.get());
setDriveGear1(firstRunVal.get() ? DEFAULT_GEAR_1 : _driveGear1Setting.get());
setDriveGear2(firstRunVal.get() ? DEFAULT_GEAR_2 : _driveGear2Setting.get());
@ -2483,7 +2474,7 @@ void MyAvatar::setSkeletonModelURL(const QUrl& skeletonModelURL) {
if (_fullAvatarModelName.isEmpty()) {
// Store the FST file name into preferences
const auto& mapping = _skeletonModel->getNetworkModel()->getMapping();
const auto& mapping = _skeletonModel->getGeometry()->getMapping();
if (mapping.value("name").isValid()) {
_fullAvatarModelName = mapping.value("name").toString();
}
@ -2491,7 +2482,7 @@ void MyAvatar::setSkeletonModelURL(const QUrl& skeletonModelURL) {
initHeadBones();
_skeletonModel->setCauterizeBoneSet(_headBoneSet);
_fstAnimGraphOverrideUrl = _skeletonModel->getNetworkModel()->getAnimGraphOverrideUrl();
_fstAnimGraphOverrideUrl = _skeletonModel->getGeometry()->getAnimGraphOverrideUrl();
initAnimGraph();
initFlowFromFST();
}


@ -762,7 +762,7 @@ public:
* <p>Note: When using pre-built animation data, it's critical that the joint orientation of the source animation and target
* rig are equivalent, since the animation data applies absolute values onto the joints. If the orientations are different,
* the avatar will move in unpredictable ways. For more information about avatar joint orientation standards, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.
* @function MyAvatar.overrideRoleAnimation
* @param {string} role - The animation role to override
* @param {string} url - The URL to the animation file. Animation files need to be in glTF or FBX format, but only need to
@ -1920,7 +1920,7 @@ public:
/**jsdoc
* Enables and disables flow simulation of physics on the avatar's hair, clothes, and body parts. See
* {@link https://docs.projectathena.dev/create/avatars/add-flow.html|Add Flow to Your Avatar} for more
* {@link https://docs.vircadia.dev/create/avatars/add-flow.html|Add Flow to Your Avatar} for more
* information.
* @function MyAvatar.useFlow
* @param {boolean} isActive - <code>true</code> if flow simulation is enabled on the joint, <code>false</code> if it isn't.
@ -2285,7 +2285,7 @@ public slots:
/**jsdoc
* Gets the URL of the override animation graph.
* <p>See {@link https://docs.projectathena.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* <p>See {@link https://docs.vircadia.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* information on animation graphs.</p>
* @function MyAvatar.getAnimGraphOverrideUrl
* @returns {string} The URL of the override animation graph JSON file. <code>""</code> if there is no override animation
@ -2295,7 +2295,7 @@ public slots:
/**jsdoc
* Sets the animation graph to use in preference to the default animation graph.
* <p>See {@link https://docs.projectathena.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* <p>See {@link https://docs.vircadia.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* information on animation graphs.</p>
* @function MyAvatar.setAnimGraphOverrideUrl
* @param {string} url - The URL of the animation graph JSON file to use. Set to <code>""</code> to clear an override.
@ -2304,7 +2304,7 @@ public slots:
/**jsdoc
* Gets the URL of animation graph (i.e., the avatar animation JSON) that's currently being used for avatar animations.
* <p>See {@link https://docs.projectathena.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* <p>See {@link https://docs.vircadia.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* information on animation graphs.</p>
* @function MyAvatar.getAnimGraphUrl
* @returns {string} The URL of the current animation graph JSON file.
@ -2315,7 +2315,7 @@ public slots:
/**jsdoc
* Sets the current animation graph (i.e., the avatar animation JSON) to use for avatar animations and makes it the default.
* <p>See {@link https://docs.projectathena.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* <p>See {@link https://docs.vircadia.dev/create/avatars/custom-animations.html|Custom Avatar Animations} for
* information on animation graphs.</p>
* @function MyAvatar.setAnimGraphUrl
* @param {string} url - The URL of the animation graph JSON file to use.
@ -2702,7 +2702,7 @@ private:
bool _enableFlying { false };
bool _flyingPrefDesktop { true };
bool _flyingPrefHMD { false };
bool _flyingPrefHMD { true };
bool _wasPushing { false };
bool _isPushing { false };
bool _isBeingPushed { false };


@ -72,7 +72,7 @@ int main(int argc, const char* argv[]) {
}
QCommandLineParser parser;
parser.setApplicationDescription("High Fidelity");
parser.setApplicationDescription("Vircadia");
QCommandLineOption versionOption = parser.addVersionOption();
QCommandLineOption helpOption = parser.addHelpOption();
@ -218,12 +218,12 @@ int main(int argc, const char* argv[]) {
}
qDebug() << "UserActivityLogger is enabled:" << ual.isEnabled();
if (ual.isEnabled()) {
qDebug() << "Crash handler logger is enabled:" << ual.isCrashMonitorEnabled();
if (ual.isCrashMonitorEnabled()) {
auto crashHandlerStarted = startCrashHandler(argv[0]);
qDebug() << "Crash handler started:" << crashHandlerStarted;
}
const QString& applicationName = getInterfaceSharedMemoryName();
bool instanceMightBeRunning = true;
#ifdef Q_OS_WIN


@ -121,9 +121,8 @@ bool CollisionPick::isLoaded() const {
bool CollisionPick::getShapeInfoReady(const CollisionRegion& pick) {
if (_mathPick.shouldComputeShapeInfo()) {
if (_cachedResource && _cachedResource->isLoaded()) {
// TODO: Model CollisionPick support
//computeShapeInfo(pick, *_mathPick.shapeInfo, _cachedResource);
//_mathPick.loaded = true;
computeShapeInfo(pick, *_mathPick.shapeInfo, _cachedResource);
_mathPick.loaded = true;
} else {
_mathPick.loaded = false;
}
@ -135,7 +134,7 @@ bool CollisionPick::getShapeInfoReady(const CollisionRegion& pick) {
return _mathPick.loaded;
}
void CollisionPick::computeShapeInfoDimensionsOnly(const CollisionRegion& pick, ShapeInfo& shapeInfo, QSharedPointer<ModelResource> resource) {
void CollisionPick::computeShapeInfoDimensionsOnly(const CollisionRegion& pick, ShapeInfo& shapeInfo, QSharedPointer<GeometryResource> resource) {
ShapeType type = shapeInfo.getType();
glm::vec3 dimensions = pick.transform.getScale();
QString modelURL = (resource ? resource->getURL().toString() : "");
@ -148,12 +147,241 @@ void CollisionPick::computeShapeInfoDimensionsOnly(const CollisionRegion& pick,
}
}
void CollisionPick::computeShapeInfo(const CollisionRegion& pick, ShapeInfo& shapeInfo, QSharedPointer<GeometryResource> resource) {
// This code was copied and modified from RenderableModelEntityItem::computeShapeInfo
// TODO: Move to some shared code area (in entities-renderer? model-networking?)
// after we verify this is working and do a diff comparison with RenderableModelEntityItem::computeShapeInfo
// to consolidate the code.
// We may also want to make computeShapeInfo always abstract away from the gpu model mesh, like it does here.
const uint32_t TRIANGLE_STRIDE = 3;
const uint32_t QUAD_STRIDE = 4;
ShapeType type = shapeInfo.getType();
glm::vec3 dimensions = pick.transform.getScale();
if (type == SHAPE_TYPE_COMPOUND) {
// should never fall in here when collision model not fully loaded
// TODO: assert that all geometries exist and are loaded
//assert(_model && _model->isLoaded() && _compoundShapeResource && _compoundShapeResource->isLoaded());
const HFMModel& collisionModel = resource->getHFMModel();
ShapeInfo::PointCollection& pointCollection = shapeInfo.getPointCollection();
pointCollection.clear();
uint32_t i = 0;
// the way OBJ files get read, each section under a "g" line is its own meshPart. We only expect
// to find one actual "mesh" (with one or more meshParts in it), but we loop over the meshes, just in case.
foreach (const HFMMesh& mesh, collisionModel.meshes) {
// each meshPart is a convex hull
foreach (const HFMMeshPart &meshPart, mesh.parts) {
pointCollection.push_back(QVector<glm::vec3>());
ShapeInfo::PointList& pointsInPart = pointCollection[i];
// run through all the triangles and (uniquely) add each point to the hull
uint32_t numIndices = (uint32_t)meshPart.triangleIndices.size();
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices % TRIANGLE_STRIDE == 0);
numIndices -= numIndices % TRIANGLE_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
for (uint32_t j = 0; j < numIndices; j += TRIANGLE_STRIDE) {
glm::vec3 p0 = mesh.vertices[meshPart.triangleIndices[j]];
glm::vec3 p1 = mesh.vertices[meshPart.triangleIndices[j + 1]];
glm::vec3 p2 = mesh.vertices[meshPart.triangleIndices[j + 2]];
if (!pointsInPart.contains(p0)) {
pointsInPart << p0;
}
if (!pointsInPart.contains(p1)) {
pointsInPart << p1;
}
if (!pointsInPart.contains(p2)) {
pointsInPart << p2;
}
}
// run through all the quads and (uniquely) add each point to the hull
numIndices = (uint32_t)meshPart.quadIndices.size();
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices % QUAD_STRIDE == 0);
numIndices -= numIndices % QUAD_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
for (uint32_t j = 0; j < numIndices; j += QUAD_STRIDE) {
glm::vec3 p0 = mesh.vertices[meshPart.quadIndices[j]];
glm::vec3 p1 = mesh.vertices[meshPart.quadIndices[j + 1]];
glm::vec3 p2 = mesh.vertices[meshPart.quadIndices[j + 2]];
glm::vec3 p3 = mesh.vertices[meshPart.quadIndices[j + 3]];
if (!pointsInPart.contains(p0)) {
pointsInPart << p0;
}
if (!pointsInPart.contains(p1)) {
pointsInPart << p1;
}
if (!pointsInPart.contains(p2)) {
pointsInPart << p2;
}
if (!pointsInPart.contains(p3)) {
pointsInPart << p3;
}
}
if (pointsInPart.size() == 0) {
qCDebug(scriptengine) << "Warning -- meshPart has no faces";
pointCollection.pop_back();
continue;
}
++i;
}
}
// We expect that the collision model will have the same units and will be displaced
// from its origin in the same way the visual model is. The visual model has
// been centered and probably scaled. We take the scaling and offset which were applied
// to the visual model and apply them to the collision model (without regard for the
// collision model's extents).
glm::vec3 scaleToFit = dimensions / resource->getHFMModel().getUnscaledMeshExtents().size();
// multiply each point by scale
for (int32_t i = 0; i < pointCollection.size(); i++) {
for (int32_t j = 0; j < pointCollection[i].size(); j++) {
// back compensate for registration so we can apply that offset to the shapeInfo later
pointCollection[i][j] = scaleToFit * pointCollection[i][j];
}
}
shapeInfo.setParams(type, dimensions, resource->getURL().toString());
} else if (type >= SHAPE_TYPE_SIMPLE_HULL && type <= SHAPE_TYPE_STATIC_MESH) {
const HFMModel& hfmModel = resource->getHFMModel();
int numHFMMeshes = hfmModel.meshes.size();
int totalNumVertices = 0;
for (int i = 0; i < numHFMMeshes; i++) {
const HFMMesh& mesh = hfmModel.meshes.at(i);
totalNumVertices += mesh.vertices.size();
}
const int32_t MAX_VERTICES_PER_STATIC_MESH = 1e6;
if (totalNumVertices > MAX_VERTICES_PER_STATIC_MESH) {
qWarning() << "model" << "has too many vertices" << totalNumVertices << "and will collide as a box.";
shapeInfo.setParams(SHAPE_TYPE_BOX, 0.5f * dimensions);
return;
}
auto& meshes = resource->getHFMModel().meshes;
int32_t numMeshes = (int32_t)(meshes.size());
const int MAX_ALLOWED_MESH_COUNT = 1000;
if (numMeshes > MAX_ALLOWED_MESH_COUNT) {
// too many will cause the deadlock timer to throw...
shapeInfo.setParams(SHAPE_TYPE_BOX, 0.5f * dimensions);
return;
}
ShapeInfo::PointCollection& pointCollection = shapeInfo.getPointCollection();
pointCollection.clear();
if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
pointCollection.resize(numMeshes);
} else {
pointCollection.resize(1);
}
ShapeInfo::TriangleIndices& triangleIndices = shapeInfo.getTriangleIndices();
triangleIndices.clear();
Extents extents;
int32_t meshCount = 0;
int32_t pointListIndex = 0;
for (auto& mesh : meshes) {
if (!mesh.vertices.size()) {
continue;
}
QVector<glm::vec3> vertices = mesh.vertices;
ShapeInfo::PointList& points = pointCollection[pointListIndex];
// reserve room
int32_t sizeToReserve = (int32_t)(vertices.count());
if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
// a list of points for each mesh
pointListIndex++;
} else {
// only one list of points
sizeToReserve += (int32_t)points.size();
}
points.reserve(sizeToReserve);
// copy points
const glm::vec3* vertexItr = vertices.cbegin();
while (vertexItr != vertices.cend()) {
glm::vec3 point = *vertexItr;
points.push_back(point);
extents.addPoint(point);
++vertexItr;
}
if (type == SHAPE_TYPE_STATIC_MESH) {
// copy into triangleIndices
size_t triangleIndicesCount = 0;
for (const HFMMeshPart& meshPart : mesh.parts) {
triangleIndicesCount += meshPart.triangleIndices.count();
}
triangleIndices.reserve((int)triangleIndicesCount);
for (const HFMMeshPart& meshPart : mesh.parts) {
const int* indexItr = meshPart.triangleIndices.cbegin();
while (indexItr != meshPart.triangleIndices.cend()) {
triangleIndices.push_back(*indexItr);
++indexItr;
}
}
} else if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
// for each mesh copy unique part indices, separated by special bogus (flag) index values
for (const HFMMeshPart& meshPart : mesh.parts) {
// collect unique list of indices for this part
std::set<int32_t> uniqueIndices;
auto numIndices = meshPart.triangleIndices.count();
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices% TRIANGLE_STRIDE == 0);
numIndices -= numIndices % TRIANGLE_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
auto indexItr = meshPart.triangleIndices.cbegin();
while (indexItr != meshPart.triangleIndices.cend()) {
uniqueIndices.insert(*indexItr);
++indexItr;
}
// store uniqueIndices in triangleIndices
triangleIndices.reserve(triangleIndices.size() + (int32_t)uniqueIndices.size());
for (auto index : uniqueIndices) {
triangleIndices.push_back(index);
}
// flag end of part
triangleIndices.push_back(END_OF_MESH_PART);
}
// flag end of mesh
triangleIndices.push_back(END_OF_MESH);
}
++meshCount;
}
// scale and shift
glm::vec3 extentsSize = extents.size();
glm::vec3 scaleToFit = dimensions / extentsSize;
for (int32_t i = 0; i < 3; ++i) {
if (extentsSize[i] < 1.0e-6f) {
scaleToFit[i] = 1.0f;
}
}
for (auto points : pointCollection) {
for (int32_t i = 0; i < points.size(); ++i) {
points[i] = (points[i] * scaleToFit);
}
}
shapeInfo.setParams(type, 0.5f * dimensions, resource->getURL().toString());
}
}
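The compound branch above boils down to two steps: collect each mesh part's vertices into a unique point list (one convex hull per part), then scale every point so the hull matches the pick's dimensions, skipping axes with near-zero extents. A minimal standalone sketch of those two steps, using plain std types in place of the HFM/glm structures (the names here are illustrative, not the engine's API):

#include <array>
#include <vector>
#include <algorithm>

using Point = std::array<float, 3>;

// One convex hull per part: add each referenced vertex once, mirroring the contains() checks above.
std::vector<Point> collectUniquePoints(const std::vector<Point>& vertices,
                                       const std::vector<int>& indices) {
    std::vector<Point> points;
    for (int index : indices) {
        const Point& p = vertices[index];
        if (std::find(points.begin(), points.end(), p) == points.end()) {
            points.push_back(p);
        }
    }
    return points;
}

// Scale the hulls so they match the requested dimensions, leaving degenerate axes untouched.
void scalePointsToFit(std::vector<std::vector<Point>>& pointCollection,
                      Point dimensions, Point extentsSize) {
    Point scale;
    for (int axis = 0; axis < 3; ++axis) {
        scale[axis] = (extentsSize[axis] < 1.0e-6f) ? 1.0f : dimensions[axis] / extentsSize[axis];
    }
    for (auto& part : pointCollection) {
        for (auto& p : part) {
            for (int axis = 0; axis < 3; ++axis) {
                p[axis] *= scale[axis];
            }
        }
    }
}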
CollisionPick::CollisionPick(const PickFilter& filter, float maxDistance, bool enabled, bool scaleWithParent, CollisionRegion collisionRegion, PhysicsEnginePointer physicsEngine) :
Pick(collisionRegion, filter, maxDistance, enabled),
_scaleWithParent(scaleWithParent),
_physicsEngine(physicsEngine) {
if (collisionRegion.shouldComputeShapeInfo()) {
_cachedResource = DependencyManager::get<ModelCache>()->getCollisionModelResource(collisionRegion.modelURL);
_cachedResource = DependencyManager::get<ModelCache>()->getCollisionGeometryResource(collisionRegion.modelURL);
}
_mathPick.loaded = isLoaded();
}

View file

@ -63,13 +63,14 @@ protected:
bool isLoaded() const;
// Returns true if _mathPick.shapeInfo is valid. Otherwise, attempts to get the _mathPick ready for use.
bool getShapeInfoReady(const CollisionRegion& pick);
void computeShapeInfoDimensionsOnly(const CollisionRegion& pick, ShapeInfo& shapeInfo, QSharedPointer<ModelResource> resource);
void computeShapeInfo(const CollisionRegion& pick, ShapeInfo& shapeInfo, QSharedPointer<GeometryResource> resource);
void computeShapeInfoDimensionsOnly(const CollisionRegion& pick, ShapeInfo& shapeInfo, QSharedPointer<GeometryResource> resource);
void filterIntersections(std::vector<ContactTestResult>& intersections) const;
bool _scaleWithParent;
PhysicsEnginePointer _physicsEngine;
QSharedPointer<ModelResource> _cachedResource;
QSharedPointer<GeometryResource> _cachedResource;
// Options for what information to get from collision results
bool _includeNormals;

View file

@ -254,7 +254,15 @@ void setupPreferences() {
auto setter = [](bool value) { Menu::getInstance()->setIsOptionChecked(MenuOption::DisableActivityLogger, !value); };
preferences->addPreference(new CheckPreference("Privacy", "Send data - High Fidelity uses information provided by your "
"client to improve the product through the logging of errors, tracking of usage patterns, "
"installation and system details, and crash events. By allowing High Fidelity to collect "
"installation and system details. By allowing High Fidelity to collect this information "
"you are helping to improve the product. ", getter, setter));
}
{
auto getter = []()->bool { return !Menu::getInstance()->isOptionChecked(MenuOption::DisableCrashLogger); };
auto setter = [](bool value) { Menu::getInstance()->setIsOptionChecked(MenuOption::DisableCrashLogger, !value); };
preferences->addPreference(new CheckPreference("Privacy", "Send crashes - Vircadia uses information provided by your "
"client to improve the product through crash reports. By allowing Vircadia to collect "
"this information you are helping to improve the product. ", getter, setter));
}
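Both preference blocks follow the same pattern: the UI exposes a positive "Send ..." checkbox while the menu stores a negative "Disable ..." flag, so the getter and the setter each invert the value. A small sketch of that inversion, with a stand-in MenuFlags struct in place of the real Menu and CheckPreference classes (names here are hypothetical):

#include <iostream>

struct MenuFlags {
    bool disableCrashLogger { false };   // stored as an opt-out flag
};

int main() {
    MenuFlags menu;
    // The checkbox reads "Send crashes", so true means the Disable flag is cleared.
    auto getter = [&]() -> bool { return !menu.disableCrashLogger; };
    auto setter = [&](bool sendCrashes) { menu.disableCrashLogger = !sendCrashes; };

    setter(false);                                       // user unticks "Send crashes"
    std::cout << std::boolalpha << getter() << "\n";     // prints: false
    std::cout << menu.disableCrashLogger << "\n";        // prints: true
}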

View file

@ -20,17 +20,24 @@ AnimSkeleton::AnimSkeleton(const HFMModel& hfmModel) {
_geometryOffset = hfmModel.offset;
buildSkeletonFromJoints(hfmModel.joints, hfmModel.jointRotationOffsets);
// convert to std::vector of joints
std::vector<HFMJoint> joints;
joints.reserve(hfmModel.joints.size());
for (auto& joint : hfmModel.joints) {
joints.push_back(joint);
}
buildSkeletonFromJoints(joints, hfmModel.jointRotationOffsets);
// we make a copy of the inverseBindMatrices in order to prevent mutating the model bind pose
// when we are dealing with a joint offset in the model
for (uint32_t i = 0; i < (uint32_t)hfmModel.skinDeformers.size(); i++) {
const auto& deformer = hfmModel.skinDeformers[i];
for (int i = 0; i < (int)hfmModel.meshes.size(); i++) {
const HFMMesh& mesh = hfmModel.meshes.at(i);
std::vector<HFMCluster> dummyClustersList;
for (uint32_t j = 0; j < (uint32_t)deformer.clusters.size(); j++) {
for (int j = 0; j < mesh.clusters.size(); j++) {
std::vector<glm::mat4> bindMatrices;
// cast into a non-const reference, so we can mutate the FBXCluster
HFMCluster& cluster = const_cast<HFMCluster&>(deformer.clusters.at(j));
HFMCluster& cluster = const_cast<HFMCluster&>(mesh.clusters.at(j));
HFMCluster localCluster;
localCluster.jointIndex = cluster.jointIndex;

View file

@ -68,7 +68,7 @@ public:
void dump(const AnimPoseVec& poses) const;
std::vector<int> lookUpJointIndices(const std::vector<QString>& jointNames) const;
const HFMCluster getClusterBindMatricesOriginalValues(int skinDeformerIndex, int clusterIndex) const { return _clusterBindMatrixOriginalValues[skinDeformerIndex][clusterIndex]; }
const HFMCluster getClusterBindMatricesOriginalValues(const int meshIndex, const int clusterIndex) const { return _clusterBindMatrixOriginalValues[meshIndex][clusterIndex]; }
protected:
void buildSkeletonFromJoints(const std::vector<HFMJoint>& joints, const QMap<int, glm::quat> jointOffsets);

View file

@ -943,7 +943,7 @@ void Avatar::simulateAttachments(float deltaTime) {
bool texturesLoaded = _attachmentModelsTexturesLoaded.at(i);
// Watch for texture loading
if (!texturesLoaded && model->getNetworkModel() && model->getNetworkModel()->areTexturesLoaded()) {
if (!texturesLoaded && model->getGeometry() && model->getGeometry()->areTexturesLoaded()) {
_attachmentModelsTexturesLoaded[i] = true;
model->updateRenderItems();
}
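The attachment update above is a one-shot latch: the per-model flag starts false, and the first frame on which the textures report loaded flips it and triggers a single updateRenderItems() call. A tiny sketch of that latch pattern with stand-in names (not the engine's Model API):

#include <vector>

struct FakeModel {
    bool texturesLoaded { false };      // would come from the network model / geometry
    int renderItemUpdates { 0 };
    void updateRenderItems() { ++renderItemUpdates; }
};

// Call once per frame; each model gets exactly one updateRenderItems() when its textures arrive.
void watchTextures(std::vector<FakeModel>& models, std::vector<bool>& latched) {
    for (size_t i = 0; i < models.size(); ++i) {
        if (!latched[i] && models[i].texturesLoaded) {
            latched[i] = true;
            models[i].updateRenderItems();
        }
    }
}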

View file

@ -207,7 +207,7 @@ public:
/**jsdoc
* Gets the default rotation of a joint (in the current avatar) relative to its parent.
* <p>For information on the joint hierarchy used, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* @function MyAvatar.getDefaultJointRotation
* @param {number} index - The joint index.
* @returns {Quat} The default rotation of the joint if the joint index is valid, otherwise {@link Quat(0)|Quat.IDENTITY}.
@ -218,7 +218,7 @@ public:
* Gets the default translation of a joint (in the current avatar) relative to its parent, in model coordinates.
* <p><strong>Warning:</strong> These coordinates are not necessarily in meters.</p>
* <p>For information on the joint hierarchy used, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* @function MyAvatar.getDefaultJointTranslation
* @param {number} index - The joint index.
* @returns {Vec3} The default translation of the joint (in model coordinates) if the joint index is valid, otherwise

View file

@ -171,7 +171,7 @@ void SkeletonModel::simulate(float deltaTime, bool fullUpdate) {
// FIXME: This texture loading logic should probably live in Avatar, to mirror RenderableModelEntityItem,
// but Avatars don't get updates in the same way
if (!_texturesLoaded && getNetworkModel() && getNetworkModel()->areTexturesLoaded()) {
if (!_texturesLoaded && getGeometry() && getGeometry()->areTexturesLoaded()) {
_texturesLoaded = true;
updateRenderItems();
}
@ -326,7 +326,7 @@ void SkeletonModel::computeBoundingShape() {
}
const HFMModel& hfmModel = getHFMModel();
if (hfmModel.joints.empty() || _rig.indexOfJoint("Hips") == -1) {
if (hfmModel.joints.isEmpty() || _rig.indexOfJoint("Hips") == -1) {
// rootJointIndex == -1 if the avatar model has no skeleton
return;
}

View file

@ -796,7 +796,7 @@ public:
* @param {Quat} rotation - The rotation of the joint relative to its parent.
* @param {Vec3} translation - The translation of the joint relative to its parent, in model coordinates.
* @example <caption>Set your avatar to its default T-pose for a while.<br />
* <img alt="Avatar in T-pose" src="https://apidocs.projectathena.dev/images/t-pose.png" /></caption>
* <img alt="Avatar in T-pose" src="https://apidocs.vircadia.dev/images/t-pose.png" /></caption>
* // Set all joint translations and rotations to defaults.
* var i, length, rotation, translation;
* for (i = 0, length = MyAvatar.getJointNames().length; i < length; i++) {
@ -860,7 +860,7 @@ public:
/**jsdoc
* Gets the rotation of a joint relative to its parent. For information on the joint hierarchy used, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.
* @function Avatar.getJointRotation
* @param {number} index - The index of the joint.
* @returns {Quat} The rotation of the joint relative to its parent.
@ -871,7 +871,7 @@ public:
* Gets the translation of a joint relative to its parent, in model coordinates.
* <p><strong>Warning:</strong> These coordinates are not necessarily in meters.</p>
* <p>For information on the joint hierarchy used, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* @function Avatar.getJointTranslation
* @param {number} index - The index of the joint.
* @returns {Vec3} The translation of the joint relative to its parent, in model coordinates.
@ -904,7 +904,7 @@ public:
* @param {string} name - The name of the joint.
* @param {Quat} rotation - The rotation of the joint relative to its parent.
* @example <caption>Set your avatar to its default T-pose then rotate its right arm.<br />
* <img alt="Avatar in T-pose with arm rotated" src="https://apidocs.projectathena.dev/images/armpose.png" /></caption>
* <img alt="Avatar in T-pose with arm rotated" src="https://apidocs.vircadia.dev/images/armpose.png" /></caption>
* // Set all joint translations and rotations to defaults.
* var i, length, rotation, translation;
* for (i = 0, length = MyAvatar.getJointNames().length; i < length; i++) {
@ -939,7 +939,7 @@ public:
* @param {Vec3} translation - The translation of the joint relative to its parent, in model coordinates.
* @example <caption>Stretch your avatar's neck. Depending on the avatar you are using, you will either see a gap between
* the head and body or you will see the neck stretched.<br />
* <img alt="Avatar with neck stretched" src="https://apidocs.projectathena.dev/images/stretched-neck.png" /></caption>
* <img alt="Avatar with neck stretched" src="https://apidocs.vircadia.dev/images/stretched-neck.png" /></caption>
* // Stretch your avatar's neck.
* MyAvatar.setJointTranslation("Neck", Vec3.multiply(2, MyAvatar.getJointTranslation("Neck")));
*
@ -981,7 +981,7 @@ public:
/**jsdoc
* Gets the rotation of a joint relative to its parent. For information on the joint hierarchy used, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.
* @function Avatar.getJointRotation
* @param {string} name - The name of the joint.
* @returns {Quat} The rotation of the joint relative to its parent.
@ -996,7 +996,7 @@ public:
* Gets the translation of a joint relative to its parent, in model coordinates.
* <p><strong>Warning:</strong> These coordinates are not necessarily in meters.</p>
* <p>For information on the joint hierarchy used, see
* <a href="https://docs.projectathena.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* <a href="https://docs.vircadia.dev/create/avatars/avatar-standards.html">Avatar Standards</a>.</p>
* @function Avatar.getJointTranslation
* @param {number} name - The name of the joint.
* @returns {Vec3} The translation of the joint relative to its parent, in model coordinates.
@ -1041,7 +1041,7 @@ public:
* @param {Quat[]} jointRotations - The rotations for all joints in the avatar. The values are in the same order as the
* array returned by {@link MyAvatar.getJointNames}, or {@link Avatar.getJointNames} if using the <code>Avatar</code> API.
* @example <caption>Set your avatar to its default T-pose then rotate its right arm.<br />
* <img alt="Avatar in T-pose" src="https://apidocs.projectathena.dev/images/armpose.png" /></caption>
* <img alt="Avatar in T-pose" src="https://apidocs.vircadia.dev/images/armpose.png" /></caption>
* // Set all joint translations and rotations to defaults.
* var i, length, rotation, translation;
* for (i = 0, length = MyAvatar.getJointNames().length; i < length; i++) {
@ -1138,7 +1138,7 @@ public:
* set <code>hasScriptedBlendshapes</code> back to <code>false</code> when the animation is complete.
* @function Avatar.setBlendshape
* @param {string} name - The name of the blendshape, per the
* {@link https://docs.projectathena.dev/create/avatars/avatar-standards.html#blendshapes Avatar Standards}.
* {@link https://docs.vircadia.dev/create/avatars/avatar-standards.html#blendshapes Avatar Standards}.
* @param {number} value - A value between <code>0.0</code> and <code>1.0</code>.
* @example <caption>Open your avatar's mouth wide.</caption>
* MyAvatar.hasScriptedBlendshapes = true;

View file

@ -34,7 +34,6 @@ HeadData::HeadData(AvatarData* owningAvatar) :
{
_userProceduralAnimationFlags.assign((size_t)ProceduralAnimaitonTypeCount, true);
_suppressProceduralAnimationFlags.assign((size_t)ProceduralAnimaitonTypeCount, false);
computeBlendshapesLookupMap();
}
glm::quat HeadData::getRawOrientation() const {
@ -72,12 +71,6 @@ void HeadData::setOrientation(const glm::quat& orientation) {
setHeadOrientation(orientation);
}
void HeadData::computeBlendshapesLookupMap(){
for (int i = 0; i < (int)Blendshapes::BlendshapeCount; i++) {
_blendshapeLookupMap[FACESHIFT_BLENDSHAPES[i]] = i;
}
}
int HeadData::getNumSummedBlendshapeCoefficients() const {
int maxSize = std::max(_blendshapeCoefficients.size(), _transientBlendshapeCoefficients.size());
return maxSize;
@ -109,8 +102,8 @@ const QVector<float>& HeadData::getSummedBlendshapeCoefficients() {
void HeadData::setBlendshape(QString name, float val) {
// Check to see if the named blendshape exists, and then set its value if it does
auto it = _blendshapeLookupMap.find(name);
if (it != _blendshapeLookupMap.end()) {
auto it = BLENDSHAPE_LOOKUP_MAP.find(name);
if (it != BLENDSHAPE_LOOKUP_MAP.end()) {
if (_blendshapeCoefficients.size() <= it.value()) {
_blendshapeCoefficients.resize(it.value() + 1);
}
@ -135,8 +128,8 @@ void HeadData::setBlendshape(QString name, float val) {
}
int HeadData::getBlendshapeIndex(const QString& name) {
auto it = _blendshapeLookupMap.find(name);
int index = it != _blendshapeLookupMap.end() ? it.value() : -1;
auto it = BLENDSHAPE_LOOKUP_MAP.find(name);
int index = it != BLENDSHAPE_LOOKUP_MAP.end() ? it.value() : -1;
return index;
}
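The change above replaces the per-instance _blendshapeLookupMap (built in computeBlendshapesLookupMap) with a shared constant BLENDSHAPE_LOOKUP_MAP, so the name-to-index table is built once rather than once per HeadData. A minimal sketch of that idea using a function-local static std::unordered_map over an illustrative name list (the real table is keyed off the blendshape name constants, not these three strings):

#include <string>
#include <unordered_map>
#include <vector>

// Built exactly once, on first use, and shared by every caller.
static const std::unordered_map<std::string, int>& blendshapeLookupMap() {
    static const std::unordered_map<std::string, int> map = [] {
        const std::vector<std::string> names = { "EyeBlink_L", "EyeBlink_R", "JawOpen" }; // illustrative subset
        std::unordered_map<std::string, int> m;
        for (int i = 0; i < (int)names.size(); ++i) {
            m[names[i]] = i;
        }
        return m;
    }();
    return map;
}

int getBlendshapeIndex(const std::string& name) {
    auto it = blendshapeLookupMap().find(name);
    return it != blendshapeLookupMap().end() ? it->second : -1;
}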
@ -155,8 +148,8 @@ static const QString JSON_AVATAR_HEAD_LOOKAT = QStringLiteral("lookAt");
QJsonObject HeadData::toJson() const {
QJsonObject headJson;
QJsonObject blendshapesJson;
for (auto name : _blendshapeLookupMap.keys()) {
auto index = _blendshapeLookupMap[name];
for (auto name : BLENDSHAPE_LOOKUP_MAP.keys()) {
auto index = BLENDSHAPE_LOOKUP_MAP[name];
float value = 0.0f;
if (index < _blendshapeCoefficients.size()) {
value += _blendshapeCoefficients[index];

View file

@ -125,7 +125,6 @@ protected:
QVector<float> _blendshapeCoefficients;
QVector<float> _transientBlendshapeCoefficients;
QVector<float> _summedBlendshapeCoefficients;
QMap<QString, int> _blendshapeLookupMap;
AvatarData* _owningAvatar;
private:
@ -134,7 +133,6 @@ private:
HeadData& operator= (const HeadData&);
void setHeadOrientation(const glm::quat& orientation);
void computeBlendshapesLookupMap();
};
#endif // hifi_HeadData_h

View file

@ -90,11 +90,11 @@ void FBXBaker::replaceMeshNodeWithDraco(FBXNode& meshNode, const QByteArray& dra
}
}
void FBXBaker::rewriteAndBakeSceneModels(const std::vector<hfm::Mesh>& meshes, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists) {
void FBXBaker::rewriteAndBakeSceneModels(const QVector<hfm::Mesh>& meshes, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists) {
std::vector<int> meshIndexToRuntimeOrder;
auto meshCount = (uint32_t)meshes.size();
auto meshCount = (int)meshes.size();
meshIndexToRuntimeOrder.resize(meshCount);
for (uint32_t i = 0; i < meshCount; i++) {
for (int i = 0; i < meshCount; i++) {
meshIndexToRuntimeOrder[meshes[i].meshIndex] = i;
}
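The loop above builds an inverse mapping: meshes[i].meshIndex records where mesh i originally sat, and meshIndexToRuntimeOrder inverts that so a source mesh index can be turned back into the runtime (draco) order. A small standalone sketch of the same inverse-permutation step with plain ints standing in for hfm::Mesh:

#include <cassert>
#include <vector>

// meshIndexOf[i] = original index of the mesh stored at runtime position i.
std::vector<int> invertOrder(const std::vector<int>& meshIndexOf) {
    std::vector<int> toRuntimeOrder(meshIndexOf.size());
    for (int i = 0; i < (int)meshIndexOf.size(); ++i) {
        toRuntimeOrder[meshIndexOf[i]] = i;
    }
    return toRuntimeOrder;
}

int main() {
    // Runtime positions 0,1,2 hold source meshes 2,0,1.
    auto toRuntime = invertOrder({ 2, 0, 1 });
    assert(toRuntime[2] == 0 && toRuntime[0] == 1 && toRuntime[1] == 2);
}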

View file

@ -33,7 +33,7 @@ protected:
virtual void bakeProcessedSource(const hfm::Model::Pointer& hfmModel, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists) override;
private:
void rewriteAndBakeSceneModels(const std::vector<hfm::Mesh>& meshes, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists);
void rewriteAndBakeSceneModels(const QVector<hfm::Mesh>& meshes, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists);
void replaceMeshNodeWithDraco(FBXNode& meshNode, const QByteArray& dracoMeshBytes, const std::vector<hifi::ByteArray>& dracoMaterialList);
};

View file

@ -258,9 +258,9 @@ void MaterialBaker::addTexture(const QString& materialName, image::TextureUsage:
}
};
void MaterialBaker::setMaterials(const std::vector<hfm::Material>& materials, const QString& baseURL) {
void MaterialBaker::setMaterials(const QHash<QString, hfm::Material>& materials, const QString& baseURL) {
_materialResource = NetworkMaterialResourcePointer(new NetworkMaterialResource(), [](NetworkMaterialResource* ptr) { ptr->deleteLater(); });
for (const auto& material : materials) {
for (auto& material : materials) {
_materialResource->parsedMaterials.names.push_back(material.name.toStdString());
_materialResource->parsedMaterials.networkMaterials[material.name.toStdString()] = std::make_shared<NetworkMaterial>(material, baseURL);

View file

@ -32,7 +32,7 @@ public:
bool isURL() const { return _isURL; }
QString getBakedMaterialData() const { return _bakedMaterialData; }
void setMaterials(const std::vector<hfm::Material>& materials, const QString& baseURL);
void setMaterials(const QHash<QString, hfm::Material>& materials, const QString& baseURL);
void setMaterials(const NetworkMaterialResourcePointer& materialResource);
NetworkMaterialResourcePointer getNetworkMaterialResource() const { return _materialResource; }

View file

@ -265,7 +265,7 @@ void ModelBaker::bakeSourceCopy() {
return;
}
if (!_hfmModel->materials.empty()) {
if (!_hfmModel->materials.isEmpty()) {
_materialBaker = QSharedPointer<MaterialBaker>(
new MaterialBaker(_modelURL.fileName(), true, _bakedOutputDir),
&MaterialBaker::deleteLater

View file

@ -37,10 +37,10 @@ const QByteArray MESH = "Mesh";
void OBJBaker::bakeProcessedSource(const hfm::Model::Pointer& hfmModel, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists) {
// Write OBJ Data as FBX tree nodes
createFBXNodeTree(_rootNode, hfmModel, dracoMeshes[0], dracoMaterialLists[0]);
createFBXNodeTree(_rootNode, hfmModel, dracoMeshes[0]);
}
void OBJBaker::createFBXNodeTree(FBXNode& rootNode, const hfm::Model::Pointer& hfmModel, const hifi::ByteArray& dracoMesh, const std::vector<hifi::ByteArray>& dracoMaterialList) {
void OBJBaker::createFBXNodeTree(FBXNode& rootNode, const hfm::Model::Pointer& hfmModel, const hifi::ByteArray& dracoMesh) {
// Make all generated nodes children of rootNode
rootNode.children = { FBXNode(), FBXNode(), FBXNode() };
FBXNode& globalSettingsNode = rootNode.children[0];
@ -100,22 +100,19 @@ void OBJBaker::createFBXNodeTree(FBXNode& rootNode, const hfm::Model::Pointer& h
}
// Generating Objects node's child - Material node
// Each material ID should only appear once thanks to deduplication in BuildDracoMeshTask, but we want to make sure they are created in the right order
std::unordered_map<QString, uint32_t> materialIDToIndex;
for (uint32_t materialIndex = 0; materialIndex < hfmModel->materials.size(); ++materialIndex) {
const auto& material = hfmModel->materials[materialIndex];
materialIDToIndex[material.materialID] = materialIndex;
}
// Create nodes for each material in the material list
for (const auto& dracoMaterial : dracoMaterialList) {
const QString materialID = QString(dracoMaterial);
const uint32_t materialIndex = materialIDToIndex[materialID];
const auto& material = hfmModel->materials[materialIndex];
auto& meshParts = hfmModel->meshes[0].parts;
for (auto& meshPart : meshParts) {
FBXNode materialNode;
materialNode.name = MATERIAL_NODE_NAME;
setMaterialNodeProperties(materialNode, material.materialID, material, hfmModel);
if (hfmModel->materials.size() == 1) {
// case when no material information is provided, OBJSerializer considers it as a single default material
for (auto& materialID : hfmModel->materials.keys()) {
setMaterialNodeProperties(materialNode, materialID, hfmModel);
}
} else {
setMaterialNodeProperties(materialNode, meshPart.materialID, hfmModel);
}
objectNode.children.append(materialNode);
}
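The new loop above creates one material node per mesh part and has to handle OBJ files with no material statements, where OBJSerializer synthesizes a single default material; in that case the part's own materialID is ignored and the lone entry is used instead. A small sketch of that selection rule, with a std::map standing in for hfmModel->materials:

#include <map>
#include <string>

// Pick the material key for a part: fall back to the single default entry when
// the model only has one material (the "no usemtl in the OBJ" case).
std::string materialForPart(const std::map<std::string, int>& materials,
                            const std::string& partMaterialID) {
    if (materials.size() == 1) {
        return materials.begin()->first;
    }
    return partMaterialID;
}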
@ -156,10 +153,12 @@ void OBJBaker::createFBXNodeTree(FBXNode& rootNode, const hfm::Model::Pointer& h
}
// Set properties for material nodes
void OBJBaker::setMaterialNodeProperties(FBXNode& materialNode, const QString& materialName, const hfm::Material& material, const hfm::Model::Pointer& hfmModel) {
void OBJBaker::setMaterialNodeProperties(FBXNode& materialNode, QString material, const hfm::Model::Pointer& hfmModel) {
auto materialID = nextNodeID();
_materialIDs.push_back(materialID);
materialNode.properties = { materialID, materialName, MESH };
materialNode.properties = { materialID, material, MESH };
HFMMaterial currentMaterial = hfmModel->materials[material];
// Setting the hierarchy: Material -> Properties70 -> P -> Properties
FBXNode properties70Node;
@ -171,7 +170,7 @@ void OBJBaker::setMaterialNodeProperties(FBXNode& materialNode, const QString& m
pNodeDiffuseColor.name = P_NODE_NAME;
pNodeDiffuseColor.properties.append({
"DiffuseColor", "Color", "", "A",
material.diffuseColor[0], material.diffuseColor[1], material.diffuseColor[2]
currentMaterial.diffuseColor[0], currentMaterial.diffuseColor[1], currentMaterial.diffuseColor[2]
});
}
properties70Node.children.append(pNodeDiffuseColor);
@ -182,7 +181,7 @@ void OBJBaker::setMaterialNodeProperties(FBXNode& materialNode, const QString& m
pNodeSpecularColor.name = P_NODE_NAME;
pNodeSpecularColor.properties.append({
"SpecularColor", "Color", "", "A",
material.specularColor[0], material.specularColor[1], material.specularColor[2]
currentMaterial.specularColor[0], currentMaterial.specularColor[1], currentMaterial.specularColor[2]
});
}
properties70Node.children.append(pNodeSpecularColor);
@ -193,7 +192,7 @@ void OBJBaker::setMaterialNodeProperties(FBXNode& materialNode, const QString& m
pNodeShininess.name = P_NODE_NAME;
pNodeShininess.properties.append({
"Shininess", "Number", "", "A",
material.shininess
currentMaterial.shininess
});
}
properties70Node.children.append(pNodeShininess);
@ -204,7 +203,7 @@ void OBJBaker::setMaterialNodeProperties(FBXNode& materialNode, const QString& m
pNodeOpacity.name = P_NODE_NAME;
pNodeOpacity.properties.append({
"Opacity", "Number", "", "A",
material.opacity
currentMaterial.opacity
});
}
properties70Node.children.append(pNodeOpacity);

View file

@ -27,8 +27,8 @@ protected:
virtual void bakeProcessedSource(const hfm::Model::Pointer& hfmModel, const std::vector<hifi::ByteArray>& dracoMeshes, const std::vector<std::vector<hifi::ByteArray>>& dracoMaterialLists) override;
private:
void createFBXNodeTree(FBXNode& rootNode, const hfm::Model::Pointer& hfmModel, const hifi::ByteArray& dracoMesh, const std::vector<hifi::ByteArray>& dracoMaterialList);
void setMaterialNodeProperties(FBXNode& materialNode, const QString& materialName, const hfm::Material& material, const hfm::Model::Pointer& hfmModel);
void createFBXNodeTree(FBXNode& rootNode, const hfm::Model::Pointer& hfmModel, const hifi::ByteArray& dracoMesh);
void setMaterialNodeProperties(FBXNode& materialNode, QString material, const hfm::Model::Pointer& hfmModel);
NodeID nextNodeID() { return _nodeID++; }
NodeID _nodeID { 0 };

View file

@ -248,7 +248,7 @@ void EntityTreeRenderer::clearDomainAndNonOwnedEntities() {
for (const auto& entry : _entitiesInScene) {
const auto& renderer = entry.second;
const EntityItemPointer& entityItem = renderer->getEntity();
if (!(entityItem->isLocalEntity() || entityItem->isMyAvatarEntity())) {
if (entityItem && !(entityItem->isLocalEntity() || entityItem->isMyAvatarEntity())) {
fadeOutRenderable(renderer);
} else {
savedEntities[entry.first] = entry.second;
@ -682,7 +682,7 @@ void EntityTreeRenderer::leaveDomainAndNonOwnedEntities() {
QSet<EntityItemID> currentEntitiesInsideToSave;
foreach (const EntityItemID& entityID, _currentEntitiesInside) {
EntityItemPointer entityItem = getTree()->findEntityByEntityItemID(entityID);
if (!(entityItem->isLocalEntity() || entityItem->isMyAvatarEntity())) {
if (entityItem && !(entityItem->isLocalEntity() || entityItem->isMyAvatarEntity())) {
emit leaveEntity(entityID);
if (_entitiesScriptEngine) {
_entitiesScriptEngine->callEntityScriptMethod(entityID, "leaveEntity");
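Both hunks above add the same defensive check: an entity looked up by ID can be null (already deleted or not yet added), so the pointer must be tested before its isLocalEntity()/isMyAvatarEntity() flags are read. A minimal sketch of that guard with a stand-in Entity type:

#include <memory>

struct Entity {
    bool localOrAvatarOwned { false };
    bool isLocalEntity() const { return localOrAvatarOwned; }
};

// True only when the entity still exists and is not locally owned,
// mirroring the "entityItem && !(...)" pattern above.
bool shouldLeaveEntity(const std::shared_ptr<Entity>& entity) {
    return entity && !entity->isLocalEntity();
}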

View file

@ -282,7 +282,7 @@ bool RenderableModelEntityItem::findDetailedParabolaIntersection(const glm::vec3
}
void RenderableModelEntityItem::fetchCollisionGeometryResource() {
_collisionGeometryResource = DependencyManager::get<ModelCache>()->getCollisionModelResource(getCollisionShapeURL());
_collisionGeometryResource = DependencyManager::get<ModelCache>()->getCollisionGeometryResource(getCollisionShapeURL());
}
bool RenderableModelEntityItem::unableToLoadCollisionShape() {
@ -357,6 +357,7 @@ bool RenderableModelEntityItem::isReadyToComputeShape() const {
void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
const uint32_t TRIANGLE_STRIDE = 3;
const uint32_t QUAD_STRIDE = 4;
ShapeType type = getShapeType();
@ -379,35 +380,59 @@ void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
ShapeInfo::PointCollection& pointCollection = shapeInfo.getPointCollection();
pointCollection.clear();
size_t numParts = 0;
for (const HFMMesh& mesh : collisionGeometry.meshes) {
numParts += mesh.triangleListMesh.parts.size();
}
pointCollection.reserve(numParts);
uint32_t i = 0;
// the way OBJ files get read, each section under a "g" line is its own meshPart. We only expect
// to find one actual "mesh" (with one or more meshParts in it), but we loop over the meshes, just in case.
for (const HFMMesh& mesh : collisionGeometry.meshes) {
const hfm::TriangleListMesh& triangleListMesh = mesh.triangleListMesh;
foreach (const HFMMesh& mesh, collisionGeometry.meshes) {
// each meshPart is a convex hull
for (const glm::ivec2& part : triangleListMesh.parts) {
foreach (const HFMMeshPart &meshPart, mesh.parts) {
pointCollection.push_back(QVector<glm::vec3>());
ShapeInfo::PointList& pointsInPart = pointCollection[i];
// run through all the triangles and (uniquely) add each point to the hull
pointCollection.emplace_back();
ShapeInfo::PointList& pointsInPart = pointCollection.back();
uint32_t numIndices = (uint32_t)part.y;
uint32_t numIndices = (uint32_t)meshPart.triangleIndices.size();
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices % TRIANGLE_STRIDE == 0);
numIndices -= numIndices % TRIANGLE_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
uint32_t indexStart = (uint32_t)part.x;
uint32_t indexEnd = indexStart + numIndices;
for (uint32_t j = indexStart; j < indexEnd; ++j) {
// NOTE: It seems odd to skip vertices when initializing a btConvexHullShape, but let's keep the behavior similar to the old behavior for now
glm::vec3 point = triangleListMesh.vertices[triangleListMesh.indices[j]];
if (std::find(pointsInPart.cbegin(), pointsInPart.cend(), point) == pointsInPart.cend()) {
pointsInPart.push_back(point);
for (uint32_t j = 0; j < numIndices; j += TRIANGLE_STRIDE) {
glm::vec3 p0 = mesh.vertices[meshPart.triangleIndices[j]];
glm::vec3 p1 = mesh.vertices[meshPart.triangleIndices[j + 1]];
glm::vec3 p2 = mesh.vertices[meshPart.triangleIndices[j + 2]];
if (!pointsInPart.contains(p0)) {
pointsInPart << p0;
}
if (!pointsInPart.contains(p1)) {
pointsInPart << p1;
}
if (!pointsInPart.contains(p2)) {
pointsInPart << p2;
}
}
// run through all the quads and (uniquely) add each point to the hull
numIndices = (uint32_t)meshPart.quadIndices.size();
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices % QUAD_STRIDE == 0);
numIndices -= numIndices % QUAD_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
for (uint32_t j = 0; j < numIndices; j += QUAD_STRIDE) {
glm::vec3 p0 = mesh.vertices[meshPart.quadIndices[j]];
glm::vec3 p1 = mesh.vertices[meshPart.quadIndices[j + 1]];
glm::vec3 p2 = mesh.vertices[meshPart.quadIndices[j + 2]];
glm::vec3 p3 = mesh.vertices[meshPart.quadIndices[j + 3]];
if (!pointsInPart.contains(p0)) {
pointsInPart << p0;
}
if (!pointsInPart.contains(p1)) {
pointsInPart << p1;
}
if (!pointsInPart.contains(p2)) {
pointsInPart << p2;
}
if (!pointsInPart.contains(p3)) {
pointsInPart << p3;
}
}
@ -416,6 +441,7 @@ void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
pointCollection.pop_back();
continue;
}
++i;
}
}
@ -430,8 +456,8 @@ void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
// multiply each point by scale before handing the point-set off to the physics engine.
// also determine the extents of the collision model.
glm::vec3 registrationOffset = dimensions * (ENTITY_ITEM_DEFAULT_REGISTRATION_POINT - getRegistrationPoint());
for (size_t i = 0; i < pointCollection.size(); i++) {
for (size_t j = 0; j < pointCollection[i].size(); j++) {
for (int32_t i = 0; i < pointCollection.size(); i++) {
for (int32_t j = 0; j < pointCollection[i].size(); j++) {
// back compensate for registration so we can apply that offset to the shapeInfo later
pointCollection[i][j] = scaleToFit * (pointCollection[i][j] + model->getOffset()) - registrationOffset;
}
@ -445,63 +471,46 @@ void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
model->updateGeometry();
// compute meshPart local transforms
QVector<glm::mat4> localTransforms;
const HFMModel& hfmModel = model->getHFMModel();
int numHFMMeshes = hfmModel.meshes.size();
int totalNumVertices = 0;
glm::vec3 dimensions = getScaledDimensions();
glm::mat4 invRegistraionOffset = glm::translate(dimensions * (getRegistrationPoint() - ENTITY_ITEM_DEFAULT_REGISTRATION_POINT));
ShapeInfo::TriangleIndices& triangleIndices = shapeInfo.getTriangleIndices();
triangleIndices.clear();
Extents extents;
int32_t shapeCount = 0;
int32_t instanceIndex = 0;
// NOTE: Each pointCollection corresponds to a mesh. Therefore, we should have one pointCollection per mesh instance
// A mesh instance is a unique combination of mesh/transform. For every mesh instance, there are as many shapes as there are parts for that mesh.
// We assume the shapes are grouped by mesh instance, and the group contains one of each mesh part.
uint32_t numInstances = 0;
std::vector<std::vector<std::vector<uint32_t>>> shapesPerInstancePerMesh;
shapesPerInstancePerMesh.resize(hfmModel.meshes.size());
for (uint32_t shapeIndex = 0; shapeIndex < hfmModel.shapes.size();) {
const auto& shape = hfmModel.shapes[shapeIndex];
uint32_t meshIndex = shape.mesh;
const auto& mesh = hfmModel.meshes[meshIndex];
uint32_t numMeshParts = (uint32_t)mesh.parts.size();
assert(numMeshParts != 0);
auto& shapesPerInstance = shapesPerInstancePerMesh[meshIndex];
shapesPerInstance.emplace_back();
auto& shapes = shapesPerInstance.back();
shapes.resize(numMeshParts);
std::iota(shapes.begin(), shapes.end(), shapeIndex);
shapeIndex += numMeshParts;
++numInstances;
for (int i = 0; i < numHFMMeshes; i++) {
const HFMMesh& mesh = hfmModel.meshes.at(i);
if (mesh.clusters.size() > 0) {
const HFMCluster& cluster = mesh.clusters.at(0);
auto jointMatrix = model->getRig().getJointTransform(cluster.jointIndex);
// we backtranslate by the registration offset so we can apply that offset to the shapeInfo later
localTransforms.push_back(invRegistraionOffset * jointMatrix * cluster.inverseBindMatrix);
} else {
localTransforms.push_back(invRegistraionOffset);
}
totalNumVertices += mesh.vertices.size();
}
const uint32_t MAX_ALLOWED_MESH_COUNT = 1000;
if (numInstances > MAX_ALLOWED_MESH_COUNT) {
// too many will cause the deadlock timer to throw...
qWarning() << "model" << getModelURL() << "has too many collision meshes" << numInstances << "and will collide as a box.";
const int32_t MAX_VERTICES_PER_STATIC_MESH = 1e6;
if (totalNumVertices > MAX_VERTICES_PER_STATIC_MESH) {
qWarning() << "model" << getModelURL() << "has too many vertices" << totalNumVertices << "and will collide as a box.";
shapeInfo.setParams(SHAPE_TYPE_BOX, 0.5f * dimensions);
return;
}
size_t totalNumVertices = 0;
for (const auto& shapesPerInstance : shapesPerInstancePerMesh) {
for (const auto& instanceShapes : shapesPerInstance) {
const uint32_t firstShapeIndex = instanceShapes.front();
const auto& firstShape = hfmModel.shapes[firstShapeIndex];
const auto& mesh = hfmModel.meshes[firstShape.mesh];
const auto& triangleListMesh = mesh.triangleListMesh;
// Added once per instance per mesh
totalNumVertices += triangleListMesh.vertices.size();
std::vector<std::shared_ptr<const graphics::Mesh>> meshes;
if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
auto& hfmMeshes = _collisionGeometryResource->getHFMModel().meshes;
meshes.reserve(hfmMeshes.size());
for (auto& hfmMesh : hfmMeshes) {
meshes.push_back(hfmMesh._mesh);
}
} else {
meshes = model->getGeometry()->getMeshes();
}
const size_t MAX_VERTICES_PER_STATIC_MESH = 1e6;
if (totalNumVertices > MAX_VERTICES_PER_STATIC_MESH) {
qWarning() << "model" << getModelURL() << "has too many vertices" << totalNumVertices << "and will collide as a box.";
int32_t numMeshes = (int32_t)(meshes.size());
const int MAX_ALLOWED_MESH_COUNT = 1000;
if (numMeshes > MAX_ALLOWED_MESH_COUNT) {
// too many will cause the deadlock timer to throw...
shapeInfo.setParams(SHAPE_TYPE_BOX, 0.5f * dimensions);
return;
}
@ -509,118 +518,169 @@ void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
ShapeInfo::PointCollection& pointCollection = shapeInfo.getPointCollection();
pointCollection.clear();
if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
pointCollection.resize(numInstances);
pointCollection.resize(numMeshes);
} else {
pointCollection.resize(1);
}
for (uint32_t meshIndex = 0; meshIndex < hfmModel.meshes.size(); ++meshIndex) {
const auto& mesh = hfmModel.meshes[meshIndex];
const auto& triangleListMesh = mesh.triangleListMesh;
const auto& vertices = triangleListMesh.vertices;
const auto& indices = triangleListMesh.indices;
const std::vector<glm::ivec2>& parts = triangleListMesh.parts;
ShapeInfo::TriangleIndices& triangleIndices = shapeInfo.getTriangleIndices();
triangleIndices.clear();
const auto& shapesPerInstance = shapesPerInstancePerMesh[meshIndex];
for (const std::vector<uint32_t>& instanceShapes : shapesPerInstance) {
ShapeInfo::PointList& points = pointCollection[instanceIndex];
Extents extents;
int32_t meshCount = 0;
int32_t pointListIndex = 0;
for (auto& mesh : meshes) {
if (!mesh) {
continue;
}
const gpu::BufferView& vertices = mesh->getVertexBuffer();
const gpu::BufferView& indices = mesh->getIndexBuffer();
const gpu::BufferView& parts = mesh->getPartBuffer();
// reserve room
int32_t sizeToReserve = (int32_t)(vertices.size());
if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
// a list of points for each instance
instanceIndex++;
} else {
// only one list of points
sizeToReserve += (int32_t)((gpu::Size)points.size());
}
points.reserve(sizeToReserve);
// get mesh instance transform
const uint32_t meshIndexOffset = (uint32_t)points.size();
const uint32_t instanceShapeIndexForTransform = instanceShapes.front();
const auto& instanceShapeForTransform = hfmModel.shapes[instanceShapeIndexForTransform];
glm::mat4 localTransform;
if (instanceShapeForTransform.joint != hfm::UNDEFINED_KEY) {
auto jointMatrix = model->getRig().getJointTransform(instanceShapeForTransform.joint);
// we backtranslate by the registration offset so we can apply that offset to the shapeInfo later
if (instanceShapeForTransform.skinDeformer != hfm::UNDEFINED_KEY) {
const auto& skinDeformer = hfmModel.skinDeformers[instanceShapeForTransform.skinDeformer];
glm::mat4 inverseBindMatrix;
if (!skinDeformer.clusters.empty()) {
const auto& cluster = skinDeformer.clusters.back();
inverseBindMatrix = cluster.inverseBindMatrix;
}
localTransform = invRegistraionOffset * jointMatrix * inverseBindMatrix;
} else {
localTransform = invRegistraionOffset * jointMatrix;
}
} else {
localTransform = invRegistraionOffset;
}
ShapeInfo::PointList& points = pointCollection[pointListIndex];
// copy points
auto vertexItr = vertices.cbegin();
while (vertexItr != vertices.cend()) {
glm::vec3 point = extractTranslation(localTransform * glm::translate(*vertexItr));
points.push_back(point);
++vertexItr;
}
for (const auto& instanceShapeIndex : instanceShapes) {
const auto& instanceShape = hfmModel.shapes[instanceShapeIndex];
extents.addExtents(instanceShape.transformedExtents);
}
// reserve room
int32_t sizeToReserve = (int32_t)(vertices.getNumElements());
if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
// a list of points for each mesh
pointListIndex++;
} else {
// only one list of points
sizeToReserve += (int32_t)((gpu::Size)points.size());
}
points.reserve(sizeToReserve);
if (type == SHAPE_TYPE_STATIC_MESH) {
// copy into triangleIndices
triangleIndices.reserve((int32_t)((gpu::Size)(triangleIndices.size()) + indices.size()));
auto partItr = parts.cbegin();
while (partItr != parts.cend()) {
auto numIndices = partItr->y;
// copy points
uint32_t meshIndexOffset = (uint32_t)points.size();
const glm::mat4& localTransform = localTransforms[meshCount];
gpu::BufferView::Iterator<const glm::vec3> vertexItr = vertices.cbegin<const glm::vec3>();
while (vertexItr != vertices.cend<const glm::vec3>()) {
glm::vec3 point = extractTranslation(localTransform * glm::translate(*vertexItr));
points.push_back(point);
extents.addPoint(point);
++vertexItr;
}
if (type == SHAPE_TYPE_STATIC_MESH) {
// copy into triangleIndices
triangleIndices.reserve((int32_t)((gpu::Size)(triangleIndices.size()) + indices.getNumElements()));
gpu::BufferView::Iterator<const graphics::Mesh::Part> partItr = parts.cbegin<const graphics::Mesh::Part>();
while (partItr != parts.cend<const graphics::Mesh::Part>()) {
auto numIndices = partItr->_numIndices;
if (partItr->_topology == graphics::Mesh::TRIANGLES) {
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices % TRIANGLE_STRIDE == 0);
numIndices -= numIndices % TRIANGLE_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
auto indexItr = indices.cbegin() + partItr->x;
auto indexItr = indices.cbegin<const gpu::BufferView::Index>() + partItr->_startIndex;
auto indexEnd = indexItr + numIndices;
while (indexItr != indexEnd) {
triangleIndices.push_back(*indexItr + meshIndexOffset);
++indexItr;
}
++partItr;
} else if (partItr->_topology == graphics::Mesh::TRIANGLE_STRIP) {
// TODO: resurrect assert after we start sanitizing HFMMesh higher up
//assert(numIndices > 2);
uint32_t approxNumIndices = TRIANGLE_STRIDE * numIndices;
if (approxNumIndices > (uint32_t)(triangleIndices.capacity() - triangleIndices.size())) {
// we underestimated the final size of triangleIndices so we pre-emptively expand it
triangleIndices.reserve(triangleIndices.size() + approxNumIndices);
}
auto indexItr = indices.cbegin<const gpu::BufferView::Index>() + partItr->_startIndex;
auto indexEnd = indexItr + (numIndices - 2);
// first triangle uses the first three indices
triangleIndices.push_back(*(indexItr++) + meshIndexOffset);
triangleIndices.push_back(*(indexItr++) + meshIndexOffset);
triangleIndices.push_back(*(indexItr++) + meshIndexOffset);
// the rest use previous and next index
uint32_t triangleCount = 1;
while (indexItr != indexEnd) {
if ((*indexItr) != graphics::Mesh::PRIMITIVE_RESTART_INDEX) {
if (triangleCount % 2 == 0) {
// even triangles use first two indices in order
triangleIndices.push_back(*(indexItr - 2) + meshIndexOffset);
triangleIndices.push_back(*(indexItr - 1) + meshIndexOffset);
} else {
// odd triangles swap order of first two indices
triangleIndices.push_back(*(indexItr - 1) + meshIndexOffset);
triangleIndices.push_back(*(indexItr - 2) + meshIndexOffset);
}
triangleIndices.push_back(*indexItr + meshIndexOffset);
++triangleCount;
}
++indexItr;
}
}
} else if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
// for each mesh copy unique part indices, separated by special bogus (flag) index values
auto partItr = parts.cbegin();
while (partItr != parts.cend()) {
// collect unique list of indices for this part
std::set<int32_t> uniqueIndices;
auto numIndices = partItr->y;
++partItr;
}
} else if (type == SHAPE_TYPE_SIMPLE_COMPOUND) {
// for each mesh copy unique part indices, separated by special bogus (flag) index values
gpu::BufferView::Iterator<const graphics::Mesh::Part> partItr = parts.cbegin<const graphics::Mesh::Part>();
while (partItr != parts.cend<const graphics::Mesh::Part>()) {
// collect unique list of indices for this part
std::set<int32_t> uniqueIndices;
auto numIndices = partItr->_numIndices;
if (partItr->_topology == graphics::Mesh::TRIANGLES) {
// TODO: assert rather than workaround after we start sanitizing HFMMesh higher up
//assert(numIndices% TRIANGLE_STRIDE == 0);
numIndices -= numIndices % TRIANGLE_STRIDE; // WORKAROUND lack of sanity checking in FBXSerializer
auto indexItr = indices.cbegin() + partItr->x;
auto indexItr = indices.cbegin<const gpu::BufferView::Index>() + partItr->_startIndex;
auto indexEnd = indexItr + numIndices;
while (indexItr != indexEnd) {
uniqueIndices.insert(*indexItr);
++indexItr;
}
} else if (partItr->_topology == graphics::Mesh::TRIANGLE_STRIP) {
// TODO: resurrect assert after we start sanitizing HFMMesh higher up
//assert(numIndices > TRIANGLE_STRIDE - 1);
// store uniqueIndices in triangleIndices
triangleIndices.reserve(triangleIndices.size() + (int32_t)uniqueIndices.size());
for (auto index : uniqueIndices) {
triangleIndices.push_back(index);
auto indexItr = indices.cbegin<const gpu::BufferView::Index>() + partItr->_startIndex;
auto indexEnd = indexItr + (numIndices - 2);
// first triangle uses the first three indices
uniqueIndices.insert(*(indexItr++));
uniqueIndices.insert(*(indexItr++));
uniqueIndices.insert(*(indexItr++));
// the rest use previous and next index
uint32_t triangleCount = 1;
while (indexItr != indexEnd) {
if ((*indexItr) != graphics::Mesh::PRIMITIVE_RESTART_INDEX) {
if (triangleCount % 2 == 0) {
// EVEN triangles use first two indices in order
uniqueIndices.insert(*(indexItr - 2));
uniqueIndices.insert(*(indexItr - 1));
} else {
// ODD triangles swap order of first two indices
uniqueIndices.insert(*(indexItr - 1));
uniqueIndices.insert(*(indexItr - 2));
}
uniqueIndices.insert(*indexItr);
++triangleCount;
}
++indexItr;
}
// flag end of part
triangleIndices.push_back(END_OF_MESH_PART);
++partItr;
}
// flag end of mesh
triangleIndices.push_back(END_OF_MESH);
}
}
++shapeCount;
// store uniqueIndices in triangleIndices
triangleIndices.reserve(triangleIndices.size() + (int32_t)uniqueIndices.size());
for (auto index : uniqueIndices) {
triangleIndices.push_back(index);
}
// flag end of part
triangleIndices.push_back(END_OF_MESH_PART);
++partItr;
}
// flag end of mesh
triangleIndices.push_back(END_OF_MESH);
}
++meshCount;
}
// scale and shift
@ -632,7 +692,7 @@ void RenderableModelEntityItem::computeShapeInfo(ShapeInfo& shapeInfo) {
}
}
for (auto points : pointCollection) {
for (size_t i = 0; i < points.size(); ++i) {
for (int32_t i = 0; i < points.size(); ++i) {
points[i] = (points[i] * scaleToFit);
}
}
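The SHAPE_TYPE_STATIC_MESH branch a few hunks above now has to unroll TRIANGLE_STRIP parts itself: the first three indices form the first triangle, each following index forms a new triangle with the previous two, the order of the first two indices flips on every other triangle to preserve winding, and PRIMITIVE_RESTART_INDEX entries are skipped. A standalone sketch of that unrolling over plain integer indices (the restart value here is a stand-in, and the gpu::BufferView iteration used in the commit is replaced by a std::vector):

#include <cstdint>
#include <vector>

static const uint32_t RESTART_INDEX = 0xffffffffu;   // stand-in for graphics::Mesh::PRIMITIVE_RESTART_INDEX

// Expand a triangle strip into independent triangles, alternating winding like the loop above.
std::vector<uint32_t> stripToTriangles(const std::vector<uint32_t>& strip) {
    std::vector<uint32_t> triangles;
    if (strip.size() < 3) {
        return triangles;
    }
    // first triangle uses the first three indices
    triangles.push_back(strip[0]);
    triangles.push_back(strip[1]);
    triangles.push_back(strip[2]);
    uint32_t triangleCount = 1;
    for (size_t i = 3; i < strip.size(); ++i) {
        if (strip[i] == RESTART_INDEX) {
            continue;   // the engine's loop also just skips restart markers
        }
        if (triangleCount % 2 == 0) {
            // even triangles keep the previous two indices in order
            triangles.push_back(strip[i - 2]);
            triangles.push_back(strip[i - 1]);
        } else {
            // odd triangles swap them to preserve the facing direction
            triangles.push_back(strip[i - 1]);
            triangles.push_back(strip[i - 2]);
        }
        triangles.push_back(strip[i]);
        ++triangleCount;
    }
    return triangles;
}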
@ -1023,6 +1083,11 @@ uint32_t ModelEntityRenderer::metaFetchMetaSubItems(ItemIDs& subItems) const {
return 0;
}
void ModelEntityRenderer::handleBlendedVertices(int blendshapeNumber, const QVector<BlendshapeOffset>& blendshapeOffsets,
const QVector<int>& blendedMeshSizes, const render::ItemIDs& subItemIDs) {
setBlendedVertices(blendshapeNumber, blendshapeOffsets, blendedMeshSizes, subItemIDs);
}
void ModelEntityRenderer::removeFromScene(const ScenePointer& scene, Transaction& transaction) {
if (_model) {
_model->removeFromScene(scene, transaction);
@ -1191,7 +1256,11 @@ bool ModelEntityRenderer::needsRenderUpdateFromTypedEntity(const TypedEntityPoin
if (model && model->isLoaded()) {
if (!entity->_dimensionsInitialized || entity->_needsInitialSimulation || !entity->_originalTexturesRead) {
return true;
}
}
if (entity->blendshapesChanged()) {
return true;
}
// Check to see if we need to update the model bounds
if (entity->needsUpdateModelBounds()) {
@ -1350,6 +1419,11 @@ void ModelEntityRenderer::doRenderUpdateSynchronousTyped(const ScenePointer& sce
model->setTagMask(tagMask, scene);
}
if (entity->blendshapesChanged()) {
model->setBlendshapeCoefficients(entity->getBlendshapeCoefficientVector());
model->updateBlendshapes();
}
// TODO? early exit here when not visible?
if (model->canCastShadow() != _canCastShadow) {
@ -1370,13 +1444,14 @@ void ModelEntityRenderer::doRenderUpdateSynchronousTyped(const ScenePointer& sce
model->removeFromScene(scene, transaction);
render::Item::Status::Getters statusGetters;
makeStatusGetters(entity, statusGetters);
model->addToScene(scene, transaction, statusGetters);
using namespace std::placeholders;
model->addToScene(scene, transaction, statusGetters, std::bind(&ModelEntityRenderer::metaBlendshapeOperator, _renderItemID, _1, _2, _3, _4));
entity->bumpAncestorChainRenderableVersion();
processMaterials();
}
}
if (!_texturesLoaded && model->getNetworkModel() && model->getNetworkModel()->areTexturesLoaded()) {
if (!_texturesLoaded && model->getGeometry() && model->getGeometry()->areTexturesLoaded()) {
withWriteLock([&] {
_texturesLoaded = true;
});
@ -1529,3 +1604,12 @@ void ModelEntityRenderer::processMaterials() {
}
}
}
void ModelEntityRenderer::metaBlendshapeOperator(render::ItemID renderItemID, int blendshapeNumber, const QVector<BlendshapeOffset>& blendshapeOffsets,
const QVector<int>& blendedMeshSizes, const render::ItemIDs& subItemIDs) {
render::Transaction transaction;
transaction.updateItem<PayloadProxyInterface>(renderItemID, [blendshapeNumber, blendshapeOffsets, blendedMeshSizes, subItemIDs](PayloadProxyInterface& self) {
self.handleBlendedVertices(blendshapeNumber, blendshapeOffsets, blendedMeshSizes, subItemIDs);
});
AbstractViewStateInterface::instance()->getMain3DScene()->enqueueTransaction(transaction);
}
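metaBlendshapeOperator captures the blendshape data in a lambda and hands it to the scene as a transaction, so the vertex update runs later against the render item rather than on the calling thread. A minimal sketch of that defer-through-a-queue pattern using std::function, with the Transaction/Scene machinery reduced to a plain queue (names are illustrative):

#include <functional>
#include <iostream>
#include <queue>

struct RenderItem {
    void handleBlendedVertices(int blendshapeNumber) {
        std::cout << "applied blendshape " << blendshapeNumber << "\n";
    }
};

// Stand-in for the scene's transaction queue: work is captured now, run later.
std::queue<std::function<void(RenderItem&)>> pendingTransactions;

void enqueueBlendshapeUpdate(int blendshapeNumber) {
    pendingTransactions.push([blendshapeNumber](RenderItem& item) {
        item.handleBlendedVertices(blendshapeNumber);
    });
}

int main() {
    enqueueBlendshapeUpdate(3);                   // called from the blender side
    RenderItem item;
    while (!pendingTransactions.empty()) {        // later, when the scene processes transactions
        pendingTransactions.front()(item);
        pendingTransactions.pop();
    }
}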

View file

@ -21,6 +21,7 @@
#include <AnimationCache.h>
#include <Model.h>
#include <model-networking/ModelCache.h>
#include <MetaModelPayload.h>
#include "RenderableEntityItem.h"
@ -120,7 +121,7 @@ private:
bool readyToAnimate() const;
void fetchCollisionGeometryResource();
ModelResource::Pointer _collisionGeometryResource;
GeometryResource::Pointer _collisionGeometryResource;
std::vector<int> _jointMap;
QVariantMap _originalTextures;
bool _jointMapCompleted { false };
@ -131,7 +132,7 @@ private:
namespace render { namespace entities {
class ModelEntityRenderer : public TypedEntityRenderer<RenderableModelEntityItem> {
class ModelEntityRenderer : public TypedEntityRenderer<RenderableModelEntityItem>, public MetaModelPayload {
using Parent = TypedEntityRenderer<RenderableModelEntityItem>;
friend class EntityRenderer;
Q_OBJECT
@ -155,6 +156,8 @@ protected:
void setKey(bool didVisualGeometryRequestSucceed);
virtual ItemKey getKey() override;
virtual uint32_t metaFetchMetaSubItems(ItemIDs& subItems) const override;
virtual void handleBlendedVertices(int blendshapeNumber, const QVector<BlendshapeOffset>& blendshapeOffsets,
const QVector<int>& blendedMeshSizes, const render::ItemIDs& subItemIDs) override;
virtual bool needsRenderUpdateFromTypedEntity(const TypedEntityPointer& entity) const override;
virtual bool needsRenderUpdate() const override;
@ -199,6 +202,10 @@ private:
bool _prevModelLoaded { false };
void processMaterials();
static void metaBlendshapeOperator(render::ItemID renderItemID, int blendshapeNumber, const QVector<BlendshapeOffset>& blendshapeOffsets,
const QVector<int>& blendedMeshSizes, const render::ItemIDs& subItemIDs);
};
} } // namespace

View file

@ -200,7 +200,7 @@ float importanceSample3DDimension(float startDim) {
}
ParticleEffectEntityRenderer::CpuParticle ParticleEffectEntityRenderer::createParticle(uint64_t now, const Transform& baseTransform, const particle::Properties& particleProperties,
const ShapeType& shapeType, const ModelResource::Pointer& geometryResource,
const ShapeType& shapeType, const GeometryResource::Pointer& geometryResource,
const TriangleInfo& triangleInfo) {
CpuParticle particle;
@ -385,7 +385,7 @@ void ParticleEffectEntityRenderer::stepSimulation() {
particle::Properties particleProperties;
ShapeType shapeType;
ModelResource::Pointer geometryResource;
GeometryResource::Pointer geometryResource;
withReadLock([&] {
particleProperties = _particleProperties;
shapeType = _shapeType;
@ -488,7 +488,7 @@ void ParticleEffectEntityRenderer::fetchGeometryResource() {
if (hullURL.isEmpty()) {
_geometryResource.reset();
} else {
_geometryResource = DependencyManager::get<ModelCache>()->getCollisionModelResource(hullURL);
_geometryResource = DependencyManager::get<ModelCache>()->getCollisionGeometryResource(hullURL);
}
}
@ -496,7 +496,7 @@ void ParticleEffectEntityRenderer::fetchGeometryResource() {
void ParticleEffectEntityRenderer::computeTriangles(const hfm::Model& hfmModel) {
PROFILE_RANGE(render, __FUNCTION__);
uint32_t numberOfMeshes = (uint32_t)hfmModel.meshes.size();
int numberOfMeshes = hfmModel.meshes.size();
_hasComputedTriangles = true;
_triangleInfo.triangles.clear();
@ -506,11 +506,11 @@ void ParticleEffectEntityRenderer::computeTriangles(const hfm::Model& hfmModel)
float minArea = FLT_MAX;
AABox bounds;
for (uint32_t i = 0; i < numberOfMeshes; i++) {
for (int i = 0; i < numberOfMeshes; i++) {
const HFMMesh& mesh = hfmModel.meshes.at(i);
const uint32_t numberOfParts = (uint32_t)mesh.parts.size();
for (uint32_t j = 0; j < numberOfParts; j++) {
const int numberOfParts = mesh.parts.size();
for (int j = 0; j < numberOfParts; j++) {
const HFMMeshPart& part = mesh.parts.at(j);
const int INDICES_PER_TRIANGLE = 3;

View file

@ -89,7 +89,7 @@ private:
} _triangleInfo;
static CpuParticle createParticle(uint64_t now, const Transform& baseTransform, const particle::Properties& particleProperties,
const ShapeType& shapeType, const ModelResource::Pointer& geometryResource,
const ShapeType& shapeType, const GeometryResource::Pointer& geometryResource,
const TriangleInfo& triangleInfo);
void stepSimulation();
@ -108,7 +108,7 @@ private:
QString _compoundShapeURL;
void fetchGeometryResource();
ModelResource::Pointer _geometryResource;
GeometryResource::Pointer _geometryResource;
NetworkTexturePointer _networkTexture;
ScenePointer _scene;

View file

@ -1429,13 +1429,14 @@ void RenderablePolyVoxEntityItem::computeShapeInfoWorker() {
QtConcurrent::run([entity, voxelSurfaceStyle, voxelVolumeSize, mesh] {
auto polyVoxEntity = std::static_pointer_cast<RenderablePolyVoxEntityItem>(entity);
ShapeInfo::PointCollection pointCollection;
QVector<QVector<glm::vec3>> pointCollection;
AABox box;
glm::mat4 vtoM = std::static_pointer_cast<RenderablePolyVoxEntityItem>(entity)->voxelToLocalMatrix();
if (voxelSurfaceStyle == PolyVoxEntityItem::SURFACE_MARCHING_CUBES ||
voxelSurfaceStyle == PolyVoxEntityItem::SURFACE_EDGED_MARCHING_CUBES) {
// pull each triangle in the mesh into a polyhedron which can be collided with
unsigned int i = 0;
const gpu::BufferView& vertexBufferView = mesh->getVertexBuffer();
const gpu::BufferView& indexBufferView = mesh->getIndexBuffer();
@ -1464,16 +1465,19 @@ void RenderablePolyVoxEntityItem::computeShapeInfoWorker() {
box += p2Model;
box += p3Model;
ShapeInfo::PointList pointsInPart;
pointsInPart.push_back(p0Model);
pointsInPart.push_back(p1Model);
pointsInPart.push_back(p2Model);
pointsInPart.push_back(p3Model);
// add points to a new convex hull
pointCollection.push_back(pointsInPart);
QVector<glm::vec3> pointsInPart;
pointsInPart << p0Model;
pointsInPart << p1Model;
pointsInPart << p2Model;
pointsInPart << p3Model;
// add next convex hull
QVector<glm::vec3> newMeshPoints;
pointCollection << newMeshPoints;
// add points to the new convex hull
pointCollection[i++] << pointsInPart;
}
} else {
unsigned int i = 0;
polyVoxEntity->forEachVoxelValue(voxelVolumeSize, [&](const ivec3& v, uint8_t value) {
if (value > 0) {
const auto& x = v.x;
@ -1492,7 +1496,7 @@ void RenderablePolyVoxEntityItem::computeShapeInfoWorker() {
return;
}
ShapeInfo::PointList pointsInPart;
QVector<glm::vec3> pointsInPart;
float offL = -0.5f;
float offH = 0.5f;
@ -1519,17 +1523,20 @@ void RenderablePolyVoxEntityItem::computeShapeInfoWorker() {
box += p110;
box += p111;
pointsInPart.push_back(p000);
pointsInPart.push_back(p001);
pointsInPart.push_back(p010);
pointsInPart.push_back(p011);
pointsInPart.push_back(p100);
pointsInPart.push_back(p101);
pointsInPart.push_back(p110);
pointsInPart.push_back(p111);
pointsInPart << p000;
pointsInPart << p001;
pointsInPart << p010;
pointsInPart << p011;
pointsInPart << p100;
pointsInPart << p101;
pointsInPart << p110;
pointsInPart << p111;
// add points to a new convex hull
pointCollection.push_back(pointsInPart);
// add next convex hull
QVector<glm::vec3> newMeshPoints;
pointCollection << newMeshPoints;
// add points to the new convex hull
pointCollection[i++] << pointsInPart;
}
});
}
@ -1539,7 +1546,7 @@ void RenderablePolyVoxEntityItem::computeShapeInfoWorker() {
void RenderablePolyVoxEntityItem::setCollisionPoints(ShapeInfo::PointCollection pointCollection, AABox box) {
// this catches the payload from computeShapeInfoWorker
if (pointCollection.empty()) {
if (pointCollection.isEmpty()) {
EntityItem::computeShapeInfo(_shapeInfo);
withWriteLock([&] {
_shapeReady = true;
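
The collision worker above builds one convex hull per triangle (for marching-cubes surfaces) or per solid voxel (for cubic surfaces) by appending corner points to a QVector<QVector<glm::vec3>>. A minimal sketch of the cubic case, assuming Qt Core and GLM are available; the helper name is illustrative and not part of the engine:

    #include <QVector>
    #include <glm/glm.hpp>
    #include <cstdio>

    // Illustrative helper: the eight corners of the voxel at integer position v,
    // offset by +/- 0.5 on each axis, ready to be added as one convex hull.
    QVector<glm::vec3> voxelCornerPoints(const glm::ivec3& v) {
        const float offL = -0.5f;
        const float offH = 0.5f;
        QVector<glm::vec3> corners;
        for (float dx : { offL, offH }) {
            for (float dy : { offL, offH }) {
                for (float dz : { offL, offH }) {
                    corners << glm::vec3(v.x + dx, v.y + dy, v.z + dz);
                }
            }
        }
        return corners;
    }

    int main() {
        QVector<QVector<glm::vec3>> pointCollection;
        pointCollection << voxelCornerPoints(glm::ivec3(0, 0, 0));
        printf("%d hulls, %d points in the first hull\n", pointCollection.size(), pointCollection[0].size());
        return 0;
    }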

File diff suppressed because it is too large

View file

@ -300,6 +300,7 @@ public:
DEFINE_PROPERTY_REF(PROP_JOINT_TRANSLATIONS, JointTranslations, jointTranslations, QVector<glm::vec3>, ENTITY_ITEM_DEFAULT_EMPTY_VEC3_QVEC);
DEFINE_PROPERTY(PROP_RELAY_PARENT_JOINTS, RelayParentJoints, relayParentJoints, bool, ENTITY_ITEM_DEFAULT_RELAY_PARENT_JOINTS);
DEFINE_PROPERTY_REF(PROP_GROUP_CULLED, GroupCulled, groupCulled, bool, false);
DEFINE_PROPERTY_REF(PROP_BLENDSHAPE_COEFFICIENTS, BlendshapeCoefficients, blendshapeCoefficients, QString, "");
DEFINE_PROPERTY_GROUP(Animation, animation, AnimationPropertyGroup);
// Light

View file

@ -216,16 +216,17 @@ enum EntityPropertyList {
PROP_JOINT_TRANSLATIONS = PROP_DERIVED_5,
PROP_RELAY_PARENT_JOINTS = PROP_DERIVED_6,
PROP_GROUP_CULLED = PROP_DERIVED_7,
PROP_BLENDSHAPE_COEFFICIENTS = PROP_DERIVED_8,
// Animation
PROP_ANIMATION_URL = PROP_DERIVED_8,
PROP_ANIMATION_ALLOW_TRANSLATION = PROP_DERIVED_9,
PROP_ANIMATION_FPS = PROP_DERIVED_10,
PROP_ANIMATION_FRAME_INDEX = PROP_DERIVED_11,
PROP_ANIMATION_PLAYING = PROP_DERIVED_12,
PROP_ANIMATION_LOOP = PROP_DERIVED_13,
PROP_ANIMATION_FIRST_FRAME = PROP_DERIVED_14,
PROP_ANIMATION_LAST_FRAME = PROP_DERIVED_15,
PROP_ANIMATION_HOLD = PROP_DERIVED_16,
PROP_ANIMATION_URL = PROP_DERIVED_9,
PROP_ANIMATION_ALLOW_TRANSLATION = PROP_DERIVED_10,
PROP_ANIMATION_FPS = PROP_DERIVED_11,
PROP_ANIMATION_FRAME_INDEX = PROP_DERIVED_12,
PROP_ANIMATION_PLAYING = PROP_DERIVED_13,
PROP_ANIMATION_LOOP = PROP_DERIVED_14,
PROP_ANIMATION_FIRST_FRAME = PROP_DERIVED_15,
PROP_ANIMATION_LAST_FRAME = PROP_DERIVED_16,
PROP_ANIMATION_HOLD = PROP_DERIVED_17,
// Light
PROP_IS_SPOTLIGHT = PROP_DERIVED_0,

View file

@ -3194,21 +3194,30 @@ glm::vec3 EntityTree::getUnscaledDimensionsForID(const QUuid& id) {
return glm::vec3(1.0f);
}
void EntityTree::updateEntityQueryAACubeWorker(SpatiallyNestablePointer object, EntityEditPacketSender* packetSender,
AACube EntityTree::updateEntityQueryAACubeWorker(SpatiallyNestablePointer object, EntityEditPacketSender* packetSender,
MovingEntitiesOperator& moveOperator, bool force, bool tellServer) {
glm::vec3 min(FLT_MAX);
glm::vec3 max(-FLT_MAX);
// if the queryBox has changed, tell the entity-server
EntityItemPointer entity = std::dynamic_pointer_cast<EntityItem>(object);
if (entity) {
bool queryAACubeChanged = false;
if (!entity->hasChildren()) {
// updateQueryAACube will also update all ancestors' AACubes, so we only need to call this for leaf nodes
queryAACubeChanged = entity->updateQueryAACube();
queryAACubeChanged = entity->updateQueryAACube(false);
AACube entityAACube = entity->getQueryAACube();
min = glm::min(min, entityAACube.getMinimumPoint());
max = glm::max(max, entityAACube.getMaximumPoint());
} else {
AACube oldCube = entity->getQueryAACube();
object->forEachChild([&](SpatiallyNestablePointer descendant) {
updateEntityQueryAACubeWorker(descendant, packetSender, moveOperator, force, tellServer);
AACube entityAACube = updateEntityQueryAACubeWorker(descendant, packetSender, moveOperator, force, tellServer);
min = glm::min(min, entityAACube.getMinimumPoint());
max = glm::max(max, entityAACube.getMaximumPoint());
});
queryAACubeChanged = oldCube != entity->getQueryAACube();
queryAACubeChanged = entity->updateQueryAACubeWithDescendantAACube(AACube(Extents(min, max)), false);
AACube newCube = entity->getQueryAACube();
min = glm::min(min, newCube.getMinimumPoint());
max = glm::max(max, newCube.getMaximumPoint());
}
if (queryAACubeChanged || force) {
@ -3217,9 +3226,10 @@ void EntityTree::updateEntityQueryAACubeWorker(SpatiallyNestablePointer object,
if (success) {
moveOperator.addEntityToMoveList(entity, newCube);
}
// send an edit packet to update the entity-server about the queryAABox. We do this for domain-hosted
// entities as well as for avatar-entities; the packet-sender will route the update accordingly
if (tellServer && packetSender && (entity->isDomainEntity() || entity->isAvatarEntity())) {
// send an edit packet to update the entity-server about the queryAABox. We only do this for domain-hosted
// entities, as we don't want to flood the update pipeline with AvatarEntity updates, so we assume
// others have all info required to properly update queryAACube of AvatarEntities on their end
if (tellServer && packetSender && entity->isDomainEntity()) {
quint64 now = usecTimestampNow();
EntityItemProperties properties = entity->getProperties();
properties.setQueryAACubeDirty();
@ -3234,7 +3244,16 @@ void EntityTree::updateEntityQueryAACubeWorker(SpatiallyNestablePointer object,
entity->markDirtyFlags(Simulation::DIRTY_POSITION);
entityChanged(entity);
}
} else {
// if we're called on a non-entity, we might still have entity descendants
object->forEachChild([&](SpatiallyNestablePointer descendant) {
AACube entityAACube = updateEntityQueryAACubeWorker(descendant, packetSender, moveOperator, force, tellServer);
min = glm::min(min, entityAACube.getMinimumPoint());
max = glm::max(max, entityAACube.getMaximumPoint());
});
}
return AACube(Extents(min, max));
}
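
The reworked worker above returns an AACube covering the object and every entity descendant: it accumulates a running min/max corner pair while recursing over children and wraps the result in Extents at the end. A minimal sketch of that accumulation over a generic tree, assuming GLM is available; the Node type is illustrative, not the engine's SpatiallyNestable hierarchy:

    #include <vector>
    #include <cfloat>
    #include <cstdio>
    #include <glm/glm.hpp>

    struct Node {
        glm::vec3 boxMin, boxMax;       // this node's own bounds
        std::vector<Node> children;
    };

    // Recursively accumulate the min/max corners of a node and all descendants.
    void accumulateBounds(const Node& node, glm::vec3& outMin, glm::vec3& outMax) {
        outMin = glm::min(outMin, node.boxMin);
        outMax = glm::max(outMax, node.boxMax);
        for (const Node& child : node.children) {
            accumulateBounds(child, outMin, outMax);
        }
    }

    int main() {
        Node root { glm::vec3(-1.0f), glm::vec3(1.0f), { { glm::vec3(2.0f), glm::vec3(3.0f), {} } } };
        glm::vec3 mn(FLT_MAX), mx(-FLT_MAX);
        accumulateBounds(root, mn, mx);
        // The enclosing extents run from (-1,-1,-1) to (3,3,3) here; the worker turns
        // such extents into the parent's queryAACube.
        printf("min %.0f, max %.0f\n", mn.x, mx.x);
        return 0;
    }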
void EntityTree::updateEntityQueryAACube(SpatiallyNestablePointer object, EntityEditPacketSender* packetSender,

View file

@ -400,8 +400,9 @@ private:
std::map<QString, QString> _namedPaths;
void updateEntityQueryAACubeWorker(SpatiallyNestablePointer object, EntityEditPacketSender* packetSender,
MovingEntitiesOperator& moveOperator, bool force, bool tellServer);
// Return an AACube containing object and all its entity descendants
AACube updateEntityQueryAACubeWorker(SpatiallyNestablePointer object, EntityEditPacketSender* packetSender,
MovingEntitiesOperator& moveOperator, bool force, bool tellServer);
};
void convertGrabUserDataToProperties(EntityItemProperties& properties);

View file

@ -33,7 +33,8 @@ EntityItemPointer ModelEntityItem::factory(const EntityItemID& entityID, const E
return entity;
}
ModelEntityItem::ModelEntityItem(const EntityItemID& entityItemID) : EntityItem(entityItemID)
ModelEntityItem::ModelEntityItem(const EntityItemID& entityItemID) : EntityItem(entityItemID),
_blendshapeCoefficientsVector((int)Blendshapes::BlendshapeCount, 0.0f)
{
_lastAnimated = usecTimestampNow();
// set the last animated when interface (re)starts
@ -71,6 +72,7 @@ EntityItemProperties ModelEntityItem::getProperties(const EntityPropertyFlags& d
COPY_ENTITY_PROPERTY_TO_PROPERTIES(jointTranslations, getJointTranslations);
COPY_ENTITY_PROPERTY_TO_PROPERTIES(relayParentJoints, getRelayParentJoints);
COPY_ENTITY_PROPERTY_TO_PROPERTIES(groupCulled, getGroupCulled);
COPY_ENTITY_PROPERTY_TO_PROPERTIES(blendshapeCoefficients, getBlendshapeCoefficients);
withReadLock([&] {
_animationProperties.getProperties(properties);
});
@ -94,6 +96,7 @@ bool ModelEntityItem::setProperties(const EntityItemProperties& properties) {
SET_ENTITY_PROPERTY_FROM_PROPERTIES(jointTranslations, setJointTranslations);
SET_ENTITY_PROPERTY_FROM_PROPERTIES(relayParentJoints, setRelayParentJoints);
SET_ENTITY_PROPERTY_FROM_PROPERTIES(groupCulled, setGroupCulled);
SET_ENTITY_PROPERTY_FROM_PROPERTIES(blendshapeCoefficients, setBlendshapeCoefficients);
withWriteLock([&] {
AnimationPropertyGroup animationProperties = _animationProperties;
@ -138,6 +141,7 @@ int ModelEntityItem::readEntitySubclassDataFromBuffer(const unsigned char* data,
READ_ENTITY_PROPERTY(PROP_JOINT_TRANSLATIONS, QVector<glm::vec3>, setJointTranslations);
READ_ENTITY_PROPERTY(PROP_RELAY_PARENT_JOINTS, bool, setRelayParentJoints);
READ_ENTITY_PROPERTY(PROP_GROUP_CULLED, bool, setGroupCulled);
READ_ENTITY_PROPERTY(PROP_BLENDSHAPE_COEFFICIENTS, QString, setBlendshapeCoefficients);
// grab a local copy of _animationProperties to avoid multiple locks
int bytesFromAnimation;
@ -176,6 +180,7 @@ EntityPropertyFlags ModelEntityItem::getEntityProperties(EncodeBitstreamParams&
requestedProperties += PROP_JOINT_TRANSLATIONS;
requestedProperties += PROP_RELAY_PARENT_JOINTS;
requestedProperties += PROP_GROUP_CULLED;
requestedProperties += PROP_BLENDSHAPE_COEFFICIENTS;
requestedProperties += _animationProperties.getEntityProperties(params);
return requestedProperties;
@ -204,6 +209,7 @@ void ModelEntityItem::appendSubclassData(OctreePacketData* packetData, EncodeBit
APPEND_ENTITY_PROPERTY(PROP_JOINT_TRANSLATIONS, getJointTranslations());
APPEND_ENTITY_PROPERTY(PROP_RELAY_PARENT_JOINTS, getRelayParentJoints());
APPEND_ENTITY_PROPERTY(PROP_GROUP_CULLED, getGroupCulled());
APPEND_ENTITY_PROPERTY(PROP_BLENDSHAPE_COEFFICIENTS, getBlendshapeCoefficients());
withReadLock([&] {
_animationProperties.appendSubclassData(packetData, params, entityTreeElementExtraEncodeData, requestedProperties,
@ -256,6 +262,7 @@ void ModelEntityItem::debugDump() const {
qCDebug(entities) << " dimensions:" << getScaledDimensions();
qCDebug(entities) << " model URL:" << getModelURL();
qCDebug(entities) << " compound shape URL:" << getCompoundShapeURL();
qCDebug(entities) << " blendshapeCoefficients:" << getBlendshapeCoefficients();
}
void ModelEntityItem::setShapeType(ShapeType type) {
@ -743,3 +750,39 @@ void ModelEntityItem::setModelScale(const glm::vec3& modelScale) {
_modelScale = modelScale;
});
}
QString ModelEntityItem::getBlendshapeCoefficients() const {
return resultWithReadLock<QString>([&] {
return QJsonDocument::fromVariant(_blendshapeCoefficientsMap).toJson();
});
}
void ModelEntityItem::setBlendshapeCoefficients(const QString& blendshapeCoefficients) {
QJsonParseError error;
QJsonDocument newCoefficientsJSON = QJsonDocument::fromJson(blendshapeCoefficients.toUtf8(), &error);
if (error.error != QJsonParseError::NoError) {
qWarning() << "Could not evaluate blendshapeCoefficients property value:" << newCoefficientsJSON;
return;
}
QVariantMap newCoefficientsMap = newCoefficientsJSON.toVariant().toMap();
withWriteLock([&] {
for (auto& blendshape : newCoefficientsMap.keys()) {
auto newCoefficient = newCoefficientsMap[blendshape];
auto blendshapeIter = BLENDSHAPE_LOOKUP_MAP.find(blendshape);
if (newCoefficient.canConvert<float>() && blendshapeIter != BLENDSHAPE_LOOKUP_MAP.end()) {
float newCoefficientValue = newCoefficient.toFloat();
_blendshapeCoefficientsVector[blendshapeIter.value()] = newCoefficientValue;
_blendshapeCoefficientsMap[blendshape] = newCoefficientValue;
_blendshapesChanged = true;
}
}
});
}
QVector<float> ModelEntityItem::getBlendshapeCoefficientVector() {
return resultWithReadLock<QVector<float>>([&] {
_blendshapesChanged = false; // ok to change this within read lock here
return _blendshapeCoefficientsVector;
});
}
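
The property added above stores blendshape coefficients as a JSON object string, for example something like {"JawOpen": 0.5} (names must match entries in BLENDSHAPE_LOOKUP_MAP), and setBlendshapeCoefficients() copies any recognised names into a fixed-size float vector. A minimal standalone sketch of that parse, assuming Qt Core and an illustrative name-to-index map in place of the engine's lookup table:

    #include <QJsonDocument>
    #include <QJsonParseError>
    #include <QVariantMap>
    #include <QVector>
    #include <QHash>
    #include <QDebug>

    int main() {
        // Illustrative lookup; the engine's real table lives in BlendshapeConstants.h.
        QHash<QString, int> lookup { { "JawOpen", 0 }, { "EyeBlink_L", 1 } };
        QVector<float> coefficients(2, 0.0f);

        QString blendshapeCoefficients = "{\"JawOpen\": 0.5, \"EyeBlink_L\": 1.0}";
        QJsonParseError error;
        QJsonDocument json = QJsonDocument::fromJson(blendshapeCoefficients.toUtf8(), &error);
        if (error.error != QJsonParseError::NoError) {
            qWarning() << "Could not evaluate blendshapeCoefficients:" << error.errorString();
            return 1;
        }

        QVariantMap map = json.toVariant().toMap();
        for (auto it = map.constBegin(); it != map.constEnd(); ++it) {
            auto found = lookup.find(it.key());
            if (found != lookup.end() && it.value().canConvert<float>()) {
                coefficients[found.value()] = it.value().toFloat();
            }
        }
        qDebug() << coefficients; // QVector(0.5, 1)
        return 0;
    }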

View file

@ -17,6 +17,7 @@
#include <ThreadSafeValueCache.h>
#include "AnimationPropertyGroup.h"
#include "BlendshapeConstants.h"
class ModelEntityItem : public EntityItem {
public:
@ -133,6 +134,11 @@ public:
glm::vec3 getModelScale() const;
void setModelScale(const glm::vec3& modelScale);
QString getBlendshapeCoefficients() const;
void setBlendshapeCoefficients(const QString& blendshapeCoefficients);
bool blendshapesChanged() const { return _blendshapesChanged; }
QVector<float> getBlendshapeCoefficientVector();
private:
void setAnimationSettings(const QString& value); // only called for old bitstream format
bool applyNewAnimationProperties(AnimationPropertyGroup newProperties);
@ -166,6 +172,7 @@ protected:
QString _modelURL;
bool _relayParentJoints;
bool _groupCulled { false };
QVariantMap _blendshapeCoefficientsMap;
ThreadSafeValueCache<QString> _compoundShapeURL;
@ -178,6 +185,9 @@ protected:
private:
uint64_t _lastAnimated{ 0 };
float _currentFrame{ -1.0f };
QVector<float> _blendshapeCoefficientsVector;
bool _blendshapesChanged { false };
};
#endif // hifi_ModelEntityItem_h

View file

@ -350,7 +350,7 @@ bool ZoneEntityItem::findDetailedParabolaIntersection(const glm::vec3& origin, c
}
bool ZoneEntityItem::contains(const glm::vec3& point) const {
ModelResource::Pointer resource = _shapeResource;
GeometryResource::Pointer resource = _shapeResource;
if (_shapeType == SHAPE_TYPE_COMPOUND && resource) {
if (resource->isLoaded()) {
const HFMModel& hfmModel = resource->getHFMModel();
@ -467,7 +467,7 @@ void ZoneEntityItem::fetchCollisionGeometryResource() {
if (hullURL.isEmpty()) {
_shapeResource.reset();
} else {
_shapeResource = DependencyManager::get<ModelCache>()->getCollisionModelResource(hullURL);
_shapeResource = DependencyManager::get<ModelCache>()->getCollisionGeometryResource(hullURL);
}
}

View file

@ -173,7 +173,7 @@ protected:
static bool _zonesArePickable;
void fetchCollisionGeometryResource();
ModelResource::Pointer _shapeResource;
GeometryResource::Pointer _shapeResource;
};

View file

@ -20,7 +20,6 @@
#include <BlendshapeConstants.h>
#include <hfm/ModelFormatLogging.h>
#include <hfm/HFMModelMath.h>
// TOOL: Uncomment the following line to enable the filtering of all the unknown fields of a node so we can set a breakpoint easily while loading a model with problems...
//#define DEBUG_FBXSERIALIZER
@ -146,9 +145,8 @@ public:
bool isLimbNode; // is this FBXModel transform a "LimbNode", i.e. a joint
};
glm::mat4 getGlobalTransform(const QMultiMap<QString, QString>& _connectionParentMap,
const QHash<QString, FBXModel>& fbxModels, QString nodeID, bool mixamoHack, const QString& url) {
const QHash<QString, FBXModel>& fbxModels, QString nodeID, bool mixamoHack, const QString& url) {
glm::mat4 globalTransform;
QVector<QString> visitedNodes; // Used to prevent following a cycle
while (!nodeID.isNull()) {
@ -168,11 +166,12 @@ glm::mat4 getGlobalTransform(const QMultiMap<QString, QString>& _connectionParen
}
QList<QString> parentIDs = _connectionParentMap.values(nodeID);
nodeID = QString();
foreach(const QString& parentID, parentIDs) {
foreach (const QString& parentID, parentIDs) {
if (visitedNodes.contains(parentID)) {
qCWarning(modelformat) << "Ignoring loop detected in FBX connection map for" << url;
continue;
}
if (fbxModels.contains(parentID)) {
nodeID = parentID;
break;
@ -182,21 +181,6 @@ glm::mat4 getGlobalTransform(const QMultiMap<QString, QString>& _connectionParen
return globalTransform;
}
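
The visitedNodes guard above exists because FBX connection maps occasionally contain parent loops; without it, the walk from a node toward the scene root would never terminate. A minimal sketch of the same walk over a plain parent map, assuming Qt containers; names are illustrative, not the serializer's API:

    #include <QMultiMap>
    #include <QString>
    #include <QVector>
    #include <QDebug>

    // Follow parent links until no parent remains, refusing to revisit a node.
    int countAncestors(const QMultiMap<QString, QString>& parentMap, QString nodeID) {
        QVector<QString> visited;
        int depth = 0;
        while (!nodeID.isNull()) {
            visited.append(nodeID);
            QString next;
            for (const QString& parentID : parentMap.values(nodeID)) {
                if (visited.contains(parentID)) {
                    qWarning() << "Ignoring loop at" << parentID;
                    continue;
                }
                next = parentID;
                break;
            }
            if (!next.isNull()) { ++depth; }
            nodeID = next;   // a null QString ends the loop
        }
        return depth;
    }

    int main() {
        QMultiMap<QString, QString> parents;
        parents.insert("child", "root");
        qDebug() << countAncestors(parents, "child"); // 1
        return 0;
    }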
std::vector<QString> getModelIDsForMeshID(const QString& meshID, const QHash<QString, FBXModel>& fbxModels, const QMultiMap<QString, QString>& _connectionParentMap) {
std::vector<QString> modelsForMesh;
if (fbxModels.contains(meshID)) {
modelsForMesh.push_back(meshID);
} else {
// This mesh may have more than one parent model, with different material and transform, representing a different instance of the mesh
for (const auto& parentID : _connectionParentMap.values(meshID)) {
if (fbxModels.contains(parentID)) {
modelsForMesh.push_back(parentID);
}
}
}
return modelsForMesh;
}
class ExtractedBlendshape {
public:
QString id;
@ -420,7 +404,7 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
QVector<ExtractedBlendshape> blendshapes;
QHash<QString, FBXModel> fbxModels;
QHash<QString, Cluster> fbxClusters;
QHash<QString, Cluster> clusters;
QHash<QString, AnimationCurve> animationCurves;
QHash<QString, QString> typeFlags;
@ -531,8 +515,8 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
if (object.properties.at(2) == "Mesh") {
meshes.insert(getID(object.properties), extractMesh(object, meshIndex, deduplicateIndices));
} else { // object.properties.at(2) == "Shape"
ExtractedBlendshape blendshape = { getID(object.properties), extractBlendshape(object) };
blendshapes.append(blendshape);
ExtractedBlendshape extracted = { getID(object.properties), extractBlendshape(object) };
blendshapes.append(extracted);
}
} else if (object.name == "Model") {
QString name = getModelName(object.properties);
@ -706,8 +690,8 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
// add the blendshapes included in the model, if any
if (mesh) {
foreach (const ExtractedBlendshape& blendshape, blendshapes) {
addBlendshapes(blendshape, blendshapeIndices.values(blendshape.id.toLatin1()), *mesh);
foreach (const ExtractedBlendshape& extracted, blendshapes) {
addBlendshapes(extracted, blendshapeIndices.values(extracted.id.toLatin1()), *mesh);
}
}
@ -1074,9 +1058,9 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
}
}
// skip empty fbxClusters
// skip empty clusters
if (cluster.indices.size() > 0 && cluster.weights.size() > 0) {
fbxClusters.insert(getID(object.properties), cluster);
clusters.insert(getID(object.properties), cluster);
}
} else if (object.properties.last() == "BlendShapeChannel") {
@ -1230,11 +1214,11 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
}
// assign the blendshapes to their corresponding meshes
foreach (const ExtractedBlendshape& blendshape, blendshapes) {
QString blendshapeChannelID = _connectionParentMap.value(blendshape.id);
foreach (const ExtractedBlendshape& extracted, blendshapes) {
QString blendshapeChannelID = _connectionParentMap.value(extracted.id);
QString blendshapeID = _connectionParentMap.value(blendshapeChannelID);
QString meshID = _connectionParentMap.value(blendshapeID);
addBlendshapes(blendshape, blendshapeChannelIndices.values(blendshapeChannelID), meshes[meshID]);
addBlendshapes(extracted, blendshapeChannelIndices.values(blendshapeChannelID), meshes[meshID]);
}
// get offset transform from mapping
@ -1249,13 +1233,13 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
QVector<QString> modelIDs;
QSet<QString> remainingFBXModels;
for (QHash<QString, FBXModel>::const_iterator fbxModel = fbxModels.constBegin(); fbxModel != fbxModels.constEnd(); fbxModel++) {
// models with fbxClusters must be parented to the cluster top
// models with clusters must be parented to the cluster top
// Unless the model is a root node.
bool isARootNode = !modelIDs.contains(_connectionParentMap.value(fbxModel.key()));
if (!isARootNode) {
foreach(const QString& deformerID, _connectionChildMap.values(fbxModel.key())) {
foreach(const QString& clusterID, _connectionChildMap.values(deformerID)) {
if (!fbxClusters.contains(clusterID)) {
if (!clusters.contains(clusterID)) {
continue;
}
QString topID = getTopModelID(_connectionParentMap, fbxModels, _connectionChildMap.value(clusterID), url);
@ -1299,18 +1283,12 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
// convert the models to joints
hfmModel.hasSkeletonJoints = false;
bool needMixamoHack = hfmModel.applicationName == "mixamo.com";
std::vector<glm::mat4> transformForClusters;
transformForClusters.reserve((size_t)modelIDs.size());
for (const QString& modelID : modelIDs) {
foreach (const QString& modelID, modelIDs) {
const FBXModel& fbxModel = fbxModels[modelID];
HFMJoint joint;
joint.parentIndex = fbxModel.parentIndex;
uint32_t jointIndex = (uint32_t)hfmModel.joints.size();
// Copy default joint parameters from model
int jointIndex = hfmModel.joints.size();
joint.translation = fbxModel.translation; // these are usually in centimeters
joint.preTransform = fbxModel.preTransform;
@ -1321,62 +1299,35 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
joint.rotationMin = fbxModel.rotationMin;
joint.rotationMax = fbxModel.rotationMax;
if (fbxModel.hasGeometricOffset) {
joint.geometricOffset = createMatFromScaleQuatAndPos(fbxModel.geometricScaling, fbxModel.geometricRotation, fbxModel.geometricTranslation);
}
joint.hasGeometricOffset = fbxModel.hasGeometricOffset;
joint.geometricTranslation = fbxModel.geometricTranslation;
joint.geometricRotation = fbxModel.geometricRotation;
joint.geometricScaling = fbxModel.geometricScaling;
joint.isSkeletonJoint = fbxModel.isLimbNode;
hfmModel.hasSkeletonJoints = (hfmModel.hasSkeletonJoints || joint.isSkeletonJoint);
joint.name = fbxModel.name;
joint.bindTransformFoundInCluster = false;
// With the basic joint information, we can start to calculate compound transform information
// modelIDs is ordered from parent to children, so we can safely get parent transforms from earlier joints as we iterate
// Make adjustments to the static joint properties, and pre-calculate static transforms
if (applyUpAxisZRotation && joint.parentIndex == -1) {
joint.rotation *= upAxisZRotation;
joint.translation = upAxisZRotation * joint.translation;
}
glm::quat combinedRotation = joint.preRotation * joint.rotation * joint.postRotation;
joint.localTransform = glm::translate(joint.translation) * joint.preTransform * glm::mat4_cast(combinedRotation) * joint.postTransform;
if (joint.parentIndex == -1) {
joint.transform = joint.localTransform;
joint.globalTransform = hfmModel.offset * joint.localTransform;
joint.transform = hfmModel.offset * glm::translate(joint.translation) * joint.preTransform *
glm::mat4_cast(combinedRotation) * joint.postTransform;
joint.inverseDefaultRotation = glm::inverse(combinedRotation);
joint.distanceToParent = 0.0f;
} else {
const HFMJoint& parentJoint = hfmModel.joints.at(joint.parentIndex);
joint.transform = parentJoint.transform * joint.localTransform;
joint.globalTransform = parentJoint.globalTransform * joint.localTransform;
joint.transform = parentJoint.transform * glm::translate(joint.translation) *
joint.preTransform * glm::mat4_cast(combinedRotation) * joint.postTransform;
joint.inverseDefaultRotation = glm::inverse(combinedRotation) * parentJoint.inverseDefaultRotation;
joint.distanceToParent = glm::distance(extractTranslation(parentJoint.transform), extractTranslation(joint.transform));
joint.distanceToParent = glm::distance(extractTranslation(parentJoint.transform),
extractTranslation(joint.transform));
}
joint.inverseBindRotation = joint.inverseDefaultRotation;
joint.name = fbxModel.name;
// If needed, separately calculate the FBX-specific transform used for inverse bind transform calculations
glm::mat4 transformForCluster;
if (applyUpAxisZRotation) {
const glm::quat jointBindCombinedRotation = fbxModel.preRotation * fbxModel.rotation * fbxModel.postRotation;
const glm::mat4 localTransformForCluster = glm::translate(fbxModel.translation) * fbxModel.preTransform * glm::mat4_cast(jointBindCombinedRotation) * fbxModel.postTransform;
if (fbxModel.parentIndex != -1 && fbxModel.parentIndex < (int)jointIndex && !needMixamoHack) {
const glm::mat4& parenttransformForCluster = transformForClusters[fbxModel.parentIndex];
transformForCluster = parenttransformForCluster * localTransformForCluster;
} else {
transformForCluster = localTransformForCluster;
}
} else {
transformForCluster = joint.transform;
}
transformForClusters.push_back(transformForCluster);
// Initialize animation information next
// And also get the joint poses from the first frame of the animation, if present
joint.bindTransformFoundInCluster = false;
QString rotationID = localRotations.value(modelID);
AnimationCurve xRotCurve = animationCurves.value(xComponents.value(rotationID));
@ -1404,11 +1355,14 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
joint.translation = hfmModel.animationFrames[i].translations[jointIndex];
joint.rotation = hfmModel.animationFrames[i].rotations[jointIndex];
}
}
hfmModel.joints.push_back(joint);
}
hfmModel.joints.append(joint);
}
// NOTE: shapeVertices are in joint-frame
hfmModel.shapeVertices.resize(std::max(1, hfmModel.joints.size()) );
hfmModel.bindExtents.reset();
hfmModel.meshExtents.reset();
@ -1446,202 +1400,233 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
}
}
#endif
std::unordered_map<std::string, uint32_t> materialNameToID;
for (auto materialIt = _hfmMaterials.cbegin(); materialIt != _hfmMaterials.cend(); ++materialIt) {
materialNameToID[materialIt.key().toStdString()] = (uint32_t)hfmModel.materials.size();
hfmModel.materials.push_back(materialIt.value());
}
hfmModel.materials = _hfmMaterials;
// see if any materials have texture children
bool materialsHaveTextures = checkMaterialsHaveTextures(_hfmMaterials, _textureFilenames, _connectionChildMap);
for (QMap<QString, ExtractedMesh>::iterator it = meshes.begin(); it != meshes.end(); it++) {
const QString& meshID = it.key();
const ExtractedMesh& extracted = it.value();
const auto& partMaterialTextures = extracted.partMaterialTextures;
ExtractedMesh& extracted = it.value();
uint32_t meshIndex = (uint32_t)hfmModel.meshes.size();
meshIDsToMeshIndices.insert(meshID, meshIndex);
hfmModel.meshes.push_back(extracted.mesh);
hfm::Mesh& mesh = hfmModel.meshes.back();
extracted.mesh.meshExtents.reset();
std::vector<QString> instanceModelIDs = getModelIDsForMeshID(meshID, fbxModels, _connectionParentMap);
// meshShapes will be added to hfmModel at the very end
std::vector<hfm::Shape> meshShapes;
meshShapes.reserve(instanceModelIDs.size() * mesh.parts.size());
for (const QString& modelID : instanceModelIDs) {
// The transform node has the same indexing order as the joints
int indexOfModelID = modelIDs.indexOf(modelID);
if (indexOfModelID == -1) {
qCDebug(modelformat) << "Model not in model list: " << modelID;
}
const uint32_t transformIndex = (indexOfModelID == -1) ? 0 : (uint32_t)indexOfModelID;
// accumulate local transforms
QString modelID = fbxModels.contains(it.key()) ? it.key() : _connectionParentMap.value(it.key());
glm::mat4 modelTransform = getGlobalTransform(_connectionParentMap, fbxModels, modelID, hfmModel.applicationName == "mixamo.com", url);
// partShapes will be added to meshShapes at the very end
std::vector<hfm::Shape> partShapes { mesh.parts.size() };
for (uint32_t i = 0; i < (uint32_t)partShapes.size(); ++i) {
hfm::Shape& shape = partShapes[i];
shape.mesh = meshIndex;
shape.meshPart = i;
shape.joint = transformIndex;
}
// compute the mesh extents from the transformed vertices
foreach (const glm::vec3& vertex, extracted.mesh.vertices) {
glm::vec3 transformedVertex = glm::vec3(modelTransform * glm::vec4(vertex, 1.0f));
hfmModel.meshExtents.minimum = glm::min(hfmModel.meshExtents.minimum, transformedVertex);
hfmModel.meshExtents.maximum = glm::max(hfmModel.meshExtents.maximum, transformedVertex);
// For FBX_DRACO_MESH_VERSION < 2, or unbaked models, get materials from the partMaterialTextures
if (!partMaterialTextures.empty()) {
int materialIndex = 0;
int textureIndex = 0;
QList<QString> children = _connectionChildMap.values(modelID);
for (int i = children.size() - 1; i >= 0; i--) {
const QString& childID = children.at(i);
if (_hfmMaterials.contains(childID)) {
// the pure material associated with this part
const HFMMaterial& material = _hfmMaterials.value(childID);
for (int j = 0; j < partMaterialTextures.size(); j++) {
if (partMaterialTextures.at(j).first == materialIndex) {
hfm::Shape& shape = partShapes[j];
shape.material = materialNameToID[material.materialID.toStdString()];
}
}
materialIndex++;
} else if (_textureFilenames.contains(childID)) {
// NOTE (Sabrina 2019/01/11): getTextures now takes in the materialID as a second parameter, because FBX material nodes can sometimes have uv transform information (ex: "Maya|uv_scale")
// I'm leaving the second parameter blank right now as this code may never be used.
HFMTexture texture = getTexture(childID, "");
for (int j = 0; j < partMaterialTextures.size(); j++) {
int partTexture = partMaterialTextures.at(j).second;
if (partTexture == textureIndex && !(partTexture == 0 && materialsHaveTextures)) {
// TODO: DO something here that replaces this legacy code
// Maybe create a material just for this part with the correct textures?
// material.albedoTexture = texture;
// partShapes[j].material = materialIndex;
}
}
textureIndex++;
}
}
}
// For baked models with FBX_DRACO_MESH_VERSION >= 2, get materials from extracted.materialIDPerMeshPart
if (!extracted.materialIDPerMeshPart.empty()) {
assert(partShapes.size() == extracted.materialIDPerMeshPart.size());
for (uint32_t i = 0; i < (uint32_t)extracted.materialIDPerMeshPart.size(); ++i) {
hfm::Shape& shape = partShapes[i];
const std::string& materialID = extracted.materialIDPerMeshPart[i];
auto materialIt = materialNameToID.find(materialID);
if (materialIt != materialNameToID.end()) {
shape.material = materialIt->second;
}
}
}
// find the clusters with which the mesh is associated
QVector<QString> clusterIDs;
for (const QString& childID : _connectionChildMap.values(meshID)) {
for (const QString& clusterID : _connectionChildMap.values(childID)) {
if (!fbxClusters.contains(clusterID)) {
continue;
}
clusterIDs.append(clusterID);
}
}
// whether we're skinned depends on how many clusters are attached
if (clusterIDs.size() > 0) {
hfm::SkinDeformer skinDeformer;
auto& clusters = skinDeformer.clusters;
for (const auto& clusterID : clusterIDs) {
HFMCluster hfmCluster;
const Cluster& fbxCluster = fbxClusters[clusterID];
// see http://stackoverflow.com/questions/13566608/loading-skinning-information-from-fbx for a discussion
// of skinning information in FBX
QString jointID = _connectionChildMap.value(clusterID);
int indexOfJointID = modelIDs.indexOf(jointID);
if (indexOfJointID == -1) {
qCDebug(modelformat) << "Joint not in model list: " << jointID;
hfmCluster.jointIndex = 0;
} else {
hfmCluster.jointIndex = (uint32_t)indexOfJointID;
}
const glm::mat4& transformForCluster = transformForClusters[transformIndex];
hfmCluster.inverseBindMatrix = glm::inverse(fbxCluster.transformLink) * transformForCluster;
// slam bottom row to (0, 0, 0, 1), we KNOW this is not a perspective matrix and
// sometimes floating point fuzz can be introduced after the inverse.
hfmCluster.inverseBindMatrix[0][3] = 0.0f;
hfmCluster.inverseBindMatrix[1][3] = 0.0f;
hfmCluster.inverseBindMatrix[2][3] = 0.0f;
hfmCluster.inverseBindMatrix[3][3] = 1.0f;
hfmCluster.inverseBindTransform = Transform(hfmCluster.inverseBindMatrix);
clusters.push_back(hfmCluster);
// override the bind rotation with the transform link
HFMJoint& joint = hfmModel.joints[hfmCluster.jointIndex];
joint.inverseBindRotation = glm::inverse(extractRotation(fbxCluster.transformLink));
joint.bindTransform = fbxCluster.transformLink;
joint.bindTransformFoundInCluster = true;
// update the bind pose extents
glm::vec3 bindTranslation = extractTranslation(hfmModel.offset * joint.bindTransform);
hfmModel.bindExtents.addPoint(bindTranslation);
}
// the last cluster is the root cluster
HFMCluster cluster;
cluster.jointIndex = transformIndex;
clusters.push_back(cluster);
// Skinned mesh instances have an hfm::SkinDeformer
std::vector<hfm::SkinCluster> skinClusters;
for (const auto& clusterID : clusterIDs) {
const Cluster& fbxCluster = fbxClusters[clusterID];
skinClusters.emplace_back();
hfm::SkinCluster& skinCluster = skinClusters.back();
size_t indexWeightPairs = (size_t)std::min(fbxCluster.indices.size(), fbxCluster.weights.size());
skinCluster.indices.reserve(indexWeightPairs);
skinCluster.weights.reserve(indexWeightPairs);
for (int j = 0; j < fbxCluster.indices.size(); j++) {
int oldIndex = fbxCluster.indices.at(j);
float weight = fbxCluster.weights.at(j);
for (QMultiHash<int, int>::const_iterator it = extracted.newIndices.constFind(oldIndex);
it != extracted.newIndices.end() && it.key() == oldIndex; it++) {
int newIndex = it.value();
skinCluster.indices.push_back(newIndex);
skinCluster.weights.push_back(weight);
}
}
}
// It seems odd that this mesh-related code should be inside of the for loop for instanced model IDs.
// However, in practice, skinned FBX models appear to not be instanced, as the skinning includes both the weights and joints.
{
hfm::ReweightedDeformers reweightedDeformers = hfm::getReweightedDeformers(mesh.vertices.size(), skinClusters);
if (reweightedDeformers.trimmedToMatch) {
qDebug(modelformat) << "FBXSerializer -- The number of indices and weights for a skinning deformer had different sizes and have been trimmed to match";
}
mesh.clusterIndices = std::move(reweightedDeformers.indices);
mesh.clusterWeights = std::move(reweightedDeformers.weights);
mesh.clusterWeightsPerVertex = reweightedDeformers.weightsPerVertex;
}
// Store the model's dynamic transform, and put its ID in the shapes
uint32_t skinDeformerID = (uint32_t)hfmModel.skinDeformers.size();
hfmModel.skinDeformers.push_back(skinDeformer);
for (hfm::Shape& shape : partShapes) {
shape.skinDeformer = skinDeformerID;
}
}
// Store the parts for this mesh (or instance of this mesh, as the case may be)
meshShapes.insert(meshShapes.cend(), partShapes.cbegin(), partShapes.cend());
extracted.mesh.meshExtents.minimum = glm::min(extracted.mesh.meshExtents.minimum, transformedVertex);
extracted.mesh.meshExtents.maximum = glm::max(extracted.mesh.meshExtents.maximum, transformedVertex);
extracted.mesh.modelTransform = modelTransform;
}
// Store the shapes for the mesh (or multiple instances of the mesh, as the case may be)
hfmModel.shapes.insert(hfmModel.shapes.cend(), meshShapes.cbegin(), meshShapes.cend());
// look for textures, material properties
// allocate the Part material library
// NOTE: extracted.partMaterialTextures is empty for FBX_DRACO_MESH_VERSION >= 2. In that case, the mesh part's materialID string is already defined.
int materialIndex = 0;
int textureIndex = 0;
QList<QString> children = _connectionChildMap.values(modelID);
for (int i = children.size() - 1; i >= 0; i--) {
const QString& childID = children.at(i);
if (_hfmMaterials.contains(childID)) {
// the pure material associated with this part
HFMMaterial material = _hfmMaterials.value(childID);
for (int j = 0; j < extracted.partMaterialTextures.size(); j++) {
if (extracted.partMaterialTextures.at(j).first == materialIndex) {
HFMMeshPart& part = extracted.mesh.parts[j];
part.materialID = material.materialID;
}
}
materialIndex++;
} else if (_textureFilenames.contains(childID)) {
// NOTE (Sabrina 2019/01/11): getTextures now takes in the materialID as a second parameter, because FBX material nodes can sometimes have uv transform information (ex: "Maya|uv_scale")
// I'm leaving the second parameter blank right now as this code may never be used.
HFMTexture texture = getTexture(childID, "");
for (int j = 0; j < extracted.partMaterialTextures.size(); j++) {
int partTexture = extracted.partMaterialTextures.at(j).second;
if (partTexture == textureIndex && !(partTexture == 0 && materialsHaveTextures)) {
// TODO: DO something here that replaces this legacy code
// Maybe create a material just for this part with the correct textures?
// extracted.mesh.parts[j].diffuseTexture = texture;
}
}
textureIndex++;
}
}
// find the clusters with which the mesh is associated
QVector<QString> clusterIDs;
foreach (const QString& childID, _connectionChildMap.values(it.key())) {
foreach (const QString& clusterID, _connectionChildMap.values(childID)) {
if (!clusters.contains(clusterID)) {
continue;
}
HFMCluster hfmCluster;
const Cluster& cluster = clusters[clusterID];
clusterIDs.append(clusterID);
// see http://stackoverflow.com/questions/13566608/loading-skinning-information-from-fbx for a discussion
// of skinning information in FBX
QString jointID = _connectionChildMap.value(clusterID);
hfmCluster.jointIndex = modelIDs.indexOf(jointID);
if (hfmCluster.jointIndex == -1) {
qCDebug(modelformat) << "Joint not in model list: " << jointID;
hfmCluster.jointIndex = 0;
}
hfmCluster.inverseBindMatrix = glm::inverse(cluster.transformLink) * modelTransform;
// slam bottom row to (0, 0, 0, 1), we KNOW this is not a perspective matrix and
// sometimes floating point fuzz can be introduced after the inverse.
hfmCluster.inverseBindMatrix[0][3] = 0.0f;
hfmCluster.inverseBindMatrix[1][3] = 0.0f;
hfmCluster.inverseBindMatrix[2][3] = 0.0f;
hfmCluster.inverseBindMatrix[3][3] = 1.0f;
hfmCluster.inverseBindTransform = Transform(hfmCluster.inverseBindMatrix);
extracted.mesh.clusters.append(hfmCluster);
// override the bind rotation with the transform link
HFMJoint& joint = hfmModel.joints[hfmCluster.jointIndex];
joint.inverseBindRotation = glm::inverse(extractRotation(cluster.transformLink));
joint.bindTransform = cluster.transformLink;
joint.bindTransformFoundInCluster = true;
// update the bind pose extents
glm::vec3 bindTranslation = extractTranslation(hfmModel.offset * joint.bindTransform);
hfmModel.bindExtents.addPoint(bindTranslation);
}
}
// the last cluster is the root cluster
{
HFMCluster cluster;
cluster.jointIndex = modelIDs.indexOf(modelID);
if (cluster.jointIndex == -1) {
qCDebug(modelformat) << "Model not in model list: " << modelID;
cluster.jointIndex = 0;
}
extracted.mesh.clusters.append(cluster);
}
// whether we're skinned depends on how many clusters are attached
if (clusterIDs.size() > 1) {
// this is a multi-mesh joint
const int WEIGHTS_PER_VERTEX = 4;
int numClusterIndices = extracted.mesh.vertices.size() * WEIGHTS_PER_VERTEX;
extracted.mesh.clusterIndices.fill(extracted.mesh.clusters.size() - 1, numClusterIndices);
QVector<float> weightAccumulators;
weightAccumulators.fill(0.0f, numClusterIndices);
for (int i = 0; i < clusterIDs.size(); i++) {
QString clusterID = clusterIDs.at(i);
const Cluster& cluster = clusters[clusterID];
const HFMCluster& hfmCluster = extracted.mesh.clusters.at(i);
int jointIndex = hfmCluster.jointIndex;
HFMJoint& joint = hfmModel.joints[jointIndex];
glm::mat4 meshToJoint = glm::inverse(joint.bindTransform) * modelTransform;
ShapeVertices& points = hfmModel.shapeVertices.at(jointIndex);
for (int j = 0; j < cluster.indices.size(); j++) {
int oldIndex = cluster.indices.at(j);
float weight = cluster.weights.at(j);
for (QMultiHash<int, int>::const_iterator it = extracted.newIndices.constFind(oldIndex);
it != extracted.newIndices.end() && it.key() == oldIndex; it++) {
int newIndex = it.value();
// remember vertices with at least 1/4 weight
// FIXME: vertices with no weightpainting won't get recorded here
const float EXPANSION_WEIGHT_THRESHOLD = 0.25f;
if (weight >= EXPANSION_WEIGHT_THRESHOLD) {
// transform to joint-frame and save for later
const glm::mat4 vertexTransform = meshToJoint * glm::translate(extracted.mesh.vertices.at(newIndex));
points.push_back(extractTranslation(vertexTransform));
}
// look for an unused slot in the weights vector
int weightIndex = newIndex * WEIGHTS_PER_VERTEX;
int lowestIndex = -1;
float lowestWeight = FLT_MAX;
int k = 0;
for (; k < WEIGHTS_PER_VERTEX; k++) {
if (weightAccumulators[weightIndex + k] == 0.0f) {
extracted.mesh.clusterIndices[weightIndex + k] = i;
weightAccumulators[weightIndex + k] = weight;
break;
}
if (weightAccumulators[weightIndex + k] < lowestWeight) {
lowestIndex = k;
lowestWeight = weightAccumulators[weightIndex + k];
}
}
if (k == WEIGHTS_PER_VERTEX && weight > lowestWeight) {
// no space for an additional weight; we must replace the lowest
weightAccumulators[weightIndex + lowestIndex] = weight;
extracted.mesh.clusterIndices[weightIndex + lowestIndex] = i;
}
}
}
}
// now that we've accumulated the most relevant weights for each vertex
// normalize and compress to 16-bits
extracted.mesh.clusterWeights.fill(0, numClusterIndices);
int numVertices = extracted.mesh.vertices.size();
for (int i = 0; i < numVertices; ++i) {
int j = i * WEIGHTS_PER_VERTEX;
// normalize weights into uint16_t
float totalWeight = 0.0f;
for (int k = j; k < j + WEIGHTS_PER_VERTEX; ++k) {
totalWeight += weightAccumulators[k];
}
const float ALMOST_HALF = 0.499f;
if (totalWeight > 0.0f) {
float weightScalingFactor = (float)(UINT16_MAX) / totalWeight;
for (int k = j; k < j + WEIGHTS_PER_VERTEX; ++k) {
extracted.mesh.clusterWeights[k] = (uint16_t)(weightScalingFactor * weightAccumulators[k] + ALMOST_HALF);
}
} else {
extracted.mesh.clusterWeights[j] = (uint16_t)((float)(UINT16_MAX) + ALMOST_HALF);
}
}
} else {
// this is a single-joint mesh
const HFMCluster& firstHFMCluster = extracted.mesh.clusters.at(0);
int jointIndex = firstHFMCluster.jointIndex;
HFMJoint& joint = hfmModel.joints[jointIndex];
// transform cluster vertices to joint-frame and save for later
glm::mat4 meshToJoint = glm::inverse(joint.bindTransform) * modelTransform;
ShapeVertices& points = hfmModel.shapeVertices.at(jointIndex);
foreach (const glm::vec3& vertex, extracted.mesh.vertices) {
const glm::mat4 vertexTransform = meshToJoint * glm::translate(vertex);
points.push_back(extractTranslation(vertexTransform));
}
// Apply geometric offset, if present, by transforming the vertices directly
if (joint.hasGeometricOffset) {
glm::mat4 geometricOffset = createMatFromScaleQuatAndPos(joint.geometricScaling, joint.geometricRotation, joint.geometricTranslation);
for (int i = 0; i < extracted.mesh.vertices.size(); i++) {
extracted.mesh.vertices[i] = transformPoint(geometricOffset, extracted.mesh.vertices[i]);
}
}
}
hfmModel.meshes.append(extracted.mesh);
int meshIndex = hfmModel.meshes.size() - 1;
meshIDsToMeshIndices.insert(it.key(), meshIndex);
}
// attempt to map any meshes to a named model
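
The skinning block above keeps at most four cluster weights per vertex, evicting the smallest weight when a fifth arrives, and then normalizes each vertex's surviving weights into uint16_t, rounding with an ALMOST_HALF bias so the quantized values sum close to UINT16_MAX. A minimal sketch of that quantization step for a single vertex, assuming the weights have already been selected:

    #include <cstdint>
    #include <cstdio>

    int main() {
        const int WEIGHTS_PER_VERTEX = 4;
        float accumulators[WEIGHTS_PER_VERTEX] = { 0.6f, 0.3f, 0.1f, 0.0f };
        uint16_t quantized[WEIGHTS_PER_VERTEX] = { 0, 0, 0, 0 };

        float totalWeight = 0.0f;
        for (int k = 0; k < WEIGHTS_PER_VERTEX; ++k) {
            totalWeight += accumulators[k];
        }

        const float ALMOST_HALF = 0.499f;
        if (totalWeight > 0.0f) {
            // Scale so the weights sum to roughly UINT16_MAX, rounding to nearest.
            float scale = (float)UINT16_MAX / totalWeight;
            for (int k = 0; k < WEIGHTS_PER_VERTEX; ++k) {
                quantized[k] = (uint16_t)(scale * accumulators[k] + ALMOST_HALF);
            }
        } else {
            // No weights at all: give the full weight to the first slot
            // (the code above defaults that slot to the root cluster).
            quantized[0] = UINT16_MAX;
        }

        printf("%d %d %d %d\n", quantized[0], quantized[1], quantized[2], quantized[3]);
        // Roughly 39321 19660 6553 0, summing to about 65534.
        return 0;
    }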
@ -1660,6 +1645,14 @@ HFMModel* FBXSerializer::extractHFMModel(const hifi::VariantHash& mapping, const
}
}
if (applyUpAxisZRotation) {
hfmModelPtr->meshExtents.transform(glm::mat4_cast(upAxisZRotation));
hfmModelPtr->bindExtents.transform(glm::mat4_cast(upAxisZRotation));
for (auto &mesh : hfmModelPtr->meshes) {
mesh.modelTransform *= glm::mat4_cast(upAxisZRotation);
mesh.meshExtents.transform(glm::mat4_cast(upAxisZRotation));
}
}
return hfmModelPtr;
}
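
Both versions of the cluster code above clamp the inverse bind matrix's bottom row to (0, 0, 0, 1) after inverting transformLink: the matrix is known to be affine, and floating-point fuzz introduced by glm::inverse would otherwise leave a tiny spurious projective component. A minimal sketch of that cleanup, assuming GLM (glm::mat4 is column-major, so m[column][3] addresses the bottom row):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Clamp the last row of an affine matrix to (0, 0, 0, 1) after inversion.
    glm::mat4 cleanAffineInverse(const glm::mat4& transformLink) {
        glm::mat4 inverseBind = glm::inverse(transformLink);
        inverseBind[0][3] = 0.0f;
        inverseBind[1][3] = 0.0f;
        inverseBind[2][3] = 0.0f;
        inverseBind[3][3] = 1.0f;
        return inverseBind;
    }

    int main() {
        glm::mat4 bind = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
        glm::mat4 inv = cleanAffineInverse(bind);
        // inv maps mesh space back into the joint's bind frame with an exact affine last row.
        (void)inv;
        return 0;
    }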

View file

@ -100,15 +100,7 @@ public:
{}
};
class ExtractedMesh {
public:
hfm::Mesh mesh;
std::vector<std::string> materialIDPerMeshPart;
QMultiHash<int, int> newIndices;
QVector<QHash<int, int> > blendshapeIndexMaps;
QVector<QPair<int, int> > partMaterialTextures;
QHash<QString, size_t> texcoordSetMap;
};
class ExtractedMesh;
class FBXSerializer : public HFMSerializer {
public:

View file

@ -355,7 +355,7 @@ ExtractedMesh FBXSerializer::extractMesh(const FBXNode& object, unsigned int& me
// Check for additional metadata
unsigned int dracoMeshNodeVersion = 1;
std::vector<std::string> dracoMaterialList;
std::vector<QString> dracoMaterialList;
for (const auto& dracoChild : child.children) {
if (dracoChild.name == "FBXDracoMeshVersion") {
if (!dracoChild.properties.isEmpty()) {
@ -364,7 +364,7 @@ ExtractedMesh FBXSerializer::extractMesh(const FBXNode& object, unsigned int& me
} else if (dracoChild.name == "MaterialList") {
dracoMaterialList.reserve(dracoChild.properties.size());
for (const auto& materialID : dracoChild.properties) {
dracoMaterialList.push_back(materialID.toString().toStdString());
dracoMaterialList.push_back(materialID.toString());
}
}
}
@ -486,20 +486,21 @@ ExtractedMesh FBXSerializer::extractMesh(const FBXNode& object, unsigned int& me
// grab or setup the HFMMeshPart for the part this face belongs to
int& partIndexPlusOne = materialTextureParts[materialTexture];
if (partIndexPlusOne == 0) {
data.extracted.mesh.parts.emplace_back();
data.extracted.mesh.parts.resize(data.extracted.mesh.parts.size() + 1);
HFMMeshPart& part = data.extracted.mesh.parts.back();
// Figure out if this is the older way of defining the per-part material for baked FBX
// Figure out what material this part is
if (dracoMeshNodeVersion >= 2) {
// Define the materialID for this mesh part index
uint16_t safeMaterialID = materialID < dracoMaterialList.size() ? materialID : 0;
data.extracted.materialIDPerMeshPart.push_back(dracoMaterialList[safeMaterialID].c_str());
// Define the materialID now
if (materialID < dracoMaterialList.size()) {
part.materialID = dracoMaterialList[materialID];
}
} else {
// Define the materialID later, based on the order of first appearance of the materials in the _connectionChildMap
data.extracted.partMaterialTextures.append(materialTexture);
}
// in dracoMeshNodeVersion >= 2, fbx meshes have their per-part materials already defined in data.extracted.materialIDPerMeshPart
partIndexPlusOne = (int)data.extracted.mesh.parts.size();
partIndexPlusOne = data.extracted.mesh.parts.size();
}
// give the mesh part this index
@ -534,7 +535,7 @@ ExtractedMesh FBXSerializer::extractMesh(const FBXNode& object, unsigned int& me
if (partIndex == 0) {
data.extracted.partMaterialTextures.append(materialTexture);
data.extracted.mesh.parts.resize(data.extracted.mesh.parts.size() + 1);
partIndex = (int)data.extracted.mesh.parts.size();
partIndex = data.extracted.mesh.parts.size();
}
HFMMeshPart& part = data.extracted.mesh.parts[partIndex - 1];

View file

@ -77,7 +77,7 @@ FST* FST::createFSTFromModel(const QString& fstPath, const QString& modelFilePat
mapping.insert(JOINT_FIELD, joints);
QVariantHash jointIndices;
for (size_t i = 0; i < (size_t)hfmModel.joints.size(); i++) {
for (int i = 0; i < hfmModel.joints.size(); i++) {
jointIndices.insert(hfmModel.joints.at(i).name, QString::number(i));
}
mapping.insert(JOINT_INDEX_FIELD, jointIndices);

File diff suppressed because it is too large

View file

@ -38,15 +38,15 @@ struct GLTFAsset {
struct GLTFNode {
QString name;
int camera{ -1 };
int mesh{ -1 };
int camera;
int mesh;
QVector<int> children;
QVector<double> translation;
QVector<double> rotation;
QVector<double> scale;
QVector<double> matrix;
glm::mat4 transform;
int skin { -1 };
QVector<glm::mat4> transforms;
int skin;
QVector<int> skeletons;
QString jointName;
QMap<QString, bool> defined;
@ -85,8 +85,6 @@ struct GLTFNode {
qCDebug(modelformat) << "skeletons: " << skeletons;
}
}
void normalizeTransform();
};
// Meshes
@ -460,56 +458,15 @@ struct GLTFMaterial {
// Accessors
namespace GLTFAccessorType {
enum Value {
SCALAR = 1,
VEC2 = 2,
VEC3 = 3,
VEC4 = 4,
MAT2 = 5,
MAT3 = 9,
MAT4 = 16
enum Values {
SCALAR = 0,
VEC2,
VEC3,
VEC4,
MAT2,
MAT3,
MAT4
};
inline int count(Value value) {
if (value == MAT2) {
return 4;
}
return (int)value;
}
}
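
The two enum variants above obtain the per-element component count differently: one encodes the count in the enum value itself (MAT2 is special-cased because VEC4 already occupies the value 4), the other is a plain sequential enum that needs an explicit lookup. A minimal sketch of such a lookup for the sequential variant; the names are illustrative:

    #include <cstdio>

    enum class AccessorType { SCALAR, VEC2, VEC3, VEC4, MAT2, MAT3, MAT4 };

    // Number of components each accessor type carries per element.
    int componentCount(AccessorType type) {
        switch (type) {
            case AccessorType::SCALAR: return 1;
            case AccessorType::VEC2:   return 2;
            case AccessorType::VEC3:   return 3;
            case AccessorType::VEC4:   return 4;
            case AccessorType::MAT2:   return 4;
            case AccessorType::MAT3:   return 9;
            case AccessorType::MAT4:   return 16;
        }
        return 0;
    }

    int main() {
        printf("%d %d\n", componentCount(AccessorType::VEC3), componentCount(AccessorType::MAT4)); // 3 16
        return 0;
    }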
namespace GLTFVertexAttribute {
enum Value {
UNKNOWN = -1,
POSITION = 0,
NORMAL,
TANGENT,
TEXCOORD_0,
TEXCOORD_1,
COLOR_0,
JOINTS_0,
WEIGHTS_0,
};
inline Value fromString(const QString& key) {
if (key == "POSITION") {
return POSITION;
} else if (key == "NORMAL") {
return NORMAL;
} else if (key == "TANGENT") {
return TANGENT;
} else if (key == "TEXCOORD_0") {
return TEXCOORD_0;
} else if (key == "TEXCOORD_1") {
return TEXCOORD_1;
} else if (key == "COLOR_0") {
return COLOR_0;
} else if (key == "JOINTS_0") {
return JOINTS_0;
} else if (key == "WEIGHTS_0") {
return WEIGHTS_0;
}
return UNKNOWN;
}
}
namespace GLTFAccessorComponentType {
enum Values {
@ -801,13 +758,6 @@ struct GLTFFile {
foreach(auto tex, textures) tex.dump();
}
}
void populateMaterialNames();
void sortNodes();
void normalizeNodeTransforms();
private:
void reorderNodes(const std::unordered_map<int, int>& reorderMap);
};
class GLTFSerializer : public QObject, public HFMSerializer {
@ -822,7 +772,7 @@ private:
hifi::URL _url;
hifi::ByteArray _glbBinary;
const glm::mat4& getModelTransform(const GLTFNode& node);
glm::mat4 getModelTransform(const GLTFNode& node);
void getSkinInverseBindMatrices(std::vector<std::vector<float>>& inverseBindMatrixValues);
void generateTargetData(int index, float weight, QVector<glm::vec3>& returnVector);
@ -891,9 +841,6 @@ private:
template <typename T>
bool addArrayFromAccessor(GLTFAccessor& accessor, QVector<T>& outarray);
template <typename T>
bool addArrayFromAttribute(GLTFVertexAttribute::Value vertexAttribute, GLTFAccessor& accessor, QVector<T>& outarray);
void retriangulate(const QVector<int>& in_indices, const QVector<glm::vec3>& in_vertices,
const QVector<glm::vec3>& in_normals, QVector<int>& out_indices,
QVector<glm::vec3>& out_vertices, QVector<glm::vec3>& out_normals);

View file

@ -174,6 +174,11 @@ glm::vec2 OBJTokenizer::getVec2() {
return v;
}
void setMeshPartDefaults(HFMMeshPart& meshPart, QString materialID) {
meshPart.materialID = materialID;
}
// OBJFace
// NOTE (trent, 7/20/17): The vertexColors vector being passed-in isn't necessary here, but I'm just
// pairing it with the vertices vector for consistency.
@ -487,7 +492,8 @@ bool OBJSerializer::parseOBJGroup(OBJTokenizer& tokenizer, const hifi::VariantHa
float& scaleGuess, bool combineParts) {
FaceGroup faces;
HFMMesh& mesh = hfmModel.meshes[0];
mesh.parts.push_back(HFMMeshPart());
mesh.parts.append(HFMMeshPart());
HFMMeshPart& meshPart = mesh.parts.last();
bool sawG = false;
bool result = true;
int originalFaceCountForDebugging = 0;
@ -495,6 +501,8 @@ bool OBJSerializer::parseOBJGroup(OBJTokenizer& tokenizer, const hifi::VariantHa
bool anyVertexColor { false };
int vertexCount { 0 };
setMeshPartDefaults(meshPart, QString("dontknow") + QString::number(mesh.parts.count()));
while (true) {
int tokenType = tokenizer.nextToken();
if (tokenType == OBJTokenizer::COMMENT_TOKEN) {
@ -667,19 +675,17 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
_url = url;
bool combineParts = mapping.value("combineParts").toBool();
hfmModel.meshes.push_back(HFMMesh());
hfmModel.meshExtents.reset();
hfmModel.meshes.append(HFMMesh());
std::vector<QString> materialNamePerShape;
try {
// call parseOBJGroup as long as it's returning true. Each successful call will
// add a new meshPart to the model's single mesh.
while (parseOBJGroup(tokenizer, mapping, hfmModel, scaleGuess, combineParts)) {}
uint32_t meshIndex = 0;
HFMMesh& mesh = hfmModel.meshes[meshIndex];
mesh.meshIndex = meshIndex;
HFMMesh& mesh = hfmModel.meshes[0];
mesh.meshIndex = 0;
uint32_t jointIndex = 0;
hfmModel.joints.resize(1);
hfmModel.joints[0].parentIndex = -1;
hfmModel.joints[0].distanceToParent = 0;
@ -691,11 +697,19 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
hfmModel.jointIndices["x"] = 1;
HFMCluster cluster;
cluster.jointIndex = 0;
cluster.inverseBindMatrix = glm::mat4(1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1);
mesh.clusters.append(cluster);
QMap<QString, int> materialMeshIdMap;
std::vector<HFMMeshPart> hfmMeshParts;
for (uint32_t meshPartIndex = 0; meshPartIndex < (uint32_t)mesh.parts.size(); ++meshPartIndex) {
HFMMeshPart& meshPart = mesh.parts[meshPartIndex];
FaceGroup faceGroup = faceGroups[meshPartIndex];
QVector<HFMMeshPart> hfmMeshParts;
for (int i = 0, meshPartCount = 0; i < mesh.parts.count(); i++, meshPartCount++) {
HFMMeshPart& meshPart = mesh.parts[i];
FaceGroup faceGroup = faceGroups[meshPartCount];
bool specifiesUV = false;
foreach(OBJFace face, faceGroup) {
// Go through all of the OBJ faces and determine the number of different materials necessary (each different material will be a unique mesh).
@ -704,13 +718,12 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
// Create a new HFMMesh for this material mapping.
materialMeshIdMap.insert(face.materialName, materialMeshIdMap.count());
uint32_t partIndex = (int)hfmMeshParts.size();
hfmMeshParts.push_back(HFMMeshPart());
HFMMeshPart& meshPartNew = hfmMeshParts.back();
hfmMeshParts.append(HFMMeshPart());
HFMMeshPart& meshPartNew = hfmMeshParts.last();
meshPartNew.quadIndices = QVector<int>(meshPart.quadIndices); // Copy over quad indices [NOTE (trent/mittens, 4/3/17): Likely unnecessary since they go unused anyway].
meshPartNew.quadTrianglesIndices = QVector<int>(meshPart.quadTrianglesIndices); // Copy over quad triangulated indices [NOTE (trent/mittens, 4/3/17): Likely unnecessary since they go unused anyway].
meshPartNew.triangleIndices = QVector<int>(meshPart.triangleIndices); // Copy over triangle indices.
// Do some of the material logic (which previously lived below) now.
// All the faces in the same group will have the same name and material.
QString groupMaterialName = face.materialName;
@ -732,26 +745,19 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
needsMaterialLibrary = groupMaterialName != SMART_DEFAULT_MATERIAL_NAME;
}
materials[groupMaterialName] = material;
meshPartNew.materialID = groupMaterialName;
}
materialNamePerShape.push_back(groupMaterialName);
hfm::Shape shape;
shape.mesh = meshIndex;
shape.joint = jointIndex;
shape.meshPart = partIndex;
hfmModel.shapes.push_back(shape);
}
}
}
// clean up old mesh parts.
auto unmodifiedMeshPartCount = (uint32_t)mesh.parts.size();
int unmodifiedMeshPartCount = mesh.parts.count();
mesh.parts.clear();
mesh.parts = hfmMeshParts;
mesh.parts = QVector<HFMMeshPart>(hfmMeshParts);
for (uint32_t meshPartIndex = 0; meshPartIndex < unmodifiedMeshPartCount; meshPartIndex++) {
FaceGroup faceGroup = faceGroups[meshPartIndex];
for (int i = 0, meshPartCount = 0; i < unmodifiedMeshPartCount; i++, meshPartCount++) {
FaceGroup faceGroup = faceGroups[meshPartCount];
// Now that each mesh has been created with its own unique material mappings, fill them with data (vertex data is duplicated, face data is not).
foreach(OBJFace face, faceGroup) {
@ -817,13 +823,18 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
}
}
}
mesh.meshExtents.reset();
foreach(const glm::vec3& vertex, mesh.vertices) {
mesh.meshExtents.addPoint(vertex);
hfmModel.meshExtents.addPoint(vertex);
}
// hfmDebugDump(hfmModel);
} catch(const std::exception& e) {
qCDebug(modelformat) << "OBJSerializer fail: " << e.what();
}
// At this point, the hfmModel joint, mesh, parts and shapes have been defined,
// but no material has been assigned yet
QString queryPart = _url.query();
bool suppressMaterialsHack = queryPart.contains("hifiusemat"); // If this appears in query string, don't fetch mtl even if used.
OBJMaterial& preDefinedMaterial = materials[SMART_DEFAULT_MATERIAL_NAME];
@ -875,23 +886,17 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
}
}
// As we populate the material list in the hfmModel, let's also create the reverse map (from materialName to index)
QMap<QString, uint32_t> materialNameToIndex;
foreach (QString materialID, materials.keys()) {
OBJMaterial& objMaterial = materials[materialID];
if (!objMaterial.used) {
continue;
}
// capture the name to index map
materialNameToIndex[materialID] = (uint32_t) hfmModel.materials.size();
hfmModel.materials.emplace_back(objMaterial.diffuseColor,
objMaterial.specularColor,
objMaterial.emissiveColor,
objMaterial.shininess,
objMaterial.opacity);
HFMMaterial& hfmMaterial = hfmModel.materials.back();
HFMMaterial& hfmMaterial = hfmModel.materials[materialID] = HFMMaterial(objMaterial.diffuseColor,
objMaterial.specularColor,
objMaterial.emissiveColor,
objMaterial.shininess,
objMaterial.opacity);
hfmMaterial.name = materialID;
hfmMaterial.materialID = materialID;
@ -991,16 +996,77 @@ HFMModel::Pointer OBJSerializer::read(const hifi::ByteArray& data, const hifi::V
modelMaterial->setOpacity(hfmMaterial.opacity);
}
// Go over the shapes once more to assign the material index correctly
for (uint32_t i = 0; i < (uint32_t)hfmModel.shapes.size(); ++i) {
const auto& materialName = materialNamePerShape[i];
if (!materialName.isEmpty()) {
auto foundMaterialIndex = materialNameToIndex.find(materialName);
if (foundMaterialIndex != materialNameToIndex.end()) {
hfmModel.shapes[i].material = foundMaterialIndex.value();
return hfmModelPtr;
}
void hfmDebugDump(const HFMModel& hfmModel) {
qCDebug(modelformat) << "---------------- hfmModel ----------------";
qCDebug(modelformat) << " hasSkeletonJoints =" << hfmModel.hasSkeletonJoints;
qCDebug(modelformat) << " offset =" << hfmModel.offset;
qCDebug(modelformat) << " meshes.count() =" << hfmModel.meshes.count();
foreach (HFMMesh mesh, hfmModel.meshes) {
qCDebug(modelformat) << " vertices.count() =" << mesh.vertices.count();
qCDebug(modelformat) << " colors.count() =" << mesh.colors.count();
qCDebug(modelformat) << " normals.count() =" << mesh.normals.count();
/*if (mesh.normals.count() == mesh.vertices.count()) {
for (int i = 0; i < mesh.normals.count(); i++) {
qCDebug(modelformat) << " " << mesh.vertices[ i ] << mesh.normals[ i ];
}
}*/
qCDebug(modelformat) << " tangents.count() =" << mesh.tangents.count();
qCDebug(modelformat) << " colors.count() =" << mesh.colors.count();
qCDebug(modelformat) << " texCoords.count() =" << mesh.texCoords.count();
qCDebug(modelformat) << " texCoords1.count() =" << mesh.texCoords1.count();
qCDebug(modelformat) << " clusterIndices.count() =" << mesh.clusterIndices.count();
qCDebug(modelformat) << " clusterWeights.count() =" << mesh.clusterWeights.count();
qCDebug(modelformat) << " meshExtents =" << mesh.meshExtents;
qCDebug(modelformat) << " modelTransform =" << mesh.modelTransform;
qCDebug(modelformat) << " parts.count() =" << mesh.parts.count();
foreach (HFMMeshPart meshPart, mesh.parts) {
qCDebug(modelformat) << " quadIndices.count() =" << meshPart.quadIndices.count();
qCDebug(modelformat) << " triangleIndices.count() =" << meshPart.triangleIndices.count();
/*
qCDebug(modelformat) << " diffuseColor =" << meshPart.diffuseColor << "mat =" << meshPart._material->getDiffuse();
qCDebug(modelformat) << " specularColor =" << meshPart.specularColor << "mat =" << meshPart._material->getMetallic();
qCDebug(modelformat) << " emissiveColor =" << meshPart.emissiveColor << "mat =" << meshPart._material->getEmissive();
qCDebug(modelformat) << " emissiveParams =" << meshPart.emissiveParams;
qCDebug(modelformat) << " gloss =" << meshPart.shininess << "mat =" << meshPart._material->getRoughness();
qCDebug(modelformat) << " opacity =" << meshPart.opacity << "mat =" << meshPart._material->getOpacity();
*/
qCDebug(modelformat) << " materialID =" << meshPart.materialID;
/* qCDebug(modelformat) << " diffuse texture =" << meshPart.diffuseTexture.filename;
qCDebug(modelformat) << " specular texture =" << meshPart.specularTexture.filename;
*/
}
qCDebug(modelformat) << " clusters.count() =" << mesh.clusters.count();
foreach (HFMCluster cluster, mesh.clusters) {
qCDebug(modelformat) << " jointIndex =" << cluster.jointIndex;
qCDebug(modelformat) << " inverseBindMatrix =" << cluster.inverseBindMatrix;
}
}
return hfmModelPtr;
qCDebug(modelformat) << " jointIndices =" << hfmModel.jointIndices;
qCDebug(modelformat) << " joints.count() =" << hfmModel.joints.count();
foreach (HFMJoint joint, hfmModel.joints) {
qCDebug(modelformat) << " parentIndex" << joint.parentIndex;
qCDebug(modelformat) << " distanceToParent" << joint.distanceToParent;
qCDebug(modelformat) << " translation" << joint.translation;
qCDebug(modelformat) << " preTransform" << joint.preTransform;
qCDebug(modelformat) << " preRotation" << joint.preRotation;
qCDebug(modelformat) << " rotation" << joint.rotation;
qCDebug(modelformat) << " postRotation" << joint.postRotation;
qCDebug(modelformat) << " postTransform" << joint.postTransform;
qCDebug(modelformat) << " transform" << joint.transform;
qCDebug(modelformat) << " rotationMin" << joint.rotationMin;
qCDebug(modelformat) << " rotationMax" << joint.rotationMax;
qCDebug(modelformat) << " inverseDefaultRotation" << joint.inverseDefaultRotation;
qCDebug(modelformat) << " inverseBindRotation" << joint.inverseBindRotation;
qCDebug(modelformat) << " bindTransform" << joint.bindTransform;
qCDebug(modelformat) << " name" << joint.name;
qCDebug(modelformat) << " isSkeletonJoint" << joint.isSkeletonJoint;
}
qCDebug(modelformat) << "\n";
}
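For context on the per-material split earlier in this OBJSerializer diff: materialMeshIdMap assigns each face's material name a mesh-part index the first time the name appears, so every distinct material ends up with its own part. A minimal, self-contained sketch of that grouping idea follows; OBJFaceStub, PartStub and groupFacesByMaterial are hypothetical stand-ins, not the serializer's own types.

    // Sketch: group faces into parts keyed by material name (one part per distinct material).
    #include <QMap>
    #include <QString>
    #include <QVector>

    struct OBJFaceStub { QString materialName; };
    struct PartStub { QString materialID; QVector<int> faceIndices; };

    QVector<PartStub> groupFacesByMaterial(const QVector<OBJFaceStub>& faces) {
        QMap<QString, int> materialMeshIdMap;            // material name -> part index
        QVector<PartStub> parts;
        for (int i = 0; i < faces.size(); ++i) {
            const QString& name = faces[i].materialName;
            if (!materialMeshIdMap.contains(name)) {     // first time this material is seen
                materialMeshIdMap.insert(name, parts.size());
                PartStub part;
                part.materialID = name;
                parts.append(part);
            }
            parts[materialMeshIdMap[name]].faceIndices.append(i);
        }
        return parts;
    }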

View file

@ -120,5 +120,6 @@ private:
// What are these utilities doing here? One is used by fbx loading code in VHACD Utils, and the other is a general debugging utility.
void setMeshPartDefaults(HFMMeshPart& meshPart, QString materialID);
void hfmDebugDump(const HFMModel& hfmModel);
#endif // hifi_OBJSerializer_h

View file

@ -76,7 +76,7 @@ QStringList HFMModel::getJointNames() const {
}
bool HFMModel::hasBlendedMeshes() const {
if (!meshes.empty()) {
if (!meshes.isEmpty()) {
foreach (const HFMMesh& mesh, meshes) {
if (!mesh.blendshapes.isEmpty()) {
return true;
@ -166,16 +166,16 @@ void HFMModel::computeKdops() {
glm::vec3(INV_SQRT_3, INV_SQRT_3, -INV_SQRT_3),
glm::vec3(INV_SQRT_3, -INV_SQRT_3, -INV_SQRT_3)
};
if (joints.size() != shapeVertices.size()) {
if (joints.size() != (int)shapeVertices.size()) {
return;
}
// now that all joints have been scanned compute a k-Dop bounding volume of mesh
for (size_t i = 0; i < joints.size(); ++i) {
for (int i = 0; i < joints.size(); ++i) {
HFMJoint& joint = joints[i];
// NOTE: points are in joint-frame
ShapeVertices& points = shapeVertices.at(i);
glm::quat rotOffset = jointRotationOffsets.contains((int)i) ? glm::inverse(jointRotationOffsets[(int)i]) : quat();
glm::quat rotOffset = jointRotationOffsets.contains(i) ? glm::inverse(jointRotationOffsets[i]) : quat();
if (points.size() > 0) {
// compute average point
glm::vec3 avgPoint = glm::vec3(0.0f);
@ -208,164 +208,3 @@ void HFMModel::computeKdops() {
}
}
}
void hfm::Model::debugDump() const {
qCDebug(modelformat) << "---------------- hfmModel ----------------";
qCDebug(modelformat) << " hasSkeletonJoints =" << hasSkeletonJoints;
qCDebug(modelformat) << " offset =" << offset;
qCDebug(modelformat) << " neckPivot = " << neckPivot;
qCDebug(modelformat) << " bindExtents.size() = " << bindExtents.size();
qCDebug(modelformat) << " meshExtents.size() = " << meshExtents.size();
qCDebug(modelformat) << "---------------- Shapes ----------------";
qCDebug(modelformat) << " shapes.size() =" << shapes.size();
for (const hfm::Shape& shape : shapes) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " mesh =" << shape.mesh;
qCDebug(modelformat) << " meshPart =" << shape.meshPart;
qCDebug(modelformat) << " material =" << shape.material;
qCDebug(modelformat) << " joint =" << shape.joint;
qCDebug(modelformat) << " transformedExtents =" << shape.transformedExtents;
qCDebug(modelformat) << " skinDeformer =" << shape.skinDeformer;
}
qCDebug(modelformat) << " jointIndices.size() =" << jointIndices.size();
qCDebug(modelformat) << " joints.size() =" << joints.size();
qCDebug(modelformat) << "---------------- Meshes ----------------";
qCDebug(modelformat) << " meshes.size() =" << meshes.size();
qCDebug(modelformat) << " blendshapeChannelNames = " << blendshapeChannelNames;
for (const HFMMesh& mesh : meshes) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " meshpointer =" << mesh._mesh.get();
qCDebug(modelformat) << " meshindex =" << mesh.meshIndex;
qCDebug(modelformat) << " vertices.size() =" << mesh.vertices.size();
qCDebug(modelformat) << " colors.size() =" << mesh.colors.size();
qCDebug(modelformat) << " normals.size() =" << mesh.normals.size();
qCDebug(modelformat) << " tangents.size() =" << mesh.tangents.size();
qCDebug(modelformat) << " colors.size() =" << mesh.colors.size();
qCDebug(modelformat) << " texCoords.size() =" << mesh.texCoords.size();
qCDebug(modelformat) << " texCoords1.size() =" << mesh.texCoords1.size();
qCDebug(modelformat) << " clusterIndices.size() =" << mesh.clusterIndices.size();
qCDebug(modelformat) << " clusterWeights.size() =" << mesh.clusterWeights.size();
qCDebug(modelformat) << " modelTransform =" << mesh.modelTransform;
qCDebug(modelformat) << " parts.size() =" << mesh.parts.size();
qCDebug(modelformat) << "---------------- Meshes (blendshapes)--------";
for (HFMBlendshape bshape : mesh.blendshapes) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " bshape.indices.size() =" << bshape.indices.size();
qCDebug(modelformat) << " bshape.vertices.size() =" << bshape.vertices.size();
qCDebug(modelformat) << " bshape.normals.size() =" << bshape.normals.size();
qCDebug(modelformat) << "\n";
}
qCDebug(modelformat) << "---------------- Meshes (meshparts)--------";
for (HFMMeshPart meshPart : mesh.parts) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " quadIndices.size() =" << meshPart.quadIndices.size();
qCDebug(modelformat) << " triangleIndices.size() =" << meshPart.triangleIndices.size();
qCDebug(modelformat) << "\n";
}
}
qCDebug(modelformat) << "---------------- AnimationFrames ----------------";
for (HFMAnimationFrame anim : animationFrames) {
qCDebug(modelformat) << " anim.translations = " << anim.translations;
qCDebug(modelformat) << " anim.rotations = " << anim.rotations;
}
QList<int> mitomona_keys = meshIndicesToModelNames.keys();
for (int key : mitomona_keys) {
qCDebug(modelformat) << " meshIndicesToModelNames key =" << key
<< " val =" << meshIndicesToModelNames[key];
}
qCDebug(modelformat) << "---------------- Materials ----------------";
for (HFMMaterial mat : materials) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " mat.materialID =" << mat.materialID;
qCDebug(modelformat) << " diffuseColor =" << mat.diffuseColor;
qCDebug(modelformat) << " diffuseFactor =" << mat.diffuseFactor;
qCDebug(modelformat) << " specularColor =" << mat.specularColor;
qCDebug(modelformat) << " specularFactor =" << mat.specularFactor;
qCDebug(modelformat) << " emissiveColor =" << mat.emissiveColor;
qCDebug(modelformat) << " emissiveFactor =" << mat.emissiveFactor;
qCDebug(modelformat) << " shininess =" << mat.shininess;
qCDebug(modelformat) << " opacity =" << mat.opacity;
qCDebug(modelformat) << " metallic =" << mat.metallic;
qCDebug(modelformat) << " roughness =" << mat.roughness;
qCDebug(modelformat) << " emissiveIntensity =" << mat.emissiveIntensity;
qCDebug(modelformat) << " ambientFactor =" << mat.ambientFactor;
qCDebug(modelformat) << " materialID =" << mat.materialID;
qCDebug(modelformat) << " name =" << mat.name;
qCDebug(modelformat) << " shadingModel =" << mat.shadingModel;
qCDebug(modelformat) << " _material =" << mat._material.get();
qCDebug(modelformat) << " normalTexture =" << mat.normalTexture.filename;
qCDebug(modelformat) << " albedoTexture =" << mat.albedoTexture.filename;
qCDebug(modelformat) << " opacityTexture =" << mat.opacityTexture.filename;
qCDebug(modelformat) << " lightmapParams =" << mat.lightmapParams;
qCDebug(modelformat) << " isPBSMaterial =" << mat.isPBSMaterial;
qCDebug(modelformat) << " useNormalMap =" << mat.useNormalMap;
qCDebug(modelformat) << " useAlbedoMap =" << mat.useAlbedoMap;
qCDebug(modelformat) << " useOpacityMap =" << mat.useOpacityMap;
qCDebug(modelformat) << " useRoughnessMap =" << mat.useRoughnessMap;
qCDebug(modelformat) << " useSpecularMap =" << mat.useSpecularMap;
qCDebug(modelformat) << " useMetallicMap =" << mat.useMetallicMap;
qCDebug(modelformat) << " useEmissiveMap =" << mat.useEmissiveMap;
qCDebug(modelformat) << " useOcclusionMap =" << mat.useOcclusionMap;
qCDebug(modelformat) << "\n";
}
qCDebug(modelformat) << "---------------- Joints ----------------";
for (const HFMJoint& joint : joints) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " shapeInfo.avgPoint =" << joint.shapeInfo.avgPoint;
qCDebug(modelformat) << " shapeInfo.debugLines =" << joint.shapeInfo.debugLines;
qCDebug(modelformat) << " shapeInfo.dots =" << joint.shapeInfo.dots;
qCDebug(modelformat) << " shapeInfo.points =" << joint.shapeInfo.points;
qCDebug(modelformat) << " ---";
qCDebug(modelformat) << " parentIndex" << joint.parentIndex;
qCDebug(modelformat) << " distanceToParent" << joint.distanceToParent;
qCDebug(modelformat) << " localTransform" << joint.localTransform;
qCDebug(modelformat) << " transform" << joint.transform;
qCDebug(modelformat) << " globalTransform" << joint.globalTransform;
qCDebug(modelformat) << " ---";
qCDebug(modelformat) << " translation" << joint.translation;
qCDebug(modelformat) << " preTransform" << joint.preTransform;
qCDebug(modelformat) << " preRotation" << joint.preRotation;
qCDebug(modelformat) << " rotation" << joint.rotation;
qCDebug(modelformat) << " postRotation" << joint.postRotation;
qCDebug(modelformat) << " postTransform" << joint.postTransform;
qCDebug(modelformat) << " rotationMin" << joint.rotationMin;
qCDebug(modelformat) << " rotationMax" << joint.rotationMax;
qCDebug(modelformat) << " inverseDefaultRotation" << joint.inverseDefaultRotation;
qCDebug(modelformat) << " inverseBindRotation" << joint.inverseBindRotation;
qCDebug(modelformat) << " bindTransformFoundInCluster" << joint.bindTransformFoundInCluster;
qCDebug(modelformat) << " bindTransform" << joint.bindTransform;
qCDebug(modelformat) << " name" << joint.name;
qCDebug(modelformat) << " isSkeletonJoint" << joint.isSkeletonJoint;
qCDebug(modelformat) << " geometricOffset" << joint.geometricOffset;
qCDebug(modelformat) << "\n";
}
qCDebug(modelformat) << "------------- SkinDeformers ------------";
qCDebug(modelformat) << " skinDeformers.size() =" << skinDeformers.size();
for(const hfm::SkinDeformer& skinDeformer : skinDeformers) {
qCDebug(modelformat) << "------- SkinDeformers (Clusters) -------";
for (const hfm::Cluster& cluster : skinDeformer.clusters) {
qCDebug(modelformat) << "\n";
qCDebug(modelformat) << " jointIndex =" << cluster.jointIndex;
qCDebug(modelformat) << " inverseBindMatrix =" << cluster.inverseBindMatrix;
qCDebug(modelformat) << "\n";
}
}
qCDebug(modelformat) << "\n";
}
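For context on the computeKdops() hunk earlier in this file: the routine walks each joint's collected points, and a k-DOP is simply the set of min/max extents of those points along a fixed set of directions (the axes plus the INV_SQRT_3 diagonals shown above). A reduced, self-contained sketch of just that projection step; the struct and function names and the caller-supplied direction set are illustrative, not the engine's.

    // Sketch: min/max projections of a point cloud onto a fixed set of k-DOP directions.
    #include <algorithm>
    #include <limits>
    #include <vector>
    #include <glm/glm.hpp>

    struct DopExtents {
        std::vector<float> minDots;
        std::vector<float> maxDots;
    };

    DopExtents computeDopExtents(const std::vector<glm::vec3>& points,
                                 const std::vector<glm::vec3>& directions) {
        DopExtents out;
        out.minDots.assign(directions.size(), std::numeric_limits<float>::max());
        out.maxDots.assign(directions.size(), std::numeric_limits<float>::lowest());
        for (const glm::vec3& p : points) {
            for (size_t d = 0; d < directions.size(); ++d) {
                float dot = glm::dot(p, directions[d]);      // signed distance along this direction
                out.minDots[d] = std::min(out.minDots[d], dot);
                out.maxDots[d] = std::max(out.maxDots[d], dot);
            }
        }
        return out;
    }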

View file

@ -66,8 +66,6 @@ static const int DRACO_ATTRIBUTE_ORIGINAL_INDEX = DRACO_BEGIN_CUSTOM_HIFI_ATTRIB
// High Fidelity Model namespace
namespace hfm {
static const uint32_t UNDEFINED_KEY = (uint32_t)-1;
/// A single blendshape.
class Blendshape {
public:
@ -113,22 +111,19 @@ public:
bool isSkeletonJoint;
bool bindTransformFoundInCluster;
// geometric offset is applied in local space but does NOT affect children.
// TODO: Apply hfm::Joint.geometricOffset to transforms in the model preparation step
glm::mat4 geometricOffset;
// globalTransform is the transform of the joint with all parent transforms applied, plus the geometric offset
glm::mat4 localTransform;
glm::mat4 globalTransform;
bool hasGeometricOffset;
glm::vec3 geometricTranslation;
glm::quat geometricRotation;
glm::vec3 geometricScaling;
};
/// A single binding to a joint.
class Cluster {
public:
static const uint32_t INVALID_JOINT_INDEX { (uint32_t)-1 };
uint32_t jointIndex { INVALID_JOINT_INDEX };
int jointIndex;
glm::mat4 inverseBindMatrix;
Transform inverseBindTransform;
};
@ -160,6 +155,8 @@ public:
QVector<int> quadIndices; // original indices from the FBX mesh
QVector<int> quadTrianglesIndices; // original indices from the FBX mesh of the quad converted as triangles
QVector<int> triangleIndices; // original indices from the FBX mesh
QString materialID;
};
class Material {
@ -230,20 +227,11 @@ public:
bool needTangentSpace() const;
};
/// Simple Triangle List Mesh
struct TriangleListMesh {
std::vector<glm::vec3> vertices;
std::vector<uint32_t> indices;
std::vector<glm::ivec2> parts; // Offset in the indices, Number of indices
std::vector<Extents> partExtents; // Extents of each part with no transform applied. Same length as parts.
};
/// A single mesh (with optional blendshapes).
class Mesh {
public:
std::vector<MeshPart> parts;
QVector<MeshPart> parts;
QVector<glm::vec3> vertices;
QVector<glm::vec3> normals;
@ -251,27 +239,21 @@ public:
QVector<glm::vec3> colors;
QVector<glm::vec2> texCoords;
QVector<glm::vec2> texCoords1;
QVector<uint16_t> clusterIndices;
QVector<uint16_t> clusterWeights;
QVector<int32_t> originalIndices;
Extents meshExtents; // DEPRECATED (see hfm::Shape::transformedExtents)
glm::mat4 modelTransform; // DEPRECATED (see hfm::Joint::globalTransform, hfm::Shape::transform, hfm::Model::joints)
QVector<Cluster> clusters;
// Skinning cluster attributes
std::vector<uint16_t> clusterIndices;
std::vector<uint16_t> clusterWeights;
uint16_t clusterWeightsPerVertex { 0 };
Extents meshExtents;
glm::mat4 modelTransform;
// Blendshape attributes
QVector<Blendshape> blendshapes;
// Simple Triangle List Mesh generated during baking
hfm::TriangleListMesh triangleListMesh;
QVector<int32_t> originalIndices; // Original indices of the vertices
unsigned int meshIndex; // the order the meshes appeared in the object file
graphics::MeshPointer _mesh;
bool wasCompressed { false };
};
/// A single animation frame.
@ -308,30 +290,6 @@ public:
bool shouldInitCollisions() const { return _collisionsConfig.size() > 0; }
};
// A different skinning representation, used by FBXSerializer. We convert this to our graphics-optimized runtime representation contained within the mesh.
class SkinCluster {
public:
std::vector<uint32_t> indices;
std::vector<float> weights;
};
class SkinDeformer {
public:
std::vector<Cluster> clusters;
};
// The lightweight model part description.
class Shape {
public:
uint32_t mesh { UNDEFINED_KEY };
uint32_t meshPart { UNDEFINED_KEY };
uint32_t material { UNDEFINED_KEY };
uint32_t joint { UNDEFINED_KEY }; // The hfm::Joint associated with this shape, containing transform information
// TODO: Have all serializers calculate hfm::Shape::transformedExtents in world space where they previously calculated hfm::Mesh::meshExtents. Change all code that uses hfm::Mesh::meshExtents to use this instead.
Extents transformedExtents; // The precise extents of the meshPart vertices in world space, after transform information is applied, while not taking into account rigging/skinning
uint32_t skinDeformer { UNDEFINED_KEY };
};
/// The runtime model format.
class Model {
public:
@ -342,18 +300,15 @@ public:
QString author;
QString applicationName; ///< the name of the application that generated the model
std::vector<Shape> shapes;
std::vector<Mesh> meshes;
std::vector<Material> materials;
std::vector<SkinDeformer> skinDeformers;
std::vector<Joint> joints;
QVector<Joint> joints;
QHash<QString, int> jointIndices; ///< 1-based, so as to more easily detect missing indices
bool hasSkeletonJoints;
QVector<Mesh> meshes;
QVector<QString> scripts;
QHash<QString, Material> materials;
glm::mat4 offset; // This includes offset, rotation, and scale as specified by the FST file
glm::vec3 neckPivot;
@ -385,12 +340,19 @@ public:
QMap<int, glm::quat> jointRotationOffsets;
std::vector<ShapeVertices> shapeVertices;
FlowData flowData;
void debugDump() const;
};
};
class ExtractedMesh {
public:
hfm::Mesh mesh;
QMultiHash<int, int> newIndices;
QVector<QHash<int, int> > blendshapeIndexMaps;
QVector<QPair<int, int> > partMaterialTextures;
QHash<QString, size_t> texcoordSetMap;
};
typedef hfm::Blendshape HFMBlendshape;
typedef hfm::JointShapeInfo HFMJointShapeInfo;
typedef hfm::Joint HFMJoint;
@ -399,10 +361,8 @@ typedef hfm::Texture HFMTexture;
typedef hfm::MeshPart HFMMeshPart;
typedef hfm::Material HFMMaterial;
typedef hfm::Mesh HFMMesh;
typedef hfm::SkinDeformer HFMSkinDeformer;
typedef hfm::AnimationFrame HFMAnimationFrame;
typedef hfm::Light HFMLight;
typedef hfm::Shape HFMShape;
typedef hfm::Model HFMModel;
typedef hfm::FlowData FlowData;
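On the skinning attributes above: the packed layout (made explicit by the clusterWeightsPerVertex field on the removed side) stores a fixed number of influences per vertex in the flat clusterIndices/clusterWeights arrays, so slot k of vertex v lives at v * weightsPerVertex + k, with weights quantized to uint16_t. A small illustrative sketch of reading one vertex's influences back out of that packing; the Influence struct and helper are hypothetical, not engine API.

    // Sketch: unpack the flat per-vertex skinning influences for a single vertex.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Influence { uint16_t clusterSlot; float weight; };

    std::vector<Influence> influencesForVertex(const std::vector<uint16_t>& clusterIndices,
                                               const std::vector<uint16_t>& clusterWeights,
                                               uint16_t weightsPerVertex,
                                               std::size_t vertexIndex) {
        std::vector<Influence> result;
        std::size_t base = vertexIndex * weightsPerVertex;
        for (uint16_t k = 0; k < weightsPerVertex; ++k) {
            uint16_t w = clusterWeights[base + k];
            if (w == 0) { continue; }                        // unused slot
            // quantized weights: a vertex's used slots sum to roughly UINT16_MAX
            result.push_back({ clusterIndices[base + k], w / float(UINT16_MAX) });
        }
        return result;
    }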

View file

@ -1,212 +0,0 @@
//
// HFMModelMath.cpp
// model-baker/src/model-baker
//
// Created by Sabrina Shanman on 2019/10/04.
// Copyright 2019 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#include "HFMModelMath.h"
#include <LogHandler.h>
#include <unordered_map>
#include <GLMHelpers.h>
#include <glm/gtx/hash.hpp>
namespace hfm {
void forEachIndex(const hfm::MeshPart& meshPart, std::function<void(uint32_t)> func) {
for (int i = 0; i <= meshPart.quadIndices.size() - 4; i += 4) {
func((uint32_t)meshPart.quadIndices[i]);
func((uint32_t)meshPart.quadIndices[i+1]);
func((uint32_t)meshPart.quadIndices[i+2]);
func((uint32_t)meshPart.quadIndices[i+3]);
}
for (int i = 0; i <= meshPart.triangleIndices.size() - 3; i += 3) {
func((uint32_t)meshPart.triangleIndices[i]);
func((uint32_t)meshPart.triangleIndices[i+1]);
func((uint32_t)meshPart.triangleIndices[i+2]);
}
}
void thickenFlatExtents(Extents& extents) {
// Add epsilon to extents to compensate for flat plane
extents.minimum -= glm::vec3(EPSILON, EPSILON, EPSILON);
extents.maximum += glm::vec3(EPSILON, EPSILON, EPSILON);
}
void calculateExtentsForTriangleListMesh(TriangleListMesh& triangleListMesh) {
triangleListMesh.partExtents.resize(triangleListMesh.parts.size());
for (size_t partIndex = 0; partIndex < triangleListMesh.parts.size(); ++partIndex) {
const auto& part = triangleListMesh.parts[partIndex];
auto& extents = triangleListMesh.partExtents[partIndex];
int partEnd = part.x + part.y;
for (int i = part.x; i < partEnd; ++i) {
auto index = triangleListMesh.indices[i];
const auto& position = triangleListMesh.vertices[index];
extents.addPoint(position);
}
}
}
void calculateExtentsForShape(hfm::Shape& shape, const std::vector<hfm::TriangleListMesh>& triangleListMeshes, const std::vector<hfm::Joint>& joints) {
auto& shapeExtents = shape.transformedExtents;
shapeExtents.reset();
const auto& triangleListMesh = triangleListMeshes[shape.mesh];
const auto& partExtent = triangleListMesh.partExtents[shape.meshPart];
const glm::mat4& transform = joints[shape.joint].transform;
shapeExtents = partExtent;
shapeExtents.transform(transform);
thickenFlatExtents(shapeExtents);
}
void calculateExtentsForModel(Extents& modelExtents, const std::vector<hfm::Shape>& shapes) {
modelExtents.reset();
for (size_t i = 0; i < shapes.size(); ++i) {
const auto& shape = shapes[i];
const auto& shapeExtents = shape.transformedExtents;
modelExtents.addExtents(shapeExtents);
}
}
ReweightedDeformers getReweightedDeformers(const size_t numMeshVertices, const std::vector<hfm::SkinCluster> skinClusters, const uint16_t weightsPerVertex) {
ReweightedDeformers reweightedDeformers;
if (skinClusters.size() == 0) {
return reweightedDeformers;
}
size_t numClusterIndices = numMeshVertices * weightsPerVertex;
reweightedDeformers.indices.resize(numClusterIndices, (uint16_t)(skinClusters.size() - 1));
reweightedDeformers.weights.resize(numClusterIndices, 0);
reweightedDeformers.weightsPerVertex = weightsPerVertex;
std::vector<float> weightAccumulators;
weightAccumulators.resize(numClusterIndices, 0.0f);
for (uint16_t i = 0; i < (uint16_t)skinClusters.size(); ++i) {
const hfm::SkinCluster& skinCluster = skinClusters[i];
if (skinCluster.indices.size() != skinCluster.weights.size()) {
reweightedDeformers.trimmedToMatch = true;
}
size_t numIndicesOrWeights = std::min(skinCluster.indices.size(), skinCluster.weights.size());
for (size_t j = 0; j < numIndicesOrWeights; ++j) {
uint32_t index = skinCluster.indices[j];
float weight = skinCluster.weights[j];
// look for an unused slot in the weights vector
uint32_t weightIndex = index * weightsPerVertex;
uint32_t lowestIndex = -1;
float lowestWeight = FLT_MAX;
uint16_t k = 0;
for (; k < weightsPerVertex; k++) {
if (weightAccumulators[weightIndex + k] == 0.0f) {
reweightedDeformers.indices[weightIndex + k] = i;
weightAccumulators[weightIndex + k] = weight;
break;
}
if (weightAccumulators[weightIndex + k] < lowestWeight) {
lowestIndex = k;
lowestWeight = weightAccumulators[weightIndex + k];
}
}
if (k == weightsPerVertex && weight > lowestWeight) {
// no space for an additional weight; we must replace the lowest
weightAccumulators[weightIndex + lowestIndex] = weight;
reweightedDeformers.indices[weightIndex + lowestIndex] = i;
}
}
}
// now that we've accumulated the most relevant weights for each vertex
// normalize and compress to 16-bits
for (size_t i = 0; i < numMeshVertices; ++i) {
size_t j = i * weightsPerVertex;
// normalize weights into uint16_t
float totalWeight = 0.0f;
for (size_t k = j; k < j + weightsPerVertex; ++k) {
totalWeight += weightAccumulators[k];
}
const float ALMOST_HALF = 0.499f;
if (totalWeight > 0.0f) {
float weightScalingFactor = (float)(UINT16_MAX) / totalWeight;
for (size_t k = j; k < j + weightsPerVertex; ++k) {
reweightedDeformers.weights[k] = (uint16_t)(weightScalingFactor * weightAccumulators[k] + ALMOST_HALF);
}
} else {
reweightedDeformers.weights[j] = (uint16_t)((float)(UINT16_MAX) + ALMOST_HALF);
}
}
return reweightedDeformers;
}
const TriangleListMesh generateTriangleListMesh(const std::vector<glm::vec3>& srcVertices, const std::vector<HFMMeshPart>& srcParts) {
TriangleListMesh dest;
// copy vertices for now
dest.vertices = srcVertices;
std::vector<uint32_t> oldToNewIndex(srcVertices.size());
{
std::unordered_map<glm::vec3, uint32_t> uniqueVertexToNewIndex;
int oldIndex = 0;
int newIndex = 0;
for (const auto& srcVertex : srcVertices) {
auto foundIndex = uniqueVertexToNewIndex.find(srcVertex);
if (foundIndex != uniqueVertexToNewIndex.end()) {
oldToNewIndex[oldIndex] = foundIndex->second;
} else {
uniqueVertexToNewIndex[srcVertex] = newIndex;
oldToNewIndex[oldIndex] = newIndex;
dest.vertices[newIndex] = srcVertex;
++newIndex;
}
++oldIndex;
}
if (uniqueVertexToNewIndex.size() < srcVertices.size()) {
dest.vertices.resize(uniqueVertexToNewIndex.size());
dest.vertices.shrink_to_fit();
}
}
auto newIndicesCount = 0;
for (const auto& part : srcParts) {
newIndicesCount += part.triangleIndices.size() + part.quadTrianglesIndices.size();
}
{
dest.indices.resize(newIndicesCount);
int i = 0;
for (const auto& part : srcParts) {
glm::ivec2 spart(i, 0);
for (const auto& qti : part.quadTrianglesIndices) {
dest.indices[i] = oldToNewIndex[qti];
++i;
}
for (const auto& ti : part.triangleIndices) {
dest.indices[i] = oldToNewIndex[ti];
++i;
}
spart.y = i - spart.x;
dest.parts.push_back(spart);
}
}
calculateExtentsForTriangleListMesh(dest);
return dest;
}
};
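The generateTriangleListMesh() being removed above welds duplicate positions by hashing glm::vec3 keys (hence the glm/gtx/hash.hpp include) into an old-to-new index map, then remaps every part's indices through that map. A stripped-down sketch of just the welding step, assuming the same GLM hash extension is available; the function name is illustrative.

    // Sketch: weld identical positions and build the old-index -> new-index remapping table.
    #define GLM_ENABLE_EXPERIMENTAL
    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>
    #include <glm/glm.hpp>
    #include <glm/gtx/hash.hpp>           // provides std::hash<glm::vec3>

    void weldVertices(const std::vector<glm::vec3>& srcVertices,
                      std::vector<glm::vec3>& outVertices,
                      std::vector<uint32_t>& oldToNewIndex) {
        std::unordered_map<glm::vec3, uint32_t> uniqueToNew;
        outVertices.clear();
        oldToNewIndex.resize(srcVertices.size());
        for (std::size_t oldIndex = 0; oldIndex < srcVertices.size(); ++oldIndex) {
            const glm::vec3& v = srcVertices[oldIndex];
            auto found = uniqueToNew.find(v);
            if (found != uniqueToNew.end()) {
                oldToNewIndex[oldIndex] = found->second;     // duplicate position: reuse its slot
            } else {
                uint32_t newIndex = (uint32_t)outVertices.size();
                uniqueToNew[v] = newIndex;
                oldToNewIndex[oldIndex] = newIndex;
                outVertices.push_back(v);
            }
        }
    }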

View file

@ -1,45 +0,0 @@
//
// HFMModelMath.h
// model-baker/src/model-baker
//
// Created by Sabrina Shanman on 2019/10/04.
// Copyright 2019 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#ifndef hifi_hfm_ModelMath_h
#define hifi_hfm_ModelMath_h
#include "HFM.h"
namespace hfm {
void forEachIndex(const hfm::MeshPart& meshPart, std::function<void(uint32_t)> func);
void initializeExtents(Extents& extents);
void calculateExtentsForTriangleListMesh(TriangleListMesh& triangleListMesh);
// This can't be moved to model-baker until
void calculateExtentsForShape(hfm::Shape& shape, const std::vector<hfm::TriangleListMesh>& triangleListMeshes, const std::vector<hfm::Joint>& joints);
void calculateExtentsForModel(Extents& modelExtents, const std::vector<hfm::Shape>& shapes);
struct ReweightedDeformers {
std::vector<uint16_t> indices;
std::vector<uint16_t> weights;
uint16_t weightsPerVertex { 0 };
bool trimmedToMatch { false };
};
const uint16_t DEFAULT_SKINNING_WEIGHTS_PER_VERTEX = 4;
ReweightedDeformers getReweightedDeformers(const size_t numMeshVertices, const std::vector<hfm::SkinCluster> skinClusters, const uint16_t weightsPerVertex = DEFAULT_SKINNING_WEIGHTS_PER_VERTEX);
const TriangleListMesh generateTriangleListMesh(const std::vector<glm::vec3>& srcVertices, const std::vector<HFMMeshPart>& srcParts);
};
#endif // #define hifi_hfm_ModelMath_h
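Since this header is deleted by the merge, a brief note on how its main entry point was used beforehand may help: it took per-cluster index/weight lists and produced the flat ReweightedDeformers packing declared above. A hedged pre-merge usage sketch; the include path and sample values are illustrative only.

    // Sketch: packing two skin clusters into the flat per-vertex layout (pre-merge API).
    #include <cstddef>
    #include "HFMModelMath.h"    // declared hfm::SkinCluster and hfm::getReweightedDeformers

    hfm::ReweightedDeformers packClustersExample() {
        hfm::SkinCluster head;
        head.indices = { 0, 1, 2 };              // vertices this cluster influences
        head.weights = { 1.0f, 0.5f, 0.25f };

        hfm::SkinCluster neck;
        neck.indices = { 1, 2 };
        neck.weights = { 0.5f, 0.75f };

        const std::size_t numMeshVertices = 3;
        // Returned packing has DEFAULT_SKINNING_WEIGHTS_PER_VERTEX (4) slots per vertex,
        // with each vertex's surviving weights normalized to sum to roughly UINT16_MAX.
        return hfm::getReweightedDeformers(numMeshVertices, { head, neck });
    }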

View file

@ -1,5 +1,5 @@
//
// HFMSerializer.h
// FBXSerializer.h
// libraries/hfm/src/hfm
//
// Created by Sabrina Shanman on 2018/11/07.

View file

@ -33,7 +33,7 @@ namespace TextureUsage {
/**jsdoc
* <p>Describes the type of texture.</p>
* <p>See also: {@link Material} and
* {@link https://docs.projectathena.dev/create/3d-models/pbr-materials-guide.html|PBR Materials Guide}.</p>
* {@link https://docs.vircadia.dev/create/3d-models/pbr-materials-guide.html|PBR Materials Guide}.</p>
* <table>
* <thead>
* <tr><th>Value</th><th>Name</th><th>Description</th></tr>

View file

@ -11,6 +11,7 @@
#define khronos_khr_hpp
#include <unordered_map>
#include <stdexcept>
namespace khronos {

Some files were not shown because too many files have changed in this diff.