Merge branch 'master' of https://github.com/highfidelity/hifi into animationFrameIndex

ZappoMan · 2014-12-29 14:01:35 -08:00 · commit 49e350fab2
308 changed files with 6656 additions and 8131 deletions

BUILD.md (235 changes)

@ -1,33 +1,29 @@
Dependencies
===
###Dependencies
* [cmake](http://www.cmake.org/cmake/resources/software.html) ~> 2.8.12.2
* [Qt](http://qt-project.org/downloads) ~> 5.2.0
* [glm](http://glm.g-truc.net/0.9.5/index.html) ~> 0.9.5.2
* [Qt](http://qt-project.org/downloads) ~> 5.3.0
* [glm](http://glm.g-truc.net/0.9.5/index.html) ~> 0.9.5.4
* [OpenSSL](https://www.openssl.org/related/binaries.html) ~> 1.0.1g
* IMPORTANT: OpenSSL 1.0.1g is critical to avoid a security vulnerability.
* [Intel Threading Building Blocks](https://www.threadingbuildingblocks.org/) ~> 4.3
#####Linux only
* [freeglut](http://freeglut.sourceforge.net/) ~> 2.8.0
* [zLib](http://www.zlib.net/) ~> 1.2.8
### OS Specific Build Guides
* [BUILD_OSX.md](BUILD_OSX.md) - additional instructions for OS X.
* [BUILD_LINUX.md](BUILD_LINUX.md) - additional instructions for Linux.
* [BUILD_WIN.md](BUILD_WIN.md) - additional instructions for Windows.
#####Windows only
* [GLEW](http://glew.sourceforge.net/) ~> 1.10.0
* [freeglut MSVC](http://www.transmissionzero.co.uk/software/freeglut-devel/) ~> 2.8.1
* [zLib](http://www.zlib.net/) ~> 1.2.8
CMake
===
###CMake
Hifi uses CMake to generate build files and project files for your platform.
####Qt
In order for CMake to find the Qt5 find modules, you will need to set an ENV variable pointing to your Qt installation.
For example, a Qt5 5.2.0 installation to /usr/local/qt5 would require that QT_CMAKE_PREFIX_PATH be set with the following command. This can either be entered directly into your shell session before you build or in your shell profile (e.g.: ~/.bash_profile, ~/.bashrc, ~/.zshrc - this depends on your shell and environment).
For example, a Qt5 5.3.2 installation to /usr/local/qt5 would require that QT_CMAKE_PREFIX_PATH be set with the following command. This can either be entered directly into your shell session before you build or in your shell profile (e.g.: ~/.bash_profile, ~/.bashrc, ~/.zshrc - this depends on your shell and environment).
The path it needs to be set to will depend on where and how Qt5 was installed. For example:
export QT_CMAKE_PREFIX_PATH=/usr/local/qt/5.2.0/clang_64/lib/cmake/
export QT_CMAKE_PREFIX_PATH=/usr/local/Cellar/qt5/5.2.1/lib/cmake
export QT_CMAKE_PREFIX_PATH=/usr/local/qt/5.3.2/clang_64/lib/cmake/
export QT_CMAKE_PREFIX_PATH=/usr/local/Cellar/qt5/5.3.2/lib/cmake
export QT_CMAKE_PREFIX_PATH=/usr/local/opt/qt5/lib/cmake
####Generating build files
@ -42,7 +38,7 @@ Any variables that need to be set for CMake to find dependencies can be set as E
For example, to pass the QT_CMAKE_PREFIX_PATH variable during build file generation:
cmake .. -DQT_CMAKE_PREFIX_PATH=/usr/local/qt/5.2.1/lib/cmake
cmake .. -DQT_CMAKE_PREFIX_PATH=/usr/local/qt/5.3.2/lib/cmake
####Finding Dependencies
You can point our [CMake find modules](cmake/modules/) to the correct version of dependencies by setting one of the three following variables to the location of the correct version of the dependency.
@ -53,206 +49,7 @@ In the examples below the variable $NAME would be replaced by the name of the de
* $NAME_ROOT_DIR - pass this variable to the cmake command when generating build files
* $NAME_ROOT_DIR - set this variable in your ENV
* HIFI_LIB_DIR - set this variable in your ENV to your High Fidelity lib folder, should contain a folder '$name'
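For example, to point our glm find module at a copy of glm in a non-standard location (the paths below are hypothetical; substitute your own), you can either pass the variable when generating build files:
cmake .. -DGLM_ROOT_DIR=$HOME/libs/glm  # hypothetical path
or set it in your environment before running cmake:
export GLM_ROOT_DIR=$HOME/libs/glm  # hypothetical path
cmake ..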
UNIX
===
In general, as long as external dependencies are placed in OS standard locations, CMake will successfully find them during its run. When possible, you may choose to install dependencies from your package manager of choice, or from source.
####Linux
Should you choose not to install Qt5 via a package manager that handles dependencies for you, you may be missing some Qt5 dependencies. On Ubuntu, for example, the following additional packages are required:
libasound2 libxmu-dev libxi-dev freeglut3-dev libasound2-dev libjack-dev
####OS X
#####Package Managers
[Homebrew](http://brew.sh/) is an excellent package manager for OS X. It makes installing all hifi dependencies very simple.
brew tap highfidelity/homebrew-formulas
brew install cmake glm openssl
brew install highfidelity/formulas/qt5
brew link qt5 --force
We have a [homebrew formulas repository](https://github.com/highfidelity/homebrew-formulas) that you can use/tap to install some of the dependencies. In the code block above qt5 is installed from a formula in this repository.
*Our [qt5 homebrew formula](https://raw.github.com/highfidelity/homebrew-formulas/master/qt5.rb) is for a patched version of Qt 5.2.0 stable that removes wireless network scanning that can reduce real-time audio performance. We recommend you use this formula to install Qt.*
#####Xcode
If Xcode is your editor of choice, you can ask CMake to generate Xcode project files instead of Unix Makefiles.
cmake .. -GXcode
After running cmake, you will have the make files or Xcode project file necessary to build all of the components. Open the hifi.xcodeproj file, choose ALL_BUILD from the Product > Scheme menu (or target drop down), and click Run.
If the build completes successfully, you will have built targets for all components located in the `build/${target_name}/Debug` directories.
Windows
===
####Visual Studio
Building on Windows has currently been tested with the following compilers:
* Visual Studio C++ 2010 Express
* Visual Studio 2013
(If anyone can test using Visual Studio 2013 Express then please update this document)
#####Windows SDK 7.1
If using Visual Studio 2010, or using Visual Studio 2013 but building as a Visual Studio 2010 project, you need [Microsoft Windows SDK for Windows 7 and .NET Framework 4](http://www.microsoft.com/en-us/download/details.aspx?id=8279).
NOTE: If using Visual Studio C++ 2010 Express, you need to follow a specific install order. See below before installing the Windows SDK.
######Windows SDK 8.1
If using Visual Studio 2013 and building as a Visual Studio 2013 project, you need the Windows 8.1 SDK, which you should already have as part of installing Visual Studio 2013. You should be able to see it at `C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x86`.
#####Visual Studio C++ 2010 Express
Visual Studio C++ 2010 Express can be downloaded [here](http://www.visualstudio.com/en-us/downloads#d-2010-express).
The following patches/service packs are also required:
* [VS2010 SP1](http://www.microsoft.com/en-us/download/details.aspx?id=23691)
* [VS2010 SP1 Compiler Update](http://www.microsoft.com/en-us/download/details.aspx?id=4422)
IMPORTANT: Use the following install order:
Visual Studio C++ 2010 Express
Windows SDK 7.1
VS2010 SP1
VS2010 SP1 Compiler Update
If you get an error while installing the VS2010 SP1 Compiler update saying that you don't have the Windows SDK installed, then uninstall all of the above and start again in the correct order.
Some of the build instructions will ask you to start a Visual Studio Command Prompt. You should have a shortcut in your Start menu called "Open Visual Studio Command Prompt (2010)" which will do so.
#####Visual Studio 2013
You can use the Community or Professional editions of Visual Studio 2013.
You can start a Visual Studio 2013 command prompt using the shortcut provided in the Visual Studio Tools folder installed as part of Visual Studio 2013.
Or you can start a regular command prompt and then run:
"%VS120COMNTOOLS%\vsvars32.bat"
If you experience issues building interface on Visual Studio 2013, try generating the build files with Visual Studio 2010 instead. To do so, download Visual Studio 2010 and run `cmake .. -G "Visual Studio 10"` (Assuming running from %HIFI_DIR%\build).
####Qt
You can use the online installer or the offline installer. If you use the offline installer, be sure to select the "OpenGL" version.
NOTE: Qt does not support 64-bit builds on Windows 7, so you must use the 32-bit version of libraries for interface.exe to run. The 32-bit version of the static library is the one linked by our CMake find modules.
* Download the online installer [here](http://qt-project.org/downloads)
* When it asks you to select components, ONLY select the following:
* Qt > Qt 5.2.0 > **msvc2010 32-bit OpenGL**
* Download the offline installer [here](http://download.qt-project.org/official_releases/qt/5.2/5.2.0/qt-windows-opensource-5.2.0-msvc2010_opengl-x86-offline.exe)
Once Qt is installed, you need to manually configure the following:
* Make sure the Qt runtime DLLs are loadable. You must do this before you attempt to build because some tools for the build depend on Qt. E.g., add to the PATH: `Qt\5.2.0\msvc2010_opengl\bin\`.
* Set the QT_CMAKE_PREFIX_PATH environment variable to your `Qt\5.2.0\msvc2010_opengl` directory.
If building as a Visual Studio 2013 project, download and configure the msvc2013 version of Qt instead.
####External Libraries
CMake will need to know where the headers and libraries for required external dependencies are.
The recommended route for CMake to find the external dependencies is to place all of the dependencies in one folder and set one ENV variable - HIFI_LIB_DIR. That ENV variable should point to a directory with the following structure:
root_lib_dir
-> freeglut
-> bin
-> include
-> lib
-> glew
-> bin
-> include
-> lib
-> glm
-> glm
-> glm.hpp
-> openssl
-> bin
-> include
-> lib
-> zlib
-> include
-> lib
-> test
For many of the external libraries where precompiled binaries are readily available you should be able to simply copy the extracted folder that you get from the download links provided at the top of the guide. Otherwise you may need to build from source and install the built product to this directory. The `root_lib_dir` in the above example can be wherever you choose on your system - as long as the environment variable HIFI_LIB_DIR is set to it. From here on, whenever you see %HIFI_LIB_DIR% you should substitute the directory that you chose.
As with the Qt libraries, you will need to make sure that directories containing DLLs are in your path. Where possible, you can use static builds of the external dependencies to avoid this requirement.
#### OpenSSL
Qt will use OpenSSL if it's available, but it doesn't install it, so you must install it separately.
Your system may already have several versions of the OpenSSL DLLs (ssleay32.dll, libeay32.dll) lying around, but they may be the wrong version. If these DLLs are in the PATH then Qt will try to use them, and if they're the wrong version then you will see the following errors in the console:
QSslSocket: cannot resolve TLSv1_1_client_method
QSslSocket: cannot resolve TLSv1_2_client_method
QSslSocket: cannot resolve TLSv1_1_server_method
QSslSocket: cannot resolve TLSv1_2_server_method
QSslSocket: cannot resolve SSL_select_next_proto
QSslSocket: cannot resolve SSL_CTX_set_next_proto_select_cb
QSslSocket: cannot resolve SSL_get0_next_proto_negotiated
To prevent these problems, install OpenSSL yourself. Download the following binary packages [from this website](http://slproweb.com/products/Win32OpenSSL.html):
* Visual C++ 2008 Redistributables
* Win32 OpenSSL v1.0.1h
Install OpenSSL into the Windows system directory to make sure that Qt uses the version that you've just installed, and not some other version.
#### Zlib
Download the compiled DLL from the [zlib website](http://www.zlib.net/). Extract to %HIFI_LIB_DIR%\zlib.
Add the following environment variables (remember to substitute your own directory for %HIFI_LIB_DIR%):
ZLIB_LIBRARY=%HIFI_LIB_DIR%\zlib\lib\zdll.lib
ZLIB_INCLUDE_DIR=%HIFI_LIB_DIR%\zlib\include
Add to the PATH: `%HIFI_LIB_DIR%\zlib`
Important! This should be added at the beginning of the path, not the end, because your system likely has many copies of zlib1.dll and you want High Fidelity to use the correct version. If High Fidelity picks up the wrong zlib1.dll it might be unable to use it, causing it to fail to start with only the cryptic error "The application was unable to start correctly: 0xc0000022".
#### freeglut
Download the binary package: `freeglut-MSVC-2.8.1-1.mp.zip`. Extract to %HIFI_LIB_DIR%\freeglut.
Add to the PATH: `%HIFI_LIB_DIR%\freeglut\bin`
#### GLEW
Download the binary package: `glew-1.10.0-win32.zip`. Extract to %HIFI_LIB_DIR%\glew (you'll need to rename the default directory name).
Add to the PATH: `%HIFI_LIB_DIR%\glew\bin\Release\Win32`
#### GLM
This package contains only headers, so there's nothing to add to the PATH.
Be careful with glm: where other libraries use a folder named 'include' for their headers, glm uses a folder named 'glm', so you will have a glm folder nested inside the top-level glm folder.
#### Build High Fidelity using Visual Studio
Follow the same build steps from the CMake section, but pass a different generator to CMake.
cmake .. -DZLIB_LIBRARY=%ZLIB_LIBRARY% -DZLIB_INCLUDE_DIR=%ZLIB_INCLUDE_DIR% -G "Visual Studio 10"
If you're using Visual Studio 2013 then pass "Visual Studio 12" instead of "Visual Studio 10" (yes, 12, not 13).
Open %HIFI_DIR%\build\hifi.sln and compile.
####Running Interface
If you need to debug Interface, you can run interface from within Visual Studio (see the section below). You can also run Interface by launching it from command line or File Explorer from %HIFI_DIR%\build\interface\Debug\interface.exe
####Debugging Interface
* In the Solution Explorer, right click interface and click Set as StartUp Project
* Set the "Working Directory" for the Interface debugging sessions to the Debug output directory so that your application can load resources. Do this: right click interface and click Properties, choose Debugging from Configuration Properties, set Working Directory to .\Debug
* Now you can run and debug interface through Visual Studio
Optional Components
===
###Optional Components
####QXmpp
@ -260,7 +57,7 @@ You can find QXmpp [here](https://github.com/qxmpp-project/qxmpp). The inclusion
OS X users who tap our [homebrew formulas repository](https://github.com/highfidelity/homebrew-formulas) can install QXmpp via homebrew - `brew install highfidelity/formulas/qxmpp`.
#### Devices
####Devices
You can support external input/output devices such as Leap Motion, Faceplus, Faceshift, PrioVR, MIDI, Razer Hydra and more by adding each individual SDK in the visible building path. Refer to the readme file available in each device folder in [interface/external/](interface/external) for the detailed explanation of the requirements to use the device.

BUILD_LINUX.md (new file, 12 lines)

@ -0,0 +1,12 @@
Please read the [general build guide](BUILD.md) for information on dependencies required for all platforms. Only Linux-specific instructions are found in this file.
###Linux Specific Dependencies
* [freeglut](http://freeglut.sourceforge.net/) ~> 2.8.0
* [zLib](http://www.zlib.net/) ~> 1.2.8
In general, as long as external dependencies are placed in OS standard locations, CMake will successfully find them during its run. When possible, you may choose to install dependencies from your package manager of choice, or from source.
###Qt5 Dependencies
Should you choose not to install Qt5 via a package manager that handles dependencies for you, you may be missing some Qt5 dependencies. On Ubuntu, for example, the following additional packages are required:
libasound2 libxmu-dev libxi-dev freeglut3-dev libasound2-dev libjack-dev
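On Ubuntu, for example, all of the packages above can be installed in one step:
sudo apt-get install libasound2 libxmu-dev libxi-dev freeglut3-dev libasound2-dev libjack-dev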

BUILD_OSX.md (new file, 22 lines)

@ -0,0 +1,22 @@
Please read the [general build guide](BUILD.md) for information on dependencies required for all platforms. Only OS X-specific instructions are found in this file.
###Homebrew
[Homebrew](http://brew.sh/) is an excellent package manager for OS X. It makes installing all hifi dependencies very simple.
brew tap highfidelity/homebrew-formulas
brew install cmake glm openssl tbb
brew install highfidelity/formulas/qt5
brew link qt5 --force
We have a [homebrew formulas repository](https://github.com/highfidelity/homebrew-formulas) that you can use/tap to install some of the dependencies. In the code block above qt5 is installed from a formula in this repository.
*Our [qt5 homebrew formula](https://raw.github.com/highfidelity/homebrew-formulas/master/qt5.rb) is for a patched version of Qt 5.3.x stable that removes wireless network scanning that can reduce real-time audio performance. We recommend you use this formula to install Qt.*
###Xcode
If Xcode is your editor of choice, you can ask CMake to generate Xcode project files instead of Unix Makefiles.
cmake .. -GXcode
After running cmake, you will have the make files or Xcode project file necessary to build all of the components. Open the hifi.xcodeproj file, choose ALL_BUILD from the Product > Scheme menu (or target drop down), and click Run.
If the build completes successfully, you will have built targets for all components located in the `build/${target_name}/Debug` directories.

BUILD_WIN.md (new file, 149 lines)

@ -0,0 +1,149 @@
Please read the [general build guide](BUILD.md) for information on dependencies required for all platforms. Only Windows-specific instructions are found in this file.
###Windows Dependencies
* [GLEW](http://glew.sourceforge.net/) ~> 1.10.0
* [freeglut MSVC](http://www.transmissionzero.co.uk/software/freeglut-devel/) ~> 2.8.1
* [zLib](http://www.zlib.net/) ~> 1.2.8
###Visual Studio
Building on Windows has currently been tested with the following compilers:
* Visual Studio 2013
(If anyone can test using Visual Studio 2013 Express then please update this document)
####Visual Studio 2013
You can use the Community or Professional editions of Visual Studio 2013.
You can start a Visual Studio 2013 command prompt using the shortcut provided in the Visual Studio Tools folder installed as part of Visual Studio 2013.
Or you can start a regular command prompt and then run:
"%VS120COMNTOOLS%\vsvars32.bat"
#####Windows SDK 8.1
If using Visual Studio 2013 and building as a Visual Studio 2013 project, you need the Windows 8.1 SDK, which you should already have as part of installing Visual Studio 2013. You should be able to see it at `C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x86`.
###Qt
You can use the online installer or the offline installer. If you use the offline installer, be sure to select the "OpenGL" version.
NOTE: Qt does not support 64-bit builds on Windows 7, so you must use the 32-bit version of libraries for interface.exe to run. The 32-bit version of the static library is the one linked by our CMake find modules.
* Download the online installer [here](http://qt-project.org/downloads)
* When it asks you to select components, ONLY select the following:
* Qt > Qt 5.3.2 > **msvc2013 32-bit OpenGL**
* Download the offline installer [here](http://download.qt-project.org/official_releases/qt/5.3/5.3.2/qt-opensource-windows-x86-msvc2013_opengl-5.3.2.exe)
Once Qt is installed, you need to manually configure the following:
* Make sure the Qt runtime DLLs are loadable. You must do this before you attempt to build because some tools for the build depend on Qt. E.g., add to the PATH: `Qt\5.3.2\msvc2013_opengl\bin\`.
* Set the QT_CMAKE_PREFIX_PATH environment variable to your `Qt\5.3.2\msvc2013_opengl` directory.
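For example, from a command prompt before running cmake (the install location below is hypothetical; substitute your own Qt path):
:: hypothetical Qt install location
set PATH=C:\Qt\5.3.2\msvc2013_opengl\bin;%PATH%
set QT_CMAKE_PREFIX_PATH=C:\Qt\5.3.2\msvc2013_opengl
Note that `set` only affects the current session; use the Environment Variables dialog (or setx) if you want the change to persist.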
###External Libraries
CMake will need to know where the headers and libraries for required external dependencies are.
The recommended route for CMake to find the external dependencies is to place all of the dependencies in one folder and set one ENV variable - HIFI_LIB_DIR. That ENV variable should point to a directory with the following structure:
root_lib_dir
-> freeglut
-> bin
-> include
-> lib
-> glew
-> bin
-> include
-> lib
-> glm
-> glm
-> glm.hpp
-> openssl
-> bin
-> include
-> lib
-> tbb
-> include
-> lib
-> zlib
-> include
-> lib
-> test
For many of the external libraries where precompiled binaries are readily available you should be able to simply copy the extracted folder that you get from the download links provided at the top of the guide. Otherwise you may need to build from source and install the built product to this directory. The `root_lib_dir` in the above example can be wherever you choose on your system - as long as the environment variable HIFI_LIB_DIR is set to it. From here on, whenever you see %HIFI_LIB_DIR% you should substitute the directory that you chose.
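For example (the folder below is a hypothetical choice; use whatever directory you created):
:: hypothetical lib folder
set HIFI_LIB_DIR=C:\hifi-libs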
As with the Qt libraries, you will need to make sure that directories containing DLLs are in your path. Where possible, you can use static builds of the external dependencies to avoid this requirement.
###OpenSSL
Qt will use OpenSSL if it's available, but it doesn't install it, so you must install it separately.
Your system may already have several versions of the OpenSSL DLLs (ssleay32.dll, libeay32.dll) lying around, but they may be the wrong version. If these DLLs are in the PATH then Qt will try to use them, and if they're the wrong version then you will see the following errors in the console:
QSslSocket: cannot resolve TLSv1_1_client_method
QSslSocket: cannot resolve TLSv1_2_client_method
QSslSocket: cannot resolve TLSv1_1_server_method
QSslSocket: cannot resolve TLSv1_2_server_method
QSslSocket: cannot resolve SSL_select_next_proto
QSslSocket: cannot resolve SSL_CTX_set_next_proto_select_cb
QSslSocket: cannot resolve SSL_get0_next_proto_negotiated
To prevent these problems, install OpenSSL yourself. Download the following binary packages [from this website](http://slproweb.com/products/Win32OpenSSL.html):
* Visual C++ 2008 Redistributables
* Win32 OpenSSL v1.0.1h
Install OpenSSL into the Windows system directory to make sure that Qt uses the version that you've just installed, and not some other version.
###Intel Threading Building Blocks (TBB)
Download the zip from the [TBB website](https://www.threadingbuildingblocks.org/).
We recommend you extract it to %HIFI_LIB_DIR%\tbb. This will help our FindTBB cmake module find what it needs. You can place it wherever you like on your machine if you specify TBB_ROOT_DIR as an environment variable or a variable passed when cmake is run.
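For example, if you extracted TBB somewhere other than %HIFI_LIB_DIR%\tbb, you could pass the variable when cmake is run (the path below is hypothetical):
:: hypothetical TBB location outside HIFI_LIB_DIR
cmake .. -DTBB_ROOT_DIR=C:\libs\tbb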
###Zlib
Download the compiled DLL from the [zlib website](http://www.zlib.net/). Extract to %HIFI_LIB_DIR%\zlib.
Add the following environment variables (remember to substitute your own directory for %HIFI_LIB_DIR%):
ZLIB_LIBRARY=%HIFI_LIB_DIR%\zlib\lib\zdll.lib
ZLIB_INCLUDE_DIR=%HIFI_LIB_DIR%\zlib\include
Add to the PATH: `%HIFI_LIB_DIR%\zlib`
Important! This should be added at the beginning of the path, not the end, because your system likely has many copies of zlib1.dll and you want High Fidelity to use the correct version. If High Fidelity picks up the wrong zlib1.dll it might be unable to use it, causing it to fail to start with only the cryptic error "The application was unable to start correctly: 0xc0000022".
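For example, to prepend it for the current command prompt session:
:: prepend zlib so the correct zlib1.dll is found first
set PATH=%HIFI_LIB_DIR%\zlib;%PATH%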
###freeglut
Download the binary package: `freeglut-MSVC-2.8.1-1.mp.zip`. Extract to %HIFI_LIB_DIR%\freeglut.
Add to the PATH: `%HIFI_LIB_DIR%\freeglut\bin`
###GLEW
Download the binary package: `glew-1.10.0-win32.zip`. Extract to %HIFI_LIB_DIR%\glew (you'll need to rename the default directory name).
Add to the PATH: `%HIFI_LIB_DIR%\glew\bin\Release\Win32`
###GLM
This package contains only headers, so there's nothing to add to the PATH.
Be careful with glm: where other libraries use a folder named 'include' for their headers, glm uses a folder named 'glm', so you will have a glm folder nested inside the top-level glm folder.
###Build High Fidelity using Visual Studio
Follow the same build steps from the CMake section, but pass a different generator to CMake.
cmake .. -DZLIB_LIBRARY=%ZLIB_LIBRARY% -DZLIB_INCLUDE_DIR=%ZLIB_INCLUDE_DIR% -G "Visual Studio 12"
Open %HIFI_DIR%\build\hifi.sln and compile.
###Running Interface
If you need to debug Interface, you can run interface from within Visual Studio (see the section below). You can also run Interface by launching it from command line or File Explorer from %HIFI_DIR%\build\interface\Debug\interface.exe
###Debugging Interface
* In the Solution Explorer, right click interface and click Set as StartUp Project
* Set the "Working Directory" for the Interface debugging sessions to the Debug output directory so that your application can load resources. Do this: right click interface and click Properties, choose Debugging from Configuration Properties, set Working Directory to .\Debug
* Now you can run and debug interface through Visual Studio


@ -12,6 +12,10 @@ if (POLICY CMP0043)
cmake_policy(SET CMP0043 OLD)
endif ()
if (POLICY CMP0042)
cmake_policy(SET CMP0042 OLD)
endif ()
project(hifi)
add_definitions(-DGLM_FORCE_RADIANS)
@ -34,6 +38,24 @@ elseif (CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCXX)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -fno-strict-aliasing")
endif(WIN32)
include(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if (COMPILER_SUPPORTS_CXX11)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
elseif(COMPILER_SUPPORTS_CXX0X)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
else()
message(STATUS "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
endif()
if (APPLE)
set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LANGUAGE_STANDARD "c++0x")
set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LIBRARY "libc++")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --stdlib=libc++")
endif ()
if (NOT QT_CMAKE_PREFIX_PATH)
set(QT_CMAKE_PREFIX_PATH $ENV{QT_CMAKE_PREFIX_PATH})
endif ()


@ -6,13 +6,13 @@ include_glm()
# link in the shared libraries
link_hifi_libraries(
audio avatars octree voxels fbx entities metavoxels
audio avatars octree voxels gpu model fbx entities metavoxels
networking animation shared script-engine embedded-webserver
physics
)
if (UNIX)
list(APPEND ${TARGET_NAME}_LIBRARIES_TO_LINK ${CMAKE_DL_LIBS})
target_link_libraries(${TARGET_NAME} ${CMAKE_DL_LIBS})
endif (UNIX)
link_shared_dependencies()
include_dependency_includes()


@ -22,6 +22,7 @@
#include <NodeList.h>
#include <PacketHeaders.h>
#include <SharedUtil.h>
#include <ShutdownEventListener.h>
#include <SoundCache.h>
#include "AssignmentFactory.h"
@ -38,7 +39,6 @@ int hifiSockAddrMeta = qRegisterMetaType<HifiSockAddr>("HifiSockAddr");
AssignmentClient::AssignmentClient(int &argc, char **argv) :
QCoreApplication(argc, argv),
_shutdownEventListener(this),
_assignmentServerHostname(DEFAULT_ASSIGNMENT_SERVER_HOSTNAME),
_localASPortSharedMem(NULL)
{
@ -49,8 +49,12 @@ AssignmentClient::AssignmentClient(int &argc, char **argv) :
setApplicationName("assignment-client");
QSettings::setDefaultFormat(QSettings::IniFormat);
installNativeEventFilter(&_shutdownEventListener);
connect(&_shutdownEventListener, SIGNAL(receivedCloseEvent()), SLOT(quit()));
// setup a shutdown event listener to handle SIGTERM or WM_CLOSE for us
#ifdef _WIN32
installNativeEventFilter(&ShutdownEventListener::getInstance());
#else
ShutdownEventListener::getInstance();
#endif
// set the logging target to the the CHILD_TARGET_NAME
LogHandler::getInstance().setTargetName(ASSIGNMENT_CLIENT_TARGET_NAME);


@ -14,7 +14,6 @@
#include <QtCore/QCoreApplication>
#include "ShutdownEventListener.h"
#include "ThreadedAssignment.h"
class QSharedMemory;
@ -34,7 +33,6 @@ private slots:
private:
Assignment _requestAssignment;
static SharedAssignmentPointer _currentAssignment;
ShutdownEventListener _shutdownEventListener;
QString _assignmentServerHostname;
HifiSockAddr _assignmentServerSocket;
QSharedMemory* _localASPortSharedMem;


@ -12,6 +12,7 @@
#include <signal.h>
#include <LogHandler.h>
#include <ShutdownEventListener.h>
#include "AssignmentClientMonitor.h"
@ -19,24 +20,19 @@ const char* NUM_FORKS_PARAMETER = "-n";
const QString ASSIGNMENT_CLIENT_MONITOR_TARGET_NAME = "assignment-client-monitor";
void signalHandler(int param){
// get the qApp and cast it to an AssignmentClientMonitor
AssignmentClientMonitor* app = qobject_cast<AssignmentClientMonitor*>(qApp);
// tell it to stop the child processes and then go down
app->stopChildProcesses();
app->quit();
}
AssignmentClientMonitor::AssignmentClientMonitor(int &argc, char **argv, int numAssignmentClientForks) :
QCoreApplication(argc, argv)
{
// be a signal handler for SIGTERM so we can stop our children when we get it
signal(SIGTERM, signalHandler);
{
// start the Logging class with the parent's target name
LogHandler::getInstance().setTargetName(ASSIGNMENT_CLIENT_MONITOR_TARGET_NAME);
// setup a shutdown event listener to handle SIGTERM or WM_CLOSE for us
#ifdef _WIN32
installNativeEventFilter(&ShutdownEventListener::getInstance());
#else
ShutdownEventListener::getInstance();
#endif
_childArguments = arguments();
// remove the parameter for the number of forks so it isn't passed to the child forked processes
@ -52,6 +48,10 @@ AssignmentClientMonitor::AssignmentClientMonitor(int &argc, char **argv, int num
}
}
AssignmentClientMonitor::~AssignmentClientMonitor() {
stopChildProcesses();
}
void AssignmentClientMonitor::stopChildProcesses() {
QList<QPointer<QProcess> >::Iterator it = _childProcesses.begin();


@ -24,6 +24,7 @@ class AssignmentClientMonitor : public QCoreApplication {
Q_OBJECT
public:
AssignmentClientMonitor(int &argc, char **argv, int numAssignmentClientForks);
~AssignmentClientMonitor();
void stopChildProcesses();
private slots:


@ -439,12 +439,13 @@ int AudioMixer::prepareMixForListeningNode(Node* node) {
// loop through all other nodes that have sufficient audio to mix
int streamsMixed = 0;
foreach (const SharedNodePointer& otherNode, NodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([&](const SharedNodePointer& otherNode){
if (otherNode->getLinkedData()) {
AudioMixerClientData* otherNodeClientData = (AudioMixerClientData*) otherNode->getLinkedData();
// enumerate the ARBs attached to the otherNode and add all that should be added to mix
const QHash<QUuid, PositionalAudioStream*>& otherNodeAudioStreams = otherNodeClientData->getAudioStreams();
QHash<QUuid, PositionalAudioStream*>::ConstIterator i;
for (i = otherNodeAudioStreams.constBegin(); i != otherNodeAudioStreams.constEnd(); i++) {
@ -454,14 +455,15 @@ int AudioMixer::prepareMixForListeningNode(Node* node) {
if (otherNodeStream->getType() == PositionalAudioStream::Microphone) {
streamUUID = otherNode->getUUID();
}
if (*otherNode != *node || otherNodeStream->shouldLoopbackForNode()) {
streamsMixed += addStreamToMixForListeningNodeWithStream(listenerNodeData, streamUUID,
otherNodeStream, nodeAudioStream);
streamsMixed += addStreamToMixForListeningNodeWithStream(listenerNodeData, streamUUID,
otherNodeStream, nodeAudioStream);
}
}
}
}
});
return streamsMixed;
}
@ -539,12 +541,11 @@ void AudioMixer::readPendingDatagram(const QByteArray& receivedPacket, const Hif
QByteArray packet = receivedPacket;
populatePacketHeader(packet, PacketTypeMuteEnvironment);
foreach (const SharedNodePointer& node, nodeList->getNodeHash()) {
nodeList->eachNode([&](const SharedNodePointer& node){
if (node->getType() == NodeType::Agent && node->getActiveSocket() && node->getLinkedData() && node != nodeList->sendingNodeForPacket(receivedPacket)) {
nodeList->writeDatagram(packet, packet.size(), node);
}
}
});
} else {
// let processNodeData handle it.
nodeList->processNodeData(senderSockAddr, receivedPacket);
@ -607,8 +608,9 @@ void AudioMixer::sendStatsPacket() {
NodeList* nodeList = NodeList::getInstance();
int clientNumber = 0;
foreach (const SharedNodePointer& node, nodeList->getNodeHash()) {
nodeList->eachNode([&](const SharedNodePointer& node) {
// if we're too large, send the packet
if (sizeOfStats > TOO_BIG_FOR_MTU) {
nodeList->sendStatsToDomainServer(statsObject2);
@ -626,7 +628,7 @@ void AudioMixer::sendStatsPacket() {
somethingToSend = true;
sizeOfStats += property.size() + value.size();
}
}
});
if (somethingToSend) {
nodeList->sendStatsToDomainServer(statsObject2);
@ -764,7 +766,8 @@ void AudioMixer::run() {
_lastPerSecondCallbackTime = now;
}
foreach (const SharedNodePointer& node, nodeList->getNodeHash()) {
nodeList->eachNode([&](const SharedNodePointer& node) {
if (node->getLinkedData()) {
AudioMixerClientData* nodeData = (AudioMixerClientData*)node->getLinkedData();
@ -831,7 +834,7 @@ void AudioMixer::run() {
++_sumListeners;
}
}
}
});
++_numStatFrames;
@ -889,7 +892,7 @@ void AudioMixer::perSecondActions() {
_timeSpentPerHashMatchCallStats.getWindowSum() / WINDOW_LENGTH_USECS * 100.0,
_timeSpentPerHashMatchCallStats.getCurrentIntervalSum() / USECS_PER_SECOND * 100.0);
foreach(const SharedNodePointer& node, NodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([](const SharedNodePointer& node) {
if (node->getLinkedData()) {
AudioMixerClientData* nodeData = (AudioMixerClientData*)node->getLinkedData();
@ -899,7 +902,7 @@ void AudioMixer::perSecondActions() {
nodeData->printUpstreamDownstreamStats();
}
}
}
});
}
_datagramsReadPerCallStats.currentIntervalComplete();


@ -122,7 +122,7 @@ void AvatarMixer::broadcastAvatarData() {
AvatarMixerClientData* nodeData = NULL;
AvatarMixerClientData* otherNodeData = NULL;
foreach (const SharedNodePointer& node, nodeList->getNodeHash()) {
nodeList->eachNode([&](const SharedNodePointer& node) {
if (node->getLinkedData() && node->getType() == NodeType::Agent && node->getActiveSocket()
&& (nodeData = reinterpret_cast<AvatarMixerClientData*>(node->getLinkedData()))->getMutex().tryLock()) {
++_sumListeners;
@ -135,7 +135,7 @@ void AvatarMixer::broadcastAvatarData() {
// this is an AGENT we have received head data from
// send back a packet with other active node data to this node
foreach (const SharedNodePointer& otherNode, nodeList->getNodeHash()) {
nodeList->eachNode([&](const SharedNodePointer& otherNode) {
if (otherNode->getLinkedData() && otherNode->getUUID() != node->getUUID()
&& (otherNodeData = reinterpret_cast<AvatarMixerClientData*>(otherNode->getLinkedData()))->getMutex().tryLock()) {
@ -203,13 +203,13 @@ void AvatarMixer::broadcastAvatarData() {
otherNodeData->getMutex().unlock();
}
}
});
nodeList->writeDatagram(mixedAvatarByteArray, node);
nodeData->getMutex().unlock();
}
}
});
_lastFrameTimestamp = QDateTime::currentMSecsSinceEpoch();
}


@ -28,7 +28,7 @@ void ScriptableAvatar::startAnimation(const QString& url, float fps, float prior
Q_ARG(float, lastFrame), Q_ARG(const QStringList&, maskedJoints));
return;
}
_animation = _scriptEngine->getAnimationCache()->getAnimation(url);
_animation = DependencyManager::get<AnimationCache>()->getAnimation(url);
_animationDetails = AnimationDetails("", QUrl(url), fps, 0, loop, hold, false, firstFrame, lastFrame, true, firstFrame);
_maskedJoints = maskedJoints;
}


@ -131,15 +131,17 @@ void EntityServer::pruneDeletedEntities() {
if (tree->hasAnyDeletedEntities()) {
quint64 earliestLastDeletedEntitiesSent = usecTimestampNow() + 1; // in the future
foreach (const SharedNodePointer& otherNode, NodeList::getInstance()->getNodeHash()) {
if (otherNode->getLinkedData()) {
EntityNodeData* nodeData = static_cast<EntityNodeData*>(otherNode->getLinkedData());
NodeList::getInstance()->eachNode([&earliestLastDeletedEntitiesSent](const SharedNodePointer& node) {
if (node->getLinkedData()) {
EntityNodeData* nodeData = static_cast<EntityNodeData*>(node->getLinkedData());
quint64 nodeLastDeletedEntitiesSentAt = nodeData->getLastDeletedEntitiesSentAt();
if (nodeLastDeletedEntitiesSentAt < earliestLastDeletedEntitiesSent) {
earliestLastDeletedEntitiesSent = nodeLastDeletedEntitiesSentAt;
}
}
}
});
tree->forgetEntitiesDeletedBefore(earliestLastDeletedEntitiesSent);
}
}


@ -261,7 +261,7 @@ int OctreeInboundPacketProcessor::sendNackPackets() {
continue;
}
const SharedNodePointer& destinationNode = NodeList::getInstance()->getNodeHash().value(nodeUUID);
const SharedNodePointer& destinationNode = NodeList::getInstance()->nodeWithUUID(nodeUUID);
// retrieve sequence number stats of node, prune its missing set
SequenceNumberStats& sequenceNumberStats = nodeStats.getIncomingEditSequenceNumberStats();


@ -1222,13 +1222,16 @@ void OctreeServer::aboutToFinish() {
qDebug() << qPrintable(_safeServerName) << "server STARTING about to finish...";
qDebug() << qPrintable(_safeServerName) << "inform Octree Inbound Packet Processor that we are shutting down...";
_octreeInboundPacketProcessor->shuttingDown();
foreach (const SharedNodePointer& node, NodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([this](const SharedNodePointer& node) {
qDebug() << qPrintable(_safeServerName) << "server about to finish while node still connected node:" << *node;
forceNodeShutdown(node);
}
});
if (_persistThread) {
_persistThread->aboutToFinish();
}
qDebug() << qPrintable(_safeServerName) << "server ENDING about to finish...";
}


@ -0,0 +1,22 @@
#
# IncludeDependencyIncludes.cmake
# cmake/macros
#
# Copyright 2014 High Fidelity, Inc.
# Created by Stephen Birarda on August 8, 2014
#
# Distributed under the Apache License, Version 2.0.
# See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
#
macro(INCLUDE_DEPENDENCY_INCLUDES)
if (${TARGET_NAME}_DEPENDENCY_INCLUDES)
list(REMOVE_DUPLICATES ${TARGET_NAME}_DEPENDENCY_INCLUDES)
# include those in our own target
include_directories(SYSTEM ${${TARGET_NAME}_DEPENDENCY_INCLUDES})
endif ()
# set the property on this target so it can be retrieved by targets linking to us
set_target_properties(${TARGET_NAME} PROPERTIES DEPENDENCY_INCLUDES "${${TARGET_NAME}_DEPENDENCY_INCLUDES}")
endmacro(INCLUDE_DEPENDENCY_INCLUDES)


@ -25,10 +25,9 @@ macro(LINK_HIFI_LIBRARIES)
# link the actual library - it is static so don't bubble it up
target_link_libraries(${TARGET_NAME} ${HIFI_LIBRARY})
# ask the library what its dynamic dependencies are and link them
get_target_property(LINKED_TARGET_DEPENDENCY_LIBRARIES ${HIFI_LIBRARY} DEPENDENCY_LIBRARIES)
list(APPEND ${TARGET_NAME}_LIBRARIES_TO_LINK ${LINKED_TARGET_DEPENDENCY_LIBRARIES})
# ask the library what its include dependencies are and link them
get_target_property(LINKED_TARGET_DEPENDENCY_INCLUDES ${HIFI_LIBRARY} DEPENDENCY_INCLUDES)
list(APPEND ${TARGET_NAME}_DEPENDENCY_INCLUDES ${LINKED_TARGET_DEPENDENCY_INCLUDES})
endforeach()
endmacro(LINK_HIFI_LIBRARIES)


@ -1,25 +0,0 @@
#
# LinkSharedDependencies.cmake
# cmake/macros
#
# Copyright 2014 High Fidelity, Inc.
# Created by Stephen Birarda on August 8, 2014
#
# Distributed under the Apache License, Version 2.0.
# See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
#
macro(LINK_SHARED_DEPENDENCIES)
if (${TARGET_NAME}_LIBRARIES_TO_LINK)
list(REMOVE_DUPLICATES ${TARGET_NAME}_LIBRARIES_TO_LINK)
# link these libraries to our target
target_link_libraries(${TARGET_NAME} ${${TARGET_NAME}_LIBRARIES_TO_LINK})
endif ()
# we've already linked our Qt modules, but we need to bubble them up to parents
list(APPEND ${TARGET_NAME}_LIBRARIES_TO_LINK "${${TARGET}_QT_MODULES_TO_LINK}")
# set the property on this target so it can be retrieved by targets linking to us
set_target_properties(${TARGET_NAME} PROPERTIES DEPENDENCY_LIBRARIES "${${TARGET}_LIBRARIES_TO_LINK}")
endmacro(LINK_SHARED_DEPENDENCIES)


@ -18,16 +18,14 @@ macro(SETUP_HIFI_LIBRARY)
# create a library and set the property so it can be referenced later
add_library(${TARGET_NAME} ${LIB_SRCS} ${AUTOMTC_SRC})
set(QT_MODULES_TO_LINK ${ARGN})
list(APPEND QT_MODULES_TO_LINK Core)
set(${TARGET_NAME}_DEPENDENCY_QT_MODULES ${ARGN})
list(APPEND ${TARGET_NAME}_DEPENDENCY_QT_MODULES Core)
find_package(Qt5 COMPONENTS ${QT_MODULES_TO_LINK} REQUIRED)
foreach(QT_MODULE ${QT_MODULES_TO_LINK})
get_target_property(QT_LIBRARY_LOCATION Qt5::${QT_MODULE} LOCATION)
# add the actual path to the Qt module to our LIBRARIES_TO_LINK variable
# find these Qt modules and link them to our own target
find_package(Qt5 COMPONENTS ${${TARGET_NAME}_DEPENDENCY_QT_MODULES} REQUIRED)
foreach(QT_MODULE ${${TARGET_NAME}_DEPENDENCY_QT_MODULES})
target_link_libraries(${TARGET_NAME} Qt5::${QT_MODULE})
list(APPEND ${TARGET_NAME}_QT_MODULES_TO_LINK ${QT_LIBRARY_LOCATION})
endforeach()
endmacro(SETUP_HIFI_LIBRARY)


@ -25,17 +25,13 @@ macro(SETUP_HIFI_PROJECT)
# add the executable, include additional optional sources
add_executable(${TARGET_NAME} ${TARGET_SRCS} "${AUTOMTC_SRC}")
set(QT_MODULES_TO_LINK ${ARGN})
list(APPEND QT_MODULES_TO_LINK Core)
set(${TARGET_NAME}_DEPENDENCY_QT_MODULES ${ARGN})
list(APPEND ${TARGET_NAME}_DEPENDENCY_QT_MODULES Core)
find_package(Qt5 COMPONENTS ${QT_MODULES_TO_LINK} REQUIRED)
foreach(QT_MODULE ${QT_MODULES_TO_LINK})
# find these Qt modules and link them to our own target
find_package(Qt5 COMPONENTS ${${TARGET_NAME}_DEPENDENCY_QT_MODULES} REQUIRED)
foreach(QT_MODULE ${${TARGET_NAME}_DEPENDENCY_QT_MODULES})
target_link_libraries(${TARGET_NAME} Qt5::${QT_MODULE})
# add the actual path to the Qt module to our LIBRARIES_TO_LINK variable
get_target_property(QT_LIBRARY_LOCATION Qt5::${QT_MODULE} LOCATION)
list(APPEND ${TARGET_NAME}_QT_MODULES_TO_LINK ${QT_LIBRARY_LOCATION})
endforeach()
endmacro()


@ -1,47 +0,0 @@
#
# FindGLUT.cmake
#
# Try to find GLUT library and include path.
# Once done this will define
#
# GLUT_FOUND
# GLUT_INCLUDE_DIRS
# GLUT_LIBRARIES
#
# Created on 2/6/2014 by Stephen Birarda
# Copyright 2014 High Fidelity, Inc.
#
# Adapted from FindGLUT.cmake available in tlorach's OpenGLText Repository
# https://raw.github.com/tlorach/OpenGLText/master/cmake/FindGLUT.cmake
#
# Distributed under the Apache License, Version 2.0.
# See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
#
include("${MACRO_DIR}/HifiLibrarySearchHints.cmake")
hifi_library_search_hints("freeglut")
if (WIN32)
set(GLUT_HINT_DIRS "${FREEGLUT_SEARCH_DIRS} ${OPENGL_INCLUDE_DIR}")
find_path(GLUT_INCLUDE_DIRS GL/glut.h PATH_SUFFIXES include HINTS ${FREEGLUT_SEARCH_DIRS})
find_library(GLUT_LIBRARY freeglut PATH_SUFFIXES lib HINTS ${FREEGLUT_SEARCH_DIRS})
else ()
find_path(GLUT_INCLUDE_DIRS GL/glut.h PATH_SUFFIXES include HINTS ${FREEGLUT_SEARCH_DIRS})
find_library(GLUT_LIBRARY glut PATH_SUFFIXES lib HINTS ${FREEGLUT_SEARCH_DIRS})
endif ()
include(FindPackageHandleStandardArgs)
set(GLUT_LIBRARIES "${GLUT_LIBRARY}" "${XMU_LIBRARY}" "${XI_LIBRARY}")
if (UNIX)
find_library(XI_LIBRARY Xi PATH_SUFFIXES lib HINTS ${FREEGLUT_SEARCH_DIRS})
find_library(XMU_LIBRARY Xmu PATH_SUFFIXES lib HINTS ${FREEGLUT_SEARCH_DIRS})
find_package_handle_standard_args(GLUT DEFAULT_MSG GLUT_INCLUDE_DIRS GLUT_LIBRARIES XI_LIBRARY XMU_LIBRARY)
else ()
find_package_handle_standard_args(GLUT DEFAULT_MSG GLUT_INCLUDE_DIRS GLUT_LIBRARIES)
endif ()
mark_as_advanced(GLUT_INCLUDE_DIRS GLUT_LIBRARIES GLUT_LIBRARY XI_LIBRARY XMU_LIBRARY FREEGLUT_SEARCH_DIRS)


@ -0,0 +1,75 @@
#
# FindTBB.cmake
#
# Try to find the Intel Threading Building Blocks library
#
# You can provide a TBB_ROOT_DIR which contains lib and include directories
#
# Once done this will define
#
# TBB_FOUND - system was able to find TBB
# TBB_INCLUDE_DIRS - the TBB include directory
# TBB_LIBRARIES - link this to use TBB
#
# Created on 12/14/2014 by Stephen Birarda
# Copyright 2014 High Fidelity, Inc.
#
# Distributed under the Apache License, Version 2.0.
# See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
#
include("${MACRO_DIR}/HifiLibrarySearchHints.cmake")
hifi_library_search_hints("tbb")
find_path(TBB_INCLUDE_DIRS tbb/tbb.h PATH_SUFFIXES include HINTS ${TBB_SEARCH_DIRS})
set(_TBB_LIB_NAME "tbb")
set(_TBB_LIB_MALLOC_NAME "${_TBB_LIB_NAME}malloc")
if (APPLE)
set(_TBB_LIB_DIR "lib/libc++")
elseif (UNIX)
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
set(_TBB_ARCH_DIR "intel64")
else()
set(_TBB_ARCH_DIR "ia32")
endif()
execute_process(
COMMAND ${CMAKE_C_COMPILER} -dumpversion
OUTPUT_VARIABLE GCC_VERSION
)
if (GCC_VERSION VERSION_GREATER 4.4 OR GCC_VERSION VERSION_EQUAL 4.4)
set(_TBB_LIB_DIR "lib/${_TBB_ARCH_DIR}/gcc4.4")
elseif (GCC_VERSION VERSION_GREATER 4.1 OR GCC_VERSION VERSION_EQUAL 4.1)
set(_TBB_LIB_DIR "lib/${_TBB_ARCH_DIR}/gcc4.1")
else ()
message(FATAL_ERROR "Could not find a compatible version of Threading Building Blocks library for your compiler.")
endif ()
elseif (WIN32)
if (CMAKE_CL_64)
set(_TBB_ARCH_DIR "intel64")
else()
set(_TBB_ARCH_DIR "ia32")
endif()
set(_TBB_LIB_DIR "lib/${_TBB_ARCH_DIR}/vc12")
endif ()
find_library(TBB_LIBRARY_DEBUG NAMES ${_TBB_LIB_NAME}_debug PATH_SUFFIXES ${_TBB_LIB_DIR} HINTS ${TBB_SEARCH_DIRS})
find_library(TBB_LIBRARY_RELEASE NAMES ${_TBB_LIB_NAME} PATH_SUFFIXES ${_TBB_LIB_DIR} HINTS ${TBB_SEARCH_DIRS})
find_library(TBB_MALLOC_LIBRARY_DEBUG NAMES ${_TBB_LIB_MALLOC_NAME}_debug PATH_SUFFIXES ${_TBB_LIB_DIR} HINTS ${TBB_SEARCH_DIRS})
find_library(TBB_MALLOC_LIBRARY_RELEASE NAMES ${_TBB_LIB_MALLOC_NAME} PATH_SUFFIXES ${_TBB_LIB_DIR} HINTS ${TBB_SEARCH_DIRS})
include(SelectLibraryConfigurations)
include(FindPackageHandleStandardArgs)
select_library_configurations(TBB)
select_library_configurations(TBB_MALLOC)
find_package_handle_standard_args(TBB DEFAULT_MSG TBB_LIBRARY TBB_MALLOC_LIBRARY TBB_INCLUDE_DIRS)
set(TBB_LIBRARIES ${TBB_LIBRARY} ${TBB_MALLOC_LIBRARY})
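As a usage sketch on OS X or Linux (the prefix below is hypothetical), the same TBB_ROOT_DIR hint can point this module at a non-standard install:
# hypothetical install prefix containing the TBB include and lib directories
export TBB_ROOT_DIR=/opt/tbb
cmake ..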


@ -32,9 +32,10 @@ if (APPLE)
find_library(VISAGE_VISION_LIBRARY NAME vsvision PATH_SUFFIXES lib HINTS ${VISAGE_SEARCH_DIRS})
find_library(VISAGE_OPENCV_LIBRARY NAME OpenCV PATH_SUFFIXES dependencies/OpenCV_MacOSX/lib HINTS ${VISAGE_SEARCH_DIRS})
find_library(AppKit AppKit)
find_library(QuartzCore QuartzCore)
find_library(CoreVideo CoreVideo)
find_library(QTKit QTKit)
find_library(IOKit IOKit)
elseif (WIN32)
find_path(VISAGE_XML_INCLUDE_DIR libxml/xmlreader.h PATH_SUFFIXES dependencies/libxml2/include HINTS ${VISAGE_SEARCH_DIRS})
find_path(VISAGE_OPENCV_INCLUDE_DIR opencv/cv.h PATH_SUFFIXES dependencies/OpenCV/include HINTS ${VISAGE_SEARCH_DIRS})
@ -49,19 +50,19 @@ include(FindPackageHandleStandardArgs)
list(APPEND VISAGE_ARGS_LIST VISAGE_BASE_INCLUDE_DIR VISAGE_XML_INCLUDE_DIR
VISAGE_OPENCV_INCLUDE_DIR VISAGE_OPENCV2_INCLUDE_DIR
VISAGE_CORE_LIBRARY VISAGE_VISION_LIBRARY VISAGE_OPENCV_LIBRARY)
VISAGE_CORE_LIBRARY VISAGE_VISION_LIBRARY)
if (APPLE)
list(APPEND VISAGE_ARGS_LIST QuartzCore AppKit QTKit)
list(APPEND VISAGE_ARGS_LIST CoreVideo QTKit IOKit)
endif ()
find_package_handle_standard_args(Visage DEFAULT_MSG ${VISAGE_ARGS_LIST})
set(VISAGE_INCLUDE_DIRS "${VISAGE_XML_INCLUDE_DIR}" "${VISAGE_OPENCV_INCLUDE_DIR}" "${VISAGE_OPENCV2_INCLUDE_DIR}" "${VISAGE_BASE_INCLUDE_DIR}")
set(VISAGE_LIBRARIES "${VISAGE_CORE_LIBRARY}" "${VISAGE_VISION_LIBRARY}" "${VISAGE_OPENCV_LIBRARY}")
set(VISAGE_LIBRARIES "${VISAGE_CORE_LIBRARY}" "${VISAGE_VISION_LIBRARY}")
if (APPLE)
list(APPEND VISAGE_LIBRARIES ${QuartzCore} ${AppKit} ${QTKit})
list(APPEND VISAGE_LIBRARIES "${CoreVideo}" "${QTKit}" "${IOKit}")
endif ()
mark_as_advanced(VISAGE_INCLUDE_DIRS VISAGE_LIBRARIES)


@ -50,6 +50,6 @@ endif ()
include_directories(SYSTEM "${OPENSSL_INCLUDE_DIR}")
# append OpenSSL to our list of libraries to link
list(APPEND ${TARGET_NAME}_LIBRARIES_TO_LINK "${OPENSSL_LIBRARIES}")
target_link_libraries(${TARGET_NAME} ${OPENSSL_LIBRARIES})
link_shared_dependencies()
include_dependency_includes()


@ -76,6 +76,33 @@
}
]
},
{
"name": "scripts",
"label": "Scripts",
"settings": [
{
"name": "persistent_scripts",
"type": "table",
"label": "Persistent Scripts",
"help": "Add the URLs for scripts that you would like to ensure are always running in your domain.",
"columns": [
{
"name": "url",
"label": "Script URL"
},
{
"name": "num_instances",
"label": "# instances",
"default": 1
},
{
"name": "pool",
"label": "Pool"
}
]
}
]
},
{
"name": "audio_env",
"label": "Audio Environment",


@ -1,131 +1,3 @@
// Add your JavaScript for assignment below this line
// The following is an example of Conway's Game of Life (http://en.wikipedia.org/wiki/Conway's_Game_of_Life)
var NUMBER_OF_CELLS_EACH_DIMENSION = 64;
var NUMBER_OF_CELLS = NUMBER_OF_CELLS_EACH_DIMENSION * NUMBER_OF_CELLS_EACH_DIMENSION;
var currentCells = [];
var nextCells = [];
var METER_LENGTH = 1;
var cellScale = (NUMBER_OF_CELLS_EACH_DIMENSION * METER_LENGTH) / NUMBER_OF_CELLS_EACH_DIMENSION;
// randomly populate the cell start values
for (var i = 0; i < NUMBER_OF_CELLS_EACH_DIMENSION; i++) {
// create the array to hold this row
currentCells[i] = [];
// create the array to hold this row in the nextCells array
nextCells[i] = [];
for (var j = 0; j < NUMBER_OF_CELLS_EACH_DIMENSION; j++) {
currentCells[i][j] = Math.floor(Math.random() * 2);
// put the same value in the nextCells array for first board draw
nextCells[i][j] = currentCells[i][j];
}
}
function isNeighbourAlive(i, j) {
if (i < 0 || i >= NUMBER_OF_CELLS_EACH_DIMENSION
|| j < 0 || j >= NUMBER_OF_CELLS_EACH_DIMENSION) {
return 0;
} else {
return currentCells[i][j];
}
}
function updateCells() {
var i = 0;
var j = 0;
for (i = 0; i < NUMBER_OF_CELLS_EACH_DIMENSION; i++) {
for (j = 0; j < NUMBER_OF_CELLS_EACH_DIMENSION; j++) {
// figure out the number of live neighbours for the i-j cell
var liveNeighbours =
isNeighbourAlive(i + 1, j - 1) + isNeighbourAlive(i + 1, j) + isNeighbourAlive(i + 1, j + 1) +
isNeighbourAlive(i, j - 1) + isNeighbourAlive(i, j + 1) +
isNeighbourAlive(i - 1, j - 1) + isNeighbourAlive(i - 1, j) + isNeighbourAlive(i - 1, j + 1);
if (currentCells[i][j]) {
// live cell
if (liveNeighbours < 2) {
// rule #1 - under-population - this cell will die
// mark it zero to mark the change
nextCells[i][j] = 0;
} else if (liveNeighbours < 4) {
// rule #2 - this cell lives
// mark it -1 to mark no change
nextCells[i][j] = -1;
} else {
// rule #3 - overcrowding - this cell dies
// mark it zero to mark the change
nextCells[i][j] = 0;
}
} else {
// dead cell
if (liveNeighbours == 3) {
// rule #4 - reproduction - this cell revives
// mark it one to mark the change
nextCells[i][j] = 1;
} else {
// this cell stays dead
// mark it -1 for no change
nextCells[i][j] = -1;
}
}
if (Math.random() < 0.001) {
// Random mutation to keep things interesting in there.
nextCells[i][j] = 1;
}
}
}
for (i = 0; i < NUMBER_OF_CELLS_EACH_DIMENSION; i++) {
for (j = 0; j < NUMBER_OF_CELLS_EACH_DIMENSION; j++) {
if (nextCells[i][j] != -1) {
// there has been a change to this cell, change the value in the currentCells array
currentCells[i][j] = nextCells[i][j];
}
}
}
}
function sendNextCells() {
for (var i = 0; i < NUMBER_OF_CELLS_EACH_DIMENSION; i++) {
for (var j = 0; j < NUMBER_OF_CELLS_EACH_DIMENSION; j++) {
if (nextCells[i][j] != -1) {
// there has been a change to the state of this cell, send it
// find the x and y position for this voxel, z = 0
var x = j * cellScale;
var y = i * cellScale;
// queue a packet to add a voxel for the new cell
var color = (nextCells[i][j] == 1) ? 255 : 1;
Voxels.setVoxel(x, y, 0, cellScale, color, color, color);
}
}
}
}
var sentFirstBoard = false;
function step(deltaTime) {
if (sentFirstBoard) {
// we've already sent the first full board, perform a step in time
updateCells();
} else {
// this will be our first board send
sentFirstBoard = true;
}
sendNextCells();
}
Script.update.connect(step);
Voxels.setPacketsPerSecond(200);
// Here you can put a script that will be run by an assignment-client (AC)
// For examples, please go to http://public.highfidelity.io/scripts
// The directory named acScripts contains assignment-client specific scripts you can try.


@ -363,7 +363,8 @@ function makeTableInputs(setting) {
_.each(setting.columns, function(col) {
html += "<td class='" + Settings.DATA_COL_CLASS + "'name='" + col.name + "'>\
<input type='text' class='form-control' placeholder='" + (col.placeholder ? col.placeholder : "") + "' value=''>\
<input type='text' class='form-control' placeholder='" + (col.placeholder ? col.placeholder : "") + "'\
value='" + (col.default ? col.default : "") + "'>\
</td>"
})
@ -389,8 +390,9 @@ function badgeSidebarForDifferences(changedElement) {
// badge for any settings we have that are not the same or are not present in initialValues
for (var setting in panelJSON) {
if (!_.isEqual(panelJSON[setting], initialPanelJSON[setting])
&& (panelJSON[setting] !== "" || _.has(initialPanelJSON, setting))) {
if ((!_.has(initialPanelJSON, setting) && panelJSON[setting] !== "") ||
(!_.isEqual(panelJSON[setting], initialPanelJSON[setting])
&& (panelJSON[setting] !== "" || _.has(initialPanelJSON, setting)))) {
badgeValue += 1
}
}


@ -29,6 +29,7 @@
#include <LogUtils.h>
#include <PacketHeaders.h>
#include <SharedUtil.h>
#include <ShutdownEventListener.h>
#include <UUID.h>
#include "DomainServerNodeData.h"
@ -41,7 +42,6 @@ const QString ICE_SERVER_DEFAULT_HOSTNAME = "ice.highfidelity.io";
DomainServer::DomainServer(int argc, char* argv[]) :
QCoreApplication(argc, argv),
_shutdownEventListener(this),
_httpManager(DOMAIN_SERVER_HTTP_PORT, QString("%1/resources/web/").arg(QCoreApplication::applicationDirPath()), this),
_httpsManager(NULL),
_allAssignments(),
@ -70,8 +70,12 @@ DomainServer::DomainServer(int argc, char* argv[]) :
_settingsManager.setupConfigMap(arguments());
installNativeEventFilter(&_shutdownEventListener);
connect(&_shutdownEventListener, SIGNAL(receivedCloseEvent()), SLOT(quit()));
// setup a shutdown event listener to handle SIGTERM or WM_CLOSE for us
#ifdef _WIN32
installNativeEventFilter(&ShutdownEventListener::getInstance());
#else
ShutdownEventListener::getInstance();
#endif
qRegisterMetaType<DomainServerWebSessionData>("DomainServerWebSessionData");
qRegisterMetaTypeStreamOperators<DomainServerWebSessionData>("DomainServerWebSessionData");
@ -229,6 +233,9 @@ void DomainServer::setupNodeListAndAssignments(const QUuid& sessionUUID) {
parseAssignmentConfigs(parsedTypes);
populateDefaultStaticAssignmentsExcludingTypes(parsedTypes);
// check for scripts the user wants to persist from their domain-server config
populateStaticScriptedAssignmentsFromSettings();
LimitedNodeList* nodeList = LimitedNodeList::createInstance(domainServerPort, domainServerDTLSPort);
@ -447,8 +454,6 @@ void DomainServer::parseAssignmentConfigs(QSet<Assignment::Type>& excludedTypes)
if (assignmentType != Assignment::AgentType) {
createStaticAssignmentsForType(assignmentType, assignmentList);
} else {
createScriptedAssignmentsFromList(assignmentList);
}
excludedTypes.insert(assignmentType);
@ -464,35 +469,37 @@ void DomainServer::addStaticAssignmentToAssignmentHash(Assignment* newAssignment
_allAssignments.insert(newAssignment->getUUID(), SharedAssignmentPointer(newAssignment));
}
void DomainServer::createScriptedAssignmentsFromList(const QVariantList &configList) {
foreach(const QVariant& configVariant, configList) {
if (configVariant.canConvert(QMetaType::QVariantMap)) {
QVariantMap configMap = configVariant.toMap();
// make sure we were passed a URL, otherwise this is an invalid scripted assignment
const QString ASSIGNMENT_URL_KEY = "url";
QString assignmentURL = configMap[ASSIGNMENT_URL_KEY].toString();
if (!assignmentURL.isEmpty()) {
// check the json for a pool
const QString ASSIGNMENT_POOL_KEY = "pool";
QString assignmentPool = configMap[ASSIGNMENT_POOL_KEY].toString();
// check for a number of instances, if not passed then default is 1
const QString ASSIGNMENT_INSTANCES_KEY = "instances";
int numInstances = configMap[ASSIGNMENT_INSTANCES_KEY].toInt();
numInstances = (numInstances == 0 ? 1 : numInstances);
qDebug() << "Adding a static scripted assignment from" << assignmentURL;
for (int i = 0; i < numInstances; i++) {
void DomainServer::populateStaticScriptedAssignmentsFromSettings() {
const QString PERSISTENT_SCRIPTS_KEY_PATH = "scripts.persistent_scripts";
const QVariant* persistentScriptsVariant = valueForKeyPath(_settingsManager.getSettingsMap(), PERSISTENT_SCRIPTS_KEY_PATH);
if (persistentScriptsVariant) {
QVariantList persistentScriptsList = persistentScriptsVariant->toList();
foreach(const QVariant& persistentScriptVariant, persistentScriptsList) {
QVariantMap persistentScript = persistentScriptVariant.toMap();
const QString PERSISTENT_SCRIPT_URL_KEY = "url";
const QString PERSISTENT_SCRIPT_NUM_INSTANCES_KEY = "num_instances";
const QString PERSISTENT_SCRIPT_POOL_KEY = "pool";
if (persistentScript.contains(PERSISTENT_SCRIPT_URL_KEY)) {
// check how many instances of this script to add
int numInstances = persistentScript[PERSISTENT_SCRIPT_NUM_INSTANCES_KEY].toInt();
QString scriptURL = persistentScript[PERSISTENT_SCRIPT_URL_KEY].toString();
QString scriptPool = persistentScript.value(PERSISTENT_SCRIPT_POOL_KEY).toString();
qDebug() << "Adding" << numInstances << "of persistent script at URL" << scriptURL << "- pool" << scriptPool;
for (int i = 0; i < numInstances; ++i) {
// add a scripted assignment to the queue for this instance
Assignment* scriptAssignment = new Assignment(Assignment::CreateCommand,
Assignment::AgentType,
assignmentPool);
scriptAssignment->setPayload(assignmentURL.toUtf8());
// scripts passed on CL or via JSON are static - so they are added back to the queue if the node dies
scriptPool);
scriptAssignment->setPayload(scriptURL.toUtf8());
// add it to static hash so we know we have to keep giving it back out
addStaticAssignmentToAssignmentHash(scriptAssignment);
}
}
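// For reference, a sketch of the domain-server settings shape this function
// reads (the key names come from the constants above; the URL, count, and
// pool values here are purely illustrative):
//
//   "scripts": {
//       "persistent_scripts": [
//           { "url": "http://example.com/looping.js", "num_instances": 2, "pool": "" }
//       ]
//   }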
@ -542,7 +549,9 @@ void DomainServer::populateDefaultStaticAssignmentsExcludingTypes(const QSet<Ass
for (Assignment::Type defaultedType = Assignment::AudioMixerType;
defaultedType != Assignment::AllTypes;
defaultedType = static_cast<Assignment::Type>(static_cast<int>(defaultedType) + 1)) {
if (!excludedTypes.contains(defaultedType) && defaultedType != Assignment::AgentType) {
if (!excludedTypes.contains(defaultedType)
&& defaultedType != Assignment::UNUSED
&& defaultedType != Assignment::AgentType) {
// type has not been set from a command line or config file config, use the default
// by clearing whatever exists and writing a single default assignment with no payload
Assignment* newAssignment = new Assignment(Assignment::CreateCommand, (Assignment::Type) defaultedType);
@ -850,49 +859,48 @@ void DomainServer::sendDomainListToNode(const SharedNodePointer& node, const Hif
if (nodeData->isAuthenticated()) {
// if this authenticated node has any interest types, send back those nodes as well
foreach (const SharedNodePointer& otherNode, nodeList->getNodeHash()) {
nodeList->eachNode([&](const SharedNodePointer& otherNode){
// reset our nodeByteArray and nodeDataStream
QByteArray nodeByteArray;
QDataStream nodeDataStream(&nodeByteArray, QIODevice::Append);
if (otherNode->getUUID() != node->getUUID() && nodeInterestList.contains(otherNode->getType())) {
// don't send avatar nodes to other avatars, that will come from avatar mixer
nodeDataStream << *otherNode.data();
// pack the secret that these two nodes will use to communicate with each other
QUuid secretUUID = nodeData->getSessionSecretHash().value(otherNode->getUUID());
if (secretUUID.isNull()) {
// generate a new secret UUID these two nodes can use
secretUUID = QUuid::createUuid();
// set that on the current Node's sessionSecretHash
nodeData->getSessionSecretHash().insert(otherNode->getUUID(), secretUUID);
// set it on the other Node's sessionSecretHash
reinterpret_cast<DomainServerNodeData*>(otherNode->getLinkedData())
->getSessionSecretHash().insert(node->getUUID(), secretUUID);
}
nodeDataStream << secretUUID;
if (broadcastPacket.size() + nodeByteArray.size() > dataMTU) {
// we need to break here and start a new packet
// so send the current one
nodeList->writeDatagram(broadcastPacket, node, senderSockAddr);
// reset the broadcastPacket structure
broadcastPacket.resize(numBroadcastPacketLeadBytes);
broadcastDataStream.device()->seek(numBroadcastPacketLeadBytes);
}
// append the nodeByteArray to the current state of broadcastDataStream
broadcastPacket.append(nodeByteArray);
}
}
});
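// Packet-splitting sketch: each matching node is first serialized into its
// own nodeByteArray; if appending it would push broadcastPacket past dataMTU,
// the current packet is flushed to the node and reset to just its lead bytes
// before the staged bytes are appended. A single node's data is therefore
// never split across two packets.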
}
// always write the last broadcastPacket
@ -989,14 +997,14 @@ void DomainServer::readAvailableDatagrams() {
void DomainServer::setupPendingAssignmentCredits() {
// enumerate the NodeList to find the assigned nodes
foreach (const SharedNodePointer& node, LimitedNodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([&](const SharedNodePointer& node){
DomainServerNodeData* nodeData = reinterpret_cast<DomainServerNodeData*>(node->getLinkedData());
if (!nodeData->getAssignmentUUID().isNull() && !nodeData->getWalletUUID().isNull()) {
// check if we have a non-finalized transaction for this node to add this amount to
TransactionHash::iterator i = _pendingAssignmentCredits.find(nodeData->getWalletUUID());
WalletTransaction* existingTransaction = NULL;
while (i != _pendingAssignmentCredits.end() && i.key() == nodeData->getWalletUUID()) {
if (!i.value()->isFinalized()) {
existingTransaction = i.value();
@ -1005,16 +1013,16 @@ void DomainServer::setupPendingAssignmentCredits() {
++i;
}
}
qint64 elapsedMsecsSinceLastPayment = nodeData->getPaymentIntervalTimer().elapsed();
nodeData->getPaymentIntervalTimer().restart();
const float CREDITS_PER_HOUR = 0.10f;
const float CREDITS_PER_MSEC = CREDITS_PER_HOUR / (60 * 60 * 1000);
const int SATOSHIS_PER_MSEC = CREDITS_PER_MSEC * SATOSHIS_PER_CREDIT;
float pendingCredits = elapsedMsecsSinceLastPayment * SATOSHIS_PER_MSEC;
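// Worked example: 0.10 credits/hour over 60 * 60 * 1000 ms is roughly
// 2.78e-8 credits per millisecond, so one minute since the last payment
// accrues about 0.00167 credits, scaled to satoshis via SATOSHIS_PER_CREDIT.
// Note that SATOSHIS_PER_MSEC is an int, so the scaled rate is truncated.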
if (existingTransaction) {
existingTransaction->incrementAmount(pendingCredits);
} else {
@ -1023,7 +1031,7 @@ void DomainServer::setupPendingAssignmentCredits() {
_pendingAssignmentCredits.insert(nodeData->getWalletUUID(), freshTransaction);
}
}
}
});
}
void DomainServer::sendPendingTransactionsToServer() {
@ -1148,11 +1156,12 @@ void DomainServer::sendHeartbeatToDataServer(const QString& networkAddress) {
// add the number of currently connected agent users
int numConnectedAuthedUsers = 0;
foreach(const SharedNodePointer& node, LimitedNodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([&numConnectedAuthedUsers](const SharedNodePointer& node){
if (node->getLinkedData() && !static_cast<DomainServerNodeData*>(node->getLinkedData())->getUsername().isEmpty()) {
++numConnectedAuthedUsers;
}
}
});
const QString DOMAIN_HEARTBEAT_KEY = "heartbeat";
const QString HEARTBEAT_NUM_USERS_KEY = "num_users";
@ -1270,8 +1279,9 @@ void DomainServer::processDatagram(const QByteArray& receivedPacket, const HifiS
parseNodeDataFromByteArray(packetStream, throwawayNodeType, nodePublicAddress, nodeLocalAddress,
senderSockAddr);
SharedNodePointer checkInNode = nodeList->updateSocketsForNode(nodeUUID,
nodePublicAddress, nodeLocalAddress);
SharedNodePointer checkInNode = nodeList->nodeWithUUID(nodeUUID);
checkInNode->setPublicSocket(nodePublicAddress);
checkInNode->setLocalSocket(nodeLocalAddress);
// update last receive to now
quint64 timeNow = usecTimestampNow();
@ -1463,15 +1473,15 @@ bool DomainServer::handleHTTPRequest(HTTPConnection* connection, const QUrl& url
QJsonObject assignedNodesJSON;
// enumerate the NodeList to find the assigned nodes
foreach (const SharedNodePointer& node, LimitedNodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([this, &assignedNodesJSON](const SharedNodePointer& node){
DomainServerNodeData* nodeData = reinterpret_cast<DomainServerNodeData*>(node->getLinkedData());
if (!nodeData->getAssignmentUUID().isNull()) {
// add the node using the UUID as the key
QString uuidString = uuidStringWithoutCurlyBraces(nodeData->getAssignmentUUID());
assignedNodesJSON[uuidString] = jsonObjectForNode(node);
}
}
});
assignmentJSON["fulfilled"] = assignedNodesJSON;
@ -1525,12 +1535,10 @@ bool DomainServer::handleHTTPRequest(HTTPConnection* connection, const QUrl& url
QJsonArray nodesJSONArray;
// enumerate the NodeList to find the assigned nodes
LimitedNodeList* nodeList = LimitedNodeList::getInstance();
foreach (const SharedNodePointer& node, nodeList->getNodeHash()) {
LimitedNodeList::getInstance()->eachNode([this, &nodesJSONArray](const SharedNodePointer& node){
// add the node using the UUID as the key
nodesJSONArray.append(jsonObjectForNode(node));
}
});
rootJSON["nodes"] = nodesJSONArray;
@ -2058,16 +2066,9 @@ void DomainServer::addStaticAssignmentsToQueue() {
QHash<QUuid, SharedAssignmentPointer>::iterator staticAssignment = staticHashCopy.begin();
while (staticAssignment != staticHashCopy.end()) {
// add any of the un-matched static assignments to the queue
bool foundMatchingAssignment = false;
// enumerate the nodes and check if there is one with an attached assignment with matching UUID
foreach (const SharedNodePointer& node, LimitedNodeList::getInstance()->getNodeHash()) {
if (node->getUUID() == staticAssignment->data()->getUUID()) {
foundMatchingAssignment = true;
}
}
if (!foundMatchingAssignment) {
if (!NodeList::getInstance()->nodeWithUUID(staticAssignment->data()->getUUID())) {
// this assignment has not been fulfilled - reset the UUID and add it to the assignment queue
refreshStaticAssignmentAndAddToQueue(*staticAssignment);
}

View file

@ -27,7 +27,6 @@
#include "DomainServerSettingsManager.h"
#include "DomainServerWebSessionData.h"
#include "ShutdownEventListener.h"
#include "WalletTransaction.h"
#include "PendingAssignedNodeData.h"
@ -101,9 +100,9 @@ private:
void parseAssignmentConfigs(QSet<Assignment::Type>& excludedTypes);
void addStaticAssignmentToAssignmentHash(Assignment* newAssignment);
void createScriptedAssignmentsFromList(const QVariantList& configList);
void createStaticAssignmentsForType(Assignment::Type type, const QVariantList& configList);
void populateDefaultStaticAssignmentsExcludingTypes(const QSet<Assignment::Type>& excludedTypes);
void populateStaticScriptedAssignmentsFromSettings();
SharedAssignmentPointer matchingQueuedAssignmentForCheckIn(const QUuid& checkInUUID, NodeType_t nodeType);
SharedAssignmentPointer deployableAssignmentForRequest(const Assignment& requestAssignment);
@ -125,8 +124,6 @@ private:
QJsonObject jsonForSocket(const HifiSockAddr& socket);
QJsonObject jsonObjectForNode(const SharedNodePointer& node);
ShutdownEventListener _shutdownEventListener;
HTTPManager _httpManager;
HTTPSManager* _httpsManager;

View file

@ -1,844 +0,0 @@
//
// audioReflectorTools.js
// hifi
//
// Created by Brad Hefta-Gaub on 2/14/14.
// Copyright (c) 2014 HighFidelity, Inc. All rights reserved.
//
// Tools for manipulating the attributes of the AudioReflector behavior
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
Script.include("libraries/globals.js");
var delayScale = 100.0;
var fanoutScale = 10.0;
var speedScale = 20;
var factorScale = 5.0;
var localFactorScale = 1.0;
var reflectiveScale = 100.0;
var diffusionScale = 100.0;
var absorptionScale = 100.0;
var combFilterScale = 50.0;
var originalScale = 2.0;
var echoesScale = 2.0;
// these three properties are bound together, if you change one, the others will also change
var reflectiveRatio = AudioReflector.getReflectiveRatio();
var diffusionRatio = AudioReflector.getDiffusionRatio();
var absorptionRatio = AudioReflector.getAbsorptionRatio();
var reflectiveThumbX;
var diffusionThumbX;
var absorptionThumbX;
function setReflectiveRatio(reflective) {
var total = diffusionRatio + absorptionRatio + (reflective / reflectiveScale);
diffusionRatio = diffusionRatio / total;
absorptionRatio = absorptionRatio / total;
reflectiveRatio = (reflective / reflectiveScale) / total;
updateRatioValues();
}
function setDiffusionRatio(diffusion) {
var total = (diffusion / diffusionScale) + absorptionRatio + reflectiveRatio;
diffusionRatio = (diffusion / diffusionScale) / total;
absorptionRatio = absorptionRatio / total;
reflectiveRatio = reflectiveRatio / total;
updateRatioValues();
}
function setAbsorptionRatio(absorption) {
var total = diffusionRatio + (absorption / absorptionScale) + reflectiveRatio;
diffusionRatio = diffusionRatio / total;
absorptionRatio = (absorption / absorptionScale) / total;
reflectiveRatio = reflectiveRatio / total;
updateRatioValues();
}
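// A quick worked example of the renormalization above (hypothetical values):
// suppose diffusionRatio = 0.25, absorptionRatio = 0.25, and the reflective
// slider reports 75 on its 0..100 scale (0.75 after dividing by reflectiveScale).
// Then total = 0.25 + 0.25 + 0.75 = 1.25, and the setters store
// 0.25 / 1.25 = 0.2, 0.25 / 1.25 = 0.2, and 0.75 / 1.25 = 0.6 --
// the three ratios always sum to 1 after any single change.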
function updateRatioSliders() {
reflectiveThumbX = reflectiveMinThumbX + ((reflectiveMaxThumbX - reflectiveMinThumbX) * reflectiveRatio);
diffusionThumbX = diffusionMinThumbX + ((diffusionMaxThumbX - diffusionMinThumbX) * diffusionRatio);
absorptionThumbX = absorptionMinThumbX + ((absorptionMaxThumbX - absorptionMinThumbX) * absorptionRatio);
Overlays.editOverlay(reflectiveThumb, { x: reflectiveThumbX } );
Overlays.editOverlay(diffusionThumb, { x: diffusionThumbX } );
Overlays.editOverlay(absorptionThumb, { x: absorptionThumbX } );
}
function updateRatioValues() {
AudioReflector.setReflectiveRatio(reflectiveRatio);
AudioReflector.setDiffusionRatio(diffusionRatio);
AudioReflector.setAbsorptionRatio(absorptionRatio);
}
var topY = 250;
var sliderHeight = 35;
var delayY = topY;
topY += sliderHeight;
var delayLabel = Overlays.addOverlay("text", {
x: 40,
y: delayY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 12,
leftMargin: 5,
text: "Delay:"
});
var delaySlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: delayY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var delayMinThumbX = 110;
var delayMaxThumbX = delayMinThumbX + 110;
var delayThumbX = delayMinThumbX + ((delayMaxThumbX - delayMinThumbX) * (AudioReflector.getPreDelay() / delayScale));
var delayThumb = Overlays.addOverlay("image", {
x: delayThumbX,
y: delayY + 9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 255, green: 0, blue: 0},
alpha: 1
});
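// Every slider below maps its value to a thumb position with the same linear
// interpolation, and the mouse handlers later in this script invert it. A
// sketch of the two directions (these helper names are illustrative only and
// are not used elsewhere in the script):
function valueToThumbX(value, scale, minThumbX, maxThumbX) {
    return minThumbX + (maxThumbX - minThumbX) * (value / scale);
}
function thumbXToValue(thumbX, scale, minThumbX, maxThumbX) {
    return ((thumbX - minThumbX) / (maxThumbX - minThumbX)) * scale;
}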
var fanoutY = topY;
topY += sliderHeight;
var fanoutLabel = Overlays.addOverlay("text", {
x: 40,
y: fanoutY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 12,
leftMargin: 5,
text: "Fanout:"
});
var fanoutSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: fanoutY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var fanoutMinThumbX = 110;
var fanoutMaxThumbX = fanoutMinThumbX + 110;
var fanoutThumbX = fanoutMinThumbX + ((fanoutMaxThumbX - fanoutMinThumbX) * (AudioReflector.getDiffusionFanout() / fanoutScale));
var fanoutThumb = Overlays.addOverlay("image", {
x: fanoutThumbX,
y: fanoutY + 9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 255, green: 255, blue: 0},
alpha: 1
});
var speedY = topY;
topY += sliderHeight;
var speedLabel = Overlays.addOverlay("text", {
x: 40,
y: speedY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Speed\nin ms/m:"
});
var speedSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: speedY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var speedMinThumbX = 110;
var speedMaxThumbX = speedMinThumbX + 110;
var speedThumbX = speedMinThumbX + ((speedMaxThumbX - speedMinThumbX) * (AudioReflector.getSoundMsPerMeter() / speedScale));
var speedThumb = Overlays.addOverlay("image", {
x: speedThumbX,
y: speedY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 0, green: 255, blue: 0},
alpha: 1
});
var factorY = topY;
topY += sliderHeight;
var factorLabel = Overlays.addOverlay("text", {
x: 40,
y: factorY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Attenuation\nFactor:"
});
var factorSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: factorY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var factorMinThumbX = 110;
var factorMaxThumbX = factorMinThumbX + 110;
var factorThumbX = factorMinThumbX + ((factorMaxThumbX - factorMinThumbX) * (AudioReflector.getDistanceAttenuationScalingFactor() / factorScale));
var factorThumb = Overlays.addOverlay("image", {
x: factorThumbX,
y: factorY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 0, green: 0, blue: 255},
alpha: 1
});
var localFactorY = topY;
topY += sliderHeight;
var localFactorLabel = Overlays.addOverlay("text", {
x: 40,
y: localFactorY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Local\nFactor:"
});
var localFactorSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: localFactorY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var localFactorMinThumbX = 110;
var localFactorMaxThumbX = localFactorMinThumbX + 110;
var localFactorThumbX = localFactorMinThumbX + ((localFactorMaxThumbX - localFactorMinThumbX) * (AudioReflector.getLocalAudioAttenuationFactor() / localFactorScale));
var localFactorThumb = Overlays.addOverlay("image", {
x: localFactorThumbX,
y: localFactorY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 0, green: 128, blue: 128},
alpha: 1
});
var combFilterY = topY;
topY += sliderHeight;
var combFilterLabel = Overlays.addOverlay("text", {
x: 40,
y: combFilterY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Comb Filter\nWindow:"
});
var combFilterSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: combFilterY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var combFilterMinThumbX = 110;
var combFilterMaxThumbX = combFilterMinThumbX + 110;
var combFilterThumbX = combFilterMinThumbX + ((combFilterMaxThumbX - combFilterMinThumbX) * (AudioReflector.getCombFilterWindow() / combFilterScale));
var combFilterThumb = Overlays.addOverlay("image", {
x: combFilterThumbX,
y: combFilterY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 128, green: 128, blue: 0},
alpha: 1
});
var reflectiveY = topY;
topY += sliderHeight;
var reflectiveLabel = Overlays.addOverlay("text", {
x: 40,
y: reflectiveY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Reflective\nRatio:"
});
var reflectiveSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: reflectiveY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var reflectiveMinThumbX = 110;
var reflectiveMaxThumbX = reflectiveMinThumbX + 110;
reflectiveThumbX = reflectiveMinThumbX + ((reflectiveMaxThumbX - reflectiveMinThumbX) * AudioReflector.getReflectiveRatio());
var reflectiveThumb = Overlays.addOverlay("image", {
x: reflectiveThumbX,
y: reflectiveY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var diffusionY = topY;
topY += sliderHeight;
var diffusionLabel = Overlays.addOverlay("text", {
x: 40,
y: diffusionY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Diffusion\nRatio:"
});
var diffusionSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: diffusionY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var diffusionMinThumbX = 110;
var diffusionMaxThumbX = diffusionMinThumbX + 110;
diffusionThumbX = diffusionMinThumbX + ((diffusionMaxThumbX - diffusionMinThumbX) * AudioReflector.getDiffusionRatio());
var diffusionThumb = Overlays.addOverlay("image", {
x: diffusionThumbX,
y: diffusionY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 0, green: 255, blue: 255},
alpha: 1
});
var absorptionY = topY;
topY += sliderHeight;
var absorptionLabel = Overlays.addOverlay("text", {
x: 40,
y: absorptionY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Absorption\nRatio:"
});
var absorptionSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: absorptionY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var absorptionMinThumbX = 110;
var absorptionMaxThumbX = absorptionMinThumbX + 110;
absorptionThumbX = absorptionMinThumbX + ((absorptionMaxThumbX - absorptionMinThumbX) * AudioReflector.getAbsorptionRatio());
var absorptionThumb = Overlays.addOverlay("image", {
x: absorptionThumbX,
y: absorptionY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 255, green: 0, blue: 255},
alpha: 1
});
var originalY = topY;
topY += sliderHeight;
var originalLabel = Overlays.addOverlay("text", {
x: 40,
y: originalY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Original\nMix:"
});
var originalSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: originalY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var originalMinThumbX = 110;
var originalMaxThumbX = originalMinThumbX + 110;
var originalThumbX = originalMinThumbX + ((originalMaxThumbX - originalMinThumbX) * (AudioReflector.getOriginalSourceAttenuation() / originalScale));
var originalThumb = Overlays.addOverlay("image", {
x: originalThumbX,
y: originalY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 128, green: 128, blue: 0},
alpha: 1
});
var echoesY = topY;
topY += sliderHeight;
var echoesLabel = Overlays.addOverlay("text", {
x: 40,
y: echoesY,
width: 60,
height: sliderHeight,
color: { red: 0, green: 0, blue: 0},
textColor: { red: 255, green: 255, blue: 255},
topMargin: 6,
leftMargin: 5,
text: "Echoes\nMix:"
});
var echoesSlider = Overlays.addOverlay("image", {
// alternate form of expressing bounds
bounds: { x: 100, y: echoesY, width: 150, height: sliderHeight},
subImage: { x: 46, y: 0, width: 200, height: 71 },
imageURL: HIFI_PUBLIC_BUCKET + "images/slider.png",
color: { red: 255, green: 255, blue: 255},
alpha: 1
});
var echoesMinThumbX = 110;
var echoesMaxThumbX = echoesMinThumbX + 110;
var echoesThumbX = echoesMinThumbX + ((echoesMaxThumbX - echoesMinThumbX) * (AudioReflector.getEchoesAttenuation() / echoesScale));
var echoesThumb = Overlays.addOverlay("image", {
x: echoesThumbX,
y: echoesY+9,
width: 18,
height: 17,
imageURL: HIFI_PUBLIC_BUCKET + "images/thumb.png",
color: { red: 128, green: 128, blue: 0},
alpha: 1
});
// When our script shuts down, we should clean up all of our overlays
function scriptEnding() {
Overlays.deleteOverlay(factorLabel);
Overlays.deleteOverlay(factorThumb);
Overlays.deleteOverlay(factorSlider);
Overlays.deleteOverlay(combFilterLabel);
Overlays.deleteOverlay(combFilterThumb);
Overlays.deleteOverlay(combFilterSlider);
Overlays.deleteOverlay(localFactorLabel);
Overlays.deleteOverlay(localFactorThumb);
Overlays.deleteOverlay(localFactorSlider);
Overlays.deleteOverlay(speedLabel);
Overlays.deleteOverlay(speedThumb);
Overlays.deleteOverlay(speedSlider);
Overlays.deleteOverlay(delayLabel);
Overlays.deleteOverlay(delayThumb);
Overlays.deleteOverlay(delaySlider);
Overlays.deleteOverlay(fanoutLabel);
Overlays.deleteOverlay(fanoutThumb);
Overlays.deleteOverlay(fanoutSlider);
Overlays.deleteOverlay(reflectiveLabel);
Overlays.deleteOverlay(reflectiveThumb);
Overlays.deleteOverlay(reflectiveSlider);
Overlays.deleteOverlay(diffusionLabel);
Overlays.deleteOverlay(diffusionThumb);
Overlays.deleteOverlay(diffusionSlider);
Overlays.deleteOverlay(absorptionLabel);
Overlays.deleteOverlay(absorptionThumb);
Overlays.deleteOverlay(absorptionSlider);
Overlays.deleteOverlay(echoesLabel);
Overlays.deleteOverlay(echoesThumb);
Overlays.deleteOverlay(echoesSlider);
Overlays.deleteOverlay(originalLabel);
Overlays.deleteOverlay(originalThumb);
Overlays.deleteOverlay(originalSlider);
}
Script.scriptEnding.connect(scriptEnding);
var count = 0;
// Our update() function is called at approximately 60fps, and we will use it to animate our various overlays
function update(deltaTime) {
count++;
}
Script.update.connect(update);
// The slider is handled in the mouse event callbacks.
var movingSliderDelay = false;
var movingSliderFanout = false;
var movingSliderSpeed = false;
var movingSliderFactor = false;
var movingSliderCombFilter = false;
var movingSliderLocalFactor = false;
var movingSliderReflective = false;
var movingSliderDiffusion = false;
var movingSliderAbsorption = false;
var movingSliderOriginal = false;
var movingSliderEchoes = false;
var thumbClickOffsetX = 0;
var newThumbX = 0; // declared here; shared by mouseMoveEvent and mouseReleaseEvent
function mouseMoveEvent(event) {
if (movingSliderDelay) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < delayMinThumbX) {
newThumbX = delayMinThumbX;
}
if (newThumbX > delayMaxThumbX) {
newThumbX = delayMaxThumbX;
}
Overlays.editOverlay(delayThumb, { x: newThumbX } );
var delay = ((newThumbX - delayMinThumbX) / (delayMaxThumbX - delayMinThumbX)) * delayScale;
AudioReflector.setPreDelay(delay);
}
if (movingSliderFanout) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < fanoutMinThumbX) {
newThumbX = fanoutMinThumbX;
}
if (newThumbX > fanoutMaxThumbX) {
newThumbX = fanoutMaxThumbX;
}
Overlays.editOverlay(fanoutThumb, { x: newThumbX } );
var fanout = Math.round(((newThumbX - fanoutMinThumbX) / (fanoutMaxThumbX - fanoutMinThumbX)) * fanoutScale);
AudioReflector.setDiffusionFanout(fanout);
}
if (movingSliderSpeed) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < speedMinThumbX) {
newThumbX = speedMinThumbX;
}
if (newThumbX > speedMaxThumbX) {
newThumbX = speedMaxThumbX;
}
Overlays.editOverlay(speedThumb, { x: newThumbX } );
var speed = ((newThumbX - speedMinThumbX) / (speedMaxThumbX - speedMinThumbX)) * speedScale;
AudioReflector.setSoundMsPerMeter(speed);
}
if (movingSliderFactor) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < factorMinThumbX) {
newThumbX = factorMinThumbX;
}
if (newThumbX > factorMaxThumbX) {
newThumbX = factorMaxThumbX;
}
Overlays.editOverlay(factorThumb, { x: newThumbX } );
var factor = ((newThumbX - factorMinThumbX) / (factorMaxThumbX - factorMinThumbX)) * factorScale;
AudioReflector.setDistanceAttenuationScalingFactor(factor);
}
if (movingSliderCombFilter) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < combFilterMinThumbX) {
newThumbX = combFilterMinThumbX;
}
if (newThumbX > combFilterMaxThumbX) {
newThumbX = combFilterMaxThumbX;
}
Overlays.editOverlay(combFilterThumb, { x: newThumbX } );
var combFilter = ((newThumbX - combFilterMinThumbX) / (combFilterMaxThumbX - combFilterMinThumbX)) * combFilterScale;
AudioReflector.setCombFilterWindow(combFilter);
}
if (movingSliderLocalFactor) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < localFactorMinThumbX) {
newThumbX = localFactorMinThumbX;
}
if (newThumbX > localFactorMaxThumbX) {
newThumbX = localFactorMaxThumbX;
}
Overlays.editOverlay(localFactorThumb, { x: newThumbX } );
var localFactor = ((newThumbX - localFactorMinThumbX) / (localFactorMaxThumbX - localFactorMinThumbX)) * localFactorScale;
AudioReflector.setLocalAudioAttenuationFactor(localFactor);
}
if (movingSliderAbsorption) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < absorptionMinThumbX) {
newThumbX = absorptionMinThumbX;
}
if (newThumbX > absorptionMaxThumbX) {
newThumbX = absorptionMaxThumbX;
}
Overlays.editOverlay(absorptionThumb, { x: newThumbX } );
var absorption = ((newThumbX - absorptionMinThumbX) / (absorptionMaxThumbX - absorptionMinThumbX)) * absorptionScale;
setAbsorptionRatio(absorption);
}
if (movingSliderReflective) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < reflectiveMinThumbX) {
newThumbX = reflectiveMinThumbX;
}
if (newThumbX > reflectiveMaxThumbX) {
newThumbX = reflectiveMaxThumbX;
}
Overlays.editOverlay(reflectiveThumb, { x: newThumbX } );
var reflective = ((newThumbX - reflectiveMinThumbX) / (reflectiveMaxThumbX - reflectiveMinThumbX)) * reflectiveScale;
setReflectiveRatio(reflective);
}
if (movingSliderDiffusion) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < diffusionMinThumbX) {
newThumbX = diffusionMinThumbX;
}
if (newThumbX > diffusionMaxThumbX) {
newThumbX = diffusionMaxThumbX;
}
Overlays.editOverlay(diffusionThumb, { x: newThumbX } );
var diffusion = ((newThumbX - diffusionMinThumbX) / (diffusionMaxThumbX - diffusionMinThumbX)) * diffusionScale;
setDiffusionRatio(diffusion);
}
if (movingSliderEchoes) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < echoesMinThumbX) {
newThumbX = echoesMinThumbX;
}
if (newThumbX > echoesMaxThumbX) {
newThumbX = echoesMaxThumbX;
}
Overlays.editOverlay(echoesThumb, { x: newThumbX } );
var echoes = ((newThumbX - echoesMinThumbX) / (echoesMaxThumbX - echoesMinThumbX)) * echoesScale;
AudioReflector.setEchoesAttenuation(echoes);
}
if (movingSliderOriginal) {
newThumbX = event.x - thumbClickOffsetX;
if (newThumbX < originalMinThumbX) {
newThumbX = originalMinThumbX;
}
if (newThumbX > originalMaxThumbX) {
newThumbX = originalMaxThumbX;
}
Overlays.editOverlay(originalThumb, { x: newThumbX } );
var original = ((newThumbX - originalMinThumbX) / (originalMaxThumbX - originalMinThumbX)) * originalScale;
AudioReflector.setOriginalSourceAttenuation(original);
}
}
// we also handle click detection in our mousePressEvent()
function mousePressEvent(event) {
var clickedOverlay = Overlays.getOverlayAtPoint({x: event.x, y: event.y});
if (clickedOverlay == delayThumb) {
movingSliderDelay = true;
thumbClickOffsetX = event.x - delayThumbX;
}
if (clickedOverlay == fanoutThumb) {
movingSliderFanout = true;
thumbClickOffsetX = event.x - fanoutThumbX;
}
if (clickedOverlay == speedThumb) {
movingSliderSpeed = true;
thumbClickOffsetX = event.x - speedThumbX;
}
if (clickedOverlay == factorThumb) {
movingSliderFactor = true;
thumbClickOffsetX = event.x - factorThumbX;
}
if (clickedOverlay == localFactorThumb) {
movingSliderLocalFactor = true;
thumbClickOffsetX = event.x - localFactorThumbX;
}
if (clickedOverlay == combFilterThumb) {
movingSliderCombFilter = true;
thumbClickOffsetX = event.x - combFilterThumbX;
}
if (clickedOverlay == diffusionThumb) {
movingSliderDiffusion = true;
thumbClickOffsetX = event.x - diffusionThumbX;
}
if (clickedOverlay == absorptionThumb) {
movingSliderAbsorption = true;
thumbClickOffsetX = event.x - absorptionThumbX;
}
if (clickedOverlay == reflectiveThumb) {
movingSliderReflective = true;
thumbClickOffsetX = event.x - reflectiveThumbX;
}
if (clickedOverlay == originalThumb) {
movingSliderOriginal = true;
thumbClickOffsetX = event.x - originalThumbX;
}
if (clickedOverlay == echoesThumb) {
movingSliderEchoes = true;
thumbClickOffsetX = event.x - echoesThumbX;
}
}
function mouseReleaseEvent(event) {
if (movingSliderDelay) {
movingSliderDelay = false;
var delay = ((newThumbX - delayMinThumbX) / (delayMaxThumbX - delayMinThumbX)) * delayScale;
AudioReflector.setPreDelay(delay);
delayThumbX = newThumbX;
}
if (movingSliderFanout) {
movingSliderFanout = false;
var fanout = Math.round(((newThumbX - fanoutMinThumbX) / (fanoutMaxThumbX - fanoutMinThumbX)) * fanoutScale);
AudioReflector.setDiffusionFanout(fanout);
fanoutThumbX = newThumbX;
}
if (movingSliderSpeed) {
movingSliderSpeed = false;
var speed = ((newThumbX - speedMinThumbX) / (speedMaxThumbX - speedMinThumbX)) * speedScale;
AudioReflector.setSoundMsPerMeter(speed);
speedThumbX = newThumbX;
}
if (movingSliderFactor) {
movingSliderFactor = false;
var factor = ((newThumbX - factorMinThumbX) / (factorMaxThumbX - factorMinThumbX)) * factorScale;
AudioReflector.setDistanceAttenuationScalingFactor(factor);
factorThumbX = newThumbX;
}
if (movingSliderCombFilter) {
movingSliderCombFilter = false;
var combFilter = ((newThumbX - combFilterMinThumbX) / (combFilterMaxThumbX - combFilterMinThumbX)) * combFilterScale;
AudioReflector.setCombFilterWindow(combFilter);
combFilterThumbX = newThumbX;
}
if (movingSliderLocalFactor) {
movingSliderLocalFactor = false;
var localFactor = ((newThumbX - localFactorMinThumbX) / (localFactorMaxThumbX - localFactorMinThumbX)) * localFactorScale;
AudioReflector.setLocalAudioAttenuationFactor(localFactor);
localFactorThumbX = newThumbX;
}
if (movingSliderReflective) {
movingSliderReflective = false;
var reflective = ((newThumbX - reflectiveMinThumbX) / (reflectiveMaxThumbX - reflectiveMinThumbX)) * reflectiveScale;
setReflectiveRatio(reflective);
reflectiveThumbX = newThumbX;
updateRatioSliders();
}
if (movingSliderDiffusion) {
movingSliderDiffusion = false;
var diffusion = ((newThumbX - diffusionMinThumbX) / (diffusionMaxThumbX - diffusionMinThumbX)) * diffusionScale;
setDiffusionRatio(diffusion);
diffusionThumbX = newThumbX;
updateRatioSliders();
}
if (movingSliderAbsorption) {
movingSliderAbsorption = false;
var absorption = ((newThumbX - absorptionMinThumbX) / (absorptionMaxThumbX - absorptionMinThumbX)) * absorptionScale;
setAbsorptionRatio(absorption);
absorptionThumbX = newThumbX;
updateRatioSliders();
}
if (movingSliderEchoes) {
movingSliderEchoes = false;
var echoes = ((newThumbX - echoesMinThumbX) / (echoesMaxThumbX - echoesMinThumbX)) * echoesScale;
AudioReflector.setEchoesAttenuation(echoes);
echoesThumbX = newThumbX;
}
if (movingSliderOriginal) {
movingSliderOriginal = false;
var original = ((newThumbX - originalMinThumbX) / (originalMaxThumbX - originalMinThumbX)) * originalScale;
AudioReflector.setOriginalSourceAttenuation(original);
originalThumbX = newThumbX;
}
}
Controller.mouseMoveEvent.connect(mouseMoveEvent);
Controller.mousePressEvent.connect(mousePressEvent);
Controller.mouseReleaseEvent.connect(mouseReleaseEvent);

View file

@ -22,7 +22,7 @@ var velocity = {
y: 0,
z: 1 };
var damping = 0.1;
var damping = 0;
var color = {
red: 255,

View file

@ -9,9 +9,10 @@
//
Script.load("lookWithTouch.js");
Script.load("editVoxels.js");
Script.load("editModels.js");
Script.load("editEntities.js");
Script.load("selectAudioDevice.js");
Script.load("hydraMove.js");
Script.load("headMove.js");
Script.load("inspect.js");
Script.load("lobby.js");
Script.load("notifications.js");

View file

@ -23,12 +23,8 @@ function setupMenus() {
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Display Model Element Bounds", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Display Model Element Children", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Don't Do Precision Picking", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Don't Attempt to Reduce Material Switches", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Don't Attempt Render Entities as Scene", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Don't Do Precision Picking", isCheckable: true, isChecked: false });
Menu.addMenu("Developer > Entities > Culling");
Menu.addMenuItem({ menuName: "Developer > Entities > Culling", menuItemName: "Don't Cull Out Of View Mesh Parts", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities > Culling", menuItemName: "Don't Cull Too Small Mesh Parts", isCheckable: true, isChecked: false });
Menu.addMenuItem({ menuName: "Developer > Entities", menuItemName: "Disable Light Entities", isCheckable: true, isChecked: false });
}
}

View file

@ -159,8 +159,8 @@ var toolBar = (function () {
visible: false
});
menuItemWidth = Math.max(Overlays.textWidth(loadURLMenuItem, "Model URL"),
Overlays.textWidth(loadFileMenuItem, "Model File")) + 20;
menuItemWidth = Math.max(Overlays.textSize(loadURLMenuItem, "Model URL").width,
Overlays.textSize(loadFileMenuItem, "Model File").width) + 20;
Overlays.editOverlay(loadURLMenuItem, { width: menuItemWidth });
Overlays.editOverlay(loadFileMenuItem, { width: menuItemWidth });
@ -839,9 +839,11 @@ Controller.keyReleaseEvent.connect(function (event) {
selectionDisplay.toggleSpaceMode();
} else if (event.text == "f") {
if (isActive) {
cameraManager.focus(selectionManager.worldPosition,
selectionManager.worldDimensions,
Menu.isOptionChecked(MENU_EASE_ON_FOCUS));
if (selectionManager.hasSelection()) {
cameraManager.focus(selectionManager.worldPosition,
selectionManager.worldDimensions,
Menu.isOptionChecked(MENU_EASE_ON_FOCUS));
}
}
} else if (event.text == '[') {
if (isActive) {
@ -1001,7 +1003,9 @@ PropertiesTool = function(opts) {
type: 'update',
};
if (selectionManager.hasSelection()) {
data.id = selectionManager.selections[0].id;
data.properties = Entities.getEntityProperties(selectionManager.selections[0]);
data.properties.rotation = Quat.safeEulerAngles(data.properties.rotation);
}
webView.eventBridge.emitScriptEvent(JSON.stringify(data));
});
@ -1010,8 +1014,59 @@ PropertiesTool = function(opts) {
print(data);
data = JSON.parse(data);
if (data.type == "update") {
selectionManager.saveProperties();
if (data.properties.rotation !== undefined) {
var rotation = data.properties.rotation;
data.properties.rotation = Quat.fromPitchYawRollDegrees(rotation.x, rotation.y, rotation.z);
}
Entities.editEntity(selectionManager.selections[0], data.properties);
pushCommandForSelections();
selectionManager._update();
} else if (data.type == "action") {
if (data.action == "moveSelectionToGrid") {
if (selectionManager.hasSelection()) {
selectionManager.saveProperties();
var dY = grid.getOrigin().y - (selectionManager.worldPosition.y - selectionManager.worldDimensions.y / 2);
var diff = { x: 0, y: dY, z: 0 };
for (var i = 0; i < selectionManager.selections.length; i++) {
var properties = selectionManager.savedProperties[selectionManager.selections[i].id];
var newPosition = Vec3.sum(properties.position, diff);
Entities.editEntity(selectionManager.selections[i], {
position: newPosition,
});
}
pushCommandForSelections();
selectionManager._update();
}
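// The dY above measures from the selection's bottom face
// (worldPosition.y - worldDimensions.y / 2) to the grid origin, so applying
// diff to every entity's saved position sets the selection down flush on the
// grid plane.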
} else if (data.action == "moveAllToGrid") {
if (selectionManager.hasSelection()) {
selectionManager.saveProperties();
for (var i = 0; i < selectionManager.selections.length; i++) {
var properties = selectionManager.savedProperties[selectionManager.selections[i].id];
var bottomY = properties.boundingBox.center.y - properties.boundingBox.dimensions.y / 2;
var dY = grid.getOrigin().y - bottomY;
var diff = { x: 0, y: dY, z: 0 };
var newPosition = Vec3.sum(properties.position, diff);
Entities.editEntity(selectionManager.selections[i], {
position: newPosition,
});
}
pushCommandForSelections();
selectionManager._update();
}
} else if (data.action == "resetToNaturalDimensions") {
if (selectionManager.hasSelection()) {
selectionManager.saveProperties();
for (var i = 0; i < selectionManager.selections.length; i++) {
var properties = selectionManager.savedProperties[selectionManager.selections[i].id];
Entities.editEntity(selectionManager.selections[i], {
dimensions: properties.naturalDimensions,
});
}
pushCommandForSelections();
selectionManager._update();
}
}
}
});
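// Rotation crosses the event bridge as Euler angles: updates pushed to the
// web UI convert the entity's quaternion with Quat.safeEulerAngles, and edits
// coming back are converted with Quat.fromPitchYawRollDegrees before
// Entities.editEntity is called, so the HTML panel only ever deals in
// pitch/yaw/roll degrees.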

View file

@ -1191,8 +1191,8 @@ var toolBar = (function () {
visible: false
});
menuItemWidth = Math.max(Overlays.textWidth(loadURLMenuItem, "Model URL"),
Overlays.textWidth(loadFileMenuItem, "Model File")) + 20;
menuItemWidth = Math.max(Overlays.textSize(loadURLMenuItem, "Model URL").width,
Overlays.textSize(loadFileMenuItem, "Model File").width) + 20;
Overlays.editOverlay(loadURLMenuItem, { width: menuItemWidth });
Overlays.editOverlay(loadFileMenuItem, { width: menuItemWidth });

View file

@ -13,34 +13,7 @@
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
(function(){
this.oldColor = {};
this.oldColorKnown = false;
this.storeOldColor = function(entityID) {
var oldProperties = Entities.getEntityProperties(entityID);
this.oldColor = oldProperties.color;
this.oldColorKnown = true;
print("storing old color... this.oldColor=" + this.oldColor.red + "," + this.oldColor.green + "," + this.oldColor.blue);
};
this.preload = function(entityID) {
print("preload");
this.storeOldColor(entityID);
};
this.hoverEnterEntity = function(entityID, mouseEvent) {
print("hoverEnterEntity");
if (!this.oldColorKnown) {
this.storeOldColor(entityID);
}
Entities.editEntity(entityID, { color: { red: 0, green: 255, blue: 255} });
};
this.hoverLeaveEntity = function(entityID, mouseEvent) {
print("hoverLeaveEntity");
if (this.oldColorKnown) {
print("leave restoring old color... this.oldColor="
+ this.oldColor.red + "," + this.oldColor.green + "," + this.oldColor.blue);
Entities.editEntity(entityID, { color: this.oldColor });
}
};
})
(function() {
Script.include("changeColorOnHoverClass.js");
return new ChangeColorOnHover();
})

View file

@ -0,0 +1,55 @@
//
// changeColorOnHover.js
// examples/entityScripts
//
// Created by Brad Hefta-Gaub on 11/1/14.
// Copyright 2014 High Fidelity, Inc.
//
// This is an example of an entity script which, when assigned to a non-model entity like a box or sphere, will
// change the color of the entity when you hover over it. This script uses the JavaScript prototype/class
// functionality to construct an object that has methods for hoverEnterEntity and hoverLeaveEntity.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
ChangeColorOnHover = function(){
this.oldColor = {};
this.oldColorKnown = false;
};
ChangeColorOnHover.prototype = {
storeOldColor: function(entityID) {
var oldProperties = Entities.getEntityProperties(entityID);
this.oldColor = oldProperties.color;
this.oldColorKnown = true;
print("storing old color... this.oldColor=" + this.oldColor.red + "," + this.oldColor.green + "," + this.oldColor.blue);
},
preload: function(entityID) {
print("preload");
this.storeOldColor(entityID);
},
hoverEnterEntity: function(entityID, mouseEvent) {
print("hoverEnterEntity");
if (!this.oldColorKnown) {
this.storeOldColor(entityID);
}
Entities.editEntity(entityID, { color: { red: 0, green: 255, blue: 255} });
},
hoverLeaveEntity: function(entityID, mouseEvent) {
print("hoverLeaveEntity");
if (this.oldColorKnown) {
print("leave restoring old color... this.oldColor="
+ this.oldColor.red + "," + this.oldColor.green + "," + this.oldColor.blue);
Entities.editEntity(entityID, { color: this.oldColor });
}
}
};
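// Usage sketch, mirroring the entity-script wrapper shown earlier in this
// diff -- the wrapper includes this file and returns a fresh instance:
//
//   (function() {
//       Script.include("changeColorOnHoverClass.js");
//       return new ChangeColorOnHover();
//   })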

View file

@ -27,10 +27,12 @@
};
this.enterEntity = function(entityID) {
print("enterEntity("+entityID.id+")");
playSound();
};
this.leaveEntity = function(entityID) {
print("leaveEntity("+entityID.id+")");
playSound();
};
})

View file

@ -70,21 +70,29 @@
};
function loaded() {
var elID = document.getElementById("property-id");
var elType = document.getElementById("property-type");
var elLocked = document.getElementById("property-locked");
var elVisible = document.getElementById("property-visible");
var elPositionX = document.getElementById("property-pos-x");
var elPositionY = document.getElementById("property-pos-y");
var elPositionZ = document.getElementById("property-pos-z");
var elMoveSelectionToGrid = document.getElementById("move-selection-to-grid");
var elMoveAllToGrid = document.getElementById("move-all-to-grid");
var elDimensionsX = document.getElementById("property-dim-x");
var elDimensionsY = document.getElementById("property-dim-y");
var elDimensionsZ = document.getElementById("property-dim-z");
var elResetToNaturalDimensions = document.getElementById("reset-to-natural-dimensions");
var elRegistrationX = document.getElementById("property-reg-x");
var elRegistrationY = document.getElementById("property-reg-y");
var elRegistrationZ = document.getElementById("property-reg-z");
var elRotationX = document.getElementById("property-rot-x");
var elRotationY = document.getElementById("property-rot-y");
var elRotationZ = document.getElementById("property-rot-z");
var elLinearVelocityX = document.getElementById("property-lvel-x");
var elLinearVelocityY = document.getElementById("property-lvel-y");
var elLinearVelocityZ = document.getElementById("property-lvel-z");
@ -156,6 +164,8 @@
} else {
var properties = data.properties;
elID.innerHTML = data.id;
elType.innerHTML = properties.type;
elLocked.checked = properties.locked;
@ -181,6 +191,10 @@
elRegistrationY.value = properties.registrationPoint.y.toFixed(2);
elRegistrationZ.value = properties.registrationPoint.z.toFixed(2);
elRotationX.value = properties.rotation.x.toFixed(2);
elRotationY.value = properties.rotation.y.toFixed(2);
elRotationZ.value = properties.rotation.z.toFixed(2);
elLinearVelocityX.value = properties.velocity.x.toFixed(2);
elLinearVelocityY.value = properties.velocity.y.toFixed(2);
elLinearVelocityZ.value = properties.velocity.z.toFixed(2);
@ -302,6 +316,12 @@
elRegistrationY.addEventListener('change', registrationChangeFunction);
elRegistrationZ.addEventListener('change', registrationChangeFunction);
var rotationChangeFunction = createEmitVec3PropertyUpdateFunction(
'rotation', elRotationX, elRotationY, elRotationZ);
elRotationX.addEventListener('change', rotationChangeFunction);
elRotationY.addEventListener('change', rotationChangeFunction);
elRotationZ.addEventListener('change', rotationChangeFunction);
var velocityChangeFunction = createEmitVec3PropertyUpdateFunction(
'velocity', elLinearVelocityX, elLinearVelocityY, elLinearVelocityZ);
elLinearVelocityX.addEventListener('change', velocityChangeFunction);
@ -381,6 +401,25 @@
elTextBackgroundColorGreen.addEventListener('change', textBackgroundColorChangeFunction);
elTextBackgroundColorBlue.addEventListener('change', textBackgroundColorChangeFunction);
elMoveSelectionToGrid.addEventListener("click", function() {
EventBridge.emitWebEvent(JSON.stringify({
type: "action",
action: "moveSelectionToGrid",
}));
});
elMoveAllToGrid.addEventListener("click", function() {
EventBridge.emitWebEvent(JSON.stringify({
type: "action",
action: "moveAllToGrid",
}));
});
elResetToNaturalDimensions.addEventListener("click", function() {
EventBridge.emitWebEvent(JSON.stringify({
type: "action",
action: "resetToNaturalDimensions",
}));
});
var resizing = false;
var startX = 0;
@ -437,6 +476,14 @@
<col id="col-label">
<col>
</colgroup>
<tr>
<td class="label">
ID
</td>
<td>
<label id="property-id" class="selectable"></label>
</td>
</tr>
<tr>
<td class="label">
Type
@ -465,6 +512,10 @@
<div class="input-area">X <input class="coord" type='number' id="property-pos-x"></input></div>
<div class="input-area">Y <input class="coord" type='number' id="property-pos-y"></input></div>
<div class="input-area">Z <input class="coord" type='number' id="property-pos-z"></input></div>
<div>
<input type="button" id="move-selection-to-grid" value="Selection to Grid">
<input type="button" id="move-all-to-grid" value="All to Grid">
</div>
</td>
</tr>
@ -483,6 +534,18 @@
<div class="input-area">X <input class="coord" type='number' id="property-dim-x"></input></div>
<div class="input-area">Y <input class="coord" type='number' id="property-dim-y"></input></div>
<div class="input-area">Z <input class="coord" type='number' id="property-dim-z"></input></div>
<div>
<input type="button" id="reset-to-natural-dimensions" value="Reset to Natural Dimensions">
</div>
</td>
</tr>
<tr>
<td class="label">Rotation</td>
<td>
<div class="input-area">Pitch <input class="coord" type='number' id="property-rot-x"></input></div>
<div class="input-area">Yaw <input class="coord" type='number' id="property-rot-y"></input></div>
<div class="input-area">Roll <input class="coord" type='number' id="property-rot-z"></input></div>
</td>
</tr>

View file

@ -17,6 +17,17 @@ body {
user-select: none;
}
.selectable {
-webkit-touch-callout: text;
-webkit-user-select: text;
-khtml-user-select: text;
-moz-user-select: text;
-ms-user-select: text;
user-select: text;
cursor: text;
}
.color-box {
display: inline-block;
width: 20px;

View file

@ -5,19 +5,89 @@
// Created by Clément Brisset on 7/18/14.
// Copyright 2014 High Fidelity, Inc.
//
// If using Hydra controllers, pulling the triggers makes laser pointers emanate from the respective hands.
// If using a Leap Motion or similar to control your avatar's hands and fingers, pointing with your index fingers makes
// laser pointers emanate from the respective index fingers.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
var LEFT = 0;
var RIGHT = 1;
var LEFT_HAND_FLAG = 1;
var RIGHT_HAND_FLAG = 2;
var laserPointer = (function () {
function update() {
var state = ((Controller.getTriggerValue(LEFT) > 0.9) ? LEFT_HAND_FLAG : 0) +
((Controller.getTriggerValue(RIGHT) > 0.9) ? RIGHT_HAND_FLAG : 0);
MyAvatar.setHandState(state);
}
var NUM_FINGERS = 4, // Excluding thumb
fingers = [
[ "LeftHandIndex", "LeftHandMiddle", "LeftHandRing", "LeftHandPinky" ],
[ "RightHandIndex", "RightHandMiddle", "RightHandRing", "RightHandPinky" ]
];
Script.update.connect(update);
function isHandPointing(hand) {
var MINIMUM_TRIGGER_PULL = 0.9;
return Controller.getTriggerValue(hand) > MINIMUM_TRIGGER_PULL;
}
function isFingerPointing(hand) {
// Index finger is pointing if final two bones of middle, ring, and pinky fingers are > 90 degrees w.r.t. index finger
var pointing,
indexDirection,
otherDirection,
f;
pointing = true;
indexDirection = Vec3.subtract(
MyAvatar.getJointPosition(fingers[hand][0] + "4"),
MyAvatar.getJointPosition(fingers[hand][0] + "2")
);
for (f = 1; f < NUM_FINGERS; f += 1) {
otherDirection = Vec3.subtract(
MyAvatar.getJointPosition(fingers[hand][f] + "4"),
MyAvatar.getJointPosition(fingers[hand][f] + "2")
);
pointing = pointing && Vec3.dot(indexDirection, otherDirection) < 0;
}
return pointing;
}
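// The dot-product test works because dot(a, b) = |a| * |b| * cos(theta):
// a negative result means the angle between the two bone directions exceeds
// 90 degrees, i.e. that finger is curled back relative to the extended index.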
function update() {
var LEFT_HAND = 0,
RIGHT_HAND = 1,
LEFT_HAND_POINTING_FLAG = 1,
RIGHT_HAND_POINTING_FLAG = 2,
FINGER_POINTING_FLAG = 4,
handState;
handState = 0;
if (isHandPointing(LEFT_HAND)) {
handState += LEFT_HAND_POINTING_FLAG;
}
if (isHandPointing(RIGHT_HAND)) {
handState += RIGHT_HAND_POINTING_FLAG;
}
if (handState === 0) {
if (isFingerPointing(LEFT_HAND)) {
handState += LEFT_HAND_POINTING_FLAG;
}
if (isFingerPointing(RIGHT_HAND)) {
handState += RIGHT_HAND_POINTING_FLAG;
}
if (handState !== 0) {
handState += FINGER_POINTING_FLAG;
}
}
MyAvatar.setHandState(handState);
}
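// handState is a small bitmask: bit 1 = left hand pointing, bit 2 = right
// hand pointing, bit 4 = the pose came from finger curl rather than a trigger
// pull. For example, a value of 6 (2 + 4) means the right index finger is
// pointing; a receiver can test individual bits with, e.g., (handState & 2).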
return {
update: update
};
}());
Script.update.connect(laserPointer.update);

View file

@ -1,9 +1,3 @@
var SETTING_GRID_VISIBLE = 'gridVisible';
var SETTING_GRID_SNAP_TO_GRID = 'gridSnapToGrid';
var SETTING_GRID_MINOR_WIDTH= 'gridMinorWidth';
var SETTING_GRID_MAJOR_EVERY = 'gridMajorEvery';
var SETTING_GRID_COLOR = 'gridColor';
Grid = function(opts) {
var that = {};
@ -18,6 +12,9 @@ Grid = function(opts) {
var worldSize = 16384;
var minorGridWidth = 0.5;
var majorGridWidth = 1.5;
var snapToGrid = false;
var gridOverlay = Overlays.addOverlay("grid", {
@ -26,7 +23,7 @@ Grid = function(opts) {
color: { red: 0, green: 0, blue: 128 },
alpha: 1.0,
rotation: Quat.fromPitchYawRollDegrees(90, 0, 0),
minorGridSpacing: 0.1,
minorGridWidth: 0.1,
majorGridEvery: 2,
});
@ -43,37 +40,16 @@ Grid = function(opts) {
that.getSnapToGrid = function() { return snapToGrid; };
that.setEnabled = function(enabled) {
if (that.enabled != enabled) {
that.enabled = enabled;
if (enabled) {
if (selectionManager.hasSelection()) {
that.setPosition(selectionManager.getBottomPosition());
} else {
that.setPosition(MyAvatar.position);
}
}
updateGrid();
}
that.enabled = enabled;
updateGrid();
}
that.setVisible = function(visible, noUpdate) {
if (visible != that.visible) {
that.visible = visible;
updateGrid();
that.visible = visible;
updateGrid();
if (visible) {
if (selectionManager.hasSelection()) {
that.setPosition(selectionManager.getBottomPosition());
} else {
that.setPosition(MyAvatar.position);
}
}
if (!noUpdate) {
that.emitUpdate();
}
if (!noUpdate) {
that.emitUpdate();
}
}
@ -195,7 +171,7 @@ Grid = function(opts) {
Overlays.editOverlay(gridOverlay, {
position: { x: origin.y, y: origin.y, z: -origin.y },
visible: that.visible && that.enabled,
minorGridSpacing: minorGridSpacing,
minorGridWidth: minorGridSpacing,
majorGridEvery: majorGridEvery,
color: gridColor,
alpha: gridAlpha,
@ -205,43 +181,15 @@ Grid = function(opts) {
}
function cleanup() {
saveSettings();
Overlays.deleteOverlay(gridOverlay);
}
function loadSettings() {
that.setVisible(Settings.getValue(SETTING_GRID_VISIBLE) == "true", true);
snapToGrid = Settings.getValue(SETTING_GRID_SNAP_TO_GRID) == "true";
minorGridSpacing = parseFloat(Settings.getValue(SETTING_GRID_MINOR_WIDTH), 10);
majorGridEvery = parseInt(Settings.getValue(SETTING_GRID_MAJOR_EVERY), 10);
try {
var newColor = JSON.parse(Settings.getValue(SETTING_GRID_COLOR));
if (newColor.red !== undefined && newColor.green !== undefined && newColor.blue !== undefined) {
gridColor.red = newColor.red;
gridColor.green = newColor.green;
gridColor.blue = newColor.blue;
}
} catch (e) {
}
updateGrid();
}
function saveSettings() {
Settings.setValue(SETTING_GRID_VISIBLE, that.visible);
Settings.setValue(SETTING_GRID_SNAP_TO_GRID, snapToGrid);
Settings.setValue(SETTING_GRID_MINOR_WIDTH, minorGridSpacing);
Settings.setValue(SETTING_GRID_MAJOR_EVERY, majorGridEvery);
Settings.setValue(SETTING_GRID_COLOR, JSON.stringify(gridColor));
}
that.addListener = function(callback) {
that.onUpdate = callback;
}
Script.scriptEnding.connect(cleanup);
loadSettings();
updateGrid();
that.onUpdate = null;

View file

@ -274,19 +274,11 @@ modelUploader = (function () {
}
}
if (view.string(0, 18) === "Kaydara FBX Binary") {
previousNodeFilename = "";
index = 27;
while (index < view.byteLength - 39 && !EOF) {
parseBinaryFBX();
}
} else {
readTextFBX();
}
}
function readModel() {

View file

@ -13,8 +13,8 @@ Script.include("libraries/globals.js");
var panelWall = false;
var orbShell = false;
var reticle = false;
var descriptionText = false;
var showText = false;
// used for formatting the description text, in meters
var textWidth = 4;
@ -23,8 +23,6 @@ var numberOfLines = 2;
var textMargin = 0.0625;
var lineHeight = (textHeight - (2 * textMargin)) / numberOfLines;
var lastMouseMove = 0;
var IDLE_HOVER_TIME = 2000; // if you haven't moved the mouse in 2 seconds, and in HMD mode, then we use reticle for hover
var avatarStickPosition = {};
var orbNaturalExtentsMin = { x: -1.230354, y: -1.22077, z: -1.210487 };
@ -49,6 +47,10 @@ var ORB_SHIFT = { x: 0, y: -1.4, z: -0.8};
var HELMET_ATTACHMENT_URL = HIFI_PUBLIC_BUCKET + "models/attachments/IronManMaskOnly.fbx"
var LOBBY_PANEL_WALL_URL = HIFI_PUBLIC_BUCKET + "models/sets/Lobby/PanelWallForInterface.fbx";
var LOBBY_BLANK_PANEL_TEXTURE_URL = HIFI_PUBLIC_BUCKET + "models/sets/Lobby/Texture.jpg";
var LOBBY_SHELL_URL = HIFI_PUBLIC_BUCKET + "models/sets/Lobby/LobbyShellForInterface.fbx";
var droneSound = SoundCache.getSound(HIFI_PUBLIC_BUCKET + "sounds/Lobby/drone.stereo.raw")
var currentDrone = null;
@ -57,13 +59,6 @@ var elevatorSound = SoundCache.getSound(HIFI_PUBLIC_BUCKET + "sounds/Lobby/eleva
var currentMuzakInjector = null;
var currentSound = null;
var inOculusMode = false;
function reticlePosition() {
var RETICLE_DISTANCE = 1;
return Vec3.sum(Camera.position, Vec3.multiply(Quat.getFront(Camera.orientation), RETICLE_DISTANCE));
}
function textOverlayPosition() {
var TEXT_DISTANCE_OUT = 6;
var TEXT_DISTANCE_DOWN = -2;
@ -71,6 +66,21 @@ function textOverlayPosition() {
Vec3.multiply(Quat.getUp(Camera.orientation), TEXT_DISTANCE_DOWN));
}
var panelLocationOrder = [
7, 8, 9, 10, 11, 12, 13,
0, 1, 2, 3, 4, 5, 6,
14, 15, 16, 17, 18, 19, 20
];
// Location index is 0-based
function locationIndexToPanelIndex(locationIndex) {
return panelLocationOrder.indexOf(locationIndex) + 1;
}
// Panel index is 1-based
function panelIndexToLocationIndex(panelIndex) {
return panelLocationOrder[panelIndex - 1];
}
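// Round-trip sketch: panelLocationOrder[0] is 7, so
//   locationIndexToPanelIndex(7)  returns indexOf(7) + 1 == 1, and
//   panelIndexToLocationIndex(1)  returns panelLocationOrder[0] == 7.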
var MAX_NUM_PANELS = 21;
var DRONE_VOLUME = 0.3;
@ -85,14 +95,14 @@ function drawLobby() {
var orbPosition = Vec3.sum(Camera.position, Vec3.multiplyQbyV(towardsMe, ORB_SHIFT));
var panelWallProps = {
url: HIFI_PUBLIC_BUCKET + "models/sets/Lobby/Lobby_v8/forStephen1/PanelWall2.fbx",
url: LOBBY_PANEL_WALL_URL,
position: Vec3.sum(orbPosition, Vec3.multiplyQbyV(towardsMe, panelsCenterShift)),
rotation: towardsMe,
dimensions: panelsDimensions
};
var orbShellProps = {
url: HIFI_PUBLIC_BUCKET + "models/sets/Lobby/Lobby_v8/forStephen1/LobbyShell1.4_LightTag.fbx",
url: LOBBY_SHELL_URL,
position: orbPosition,
rotation: towardsMe,
dimensions: orbDimensions,
@ -124,22 +134,6 @@ function drawLobby() {
panelWall = Overlays.addOverlay("model", panelWallProps);
orbShell = Overlays.addOverlay("model", orbShellProps);
descriptionText = Overlays.addOverlay("text3d", descriptionTextProps);
inOculusMode = Menu.isOptionChecked("Enable VR Mode");
// for HMD wearers, create a reticle in center of screen
if (inOculusMode) {
var CURSOR_SCALE = 0.025;
reticle = Overlays.addOverlay("billboard", {
url: HIFI_PUBLIC_BUCKET + "images/cursor.svg",
position: reticlePosition(),
ignoreRayIntersection: true,
isFacingAvatar: true,
alpha: 1.0,
scale: CURSOR_SCALE
});
}
// add an attachment on this avatar so other people see them in the lobby
MyAvatar.attach(HELMET_ATTACHMENT_URL, "Neck", {x: 0, y: 0, z: 0}, Quat.fromPitchYawRollDegrees(0, 0, 0), 1.15);
@ -168,7 +162,9 @@ function changeLobbyTextures() {
};
for (var j = 0; j < NUM_PANELS; j++) {
textureProp["textures"]["file" + (j + 1)] = "http:" + locations[j].thumbnail_url
var panelIndex = locationIndexToPanelIndex(j);
textureProp["textures"]["file" + panelIndex] = HIFI_PUBLIC_BUCKET + "images/locations/"
+ locations[j].id + "/hifi-location-" + locations[j].id + "_640x360.jpg";
};
Overlays.editOverlay(panelWall, textureProp);
@ -218,7 +214,7 @@ function cleanupLobby() {
panelTexturesReset["textures"] = {};
for (var j = 0; j < MAX_NUM_PANELS; j++) {
panelTexturesReset["textures"]["file" + (j + 1)] = HIFI_PUBLIC_BUCKET + "models/sets/Lobby/LobbyPrototype/Texture.jpg";
panelTexturesReset["textures"]["file" + (j + 1)] = LOBBY_BLANK_PANEL_TEXTURE_URL;
};
Overlays.editOverlay(panelWall, panelTexturesReset);
@ -227,14 +223,8 @@ function cleanupLobby() {
Overlays.deleteOverlay(orbShell);
Overlays.deleteOverlay(descriptionText);
if (reticle) {
Overlays.deleteOverlay(reticle);
}
panelWall = false;
orbShell = false;
reticle = false;
Audio.stopInjector(currentDrone);
currentDrone = null;
@ -259,12 +249,13 @@ function actionStartEvent(event) {
var panelStringIndex = panelName.indexOf("Panel");
if (panelStringIndex != -1) {
var panelIndex = parseInt(panelName.slice(5)) - 1;
if (panelIndex < locations.length) {
var actionLocation = locations[panelIndex];
var panelIndex = parseInt(panelName.slice(5));
var locationIndex = panelIndexToLocationIndex(panelIndex);
if (locationIndex < locations.length) {
var actionLocation = locations[locationIndex];
print("Jumping to " + actionLocation.name + " at " + actionLocation.path
+ " in " + actionLocation.domain.name + " after click on panel " + panelIndex);
+ " in " + actionLocation.domain.name + " after click on panel " + panelIndex + " with location index " + locationIndex);
Window.location = actionLocation;
maybeCleanupLobby();
@ -308,12 +299,13 @@ function handleLookAt(pickRay) {
var panelName = result.extraInfo;
var panelStringIndex = panelName.indexOf("Panel");
if (panelStringIndex != -1) {
var panelIndex = parseInt(panelName.slice(5)) - 1;
if (panelIndex < locations.length) {
var actionLocation = locations[panelIndex];
var panelIndex = parseInt(panelName.slice(5));
var locationIndex = panelIndexToLocationIndex(panelIndex);
if (locationIndex < locations.length) {
var actionLocation = locations[locationIndex];
if (actionLocation.description == "") {
Overlays.editOverlay(descriptionText, { text: actionLocation.name, visible: true });
Overlays.editOverlay(descriptionText, { text: actionLocation.name, visible: showText });
} else {
// handle line wrapping
var allWords = actionLocation.description.split(" ");
@ -331,7 +323,7 @@ function handleLookAt(pickRay) {
} else {
currentTestLine = allWords[currentTestWord];
}
var lineLength = Overlays.textWidth(descriptionText, currentTestLine);
var lineLength = Overlays.textSize(descriptionText, currentTestLine).width;
if (lineLength < textWidth || wordsOnLine == 0) {
wordsFormated++;
currentTestWord++;
@ -345,7 +337,7 @@ function handleLookAt(pickRay) {
}
}
formatedDescription += currentGoodLine;
Overlays.editOverlay(descriptionText, { text: formatedDescription, visible: true });
Overlays.editOverlay(descriptionText, { text: formatedDescription, visible: showText });
}
} else {
Overlays.editOverlay(descriptionText, { text: "", visible: false });
@ -358,20 +350,6 @@ function handleLookAt(pickRay) {
function update(deltaTime) {
maybeCleanupLobby();
if (panelWall) {
if (reticle) {
Overlays.editOverlay(reticle, {
position: reticlePosition()
});
var nowDate = new Date();
var now = nowDate.getTime();
if (now - lastMouseMove > IDLE_HOVER_TIME) {
var pickRay = Camera.computeViewPickRay(0.5, 0.5);
handleLookAt(pickRay);
}
}
Overlays.editOverlay(descriptionText, { position: textOverlayPosition() });
// if the reticle is up then we may need to play the next muzak
@ -383,8 +361,6 @@ function update(deltaTime) {
function mouseMoveEvent(event) {
if (panelWall) {
var nowDate = new Date();
lastMouseMove = nowDate.getTime();
var pickRay = Camera.computePickRay(event.x, event.y);
handleLookAt(pickRay);
}
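Note the measurement calls in this file moved from the old Overlays.textWidth(overlay, text) to Overlays.textSize(overlay, text), which returns an object with width and height fields. A minimal sketch of the line-fitting test in isolation, assuming the overlay id passed in is a valid text3d overlay like descriptionText above:

    // Does 'candidate' fit within maxWidthMeters on the given text3d overlay?
    function fitsOnLine(overlayId, candidate, maxWidthMeters) {
        var size = Overlays.textSize(overlayId, candidate); // { width: ..., height: ... }
        return size.width < maxWidthMeters;
    }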

View file

@ -1,25 +1,32 @@
//
// notifications.js
// notifications.js
// Version 0.801
// Created by Adrian
//
// Adrian McCarlie 8-10-14
// This script demonstrates on-screen overlay type notifications.
// Copyright 2014 High Fidelity, Inc.
//
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
// This script demonstrates notifications created via a number of ways, such as:
// Simple key press alerts, which only depend on a key being pressed,
// dummy examples of this are "q", "w", "e", "r", and "SPACEBAR".
// actual working examples are "a" for left turn, "d" for right turn and Ctrl/s for snapshot.
// System generated alerts such as users joining and leaving and chat messages which mention this user.
// System generated alerts which originate with a user interface event such as Window Resize, and Mic Mute/Unmute.
// Mic Mute/Unmute may appear to be a key press alert, but it actually gets the call from the system as mic is muted and unmuted,
// so the mic mute/unmute will also trigger the notification by clicking the Mic Mute button at the top of the screen.
// This script generates notifications that are triggered in a number of ways, such as:
// keystroke:
//
// "q" returns number of users currently online (for debug purposes)
// CTRL/s for snapshot.
// CTRL/m for mic mute and unmute.
// System generated notifications:
// Displays users online at startup.
// If the screen is resized.
// Triggers notification if @MyUserName is mentioned in chat.
// Announces existing user logging out.
// Announces new user logging in.
// If mic is muted for any reason.
//
// To add a new System notification type:
//
// 1. Set the Event Connector at the bottom of the script.
@ -45,22 +52,23 @@
// 2. Declare a text string.
// 3. Call createNotifications(text) parsing the text.
// example:
// if (key.text == "a") {
// var noteString = "Turning to the Left";
// createNotification(noteString);
// }
// if (key.text == "q") { //queries number of users online
// var numUsers = GlobalServices.onlineUsers.length;
// var welcome = "There are " + numUsers + " users online now.";
// createNotification(welcome);
// }
var width = 340.0; //width of notification overlay
var height = 40.0; // height of a single line notification overlay
var windowDimensions = Controller.getViewportDimensions(); // get the size of the interface window
var overlayLocationX = (windowDimensions.x - (width + 60.0));// positions window 60px from the right of the interface window
var overlayLocationX = (windowDimensions.x - (width + 20.0));// positions window 20px from the right of the interface window
var buttonLocationX = overlayLocationX + (width - 28.0);
var locationY = 20.0; // position down from top of interface window
var topMargin = 13.0;
var leftMargin = 10.0;
var textColor = { red: 228, green: 228, blue: 228}; // text color
var backColor = { red: 38, green: 38, blue: 38}; // background color
var backColor = { red: 2, green: 2, blue: 2}; // background color was 38,38,38
var backgroundAlpha = 0;
var fontSize = 12.0;
var persistTime = 10.0; // time in seconds before notification fades
@ -115,7 +123,6 @@ function createNotification(text) {
color: textColor,
backgroundColor: backColor,
alpha: backgroundAlpha,
backgroundAlpha: backgroundAlpha,
topMargin: topMargin,
leftMargin: leftMargin,
font: {size: fontSize},
@ -125,8 +132,8 @@ function createNotification(text) {
var buttonProperties = {
x: buttonLocationX,
y: bLevel,
width: 15.0,
height: 15.0,
width: 10.0,
height: 10.0,
subImage: { x: 0, y: 0, width: 10, height: 10 },
imageURL: "http://hifi-public.s3.amazonaws.com/images/close-small-light.svg",
color: { red: 255, green: 255, blue: 255},
@ -161,7 +168,7 @@ function fadeIn(noticeIn, buttonIn) {
pauseTimer = Script.setInterval(function() {
q++;
qFade = q / 10.0;
Overlays.editOverlay(noticeIn, {alpha: qFade, backgroundAlpha: qFade});
Overlays.editOverlay(noticeIn, {alpha: qFade});
Overlays.editOverlay(buttonIn, {alpha: qFade});
if (q >= 9.0) {
Script.clearInterval(pauseTimer);
@ -203,41 +210,18 @@ function keyPressEvent(key) {
if (key.key == 16777249) {
ctrlIsPressed = true;
}
if (key.text == "a") {
var noteString = "Turning to the Left";
createNotification(noteString);
}
if (key.text == "d") {
var noteString = "Turning to the Right";
createNotification(noteString);
}
if (key.text == "q") { //queries number of users online
var numUsers = GlobalServices.onlineUsers.length;
var welcome = "There are " + numUsers + " users online now.";
createNotification(welcome);
}
if (key.text == "s") {
if (ctrlIsPressed == true){
var noteString = "You have taken a snapshot";
var noteString = "Snapshot taken.";
createNotification(noteString);
}
}
if (key.text == "q") {
var noteString = "Enable Scripted Motor control is now on.";
wordWrap(noteString);
}
if (key.text == "w") {
var noteString = "This notification spans 2 lines. The overlay will resize to fit new lines.";
var noteString = "editVoxels.js stopped, editModels.js stopped, selectAudioDevice.js stopped.";
wordWrap(noteString);
}
if (key.text == "e") {
var noteString = "This is an example of a multiple line notification. This notification will span 3 lines."
wordWrap(noteString);
}
if (key.text == "r") {
var noteString = "This is a very long line of text that we are going to use in this example to divide it into rows of maximum 43 chars and see how many lines we use.";
wordWrap(noteString);
}
if (key.text == "SPACE") {
var noteString = "You have pressed the Spacebar, This is an example of a multiple line notification. Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam.";
wordWrap(noteString);
}
}
// formats string to add newline every 43 chars
@ -275,18 +259,14 @@ function checkSize(){
// Triggers notification if a user logs on or off
function onOnlineUsersChanged(users) {
var joiners = [];
var leavers = [];
for (user in users) {
if (last_users.indexOf(users[user]) == -1.0) {
joiners.push(users[user]);
createNotification(users[user] + " Has joined");
if (last_users.indexOf(users[user]) == -1.0) {
createNotification(users[user] + " has joined");
}
}
for (user in last_users) {
if (users.indexOf(last_users[user]) == -1.0) {
leavers.push(last_users[user]);
createNotification(last_users[user] + " Has left");
createNotification(last_users[user] + " has left");
}
}
last_users = users;
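The join/leave detection above is a two-pass diff of the previous and current user lists. Assuming both lists are plain arrays of usernames, as the indexOf calls imply, the same diff can be written as:

    // Anything in users but not last_users joined; the reverse set left.
    var joined = users.filter(function (u) { return last_users.indexOf(u) === -1; });
    var left = last_users.filter(function (u) { return users.indexOf(u) === -1; });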
@ -303,8 +283,8 @@ function onIncomingMessage(user, message) {
}
// Triggers mic mute notification
function onMuteStateChanged() {
var muteState = AudioDevice.getMuted() ? "Muted" : "Unmuted";
var muteString = "Microphone is set to " + muteState;
var muteState = AudioDevice.getMuted() ? "muted" : "unmuted";
var muteString = "Microphone is now " + muteState;
createNotification(muteString);
}
@ -345,7 +325,7 @@ function fadeOut(noticeOut, buttonOut, arraysOut) {
pauseTimer = Script.setInterval(function() {
r--;
rFade = r / 10.0;
Overlays.editOverlay(noticeOut, {alpha: rFade, backgroundAlpha: rFade});
Overlays.editOverlay(noticeOut, {alpha: rFade});
Overlays.editOverlay(buttonOut, {alpha: rFade});
if (r < 0) {
dismiss(noticeOut, buttonOut, arraysOut);
@ -368,10 +348,17 @@ function dismiss(firstNoteOut, firstButOut, firstOut) {
myAlpha.splice(firstOut,1);
}
onMuteStateChanged();
// This is meant to show users online at startup but currently shows 0 users.
function onConnected() {
var numUsers = GlobalServices.onlineUsers.length;
var welcome = "Welcome! There are " + numUsers + " users online now.";
createNotification(welcome);
}
AudioDevice.muteToggled.connect(onMuteStateChanged);
Controller.keyPressEvent.connect(keyPressEvent);
Controller.mousePressEvent.connect(mousePressEvent);
GlobalServices.connected.connect(onConnected);
GlobalServices.onlineUsersChanged.connect(onOnlineUsersChanged);
GlobalServices.incomingMessage.connect(onIncomingMessage);
Controller.keyReleaseEvent.connect(keyReleaseEvent);
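For reference, fadeIn and fadeOut above share one pattern: step the overlay alpha in tenths on a Script.setInterval timer, then clear the timer at the end of the ramp. A generalized sketch of that pattern (the function and parameter names here are hypothetical, not part of the script):

    // Fade an overlay's alpha from fromTenths/10 to toTenths/10, one step per tick.
    function fadeOverlay(overlayId, fromTenths, toTenths, tickMs, onDone) {
        var step = fromTenths;
        var direction = (toTenths > fromTenths) ? 1 : -1;
        var timer = Script.setInterval(function () {
            step += direction;
            Overlays.editOverlay(overlayId, { alpha: step / 10.0 });
            if (step === toTenths) {
                Script.clearInterval(timer); // stop ticking once the ramp completes
                if (onDone) {
                    onDone();
                }
            }
        }, tickMs);
    }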

View file

@ -86,6 +86,7 @@ ChessGame.Board = (function(position, scale) {
modelURL: ChessGame.BOARD.modelURL,
position: this.position,
dimensions: this.dimensions,
rotation: ChessGame.BOARD.rotation,
userData: this.buildUserDataString()
}
this.entity = null;

View file

@ -80,7 +80,7 @@ function updateTextOverlay() {
var textLines = textText.split("\n");
var maxLineWidth = 0;
for (textLine in textLines) {
var lineWidth = Overlays.textWidth(text, textLines[textLine]);
var lineWidth = Overlays.textSize(text, textLines[textLine]).width;
if (lineWidth > maxLineWidth) {
maxLineWidth = lineWidth;
}
@ -92,7 +92,7 @@ function updateTextOverlay() {
Overlays.editOverlay(text, {text: textText, font: {size: textFontSize}, topMargin: topMargin});
var maxLineWidth = 0;
for (textLine in textLines) {
var lineWidth = Overlays.textWidth(text, textLines[textLine]);
var lineWidth = Overlays.textSize(text, textLines[textLine]).width;
if (lineWidth > maxLineWidth) {
maxLineWidth = lineWidth;
}
@ -122,18 +122,18 @@ keyboard.onKeyRelease = function(event) {
var textLines = textText.split("\n");
var maxLineWidth = 0;
for (textLine in textLines) {
var lineWidth = Overlays.textWidth(textSizeMeasureOverlay, textLines[textLine]);
var lineWidth = Overlays.textSize(textSizeMeasureOverlay, textLines[textLine]).width;
if (lineWidth > maxLineWidth) {
maxLineWidth = lineWidth;
}
}
var usernameLine = "--" + GlobalServices.myUsername;
var usernameWidth = Overlays.textWidth(textSizeMeasureOverlay, usernameLine);
var usernameWidth = Overlays.textSize(textSizeMeasureOverlay, usernameLine).width;
if (maxLineWidth < usernameWidth) {
maxLineWidth = usernameWidth;
} else {
var spaceableWidth = maxLineWidth - usernameWidth;
var spaceWidth = Overlays.textWidth(textSizeMeasureOverlay, " ");
var spaceWidth = Overlays.textSize(textSizeMeasureOverlay, " ").width;
var numberOfSpaces = Math.floor(spaceableWidth / spaceWidth);
for (var i = 0; i < numberOfSpaces; i++) {
usernameLine = " " + usernameLine;
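The loop above (cut off by the hunk) right-pads the username line so it right-aligns under the longest measured line. Since the space count comes from a single division, the whole padding step is equivalent to:

    // numberOfSpaces = Math.floor(spaceableWidth / spaceWidth), as computed above;
    // Array(n + 1).join(" ") produces exactly n spaces.
    usernameLine = new Array(numberOfSpaces + 1).join(" ") + usernameLine;

Building the padding with Array.join does one string operation instead of repeated concatenation; the behavior is identical.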

View file

@ -6,4 +6,4 @@ setup_hifi_project(Network)
# link the shared hifi libraries
link_hifi_libraries(networking shared)
link_shared_dependencies()
include_dependency_includes()

View file

@ -2,7 +2,7 @@ set(TARGET_NAME interface)
project(${TARGET_NAME})
# set a default root dir for each of our optional externals if it was not passed
set(OPTIONAL_EXTERNALS "Faceshift" "LibOVR" "PrioVR" "Sixense" "Visage" "LeapMotion" "RtMidi" "Qxmpp" "SDL2" "Gverb")
set(OPTIONAL_EXTERNALS "Faceshift" "LibOVR" "PrioVR" "Sixense" "LeapMotion" "RtMidi" "Qxmpp" "SDL2" "Gverb")
foreach(EXTERNAL ${OPTIONAL_EXTERNALS})
string(TOUPPER ${EXTERNAL} ${EXTERNAL}_UPPERCASE)
if (NOT ${${EXTERNAL}_UPPERCASE}_ROOT_DIR)
@ -25,15 +25,15 @@ else ()
endif ()
if (APPLE)
set(GL_HEADERS "#include <GLUT/glut.h>\n#include <OpenGL/glext.h>")
set(GL_HEADERS "#include <OpenGL/glext.h>")
elseif (UNIX)
# include the right GL headers for UNIX
set(GL_HEADERS "#include <GL/gl.h>\n#include <GL/glut.h>\n#include <GL/glext.h>")
set(GL_HEADERS "#include <GL/gl.h>\n#include <GL/glext.h>")
elseif (WIN32)
add_definitions(-D_USE_MATH_DEFINES) # apparently needed to get M_PI and other defines from cmath/math.h
add_definitions(-DWINDOWS_LEAN_AND_MEAN) # needed to make sure windows doesn't go too crazy with its defines
set(GL_HEADERS "#include <windowshacks.h>\n#include <GL/glew.h>\n#include <GL/glut.h>\n#include <GL/wglew.h>")
set(GL_HEADERS "#include <windowshacks.h>\n#include <GL/glew.h>\n#include <GL/wglew.h>")
endif ()
# set up the external glm library
@ -107,7 +107,8 @@ endif()
add_executable(${TARGET_NAME} MACOSX_BUNDLE ${INTERFACE_SRCS} ${QM})
# link required hifi libraries
link_hifi_libraries(shared octree voxels gpu fbx metavoxels networking entities avatars audio animation script-engine physics)
link_hifi_libraries(shared octree voxels gpu model fbx metavoxels networking entities avatars audio animation script-engine physics
render-utils entities-renderer)
# find any optional and required libraries
find_package(ZLIB REQUIRED)
@ -156,7 +157,6 @@ if (VISAGE_FOUND AND NOT DISABLE_VISAGE AND APPLE)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-comment")
find_library(AVFoundation AVFoundation)
find_library(CoreMedia CoreMedia)
find_library(NEW_STD_LIBRARY libc++.dylib /usr/lib/)
target_link_libraries(${TARGET_NAME} ${AVFoundation} ${CoreMedia} ${NEW_STD_LIBRARY})
endif ()
@ -194,11 +194,10 @@ if (APPLE)
# link in required OS X frameworks and include the right GL headers
find_library(CoreAudio CoreAudio)
find_library(CoreFoundation CoreFoundation)
find_library(GLUT GLUT)
find_library(OpenGL OpenGL)
find_library(AppKit AppKit)
target_link_libraries(${TARGET_NAME} ${CoreAudio} ${CoreFoundation} ${GLUT} ${OpenGL} ${AppKit})
target_link_libraries(${TARGET_NAME} ${CoreAudio} ${CoreFoundation} ${OpenGL} ${AppKit})
# install command for OS X bundle
INSTALL(TARGETS ${TARGET_NAME}
@ -214,15 +213,12 @@ else (APPLE)
)
find_package(OpenGL REQUIRED)
find_package(GLUT REQUIRED)
include_directories(SYSTEM "${GLUT_INCLUDE_DIRS}")
if (${OPENGL_INCLUDE_DIR})
include_directories(SYSTEM "${OPENGL_INCLUDE_DIR}")
endif ()
target_link_libraries(${TARGET_NAME} "${OPENGL_LIBRARY}" "${GLUT_LIBRARIES}")
target_link_libraries(${TARGET_NAME} "${OPENGL_LIBRARY}")
# link target to external libraries
if (WIN32)
@ -232,7 +228,7 @@ else (APPLE)
# we're using static GLEW, so define GLEW_STATIC
add_definitions(-DGLEW_STATIC)
target_link_libraries(${TARGET_NAME} "${GLEW_LIBRARIES}" "${NSIGHT_LIBRARIES}" wsock32.lib opengl32.lib)
target_link_libraries(${TARGET_NAME} "${GLEW_LIBRARIES}" "${NSIGHT_LIBRARIES}" wsock32.lib opengl32.lib Winmm.lib)
# try to find the Nsight package and add it to the build if we find it
find_package(NSIGHT)
@ -246,4 +242,4 @@ else (APPLE)
endif (APPLE)
# link any dependencies bubbled up from our linked dependencies
link_shared_dependencies()
include_dependency_includes()

View file

@ -1,21 +0,0 @@
#version 120
//
// passthrough.vert
// vertex shader
//
// Copyright 2013 High Fidelity, Inc.
//
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
attribute float voxelSizeIn;
varying float voxelSize;
void main(void) {
gl_FrontColor = gl_Color;
gl_Position = gl_Vertex; // don't call ftransform(), because we do projection in geometry shader
voxelSize = voxelSizeIn;
}

View file

@ -1,174 +0,0 @@
#version 120
//
// point_size.vert
// vertex shader
//
// Copyright 2013 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
attribute float voxelSizeIn;
varying float voxelSize;
uniform float viewportWidth;
uniform float viewportHeight;
uniform vec3 cameraPosition;
// Bit codes for faces
const int NONE = 0;
const int RIGHT = 1;
const int LEFT = 2;
const int BOTTOM = 4;
const int BOTTOM_RIGHT = BOTTOM + RIGHT;
const int BOTTOM_LEFT = BOTTOM + LEFT;
const int TOP = 8;
const int TOP_RIGHT = TOP + RIGHT;
const int TOP_LEFT = TOP + LEFT;
const int NEAR = 16;
const int NEAR_RIGHT = NEAR + RIGHT;
const int NEAR_LEFT = NEAR + LEFT;
const int NEAR_BOTTOM = NEAR + BOTTOM;
const int NEAR_BOTTOM_RIGHT = NEAR + BOTTOM + RIGHT;
const int NEAR_BOTTOM_LEFT = NEAR + BOTTOM + LEFT;
const int NEAR_TOP = NEAR + TOP;
const int NEAR_TOP_RIGHT = NEAR + TOP + RIGHT;
const int NEAR_TOP_LEFT = NEAR + TOP + LEFT;
const int FAR = 32;
const int FAR_RIGHT = FAR + RIGHT;
const int FAR_LEFT = FAR + LEFT;
const int FAR_BOTTOM = FAR + BOTTOM;
const int FAR_BOTTOM_RIGHT = FAR + BOTTOM + RIGHT;
const int FAR_BOTTOM_LEFT = FAR + BOTTOM + LEFT;
const int FAR_TOP = FAR + TOP;
const int FAR_TOP_RIGHT = FAR + TOP + RIGHT;
const int FAR_TOP_LEFT = FAR + TOP + LEFT;
// If we know the position of the camera relative to the voxel, we can a priori know the vertices that make the visible hull
// polygon. This also tells us which two vertices are known to make the longest possible distance between any pair of these
// vertices for the projected polygon. This is a visibleFaces table based on this knowledge.
void main(void) {
// Note: the gl_Vertex in this case are in "world coordinates" meaning they've already been scaled to TREE_SCALE
// this is also true for voxelSizeIn.
vec4 bottomNearRight = gl_Vertex;
vec4 topFarLeft = (gl_Vertex + vec4(voxelSizeIn, voxelSizeIn, voxelSizeIn, 0.0));
int visibleFaces = NONE;
// In order to use our visibleFaces "table" (implemented as if statements) below, we need to encode the 6-bit code to
// orient camera relative to the 6 defining faces of the voxel. Based on camera position relative to the bottomNearRight
// corner and the topFarLeft corner, we can calculate which hull and therefore which two vertices are furthest apart
// linearly once projected
if (cameraPosition.x < bottomNearRight.x) {
visibleFaces += RIGHT;
}
if (cameraPosition.x > topFarLeft.x) {
visibleFaces += LEFT;
}
if (cameraPosition.y < bottomNearRight.y) {
visibleFaces += BOTTOM;
}
if (cameraPosition.y > topFarLeft.y) {
visibleFaces += TOP;
}
if (cameraPosition.z < bottomNearRight.z) {
visibleFaces += NEAR;
}
if (cameraPosition.z > topFarLeft.z) {
visibleFaces += FAR;
}
vec4 cornerAdjustOne;
vec4 cornerAdjustTwo;
if (visibleFaces == RIGHT) {
cornerAdjustOne = vec4(0,0,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(0,1,1,0) * voxelSizeIn;
} else if (visibleFaces == LEFT) {
cornerAdjustOne = vec4(1,0,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,1,1,0) * voxelSizeIn;
} else if (visibleFaces == BOTTOM) {
cornerAdjustOne = vec4(0,0,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,0,1,0) * voxelSizeIn;
} else if (visibleFaces == TOP) {
cornerAdjustOne = vec4(0,1,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,1,1,0) * voxelSizeIn;
} else if (visibleFaces == NEAR) {
cornerAdjustOne = vec4(0,0,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,1,0,0) * voxelSizeIn;
} else if (visibleFaces == FAR) {
cornerAdjustOne = vec4(0,0,1,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,1,1,0) * voxelSizeIn;
} else if (visibleFaces == NEAR_BOTTOM_LEFT ||
visibleFaces == FAR_TOP ||
visibleFaces == FAR_TOP_RIGHT) {
cornerAdjustOne = vec4(0,1,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,0,1,0) * voxelSizeIn;
} else if (visibleFaces == FAR_TOP_LEFT ||
visibleFaces == NEAR_RIGHT ||
visibleFaces == NEAR_BOTTOM ||
visibleFaces == NEAR_BOTTOM_RIGHT) {
cornerAdjustOne = vec4(0,0,1,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,1,0,0) * voxelSizeIn;
} else if (visibleFaces == NEAR_TOP_RIGHT ||
visibleFaces == FAR_LEFT ||
visibleFaces == FAR_BOTTOM_LEFT ||
visibleFaces == BOTTOM_RIGHT ||
visibleFaces == TOP_LEFT) {
cornerAdjustOne = vec4(1,0,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(0,1,1,0) * voxelSizeIn;
// Everything else...
//} else if (visibleFaces == BOTTOM_LEFT ||
// visibleFaces == TOP_RIGHT ||
// visibleFaces == NEAR_LEFT ||
// visibleFaces == FAR_RIGHT ||
// visibleFaces == NEAR_TOP ||
// visibleFaces == NEAR_TOP_LEFT ||
// visibleFaces == FAR_BOTTOM ||
// visibleFaces == FAR_BOTTOM_RIGHT) {
} else {
cornerAdjustOne = vec4(0,0,0,0) * voxelSizeIn;
cornerAdjustTwo = vec4(1,1,1,0) * voxelSizeIn;
}
// Determine our corners
vec4 cornerOne = gl_Vertex + cornerAdjustOne;
vec4 cornerTwo = gl_Vertex + cornerAdjustTwo;
// Find their model view projections
vec4 cornerOneMVP = gl_ModelViewProjectionMatrix * cornerOne;
vec4 cornerTwoMVP = gl_ModelViewProjectionMatrix * cornerTwo;
// Map to x, y screen coordinates
vec2 cornerOneScreen = vec2(cornerOneMVP.x / cornerOneMVP.w, cornerOneMVP.y / cornerOneMVP.w);
if (cornerOneMVP.w < 0) {
cornerOneScreen.x = -cornerOneScreen.x;
cornerOneScreen.y = -cornerOneScreen.y;
}
vec2 cornerTwoScreen = vec2(cornerTwoMVP.x / cornerTwoMVP.w, cornerTwoMVP.y / cornerTwoMVP.w);
if (cornerTwoMVP.w < 0) {
cornerTwoScreen.x = -cornerTwoScreen.x;
cornerTwoScreen.y = -cornerTwoScreen.y;
}
// Find the distance between them in pixels
float voxelScreenWidth = abs(cornerOneScreen.x - cornerTwoScreen.x) * viewportWidth / 2.0;
float voxelScreenHeight = abs(cornerOneScreen.y - cornerTwoScreen.y) * viewportHeight / 2.0;
float voxelScreenLength = sqrt(voxelScreenHeight * voxelScreenHeight + voxelScreenWidth * voxelScreenWidth);
// Find the center of the voxel
vec4 centerVertex = gl_Vertex;
float halfSizeIn = voxelSizeIn / 2;
centerVertex += vec4(halfSizeIn, halfSizeIn, halfSizeIn, 0.0);
vec4 center = gl_ModelViewProjectionMatrix * centerVertex;
// Finally place the point at the center of the voxel, with a size equal to the maximum screen length
gl_Position = center;
gl_PointSize = voxelScreenLength;
gl_FrontColor = gl_Color;
}

View file

@ -1,87 +0,0 @@
#version 120
#extension GL_ARB_geometry_shader4 : enable
//
// voxel.geom
// geometry shader
//
// Copyright 2013 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
//
// VOXEL GEOMETRY SHADER
//
// Input: gl_VerticesIn/gl_PositionIn
// GL_POINTS
// Assumes vertex shader has not transformed coordinates
// Each gl_PositionIn is the corner of voxel
// varying voxelSize - which is the voxel size
//
// Note: the vertex shader doesn't do any transform, so it passes the 3D world coordinates (xyz) through to us
//
// Output: GL_TRIANGLE_STRIP
//
// Issues:
// how do we need to handle lighting of these colors??
// how do we handle normals?
// check for size=0 and don't output the primitive
//
varying in float voxelSize[1];
const int VERTICES_PER_FACE = 4;
const int COORD_PER_VERTEX = 3;
const int COORD_PER_FACE = COORD_PER_VERTEX * VERTICES_PER_FACE;
void faceOfVoxel(vec4 corner, float scale, float[COORD_PER_FACE] facePoints, vec4 color, vec4 normal) {
for (int v = 0; v < VERTICES_PER_FACE; v++ ) {
vec4 vertex = corner;
for (int c = 0; c < COORD_PER_VERTEX; c++ ) {
int cIndex = c + (v * COORD_PER_VERTEX);
vertex[c] += (facePoints[cIndex] * scale);
}
gl_FrontColor = color * (gl_LightModel.ambient + gl_LightSource[0].ambient +
gl_LightSource[0].diffuse * max(0.0, dot(normal, gl_LightSource[0].position)));
gl_Position = gl_ModelViewProjectionMatrix * vertex;
EmitVertex();
}
EndPrimitive();
}
void main() {
//increment variable
int i;
vec4 corner;
float scale;
float bottomFace[COORD_PER_FACE] = float[COORD_PER_FACE]( 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1 );
float topFace[COORD_PER_FACE] = float[COORD_PER_FACE]( 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1 );
float rightFace[COORD_PER_FACE] = float[COORD_PER_FACE]( 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1 );
float leftFace[COORD_PER_FACE] = float[COORD_PER_FACE]( 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1 );
float frontFace[COORD_PER_FACE] = float[COORD_PER_FACE]( 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0 );
float backFace[COORD_PER_FACE] = float[COORD_PER_FACE]( 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1 );
vec4 bottomNormal = vec4(0.0, -1, 0.0, 0.0);
vec4 topNormal = vec4(0.0, 1, 0.0, 0.0);
vec4 rightNormal = vec4( -1, 0.0, 0.0, 0.0);
vec4 leftNormal = vec4( 1, 0.0, 0.0, 0.0);
vec4 frontNormal = vec4(0.0, 0.0, -1, 0.0);
vec4 backNormal = vec4(0.0, 0.0, 1, 0.0);
for(i = 0; i < gl_VerticesIn; i++) {
corner = gl_PositionIn[i];
scale = voxelSize[i];
faceOfVoxel(corner, scale, bottomFace, gl_FrontColorIn[i], bottomNormal);
faceOfVoxel(corner, scale, topFace, gl_FrontColorIn[i], topNormal );
faceOfVoxel(corner, scale, rightFace, gl_FrontColorIn[i], rightNormal );
faceOfVoxel(corner, scale, leftFace, gl_FrontColorIn[i], leftNormal );
faceOfVoxel(corner, scale, frontFace, gl_FrontColorIn[i], frontNormal );
faceOfVoxel(corner, scale, backFace, gl_FrontColorIn[i], backNormal );
}
}

File diff suppressed because it is too large

View file

@ -12,37 +12,33 @@
#ifndef hifi_Application_h
#define hifi_Application_h
#include <map>
#include <time.h>
#include <gpu/GPUConfig.h>
#include <QApplication>
#include <QMainWindow>
#include <QAction>
#include <QHash>
#include <QImage>
#include <QList>
#include <QPointer>
#include <QSet>
#include <QSettings>
#include <QStringList>
#include <QHash>
#include <QTouchEvent>
#include <QUndoStack>
#include <QSystemTrayIcon>
#include <AbstractScriptingServicesInterface.h>
#include <AbstractViewStateInterface.h>
#include <EntityCollisionSystem.h>
#include <EntityEditPacketSender.h>
#include <EntityTreeRenderer.h>
#include <GeometryCache.h>
#include <NetworkPacket.h>
#include <NodeList.h>
#include <OctreeQuery.h>
#include <PacketHeaders.h>
#include <ScriptEngine.h>
#include <OctreeQuery.h>
#include <TextureCache.h>
#include <ViewFrustum.h>
#include <VoxelEditPacketSender.h>
#include "MainWindow.h"
#include "Audio.h"
#include "AudioReflector.h"
#include "Camera.h"
#include "DatagramProcessor.h"
#include "Environment.h"
@ -55,19 +51,8 @@
#include "avatar/Avatar.h"
#include "avatar/AvatarManager.h"
#include "avatar/MyAvatar.h"
#include "devices/Faceshift.h"
#include "devices/PrioVR.h"
#include "devices/SixenseManager.h"
#include "devices/Visage.h"
#include "devices/DdeFaceTracker.h"
#include "entities/EntityTreeRenderer.h"
#include "renderer/AmbientOcclusionEffect.h"
#include "renderer/DeferredLightingEffect.h"
#include "renderer/GeometryCache.h"
#include "renderer/GlowEffect.h"
#include "renderer/PointShader.h"
#include "renderer/TextureCache.h"
#include "renderer/VoxelShader.h"
#include "scripting/ControllerScriptingInterface.h"
#include "ui/BandwidthDialog.h"
#include "ui/BandwidthMeter.h"
@ -95,16 +80,19 @@
#include "UndoStackScriptingInterface.h"
class QAction;
class QActionGroup;
class QGLWidget;
class QKeyEvent;
class QMouseEvent;
class QSettings;
class QSystemTrayIcon;
class QTouchEvent;
class QWheelEvent;
class FaceTracker;
class MainWindow;
class Node;
class ProgramObject;
class ScriptEngine;
static const float NODE_ADDED_RED = 0.0f;
static const float NODE_ADDED_GREEN = 1.0f;
@ -132,7 +120,7 @@ static const quint64 TOO_LONG_SINCE_LAST_SEND_DOWNSTREAM_AUDIO_STATS = 1 * USECS
static const QString INFO_HELP_PATH = "html/interface-welcome-allsvg.html";
static const QString INFO_EDIT_ENTITIES_PATH = "html/edit-entities-commands.html";
class Application : public QApplication {
class Application : public QApplication, public AbstractViewStateInterface, AbstractScriptingServicesInterface {
Q_OBJECT
friend class OctreePacketProcessor;
@ -141,7 +129,6 @@ class Application : public QApplication {
public:
static Application* getInstance() { return static_cast<Application*>(QCoreApplication::instance()); }
static QString& resourcesPath();
static const glm::vec3& getPositionForPath() { return getInstance()->_myAvatar->getPosition(); }
static glm::quat getOrientationForPath() { return getInstance()->_myAvatar->getOrientation(); }
@ -188,49 +175,50 @@ public:
void removeVoxel(glm::vec3 position, float scale);
glm::vec3 getMouseVoxelWorldCoordinates(const VoxelDetail& mouseVoxel);
bool isThrottleRendering() const { return DependencyManager::get<GLCanvas>()->isThrottleRendering(); }
GLCanvas* getGLWidget() { return _glWidget; }
bool isThrottleRendering() const { return _glWidget->isThrottleRendering(); }
MyAvatar* getAvatar() { return _myAvatar; }
const MyAvatar* getAvatar() const { return _myAvatar; }
Audio* getAudio() { return &_audio; }
const AudioReflector* getAudioReflector() const { return &_audioReflector; }
Camera* getCamera() { return &_myCamera; }
ViewFrustum* getViewFrustum() { return &_viewFrustum; }
ViewFrustum* getDisplayViewFrustum() { return &_displayViewFrustum; }
ViewFrustum* getShadowViewFrustum() { return &_shadowViewFrustum; }
VoxelImporter* getVoxelImporter() { return &_voxelImporter; }
VoxelSystem* getVoxels() { return &_voxels; }
VoxelTree* getVoxelTree() { return _voxels.getTree(); }
const OctreePacketProcessor& getOctreePacketProcessor() const { return _octreeProcessor; }
MetavoxelSystem* getMetavoxels() { return &_metavoxels; }
EntityTreeRenderer* getEntities() { return &_entities; }
bool getImportSucceded() { return _importSucceded; }
VoxelSystem* getSharedVoxelSystem() { return &_sharedVoxelSystem; }
Environment* getEnvironment() { return &_environment; }
PrioVR* getPrioVR() { return &_prioVR; }
QUndoStack* getUndoStack() { return &_undoStack; }
MainWindow* getWindow() { return _window; }
VoxelImporter* getVoxelImporter() { return &_voxelImporter; }
VoxelTree* getClipboard() { return &_clipboard; }
EntityTree* getEntityClipboard() { return &_entityClipboard; }
EntityTreeRenderer* getEntityClipboardRenderer() { return &_entityClipboardRenderer; }
Environment* getEnvironment() { return &_environment; }
VoxelTree* getVoxelTree() { return _voxels.getTree(); }
bool getImportSucceded() { return _importSucceded; }
bool isMousePressed() const { return _mousePressed; }
bool isMouseHidden() const { return _mouseHidden; }
bool isMouseHidden() const { return DependencyManager::get<GLCanvas>()->cursor().shape() == Qt::BlankCursor; }
void setCursorVisible(bool visible);
const glm::vec3& getMouseRayOrigin() const { return _mouseRayOrigin; }
const glm::vec3& getMouseRayDirection() const { return _mouseRayDirection; }
bool mouseOnScreen() const;
int getMouseX() const;
int getMouseY() const;
int getTrueMouseX() const { return _glWidget->mapFromGlobal(QCursor::pos()).x(); }
int getTrueMouseY() const { return _glWidget->mapFromGlobal(QCursor::pos()).y(); }
int getTrueMouseX() const { return DependencyManager::get<GLCanvas>()->mapFromGlobal(QCursor::pos()).x(); }
int getTrueMouseY() const { return DependencyManager::get<GLCanvas>()->mapFromGlobal(QCursor::pos()).y(); }
int getMouseDragStartedX() const;
int getMouseDragStartedY() const;
int getTrueMouseDragStartedX() const { return _mouseDragStartedX; }
int getTrueMouseDragStartedY() const { return _mouseDragStartedY; }
bool getLastMouseMoveWasSimulated() const { return _lastMouseMoveWasSimulated;; }
Faceshift* getFaceshift() { return &_faceshift; }
Visage* getVisage() { return &_visage; }
DdeFaceTracker* getDDE() { return &_dde; }
bool getLastMouseMoveWasSimulated() const { return _lastMouseMoveWasSimulated; }
FaceTracker* getActiveFaceTracker();
PrioVR* getPrioVR() { return &_prioVR; }
BandwidthMeter* getBandwidthMeter() { return &_bandwidthMeter; }
QUndoStack* getUndoStack() { return &_undoStack; }
QSystemTrayIcon* getTrayIcon() { return _trayIcon; }
ApplicationOverlay& getApplicationOverlay() { return _applicationOverlay; }
Overlays& getOverlays() { return _overlays; }
@ -241,7 +229,7 @@ public:
const glm::vec3& getViewMatrixTranslation() const { return _viewMatrixTranslation; }
void setViewMatrixTranslation(const glm::vec3& translation) { _viewMatrixTranslation = translation; }
const Transform& getViewTransform() const { return _viewTransform; }
virtual const Transform& getViewTransform() const { return _viewTransform; }
void setViewTransform(const Transform& view);
/// if you need to access the application settings, use lockSettings()/unlockSettings()
@ -250,26 +238,23 @@ public:
void saveSettings();
MainWindow* getWindow() { return _window; }
NodeToOctreeSceneStats* getOcteeSceneStats() { return &_octreeServerSceneStats; }
void lockOctreeSceneStats() { _octreeSceneStatsLock.lockForRead(); }
void unlockOctreeSceneStats() { _octreeSceneStatsLock.unlock(); }
ToolWindow* getToolWindow() { return _toolWindow ; }
GeometryCache* getGeometryCache() { return &_geometryCache; }
AnimationCache* getAnimationCache() { return &_animationCache; }
TextureCache* getTextureCache() { return &_textureCache; }
DeferredLightingEffect* getDeferredLightingEffect() { return &_deferredLightingEffect; }
GlowEffect* getGlowEffect() { return &_glowEffect; }
ControllerScriptingInterface* getControllerScriptingInterface() { return &_controllerScriptingInterface; }
virtual AbstractControllerScriptingInterface* getControllerScriptingInterface() { return &_controllerScriptingInterface; }
virtual void registerScriptEngineWithApplicationServices(ScriptEngine* scriptEngine);
AvatarManager& getAvatarManager() { return _avatarManager; }
void resetProfile(const QString& username);
void controlledBroadcastToNodes(const QByteArray& packet, const NodeSet& destinationNodeTypes);
void setupWorldLight();
virtual void setupWorldLight();
virtual bool shouldRenderMesh(float largestDimension, float distanceToCamera);
QImage renderAvatarBillboard();
@ -288,19 +273,27 @@ public:
void getModelViewMatrix(glm::dmat4* modelViewMatrix);
void getProjectionMatrix(glm::dmat4* projectionMatrix);
const glm::vec3& getShadowDistances() const { return _shadowDistances; }
virtual const glm::vec3& getShadowDistances() const { return _shadowDistances; }
/// Computes the off-axis frustum parameters for the view frustum, taking mirroring into account.
void computeOffAxisFrustum(float& left, float& right, float& bottom, float& top, float& nearVal,
virtual void computeOffAxisFrustum(float& left, float& right, float& bottom, float& top, float& nearVal,
float& farVal, glm::vec4& nearClipPlane, glm::vec4& farClipPlane) const;
virtual ViewFrustum* getCurrentViewFrustum() { return getDisplayViewFrustum(); }
virtual bool getShadowsEnabled();
virtual bool getCascadeShadowsEnabled();
virtual QThread* getMainThread() { return thread(); }
virtual float getSizeScale() const;
virtual int getBoundaryLevelAdjust() const;
virtual PickRay computePickRay(float x, float y);
virtual const glm::vec3& getAvatarPosition() const { return getAvatar()->getPosition(); }
NodeBounds& getNodeBoundsDisplay() { return _nodeBoundsDisplay; }
VoxelShader& getVoxelShader() { return _voxelShader; }
PointShader& getPointShader() { return _pointShader; }
FileLogger* getLogger() { return _logger; }
glm::vec2 getViewportDimensions() const { return glm::vec2(_glWidget->getDeviceWidth(), _glWidget->getDeviceHeight()); }
glm::vec2 getViewportDimensions() const { return glm::vec2(DependencyManager::get<GLCanvas>()->getDeviceWidth(),
DependencyManager::get<GLCanvas>()->getDeviceHeight()); }
NodeToJurisdictionMap& getVoxelServerJurisdictions() { return _voxelServerJurisdictions; }
NodeToJurisdictionMap& getEntityServerJurisdictions() { return _entityServerJurisdictions; }
void pasteVoxelsToOctalCode(const unsigned char* octalCodeDestination);
@ -309,8 +302,6 @@ public:
QStringList getRunningScripts() { return _scriptEnginesHash.keys(); }
ScriptEngine* getScriptEngine(QString scriptHash) { return _scriptEnginesHash.contains(scriptHash) ? _scriptEnginesHash[scriptHash] : NULL; }
void setCursorVisible(bool visible);
bool isLookingAtMyAvatar(Avatar* avatar);
@ -321,13 +312,13 @@ public:
bool isVSyncEditable() const;
bool isAboutToQuit() const { return _aboutToQuit; }
void registerScriptEngineWithApplicationServices(ScriptEngine* scriptEngine);
// the isHMDmode is true whenever we use the interface from an HMD and not a standard flat display
// rendering of several elements depend on that
// TODO: carry that information on the Camera as a setting
bool isHMDMode() const;
QRect getDesirableApplicationGeometry();
RunningScriptsWidget* getRunningScriptsWidget() { return _runningScriptsWidget; }
signals:
@ -344,7 +335,6 @@ signals:
void importDone();
public slots:
void changeDomainHostname(const QString& newDomainHostname);
void domainChanged(const QString& domainHostname);
void updateWindowTitle();
void updateLocationInServer();
@ -384,6 +374,7 @@ public slots:
void uploadHead();
void uploadSkeleton();
void uploadAttachment();
void uploadEntity();
void openUrl(const QUrl& url);
@ -469,15 +460,12 @@ private:
void setMenuShortcutsEnabled(bool enabled);
void uploadModel(ModelType modelType);
static void attachNewHeadToNode(Node *newNode);
static void* networkReceive(void* args); // network receive thread
int sendNackPackets();
MainWindow* _window;
GLCanvas* _glWidget; // our GLCanvas has a couple extra features
ToolWindow* _toolWindow;
@ -536,10 +524,6 @@ private:
AvatarManager _avatarManager;
MyAvatar* _myAvatar; // TODO: move this and relevant code to AvatarManager (or MyAvatar as the case may be)
Faceshift _faceshift;
Visage _visage;
DdeFaceTracker _dde;
PrioVR _prioVR;
Camera _myCamera; // My view onto the world
@ -567,8 +551,6 @@ private:
int _mouseDragStartedY;
quint64 _lastMouseMove;
bool _lastMouseMoveWasSimulated;
bool _mouseHidden;
bool _seenMouseMove;
glm::vec3 _mouseRayOrigin;
glm::vec3 _mouseRayDirection;
@ -583,17 +565,6 @@ private:
QSet<int> _keysPressed;
GeometryCache _geometryCache;
AnimationCache _animationCache;
TextureCache _textureCache;
DeferredLightingEffect _deferredLightingEffect;
GlowEffect _glowEffect;
AmbientOcclusionEffect _ambientOcclusionEffect;
VoxelShader _voxelShader;
PointShader _pointShader;
Audio _audio;
bool _enableProcessVoxelsThread;
@ -638,7 +609,6 @@ private:
Overlays _overlays;
ApplicationOverlay _applicationOverlay;
AudioReflector _audioReflector;
RunningScriptsWidget* _runningScriptsWidget;
QHash<QString, ScriptEngine*> _scriptEnginesHash;
bool _runningScriptsWidgetWasVisible;

View file

@ -37,6 +37,7 @@
#include <AudioInjector.h>
#include <NodeList.h>
#include <PacketHeaders.h>
#include <PathUtils.h>
#include <SharedUtil.h>
#include <StDev.h>
#include <UUID.h>
@ -103,10 +104,6 @@ Audio::Audio(QObject* parent) :
_gverb(NULL),
_iconColor(1.0f),
_iconPulseTimeReference(usecTimestampNow()),
_processSpatialAudio(false),
_spatialAudioStart(0),
_spatialAudioFinish(0),
_spatialAudioRingBuffer(NETWORK_BUFFER_LENGTH_SAMPLES_STEREO, true), // random access mode
_scopeEnabled(false),
_scopeEnabledPause(false),
_scopeInputOffset(0),
@ -144,9 +141,9 @@ Audio::Audio(QObject* parent) :
}
void Audio::init(QGLWidget *parent) {
_micTextureId = parent->bindTexture(QImage(Application::resourcesPath() + "images/mic.svg"));
_muteTextureId = parent->bindTexture(QImage(Application::resourcesPath() + "images/mic-mute.svg"));
_boxTextureId = parent->bindTexture(QImage(Application::resourcesPath() + "images/audio-box.svg"));
_micTextureId = parent->bindTexture(QImage(PathUtils::resourcesPath() + "images/mic.svg"));
_muteTextureId = parent->bindTexture(QImage(PathUtils::resourcesPath() + "images/mic-mute.svg"));
_boxTextureId = parent->bindTexture(QImage(PathUtils::resourcesPath() + "images/audio-box.svg"));
}
void Audio::reset() {
@ -454,7 +451,9 @@ void Audio::start() {
qDebug() << "Unable to set up audio output because of a problem with output format.";
}
_inputFrameBuffer.initialize( _inputFormat.channelCount(), _audioInput->bufferSize() * 8 );
if (_audioInput) {
_inputFrameBuffer.initialize( _inputFormat.channelCount(), _audioInput->bufferSize() * 8 );
}
_inputGain.initialize();
_sourceGain.initialize();
_noiseSource.initialize();
@ -838,13 +837,6 @@ void Audio::handleAudioInput() {
_lastInputLoudness = 0;
}
// at this point we have clean monoAudioSamples, which match our target output...
// this is what we should send to our interested listeners
if (_processSpatialAudio && !_muted && !_isStereoInput && _audioOutput) {
QByteArray monoInputData((char*)networkAudioSamples, NETWORK_BUFFER_LENGTH_SAMPLES_PER_CHANNEL * sizeof(int16_t));
emit processLocalAudio(_spatialAudioStart, monoInputData, _desiredInputFormat);
}
if (!_isStereoInput && _proceduralAudioOutput) {
processProceduralAudio(networkAudioSamples, NETWORK_BUFFER_LENGTH_SAMPLES_PER_CHANNEL);
}
@ -984,35 +976,11 @@ void Audio::processReceivedSamples(const QByteArray& inputBuffer, QByteArray& ou
outputBuffer.resize(numDeviceOutputSamples * sizeof(int16_t));
const int16_t* receivedSamples;
if (_processSpatialAudio) {
unsigned int sampleTime = _spatialAudioStart;
QByteArray buffer = inputBuffer;
// copy the samples we'll resample from the ring buffer - this also
// pushes the read pointer of the ring buffer forwards
//receivedAudioStreamPopOutput.readSamples(receivedSamples, numNetworkOutputSamples);
// Accumulate direct transmission of audio from sender to receiver
bool includeOriginal = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingIncludeOriginal)
if (includeOriginal) {
emit preProcessOriginalInboundAudio(sampleTime, buffer, _desiredOutputFormat);
addSpatialAudioToBuffer(sampleTime, buffer, numNetworkOutputSamples);
}
// Send audio off for spatial processing
emit processInboundAudio(sampleTime, buffer, _desiredOutputFormat);
// copy the samples we'll resample from the spatial audio ring buffer - this also
// pushes the read pointer of the spatial audio ring buffer forwards
_spatialAudioRingBuffer.readSamples(_outputProcessingBuffer, numNetworkOutputSamples);
// Advance the start point for the next packet of audio to arrive
_spatialAudioStart += numNetworkOutputSamples / _desiredOutputFormat.channelCount();
receivedSamples = _outputProcessingBuffer;
} else {
// copy the samples we'll resample from the ring buffer - this also
// pushes the read pointer of the ring buffer forwards
//receivedAudioStreamPopOutput.readSamples(receivedSamples, numNetworkOutputSamples);
receivedSamples = reinterpret_cast<const int16_t*>(inputBuffer.data());
}
receivedSamples = reinterpret_cast<const int16_t*>(inputBuffer.data());
// copy the packet from the RB to the output
linearResampling(receivedSamples,
@ -1123,68 +1091,7 @@ void Audio::sendDownstreamAudioStatsPacket() {
nodeList->writeDatagram(packet, dataAt - packet, audioMixer);
}
// NOTE: numSamples is the total number of single channel samples, since callers will always call this with stereo
// data we know that we will have 2x samples for each stereo time sample at the format's sample rate
void Audio::addSpatialAudioToBuffer(unsigned int sampleTime, const QByteArray& spatialAudio, unsigned int numSamples) {
// Calculate the number of remaining samples available. The source spatial audio buffer will get
// clipped if there are insufficient samples available in the accumulation buffer.
unsigned int remaining = _spatialAudioRingBuffer.getSampleCapacity() - _spatialAudioRingBuffer.samplesAvailable();
// Locate where in the accumulation buffer the new samples need to go
if (sampleTime >= _spatialAudioFinish) {
if (_spatialAudioStart == _spatialAudioFinish) {
// Nothing in the spatial audio ring buffer yet, Just do a straight copy, clipping if necessary
unsigned int sampleCount = (remaining < numSamples) ? remaining : numSamples;
if (sampleCount) {
_spatialAudioRingBuffer.writeSamples((int16_t*)spatialAudio.data(), sampleCount);
}
_spatialAudioFinish = _spatialAudioStart + sampleCount / _desiredOutputFormat.channelCount();
} else {
// Spatial audio ring buffer already has data, but there is no overlap with the new sample.
// Compute the appropriate time delay and pad with silence until the new start time.
unsigned int delay = sampleTime - _spatialAudioFinish;
unsigned int delayCount = delay * _desiredOutputFormat.channelCount();
unsigned int silentCount = (remaining < delayCount) ? remaining : delayCount;
if (silentCount) {
_spatialAudioRingBuffer.addSilentSamples(silentCount);
}
// Recalculate the number of remaining samples
remaining -= silentCount;
unsigned int sampleCount = (remaining < numSamples) ? remaining : numSamples;
// Copy the new spatial audio to the accumulation ring buffer
if (sampleCount) {
_spatialAudioRingBuffer.writeSamples((int16_t*)spatialAudio.data(), sampleCount);
}
_spatialAudioFinish += (sampleCount + silentCount) / _desiredOutputFormat.channelCount();
}
} else {
// There is overlap between the spatial audio buffer and the new sample, mix the overlap
// Calculate the offset from the buffer's current read position, which should be located at _spatialAudioStart
unsigned int offset = (sampleTime - _spatialAudioStart) * _desiredOutputFormat.channelCount();
unsigned int mixedSamplesCount = (_spatialAudioFinish - sampleTime) * _desiredOutputFormat.channelCount();
mixedSamplesCount = (mixedSamplesCount < numSamples) ? mixedSamplesCount : numSamples;
const int16_t* spatial = reinterpret_cast<const int16_t*>(spatialAudio.data());
for (unsigned int i = 0; i < mixedSamplesCount; i++) {
int existingSample = _spatialAudioRingBuffer[i + offset];
int newSample = spatial[i];
int sumOfSamples = existingSample + newSample;
_spatialAudioRingBuffer[i + offset] = static_cast<int16_t>(glm::clamp<int>(sumOfSamples,
std::numeric_limits<short>::min(), std::numeric_limits<short>::max()));
}
// Copy the remaining unoverlapped spatial audio to the spatial audio buffer, if any
unsigned int nonMixedSampleCount = numSamples - mixedSamplesCount;
nonMixedSampleCount = (remaining < nonMixedSampleCount) ? remaining : nonMixedSampleCount;
if (nonMixedSampleCount) {
_spatialAudioRingBuffer.writeSamples((int16_t*)spatialAudio.data() + mixedSamplesCount, nonMixedSampleCount);
// Extend the finish time by the amount of unoverlapped samples
_spatialAudioFinish += nonMixedSampleCount / _desiredOutputFormat.channelCount();
}
}
}
bool Audio::mousePressEvent(int x, int y) {
if (_iconBounds.contains(x, y)) {
@ -1264,16 +1171,6 @@ void Audio::selectAudioSourceSine440() {
_noiseSourceEnabled = !_toneSourceEnabled;
}
void Audio::toggleAudioSpatialProcessing() {
// spatial audio disabled for now
_processSpatialAudio = false; //!_processSpatialAudio;
if (_processSpatialAudio) {
_spatialAudioStart = 0;
_spatialAudioFinish = 0;
_spatialAudioRingBuffer.reset();
}
}
// Take a pointer to the acquired microphone input samples and add procedural sounds
void Audio::addProceduralSounds(int16_t* monoInput, int numSamples) {
float sample;
@ -1995,11 +1892,6 @@ bool Audio::switchOutputToAudioDevice(const QAudioDeviceInfo& outputDeviceInfo)
_timeSinceLastReceived.start();
// setup spatial audio ringbuffer
int numFrameSamples = _outputFormat.sampleRate() * _desiredOutputFormat.channelCount();
_spatialAudioRingBuffer.resizeForFrameSize(numFrameSamples);
_spatialAudioStart = _spatialAudioFinish = 0;
supportedFormat = true;
}
}
@ -2046,6 +1938,9 @@ int Audio::calculateNumberOfFrameSamples(int numBytes) const {
}
float Audio::getAudioOutputMsecsUnplayed() const {
if (!_audioOutput) {
return 0.0f;
}
int bytesAudioOutputUnplayed = _audioOutput->bufferSize() - _audioOutput->bytesFree();
float msecsAudioOutputUnplayed = bytesAudioOutputUnplayed / (float)_outputFormat.bytesForDuration(USECS_PER_MSEC);
return msecsAudioOutputUnplayed;

View file

@ -115,8 +115,6 @@ public:
int getNetworkSampleRate() { return SAMPLE_RATE; }
int getNetworkBufferLengthSamplesPerChannel() { return NETWORK_BUFFER_LENGTH_SAMPLES_PER_CHANNEL; }
bool getProcessSpatialAudio() const { return _processSpatialAudio; }
float getInputRingBufferMsecsAvailable() const;
float getInputRingBufferAverageMsecsAvailable() const { return (float)_inputRingBufferMsecsAvailableStats.getWindowAverage(); }
@ -131,7 +129,6 @@ public slots:
void addReceivedAudioToStream(const QByteArray& audioByteArray);
void parseAudioStreamStatsPacket(const QByteArray& packet);
void parseAudioEnvironmentData(const QByteArray& packet);
void addSpatialAudioToBuffer(unsigned int sampleTime, const QByteArray& spatialAudio, unsigned int numSamples);
void handleAudioInput();
void reset();
void resetStats();
@ -145,7 +142,6 @@ public slots:
void toggleScopePause();
void toggleStats();
void toggleStatsShowInjectedStreams();
void toggleAudioSpatialProcessing();
void toggleStereoInput();
void selectAudioScopeFiveFrames();
void selectAudioScopeTwentyFrames();
@ -254,11 +250,6 @@ private:
float _iconColor;
qint64 _iconPulseTimeReference;
bool _processSpatialAudio; /// Process received audio by spatial audio hooks
unsigned int _spatialAudioStart; /// Start of spatial audio interval (in sample rate time base)
unsigned int _spatialAudioFinish; /// End of spatial audio interval (in sample rate time base)
AudioRingBuffer _spatialAudioRingBuffer; /// Spatially processed audio
// Process procedural audio by
// 1. Echo to the local procedural output device
// 2. Mix with the audio input

View file

@ -1,868 +0,0 @@
//
// AudioReflector.cpp
// interface
//
// Created by Brad Hefta-Gaub on 4/2/2014
// Copyright (c) 2014 High Fidelity, Inc. All rights reserved.
//
#include <QMutexLocker>
#include "AudioReflector.h"
#include "Menu.h"
const float DEFAULT_PRE_DELAY = 20.0f; // this delay in msecs will always be added to all reflections
const float DEFAULT_MS_DELAY_PER_METER = 3.0f;
const float MINIMUM_ATTENUATION_TO_REFLECT = 1.0f / 256.0f;
const float DEFAULT_DISTANCE_SCALING_FACTOR = 2.0f;
const float MAXIMUM_DELAY_MS = 1000.0 * 20.0f; // stop reflecting after path is this long
const int DEFAULT_DIFFUSION_FANOUT = 5;
const unsigned int ABSOLUTE_MAXIMUM_BOUNCE_COUNT = 10;
const float DEFAULT_LOCAL_ATTENUATION_FACTOR = 0.125;
const float DEFAULT_COMB_FILTER_WINDOW = 0.05f; //ms delay differential to avoid
const float SLIGHTLY_SHORT = 0.999f; // slightly inside the distance so we're on the inside of the reflection point
const float DEFAULT_ABSORPTION_RATIO = 0.125; // 12.5% is absorbed
const float DEFAULT_DIFFUSION_RATIO = 0.125; // 12.5% is diffused
const float DEFAULT_ORIGINAL_ATTENUATION = 1.0f;
const float DEFAULT_ECHO_ATTENUATION = 1.0f;
AudioReflector::AudioReflector(QObject* parent) :
QObject(parent),
_preDelay(DEFAULT_PRE_DELAY),
_soundMsPerMeter(DEFAULT_MS_DELAY_PER_METER),
_distanceAttenuationScalingFactor(DEFAULT_DISTANCE_SCALING_FACTOR),
_localAudioAttenuationFactor(DEFAULT_LOCAL_ATTENUATION_FACTOR),
_combFilterWindow(DEFAULT_COMB_FILTER_WINDOW),
_diffusionFanout(DEFAULT_DIFFUSION_FANOUT),
_absorptionRatio(DEFAULT_ABSORPTION_RATIO),
_diffusionRatio(DEFAULT_DIFFUSION_RATIO),
_originalSourceAttenuation(DEFAULT_ORIGINAL_ATTENUATION),
_allEchoesAttenuation(DEFAULT_ECHO_ATTENUATION),
_withDiffusion(false),
_lastPreDelay(DEFAULT_PRE_DELAY),
_lastSoundMsPerMeter(DEFAULT_MS_DELAY_PER_METER),
_lastDistanceAttenuationScalingFactor(DEFAULT_DISTANCE_SCALING_FACTOR),
_lastLocalAudioAttenuationFactor(DEFAULT_LOCAL_ATTENUATION_FACTOR),
_lastDiffusionFanout(DEFAULT_DIFFUSION_FANOUT),
_lastAbsorptionRatio(DEFAULT_ABSORPTION_RATIO),
_lastDiffusionRatio(DEFAULT_DIFFUSION_RATIO),
_lastDontDistanceAttenuate(false),
_lastAlternateDistanceAttenuate(false)
{
_reflections = 0;
_diffusionPathCount = 0;
_officialAverageAttenuation = _averageAttenuation = 0.0f;
_officialMaxAttenuation = _maxAttenuation = 0.0f;
_officialMinAttenuation = _minAttenuation = 0.0f;
_officialAverageDelay = _averageDelay = 0;
_officialMaxDelay = _maxDelay = 0;
_officialMinDelay = _minDelay = 0;
_inboundEchoesCount = 0;
_inboundEchoesSuppressedCount = 0;
_localEchoesCount = 0;
_localEchoesSuppressedCount = 0;
}
bool AudioReflector::haveAttributesChanged() {
// Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingWithDiffusions);
bool withDiffusion = true;
// Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingDontDistanceAttenuate);
bool dontDistanceAttenuate = false;
//Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingAlternateDistanceAttenuate);
bool alternateDistanceAttenuate = false;
bool attributesChange = (_withDiffusion != withDiffusion
|| _lastPreDelay != _preDelay
|| _lastSoundMsPerMeter != _soundMsPerMeter
|| _lastDistanceAttenuationScalingFactor != _distanceAttenuationScalingFactor
|| _lastDiffusionFanout != _diffusionFanout
|| _lastAbsorptionRatio != _absorptionRatio
|| _lastDiffusionRatio != _diffusionRatio
|| _lastDontDistanceAttenuate != dontDistanceAttenuate
|| _lastAlternateDistanceAttenuate != alternateDistanceAttenuate);
if (attributesChange) {
_withDiffusion = withDiffusion;
_lastPreDelay = _preDelay;
_lastSoundMsPerMeter = _soundMsPerMeter;
_lastDistanceAttenuationScalingFactor = _distanceAttenuationScalingFactor;
_lastDiffusionFanout = _diffusionFanout;
_lastAbsorptionRatio = _absorptionRatio;
_lastDiffusionRatio = _diffusionRatio;
_lastDontDistanceAttenuate = dontDistanceAttenuate;
_lastAlternateDistanceAttenuate = alternateDistanceAttenuate;
}
return attributesChange;
}
void AudioReflector::render() {
// if we're not set up yet, or we're not processing spatial audio, then exit early
if (!_myAvatar || !_audio->getProcessSpatialAudio()) {
return;
}
// use this opportunity to calculate our reflections
calculateAllReflections();
// only render if we've been asked to do so
bool renderPaths = false; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingRenderPaths)
if (renderPaths) {
drawRays();
}
}
// delay = 1ms per foot
// = 3ms per meter
float AudioReflector::getDelayFromDistance(float distance) {
float delay = (_soundMsPerMeter * distance);
bool includePreDelay = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingPreDelay)
if (includePreDelay) {
delay += _preDelay;
}
return delay;
}
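// Worked example (assuming the ~3 ms per meter noted above and pre-delay enabled): a reflection point
// 10 meters away yields 3.0f * 10.0f + _preDelay, i.e. 30 ms of travel delay plus the configured pre-delay.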
// attenuation formula modeled on the distance attenuation used by the Audio Mixer
float AudioReflector::getDistanceAttenuationCoefficient(float distance) {
//!Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingDontDistanceAttenuate);
bool doDistanceAttenuation = true;
//!Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingAlternateDistanceAttenuate);
bool originalFormula = true;
float distanceCoefficient = 1.0f;
if (doDistanceAttenuation) {
if (originalFormula) {
const float DISTANCE_SCALE = 2.5f;
const float GEOMETRIC_AMPLITUDE_SCALAR = 0.3f;
const float DISTANCE_LOG_BASE = 2.5f;
const float DISTANCE_SCALE_LOG = logf(DISTANCE_SCALE) / logf(DISTANCE_LOG_BASE);
float distanceSquareToSource = distance * distance;
// calculate the distance coefficient using the distance to this node
distanceCoefficient = powf(GEOMETRIC_AMPLITUDE_SCALAR,
DISTANCE_SCALE_LOG +
(0.5f * logf(distanceSquareToSource) / logf(DISTANCE_LOG_BASE)) - 1);
distanceCoefficient = std::min(1.0f, distanceCoefficient * getDistanceAttenuationScalingFactor());
} else {
// From Fred: If we wanted something that would produce a tail that could go up to 5 seconds in a
// really big room, that would suggest the sound still has to be audible after traveling about
// 1500 meters. If it's a sound of average volume, we probably have about 30 dB, or 5 base-2 orders
// of magnitude we can drop down before the sound becomes inaudible. (That's approximate headroom
// based on a few sloppy assumptions.) So we could try a factor like 1 / (2^(D/300)) for starters.
// 1 / (2^(D/300))
const float DISTANCE_BASE = 2.0f;
const float DISTANCE_DENOMINATOR = 300.0f;
const float DISTANCE_NUMERATOR = 300.0f;
distanceCoefficient = DISTANCE_NUMERATOR / powf(DISTANCE_BASE, (distance / DISTANCE_DENOMINATOR ));
distanceCoefficient = std::min(1.0f, distanceCoefficient * getDistanceAttenuationScalingFactor());
}
}
return distanceCoefficient;
}
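// Sketch of the closed form for the original formula (given the constants above): DISTANCE_SCALE_LOG ==
// log(2.5)/log(2.5) == 1, so the exponent reduces to log(distance)/log(2.5), and the coefficient becomes
//     0.3^(log_2.5(distance)) == distance^(ln(0.3)/ln(2.5)) ~= distance^-1.31
// before the scaling factor and the clamp to 1.0 are applied. For example, at 10 meters the unscaled
// coefficient is roughly 10^-1.31 ~= 0.05.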
glm::vec3 AudioReflector::getFaceNormal(BoxFace face) {
// Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingSlightlyRandomSurfaces);
bool wantSlightRandomness = true;
glm::vec3 faceNormal;
const float MIN_RANDOM_LENGTH = 0.99f;
const float MAX_RANDOM_LENGTH = 1.0f;
const float NON_RANDOM_LENGTH = 1.0f;
float normalLength = wantSlightRandomness ? randFloatInRange(MIN_RANDOM_LENGTH, MAX_RANDOM_LENGTH) : NON_RANDOM_LENGTH;
float remainder = (1.0f - normalLength)/2.0f;
float remainderSignA = randomSign();
float remainderSignB = randomSign();
if (face == MIN_X_FACE) {
faceNormal = glm::vec3(-normalLength, remainder * remainderSignA, remainder * remainderSignB);
} else if (face == MAX_X_FACE) {
faceNormal = glm::vec3(normalLength, remainder * remainderSignA, remainder * remainderSignB);
} else if (face == MIN_Y_FACE) {
faceNormal = glm::vec3(remainder * remainderSignA, -normalLength, remainder * remainderSignB);
} else if (face == MAX_Y_FACE) {
faceNormal = glm::vec3(remainder * remainderSignA, normalLength, remainder * remainderSignB);
} else if (face == MIN_Z_FACE) {
faceNormal = glm::vec3(remainder * remainderSignA, remainder * remainderSignB, -normalLength);
} else if (face == MAX_Z_FACE) {
faceNormal = glm::vec3(remainder * remainderSignA, remainder * remainderSignB, normalLength);
}
return faceNormal;
}
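// Worked example of the jitter (values hypothetical): if randFloatInRange() returns 0.995 for a MIN_X_FACE hit,
// remainder == 0.0025, so the normal comes out near (-0.995, +/-0.0025, +/-0.0025) -- essentially -X with a
// slight random tilt. Note the result is not re-normalized; its length stays within ~1% of unit length.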
// set up our buffers for our attenuated and delayed samples
const int NUMBER_OF_CHANNELS = 2;
void AudioReflector::injectAudiblePoint(AudioSource source, const AudiblePoint& audiblePoint,
const QByteArray& samples, unsigned int sampleTime, int sampleRate) {
bool wantEarSeparation = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingSeparateEars);
bool wantStereo = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingStereoSource);
glm::vec3 rightEarPosition = wantEarSeparation ? _myAvatar->getHead()->getRightEarPosition() :
_myAvatar->getHead()->getPosition();
glm::vec3 leftEarPosition = wantEarSeparation ? _myAvatar->getHead()->getLeftEarPosition() :
_myAvatar->getHead()->getPosition();
int totalNumberOfSamples = samples.size() / sizeof(int16_t);
int totalNumberOfStereoSamples = samples.size() / (sizeof(int16_t) * NUMBER_OF_CHANNELS);
const int16_t* originalSamplesData = (const int16_t*)samples.constData();
QByteArray attenuatedLeftSamples;
QByteArray attenuatedRightSamples;
attenuatedLeftSamples.resize(samples.size());
attenuatedRightSamples.resize(samples.size());
int16_t* attenuatedLeftSamplesData = (int16_t*)attenuatedLeftSamples.data();
int16_t* attenuatedRightSamplesData = (int16_t*)attenuatedRightSamples.data();
// calculate the distance to the ears
float rightEarDistance = glm::distance(audiblePoint.location, rightEarPosition);
float leftEarDistance = glm::distance(audiblePoint.location, leftEarPosition);
float rightEarDelayMsecs = getDelayFromDistance(rightEarDistance) + audiblePoint.delay;
float leftEarDelayMsecs = getDelayFromDistance(leftEarDistance) + audiblePoint.delay;
float averageEarDelayMsecs = (leftEarDelayMsecs + rightEarDelayMsecs) / 2.0f;
bool safeToInject = true; // assume the best
// check to see if this new injection point would be within the comb filter
// suppression window for any of the existing known delays
QMap<float, float>& knownDelays = (source == INBOUND_AUDIO) ? _inboundAudioDelays : _localAudioDelays;
QMap<float, float>::const_iterator lowerBound = knownDelays.lowerBound(averageEarDelayMsecs - _combFilterWindow);
if (lowerBound != knownDelays.end()) {
float closestFound = lowerBound.value();
float deltaToClosest = (averageEarDelayMsecs - closestFound);
if (deltaToClosest > -_combFilterWindow && deltaToClosest < _combFilterWindow) {
safeToInject = false;
}
}
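// Worked example (hypothetical numbers): with _combFilterWindow == 1.0 ms and an existing echo registered at
// 33.0 ms, a new echo whose average ear delay lands anywhere in (32.0, 34.0) ms is flagged unsafe and suppressed.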
// keep track of any of our suppressed echoes so we can report them in our statistics
if (!safeToInject) {
QVector<float>& suppressedEchoes = (source == INBOUND_AUDIO) ? _inboundEchoesSuppressed : _localEchoesSuppressed;
suppressedEchoes << averageEarDelayMsecs;
} else {
knownDelays[averageEarDelayMsecs] = averageEarDelayMsecs;
_totalDelay += rightEarDelayMsecs + leftEarDelayMsecs;
_delayCount += 2;
_maxDelay = std::max(_maxDelay,rightEarDelayMsecs);
_maxDelay = std::max(_maxDelay,leftEarDelayMsecs);
_minDelay = std::min(_minDelay,rightEarDelayMsecs);
_minDelay = std::min(_minDelay,leftEarDelayMsecs);
int rightEarDelay = rightEarDelayMsecs * sampleRate / MSECS_PER_SECOND;
int leftEarDelay = leftEarDelayMsecs * sampleRate / MSECS_PER_SECOND;
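// e.g., at a 24000 Hz sample rate, a 30 ms ear delay converts to 30 * 24000 / 1000 == 720 samples
// (assuming MSECS_PER_SECOND == 1000)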
float rightEarAttenuation = audiblePoint.attenuation *
getDistanceAttenuationCoefficient(rightEarDistance + audiblePoint.distance);
float leftEarAttenuation = audiblePoint.attenuation *
getDistanceAttenuationCoefficient(leftEarDistance + audiblePoint.distance);
_totalAttenuation += rightEarAttenuation + leftEarAttenuation;
_attenuationCount += 2;
_maxAttenuation = std::max(_maxAttenuation,rightEarAttenuation);
_maxAttenuation = std::max(_maxAttenuation,leftEarAttenuation);
_minAttenuation = std::min(_minAttenuation,rightEarAttenuation);
_minAttenuation = std::min(_minAttenuation,leftEarAttenuation);
// run through the samples, and attenuate them
for (int sample = 0; sample < totalNumberOfStereoSamples; sample++) {
int16_t leftSample = originalSamplesData[sample * NUMBER_OF_CHANNELS];
int16_t rightSample = leftSample;
if (wantStereo) {
rightSample = originalSamplesData[(sample * NUMBER_OF_CHANNELS) + 1];
}
attenuatedLeftSamplesData[sample * NUMBER_OF_CHANNELS] =
leftSample * leftEarAttenuation * _allEchoesAttenuation;
attenuatedLeftSamplesData[sample * NUMBER_OF_CHANNELS + 1] = 0;
attenuatedRightSamplesData[sample * NUMBER_OF_CHANNELS] = 0;
attenuatedRightSamplesData[sample * NUMBER_OF_CHANNELS + 1] =
rightSample * rightEarAttenuation * _allEchoesAttenuation;
}
// now inject the attenuated array with the appropriate delay
unsigned int sampleTimeLeft = sampleTime + leftEarDelay;
unsigned int sampleTimeRight = sampleTime + rightEarDelay;
_audio->addSpatialAudioToBuffer(sampleTimeLeft, attenuatedLeftSamples, totalNumberOfSamples);
_audio->addSpatialAudioToBuffer(sampleTimeRight, attenuatedRightSamples, totalNumberOfSamples);
_injectedEchoes++;
}
}
void AudioReflector::preProcessOriginalInboundAudio(unsigned int sampleTime,
QByteArray& samples, const QAudioFormat& format) {
if (_originalSourceAttenuation != 1.0f) {
int numberOfSamples = (samples.size() / sizeof(int16_t));
int16_t* sampleData = (int16_t*)samples.data();
for (int i = 0; i < numberOfSamples; i++) {
sampleData[i] = sampleData[i] * _originalSourceAttenuation;
}
}
}
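// Usage sketch (hypothetical value): calling setOriginalSourceAttenuation(0.25f) before this runs scales every
// dry inbound sample to a quarter of its original level, leaving the echoes' wet level untouched.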
void AudioReflector::processLocalAudio(unsigned int sampleTime, const QByteArray& samples, const QAudioFormat& format) {
bool processLocalAudio = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingProcessLocalAudio)
if (processLocalAudio) {
const int NUM_CHANNELS_INPUT = 1;
const int NUM_CHANNELS_OUTPUT = 2;
const int EXPECTED_SAMPLE_RATE = 24000;
if (format.channelCount() == NUM_CHANNELS_INPUT && format.sampleRate() == EXPECTED_SAMPLE_RATE) {
QAudioFormat outputFormat = format;
outputFormat.setChannelCount(NUM_CHANNELS_OUTPUT);
QByteArray stereoInputData(samples.size() * NUM_CHANNELS_OUTPUT, 0);
int numberOfSamples = (samples.size() / sizeof(int16_t));
int16_t* monoSamples = (int16_t*)samples.data();
int16_t* stereoSamples = (int16_t*)stereoInputData.data();
for (int i = 0; i < numberOfSamples; i++) {
stereoSamples[i* NUM_CHANNELS_OUTPUT] = monoSamples[i] * _localAudioAttenuationFactor;
stereoSamples[(i * NUM_CHANNELS_OUTPUT) + 1] = monoSamples[i] * _localAudioAttenuationFactor;
}
_localAudioDelays.clear();
_localEchoesSuppressed.clear();
echoAudio(LOCAL_AUDIO, sampleTime, stereoInputData, outputFormat);
_localEchoesCount = _localAudioDelays.size();
_localEchoesSuppressedCount = _localEchoesSuppressed.size();
}
}
}
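// Worked example of the upmix above: a 480-sample mono input buffer (20 ms at 24000 Hz) becomes a 960-sample
// interleaved stereo buffer, with each mono sample duplicated into both channels and scaled by
// _localAudioAttenuationFactor.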
void AudioReflector::processInboundAudio(unsigned int sampleTime, const QByteArray& samples, const QAudioFormat& format) {
_inboundAudioDelays.clear();
_inboundEchoesSuppressed.clear();
echoAudio(INBOUND_AUDIO, sampleTime, samples, format);
_inboundEchoesCount = _inboundAudioDelays.size();
_inboundEchoesSuppressedCount = _inboundEchoesSuppressed.size();
}
void AudioReflector::echoAudio(AudioSource source, unsigned int sampleTime, const QByteArray& samples, const QAudioFormat& format) {
QMutexLocker locker(&_mutex);
_maxDelay = 0;
_maxAttenuation = 0.0f;
_minDelay = std::numeric_limits<int>::max();
_minAttenuation = std::numeric_limits<float>::max();
_totalDelay = 0.0f;
_delayCount = 0;
_totalAttenuation = 0.0f;
_attenuationCount = 0;
// depending on whether we're processing local or external audio, pick the correct points vector
QVector<AudiblePoint>& audiblePoints = source == INBOUND_AUDIO ? _inboundAudiblePoints : _localAudiblePoints;
int injectCalls = 0;
_injectedEchoes = 0;
foreach(const AudiblePoint& audiblePoint, audiblePoints) {
injectCalls++;
injectAudiblePoint(source, audiblePoint, samples, sampleTime, format.sampleRate());
}
/*
qDebug() << "injectCalls=" << injectCalls;
qDebug() << "_injectedEchoes=" << _injectedEchoes;
*/
_averageDelay = _delayCount == 0 ? 0 : _totalDelay / _delayCount;
_averageAttenuation = _attenuationCount == 0 ? 0 : _totalAttenuation / _attenuationCount;
if (_reflections == 0) {
_minDelay = 0.0f;
_minAttenuation = 0.0f;
}
_officialMaxDelay = _maxDelay;
_officialMinDelay = _minDelay;
_officialMaxAttenuation = _maxAttenuation;
_officialMinAttenuation = _minAttenuation;
_officialAverageDelay = _averageDelay;
_officialAverageAttenuation = _averageAttenuation;
}
void AudioReflector::drawVector(const glm::vec3& start, const glm::vec3& end, const glm::vec3& color) {
glDisable(GL_LIGHTING);
glLineWidth(2.0);
// Draw the vector itself
glBegin(GL_LINES);
glColor3f(color.x,color.y,color.z);
glVertex3f(start.x, start.y, start.z);
glVertex3f(end.x, end.y, end.z);
glEnd();
glEnable(GL_LIGHTING);
}
AudioPath::AudioPath(AudioSource source, const glm::vec3& origin, const glm::vec3& direction,
float attenuation, float delay, float distance,bool isDiffusion, int bounceCount) :
source(source),
isDiffusion(isDiffusion),
startPoint(origin),
startDirection(direction),
startDelay(delay),
startAttenuation(attenuation),
lastPoint(origin),
lastDirection(direction),
lastDistance(distance),
lastDelay(delay),
lastAttenuation(attenuation),
bounceCount(bounceCount),
finalized(false),
reflections()
{
}
void AudioReflector::addAudioPath(AudioSource source, const glm::vec3& origin, const glm::vec3& initialDirection,
float initialAttenuation, float initialDelay, float initialDistance, bool isDiffusion) {
AudioPath* path = new AudioPath(source, origin, initialDirection, initialAttenuation, initialDelay,
initialDistance, isDiffusion, 0);
QVector<AudioPath*>& audioPaths = source == INBOUND_AUDIO ? _inboundAudioPaths : _localAudioPaths;
audioPaths.push_back(path);
}
// NOTE: This is a prototype of an eventual utility that will identify the speaking sources for the inbound audio
// stream. It's not currently called but will be added soon.
void AudioReflector::identifyAudioSources() {
// looking for audio sources....
foreach (const AvatarSharedPointer& avatarPointer, _avatarManager->getAvatarHash()) {
Avatar* avatar = static_cast<Avatar*>(avatarPointer.data());
if (!avatar->isInitialized()) {
continue;
}
qDebug() << "avatar["<< avatar <<"] loudness:" << avatar->getAudioLoudness();
}
}
void AudioReflector::calculateAllReflections() {
// only recalculate when we've moved, or if the attributes have changed
// TODO: what about case where new voxels are added in front of us???
bool wantHeadOrientation = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingHeadOriented);
glm::quat orientation = wantHeadOrientation ? _myAvatar->getHead()->getFinalOrientationInWorldFrame() : _myAvatar->getOrientation();
glm::vec3 origin = _myAvatar->getHead()->getPosition();
glm::vec3 listenerPosition = _myAvatar->getHead()->getPosition();
bool shouldRecalc = _reflections == 0
|| !isSimilarPosition(origin, _origin)
|| !isSimilarOrientation(orientation, _orientation)
|| !isSimilarPosition(listenerPosition, _listenerPosition)
|| haveAttributesChanged();
if (shouldRecalc) {
QMutexLocker locker(&_mutex);
quint64 start = usecTimestampNow();
_origin = origin;
_orientation = orientation;
_listenerPosition = listenerPosition;
analyzePaths(); // actually does the work
quint64 end = usecTimestampNow();
const bool wantDebugging = false;
if (wantDebugging) {
qDebug() << "newCalculateAllReflections() elapsed=" << (end - start);
}
}
}
void AudioReflector::drawRays() {
const glm::vec3 RED(1,0,0);
const glm::vec3 GREEN(0,1,0);
const glm::vec3 BLUE(0,0,1);
const glm::vec3 CYAN(0,1,1);
int diffusionNumber = 0;
QMutexLocker locker(&_mutex);
// draw the paths for inbound audio
foreach(AudioPath* const& path, _inboundAudioPaths) {
// original reflections draw in RED, diffusions in GREEN
if (path->isDiffusion) {
diffusionNumber++;
drawPath(path, GREEN);
} else {
drawPath(path, RED);
}
}
bool processLocalAudio = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingProcessLocalAudio)
if (processLocalAudio) {
// draw the paths for local audio
foreach(AudioPath* const& path, _localAudioPaths) {
// original reflections draw in BLUE, diffusions in CYAN
if (path->isDiffusion) {
diffusionNumber++;
drawPath(path, CYAN);
} else {
drawPath(path, BLUE);
}
}
}
}
void AudioReflector::drawPath(AudioPath* path, const glm::vec3& originalColor) {
glm::vec3 start = path->startPoint;
glm::vec3 color = originalColor;
const float COLOR_ADJUST_PER_BOUNCE = 0.75f;
foreach (glm::vec3 end, path->reflections) {
drawVector(start, end, color);
start = end;
color = color * COLOR_ADJUST_PER_BOUNCE;
}
}
void AudioReflector::clearPaths() {
// clear our inbound audio paths
foreach(AudioPath* const& path, _inboundAudioPaths) {
delete path;
}
_inboundAudioPaths.clear();
_inboundAudiblePoints.clear(); // clear our inbound audible points
// clear our local audio paths
foreach(AudioPath* const& path, _localAudioPaths) {
delete path;
}
_localAudioPaths.clear();
_localAudiblePoints.clear(); // clear our local audible points
}
// Here's how this works: we have an array of AudioPaths; we loop over all of our currently calculating audio
// paths and calculate one ray per path. If that ray doesn't reflect, or reaches a max distance/attenuation, then it
// is considered finalized.
// If the ray hits a surface then, based on the characteristics of that surface, it will calculate the new
// attenuation, path length, and delay for the primary path. For surfaces that have diffusion, it will also create
// "fanout" new paths; those new paths will have an origin at the reflection point and an initial attenuation
// of their diffusion ratio. Those new paths are added to the active audio paths and analyzed on the next loop.
void AudioReflector::analyzePaths() {
clearPaths();
// add our initial paths
glm::vec3 right = glm::normalize(_orientation * IDENTITY_RIGHT);
glm::vec3 up = glm::normalize(_orientation * IDENTITY_UP);
glm::vec3 front = glm::normalize(_orientation * IDENTITY_FRONT);
glm::vec3 left = -right;
glm::vec3 down = -up;
glm::vec3 back = -front;
glm::vec3 frontRightUp = glm::normalize(front + right + up);
glm::vec3 frontLeftUp = glm::normalize(front + left + up);
glm::vec3 backRightUp = glm::normalize(back + right + up);
glm::vec3 backLeftUp = glm::normalize(back + left + up);
glm::vec3 frontRightDown = glm::normalize(front + right + down);
glm::vec3 frontLeftDown = glm::normalize(front + left + down);
glm::vec3 backRightDown = glm::normalize(back + right + down);
glm::vec3 backLeftDown = glm::normalize(back + left + down);
float initialAttenuation = 1.0f;
bool wantPreDelay = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingPreDelay)
float preDelay = wantPreDelay ? _preDelay : 0.0f;
// NOTE: we're still calculating our initial paths based on the listener's position. But since the analysis code
// has been updated to support individual sound sources (which is how we support diffusion), we could use this new
// paradigm to add support for individual, more directional sound sources.
addAudioPath(INBOUND_AUDIO, _origin, front, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, right, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, up, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, down, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, back, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, left, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, frontRightUp, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, frontLeftUp, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, backRightUp, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, backLeftUp, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, frontRightDown, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, frontLeftDown, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, backRightDown, initialAttenuation, preDelay);
addAudioPath(INBOUND_AUDIO, _origin, backLeftDown, initialAttenuation, preDelay);
// the original paths for the local audio are directional, radiating out the front of the origin
addAudioPath(LOCAL_AUDIO, _origin, front, initialAttenuation, preDelay);
addAudioPath(LOCAL_AUDIO, _origin, frontRightUp, initialAttenuation, preDelay);
addAudioPath(LOCAL_AUDIO, _origin, frontLeftUp, initialAttenuation, preDelay);
addAudioPath(LOCAL_AUDIO, _origin, frontRightDown, initialAttenuation, preDelay);
addAudioPath(LOCAL_AUDIO, _origin, frontLeftDown, initialAttenuation, preDelay);
// loop through all our audio paths and keep analyzing them until they complete
int steps = 0;
int activePaths = _inboundAudioPaths.size() + _localAudioPaths.size(); // when we start, all paths are active
while (activePaths > 0) {
activePaths = analyzePathsSingleStep();
steps++;
}
_reflections = _inboundAudiblePoints.size() + _localAudiblePoints.size();
_diffusionPathCount = countDiffusionPaths();
}
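// Worked example: the code above seeds 14 inbound paths (6 axis directions plus 8 corner diagonals) and
// 5 forward-facing local paths, so the analysis loop starts with 19 active paths and runs until bounce limits,
// attenuation floors, and delay caps finalize them all.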
int AudioReflector::countDiffusionPaths() {
int diffusionCount = 0;
foreach(AudioPath* const& path, _inboundAudioPaths) {
if (path->isDiffusion) {
diffusionCount++;
}
}
foreach(AudioPath* const& path, _localAudioPaths) {
if (path->isDiffusion) {
diffusionCount++;
}
}
return diffusionCount;
}
int AudioReflector::analyzePathsSingleStep() {
// iterate all the active sound paths, calculate one step per active path
int activePaths = 0;
QVector<AudioPath*>* pathsLists[] = { &_inboundAudioPaths, &_localAudioPaths };
for(unsigned int i = 0; i < sizeof(pathsLists) / sizeof(pathsLists[0]); i++) {
QVector<AudioPath*>& pathList = *pathsLists[i];
foreach(AudioPath* const& path, pathList) {
glm::vec3 start = path->lastPoint;
glm::vec3 direction = path->lastDirection;
OctreeElement* elementHit; // output from findRayIntersection
float distance; // output from findRayIntersection
BoxFace face; // output from findRayIntersection
if (!path->finalized) {
activePaths++;
if (path->bounceCount > ABSOLUTE_MAXIMUM_BOUNCE_COUNT) {
path->finalized = true;
} else if (_voxels->findRayIntersection(start, direction, elementHit, distance, face)) {
// TODO: we need to decide how we want to handle locking on the ray intersection, if we force lock,
// we get an accurate picture, but it could prevent rendering of the voxels. If we trylock (default),
// we might not get ray intersections where they may exist, but we can't really detect that case...
// add last parameter of Octree::Lock to force locking
handlePathPoint(path, distance, elementHit, face);
} else {
// If we didn't intersect, but this was a diffusion ray, then we will go ahead and cast a short ray out
// from our last known point, in the last known direction, and leave that sound source hanging there
if (path->isDiffusion) {
const float MINIMUM_RANDOM_DISTANCE = 0.25f;
const float MAXIMUM_RANDOM_DISTANCE = 0.5f;
float distance = randFloatInRange(MINIMUM_RANDOM_DISTANCE, MAXIMUM_RANDOM_DISTANCE);
handlePathPoint(path, distance, NULL, UNKNOWN_FACE);
} else {
path->finalized = true; // if it doesn't intersect, then it is finished
}
}
}
}
}
return activePaths;
}
void AudioReflector::handlePathPoint(AudioPath* path, float distance, OctreeElement* elementHit, BoxFace face) {
glm::vec3 start = path->lastPoint;
glm::vec3 direction = path->lastDirection;
glm::vec3 end = start + (direction * (distance * SLIGHTLY_SHORT));
float currentReflectiveAttenuation = path->lastAttenuation; // only the reflective components
float currentDelay = path->lastDelay; // start with our delay so far
float pathDistance = path->lastDistance;
pathDistance += glm::distance(start, end);
float toListenerDistance = glm::distance(end, _listenerPosition);
// adjust our current delay by just the delay from the most recent ray
currentDelay += getDelayFromDistance(distance);
// now we know the current attenuation for the "perfect" reflection case; next we incorporate
// our surface materials to determine how much of this ray is absorbed, reflected, and diffused
SurfaceCharacteristics material = getSurfaceCharacteristics(elementHit);
float reflectiveAttenuation = currentReflectiveAttenuation * material.reflectiveRatio;
float totalDiffusionAttenuation = currentReflectiveAttenuation * material.diffusionRatio;
bool wantDiffusions = true; // Menu::getInstance()->isOptionChecked(MenuOption::AudioSpatialProcessingWithDiffusions);
int fanout = wantDiffusions ? _diffusionFanout : 0;
float partialDiffusionAttenuation = fanout < 1 ? 0.0f : totalDiffusionAttenuation / (float)fanout;
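// Worked example (hypothetical numbers): with _diffusionFanout == 5 and a total diffusion attenuation of 0.2f,
// each of the 5 diffusion rays spawned below starts with a partial attenuation of 0.04f.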
// total delay includes the bounce back to listener
float totalDelay = currentDelay + getDelayFromDistance(toListenerDistance);
float toListenerAttenuation = getDistanceAttenuationCoefficient(toListenerDistance + pathDistance);
// if our resulting partial diffusion attenuation is still above our minimum attenuation
// then we add new paths for each diffusion point
if ((partialDiffusionAttenuation * toListenerAttenuation) > MINIMUM_ATTENUATION_TO_REFLECT
&& totalDelay < MAXIMUM_DELAY_MS) {
// diffusions fan out from random places on the hemisphere of the collision point
for(int i = 0; i < fanout; i++) {
glm::vec3 diffusion;
// We're creating a random normal here, but we want it to be considerably more dramatic than the slight
// randomness we apply to our surface normals.
const float MINIMUM_RANDOM_LENGTH = 0.5f;
const float MAXIMUM_RANDOM_LENGTH = 1.0f;
float randomness = randFloatInRange(MINIMUM_RANDOM_LENGTH, MAXIMUM_RANDOM_LENGTH);
float remainder = (1.0f - randomness)/2.0f;
float remainderSignA = randomSign();
float remainderSignB = randomSign();
if (face == MIN_X_FACE) {
diffusion = glm::vec3(-randomness, remainder * remainderSignA, remainder * remainderSignB);
} else if (face == MAX_X_FACE) {
diffusion = glm::vec3(randomness, remainder * remainderSignA, remainder * remainderSignB);
} else if (face == MIN_Y_FACE) {
diffusion = glm::vec3(remainder * remainderSignA, -randomness, remainder * remainderSignB);
} else if (face == MAX_Y_FACE) {
diffusion = glm::vec3(remainder * remainderSignA, randomness, remainder * remainderSignB);
} else if (face == MIN_Z_FACE) {
diffusion = glm::vec3(remainder * remainderSignA, remainder * remainderSignB, -randomness);
} else if (face == MAX_Z_FACE) {
diffusion = glm::vec3(remainder * remainderSignA, remainder * remainderSignB, randomness);
} else if (face == UNKNOWN_FACE) {
float randomnessX = randFloatInRange(MINIMUM_RANDOM_LENGTH, MAXIMUM_RANDOM_LENGTH);
float randomnessY = randFloatInRange(MINIMUM_RANDOM_LENGTH, MAXIMUM_RANDOM_LENGTH);
float randomnessZ = randFloatInRange(MINIMUM_RANDOM_LENGTH, MAXIMUM_RANDOM_LENGTH);
diffusion = glm::vec3(direction.x * randomnessX, direction.y * randomnessY, direction.z * randomnessZ);
}
diffusion = glm::normalize(diffusion);
// add a new audio path for this diffusion; the new path's source is the same as the original source
addAudioPath(path->source, end, diffusion, partialDiffusionAttenuation, currentDelay, pathDistance, true);
}
} else {
const bool wantDebugging = false;
if (wantDebugging) {
if ((partialDiffusionAttenuation * toListenerAttenuation) <= MINIMUM_ATTENUATION_TO_REFLECT) {
qDebug() << "too quiet to diffuse";
qDebug() << " partialDiffusionAttenuation=" << partialDiffusionAttenuation;
qDebug() << " toListenerAttenuation=" << toListenerAttenuation;
qDebug() << " result=" << (partialDiffusionAttenuation * toListenerAttenuation);
qDebug() << " MINIMUM_ATTENUATION_TO_REFLECT=" << MINIMUM_ATTENUATION_TO_REFLECT;
}
if (totalDelay > MAXIMUM_DELAY_MS) {
qDebug() << "too delayed to diffuse";
qDebug() << " totalDelay=" << totalDelay;
qDebug() << " MAXIMUM_DELAY_MS=" << MAXIMUM_DELAY_MS;
}
}
}
// if our reflective attenuation is above our minimum, then add our reflection point and
// allow our path to continue
if (((reflectiveAttenuation + totalDiffusionAttenuation) * toListenerAttenuation) > MINIMUM_ATTENUATION_TO_REFLECT
&& totalDelay < MAXIMUM_DELAY_MS) {
// add this location, with the reflective attenuation as well as the total diffusion attenuation
// NOTE: we add the delay to the audible point, not back to the listener. The additional delay
// and attenuation to the listener are recalculated at the point where we actually inject the
// audio, so that they can be adjusted to each ear's position
AudiblePoint point = {end, currentDelay, (reflectiveAttenuation + totalDiffusionAttenuation), pathDistance};
QVector<AudiblePoint>& audiblePoints = path->source == INBOUND_AUDIO ? _inboundAudiblePoints : _localAudiblePoints;
audiblePoints.push_back(point);
// add this location to the path points, so we can visualize it
path->reflections.push_back(end);
// now, if our reflective attenuation is over our minimum then keep going...
if (reflectiveAttenuation * toListenerAttenuation > MINIMUM_ATTENUATION_TO_REFLECT) {
glm::vec3 faceNormal = getFaceNormal(face);
path->lastDirection = glm::normalize(glm::reflect(direction,faceNormal));
path->lastPoint = end;
path->lastAttenuation = reflectiveAttenuation;
path->lastDelay = currentDelay;
path->lastDistance = pathDistance;
path->bounceCount++;
} else {
path->finalized = true; // if we're too quiet, then we're done
}
} else {
const bool wantDebugging = false;
if (wantDebugging) {
if (((reflectiveAttenuation + totalDiffusionAttenuation) * toListenerAttenuation) <= MINIMUM_ATTENUATION_TO_REFLECT) {
qDebug() << "too quiet to add audible point";
qDebug() << " reflectiveAttenuation + totalDiffusionAttenuation=" << (reflectiveAttenuation + totalDiffusionAttenuation);
qDebug() << " toListenerAttenuation=" << toListenerAttenuation;
qDebug() << " result=" << ((reflectiveAttenuation + totalDiffusionAttenuation) * toListenerAttenuation);
qDebug() << " MINIMUM_ATTENUATION_TO_REFLECT=" << MINIMUM_ATTENUATION_TO_REFLECT;
}
if (totalDelay > MAXIMUM_DELAY_MS) {
qDebug() << "too delayed to add audible point";
qDebug() << " totalDelay=" << totalDelay;
qDebug() << " MAXIMUM_DELAY_MS=" << MAXIMUM_DELAY_MS;
}
}
path->finalized = true; // if we're too quiet, then we're done
}
}
// TODO: eventually we will add support for different surface characteristics based on the element
// that is hit, which is why we pass in the elementHit to this helper function. But for now, all
// surfaces have the same characteristics
SurfaceCharacteristics AudioReflector::getSurfaceCharacteristics(OctreeElement* elementHit) {
SurfaceCharacteristics result = { getReflectiveRatio(), _absorptionRatio, _diffusionRatio };
return result;
}
void AudioReflector::setReflectiveRatio(float ratio) {
float safeRatio = std::max(0.0f, std::min(ratio, 1.0f));
float currentReflectiveRatio = (1.0f - (_absorptionRatio + _diffusionRatio));
float halfDifference = (safeRatio - currentReflectiveRatio) / 2.0f;
// evenly distribute the difference between the two other ratios
_absorptionRatio -= halfDifference;
_diffusionRatio -= halfDifference;
}
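// Worked example: with _absorptionRatio == 0.2f and _diffusionRatio == 0.2f (reflective ratio 0.6f),
// setReflectiveRatio(0.8f) computes halfDifference == 0.1f and lowers both ratios to 0.1f, leaving
// absorption + diffusion + reflection summing to 1.0f as before.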
void AudioReflector::setAbsorptionRatio(float ratio) {
float safeRatio = std::max(0.0f, std::min(ratio, 1.0f));
_absorptionRatio = safeRatio;
const float MAX_COMBINED_RATIO = 1.0f;
if (_absorptionRatio + _diffusionRatio > MAX_COMBINED_RATIO) {
_diffusionRatio = MAX_COMBINED_RATIO - _absorptionRatio;
}
}
void AudioReflector::setDiffusionRatio(float ratio) {
float safeRatio = std::max(0.0f, std::min(ratio, 1.0f));
_diffusionRatio = safeRatio;
const float MAX_COMBINED_RATIO = 1.0f;
if (_absorptionRatio + _diffusionRatio > MAX_COMBINED_RATIO) {
_absorptionRatio = MAX_COMBINED_RATIO - _diffusionRatio;
}
}
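// Usage sketch (hypothetical values, exercising the setters declared in AudioReflector.h):
//     AudioReflector reflector;
//     reflector.setPreDelay(20.0f);                 // 20 ms added to every echo
//     reflector.setSoundMsPerMeter(3.0f);           // roughly the speed of sound
//     reflector.setDiffusionFanout(5);              // 5 diffusion rays per reflection point
//     reflector.setAbsorptionRatio(0.125f);         // surfaces absorb 12.5% of each ray...
//     reflector.setDiffusionRatio(0.125f);          // ...diffuse 12.5%, leaving 75% reflective
//     reflector.setOriginalSourceAttenuation(1.0f); // dry mix level
//     reflector.setEchoesAttenuation(0.5f);         // wet mix level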

@ -1,254 +0,0 @@
//
// AudioReflector.h
// interface
//
// Created by Brad Hefta-Gaub on 4/2/2014
// Copyright (c) 2014 High Fidelity, Inc. All rights reserved.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#ifndef interface_AudioReflector_h
#define interface_AudioReflector_h
#include <QMutex>
#include <VoxelTree.h>
#include "Audio.h"
#include "avatar/MyAvatar.h"
#include "avatar/AvatarManager.h"
enum AudioSource {
LOCAL_AUDIO,
INBOUND_AUDIO
};
class AudioPath {
public:
AudioPath(AudioSource source = INBOUND_AUDIO, const glm::vec3& origin = glm::vec3(0.0f),
const glm::vec3& direction = glm::vec3(0.0f), float attenuation = 1.0f,
float delay = 0.0f, float distance = 0.0f, bool isDiffusion = false, int bounceCount = 0);
AudioSource source;
bool isDiffusion;
glm::vec3 startPoint;
glm::vec3 startDirection;
float startDelay;
float startAttenuation;
glm::vec3 lastPoint;
glm::vec3 lastDirection;
float lastDistance;
float lastDelay;
float lastAttenuation;
unsigned int bounceCount;
bool finalized;
QVector<glm::vec3> reflections;
};
class AudiblePoint {
public:
glm::vec3 location; /// location of the audible point
float delay; /// total delay, including pre-delay, to the audible location; not to the listener's ears
float attenuation; /// only the reflective & diffusive portion of attenuation; doesn't include distance attenuation
float distance; /// total distance to the audible location; not to the listener's ears
};
class SurfaceCharacteristics {
public:
float reflectiveRatio;
float absorptionRatio;
float diffusionRatio;
};
class AudioReflector : public QObject {
Q_OBJECT
public:
AudioReflector(QObject* parent = NULL);
// setup functions to configure the resources used by the AudioReflector
void setVoxels(VoxelTree* voxels) { _voxels = voxels; }
void setMyAvatar(MyAvatar* myAvatar) { _myAvatar = myAvatar; }
void setAudio(Audio* audio) { _audio = audio; }
void setAvatarManager(AvatarManager* avatarManager) { _avatarManager = avatarManager; }
void render(); /// must be called in the application render loop
void preProcessOriginalInboundAudio(unsigned int sampleTime, QByteArray& samples, const QAudioFormat& format);
void processInboundAudio(unsigned int sampleTime, const QByteArray& samples, const QAudioFormat& format);
void processLocalAudio(unsigned int sampleTime, const QByteArray& samples, const QAudioFormat& format);
public slots:
// statistics
int getReflections() const { return _reflections; }
float getAverageDelayMsecs() const { return _officialAverageDelay; }
float getAverageAttenuation() const { return _officialAverageAttenuation; }
float getMaxDelayMsecs() const { return _officialMaxDelay; }
float getMaxAttenuation() const { return _officialMaxAttenuation; }
float getMinDelayMsecs() const { return _officialMinDelay; }
float getMinAttenuation() const { return _officialMinAttenuation; }
float getDelayFromDistance(float distance);
int getDiffusionPathCount() const { return _diffusionPathCount; }
int getEchoesInjected() const { return _inboundEchoesCount + _localEchoesCount; }
int getEchoesSuppressed() const { return _inboundEchoesSuppressedCount + _localEchoesSuppressedCount; }
/// ms of delay added to all echoes
float getPreDelay() const { return _preDelay; }
void setPreDelay(float preDelay) { _preDelay = preDelay; }
/// ms per meter that sound travels; larger means slower sound, which makes the space sound bigger
float getSoundMsPerMeter() const { return _soundMsPerMeter; }
void setSoundMsPerMeter(float soundMsPerMeter) { _soundMsPerMeter = soundMsPerMeter; }
/// scales attenuation to be louder or softer than the default distance attenuation
float getDistanceAttenuationScalingFactor() const { return _distanceAttenuationScalingFactor; }
void setDistanceAttenuationScalingFactor(float factor) { _distanceAttenuationScalingFactor = factor; }
/// scales attenuation of local audio to be louder or softer than the default attenuation
float getLocalAudioAttenuationFactor() const { return _localAudioAttenuationFactor; }
void setLocalAudioAttenuationFactor(float factor) { _localAudioAttenuationFactor = factor; }
/// ms window in which we will suppress echoes to reduce comb filter effects
float getCombFilterWindow() const { return _combFilterWindow; }
void setCombFilterWindow(float value) { _combFilterWindow = value; }
/// number of points of diffusion from each reflection point, as fanout increases there are more chances for secondary
/// echoes, but each diffusion ray is quieter and therefore more likely to be below the sound floor
int getDiffusionFanout() const { return _diffusionFanout; }
void setDiffusionFanout(int fanout) { _diffusionFanout = fanout; }
/// ratio 0.0 - 1.0 of amount of each ray that is absorbed upon hitting a surface
float getAbsorptionRatio() const { return _absorptionRatio; }
void setAbsorptionRatio(float ratio);
/// ratio 0.0 - 1.0 of amount of each ray that is diffused upon hitting a surface
float getDiffusionRatio() const { return _diffusionRatio; }
void setDiffusionRatio(float ratio);
/// remaining ratio 0.0 - 1.0 of amount of each ray that is cleanly reflected upon hitting a surface
float getReflectiveRatio() const { return (1.0f - (_absorptionRatio + _diffusionRatio)); }
void setReflectiveRatio(float ratio);
// wet/dry mix - these don't affect any reflection calculations, only the final mix volumes
float getOriginalSourceAttenuation() const { return _originalSourceAttenuation; }
void setOriginalSourceAttenuation(float value) { _originalSourceAttenuation = value; }
float getEchoesAttenuation() const { return _allEchoesAttenuation; }
void setEchoesAttenuation(float value) { _allEchoesAttenuation = value; }
signals:
private:
VoxelTree* _voxels; // used to access voxel scene
MyAvatar* _myAvatar; // access to listener
Audio* _audio; // access to audio API
AvatarManager* _avatarManager; // access to avatar manager API
// Helpers for drawing
void drawVector(const glm::vec3& start, const glm::vec3& end, const glm::vec3& color);
// helper for generically calculating attenuation based on distance
float getDistanceAttenuationCoefficient(float distance);
// statistics
int _reflections;
int _diffusionPathCount;
int _delayCount;
float _totalDelay;
float _averageDelay;
float _maxDelay;
float _minDelay;
float _officialAverageDelay;
float _officialMaxDelay;
float _officialMinDelay;
int _attenuationCount;
float _totalAttenuation;
float _averageAttenuation;
float _maxAttenuation;
float _minAttenuation;
float _officialAverageAttenuation;
float _officialMaxAttenuation;
float _officialMinAttenuation;
glm::vec3 _listenerPosition;
glm::vec3 _origin;
glm::quat _orientation;
QVector<AudioPath*> _inboundAudioPaths; /// audio paths we're processing for inbound audio
QVector<AudiblePoint> _inboundAudiblePoints; /// the audible points that have been calculated from the inbound audio paths
QMap<float, float> _inboundAudioDelays; /// delay times for currently injected audio points
QVector<float> _inboundEchoesSuppressed; /// delay times of echoes suppressed by the comb filter window
int _inboundEchoesCount;
int _inboundEchoesSuppressedCount;
QVector<AudioPath*> _localAudioPaths; /// audio paths we're processing for local audio
QVector<AudiblePoint> _localAudiblePoints; /// the audible points that have been calculated from the local audio paths
QMap<float, float> _localAudioDelays; /// delay times for currently injected audio points
QVector<float> _localEchoesSuppressed; /// delay times of echoes suppressed by the comb filter window
int _localEchoesCount;
int _localEchoesSuppressedCount;
// adds a sound source to begin an audio path trace; these can be the initial sound sources with their directional
// properties, as well as diffusion sound sources
void addAudioPath(AudioSource source, const glm::vec3& origin, const glm::vec3& initialDirection, float initialAttenuation,
float initialDelay, float initialDistance = 0.0f, bool isDiffusion = false);
// helper that handles audioPath analysis
int analyzePathsSingleStep();
void handlePathPoint(AudioPath* path, float distance, OctreeElement* elementHit, BoxFace face);
void clearPaths();
void analyzePaths();
void drawRays();
void drawPath(AudioPath* path, const glm::vec3& originalColor);
void calculateAllReflections();
int countDiffusionPaths();
glm::vec3 getFaceNormal(BoxFace face);
void identifyAudioSources();
void injectAudiblePoint(AudioSource source, const AudiblePoint& audiblePoint, const QByteArray& samples, unsigned int sampleTime, int sampleRate);
void echoAudio(AudioSource source, unsigned int sampleTime, const QByteArray& samples, const QAudioFormat& format);
// return the surface characteristics of the element we hit
SurfaceCharacteristics getSurfaceCharacteristics(OctreeElement* elementHit = NULL);
QMutex _mutex;
float _preDelay;
float _soundMsPerMeter;
float _distanceAttenuationScalingFactor;
float _localAudioAttenuationFactor;
float _combFilterWindow;
int _diffusionFanout; // number of points of diffusion from each reflection point
// all elements have the same material for now...
float _absorptionRatio;
float _diffusionRatio;
float _reflectiveRatio;
// wet/dry mix - these don't affect any reflection calculations, only the final mix volumes
float _originalSourceAttenuation; /// each sample of original signal will be multiplied by this
float _allEchoesAttenuation; /// each sample of all echo signals will be multiplied by this
// remember the last known values at calculation
bool haveAttributesChanged();
bool _withDiffusion;
float _lastPreDelay;
float _lastSoundMsPerMeter;
float _lastDistanceAttenuationScalingFactor;
float _lastLocalAudioAttenuationFactor;
int _lastDiffusionFanout;
float _lastAbsorptionRatio;
float _lastDiffusionRatio;
bool _lastDontDistanceAttenuate;
bool _lastAlternateDistanceAttenuate;
int _injectedEchoes;
};
#endif // interface_AudioReflector_h

@ -95,10 +95,8 @@ void Camera::setFarClip(float f) {
}
PickRay Camera::computePickRay(float x, float y) {
float screenWidth = Application::getInstance()->getGLWidget()->width();
float screenHeight = Application::getInstance()->getGLWidget()->height();
return computeViewPickRay(x / screenWidth, y / screenHeight);
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
return computeViewPickRay(x / glCanvas->width(), y / glCanvas->height());
}
PickRay Camera::computeViewPickRay(float xRatio, float yRatio) {

@ -17,11 +17,12 @@
#include <GeometryUtil.h>
#include <PacketHeaders.h>
#include <PathUtils.h>
#include <ProgramObject.h>
#include <SharedUtil.h>
#include "Application.h"
#include "Camera.h"
#include "renderer/ProgramObject.h"
#include "world.h"
#include "Environment.h"
@ -188,7 +189,7 @@ int Environment::parseData(const HifiSockAddr& senderAddress, const QByteArray&
ProgramObject* Environment::createSkyProgram(const char* from, int* locations) {
ProgramObject* program = new ProgramObject();
QByteArray prefix = QString(Application::resourcesPath() + "/shaders/SkyFrom" + from).toUtf8();
QByteArray prefix = QString(PathUtils::resourcesPath() + "/shaders/SkyFrom" + from).toUtf8();
program->addShaderFromSourceFile(QGLShader::Vertex, prefix + ".vert");
program->addShaderFromSourceFile(QGLShader::Fragment, prefix + ".frag");
program->link();
@ -261,7 +262,7 @@ void Environment::renderAtmosphere(Camera& camera, const EnvironmentData& data)
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
Application::getInstance()->getGeometryCache()->renderSphere(1.0f, 100, 50); //Draw a unit sphere
DependencyManager::get<GeometryCache>()->renderSphere(1.0f, 100, 50); //Draw a unit sphere
glDepthMask(GL_TRUE);
program->release();

@ -9,13 +9,13 @@
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#include <QMainWindow>
#include <QMimeData>
#include <QUrl>
#include <QWindow>
#include "Application.h"
#include "GLCanvas.h"
#include "MainWindow.h"
#include "devices/OculusManager.h"
const int MSECS_PER_FRAME_WHEN_THROTTLED = 66;

@ -15,11 +15,14 @@
#include <QGLWidget>
#include <QTimer>
#include <DependencyManager.h>
/// customized canvas that simply forwards requests/events to the singleton application
class GLCanvas : public QGLWidget {
Q_OBJECT
SINGLETON_DEPENDENCY(GLCanvas)
public:
GLCanvas();
bool isThrottleRendering() const;
int getDeviceWidth() const;
@ -56,6 +59,12 @@ private slots:
void activeChanged(Qt::ApplicationState state);
void throttleRender();
bool eventFilter(QObject*, QEvent* event);
private:
GLCanvas();
~GLCanvas() {
qDebug() << "Deleting GLCanvas";
}
};
#endif // hifi_GLCanvas_h

@ -10,11 +10,14 @@
//
// Creates single flexible verlet-integrated strands that can be used for hair/fur/grass
#include "Hair.h"
#include <gpu/GPUConfig.h>
#include "Util.h"
#include "world.h"
#include "Hair.h"
const float HAIR_DAMPING = 0.99f;
const float CONSTRAINT_RELAXATION = 10.0f;
const float HAIR_ACCELERATION_COUPLING = 0.045f;

@ -31,12 +31,19 @@
#include <AccountManager.h>
#include <AddressManager.h>
#include <XmppClient.h>
#include <DependencyManager.h>
#include <MainWindow.h>
#include <GlowEffect.h>
#include <PathUtils.h>
#include <UUID.h>
#include <UserActivityLogger.h>
#include <XmppClient.h>
#include "Application.h"
#include "AccountManager.h"
#include "devices/Faceshift.h"
#include "devices/OculusManager.h"
#include "devices/Visage.h"
#include "Menu.h"
#include "scripting/LocationScriptingInterface.h"
#include "scripting/MenuScriptingInterface.h"
@ -49,8 +56,6 @@
#include "ui/ModelsBrowser.h"
#include "ui/LoginDialog.h"
#include "ui/NodeBounds.h"
#include "devices/OculusManager.h"
Menu* Menu::_instance = NULL;
@ -184,10 +189,14 @@ Menu::Menu() :
SLOT(toggleAddressBar()));
addDisabledActionAndSeparator(fileMenu, "Upload Avatar Model");
addActionToQMenuAndActionHash(fileMenu, MenuOption::UploadHead, 0, Application::getInstance(), SLOT(uploadHead()));
addActionToQMenuAndActionHash(fileMenu, MenuOption::UploadSkeleton, 0, Application::getInstance(), SLOT(uploadSkeleton()));
addActionToQMenuAndActionHash(fileMenu, MenuOption::UploadHead, 0,
Application::getInstance(), SLOT(uploadHead()));
addActionToQMenuAndActionHash(fileMenu, MenuOption::UploadSkeleton, 0,
Application::getInstance(), SLOT(uploadSkeleton()));
addActionToQMenuAndActionHash(fileMenu, MenuOption::UploadAttachment, 0,
Application::getInstance(), SLOT(uploadAttachment()));
Application::getInstance(), SLOT(uploadAttachment()));
addActionToQMenuAndActionHash(fileMenu, MenuOption::UploadEntity, 0,
Application::getInstance(), SLOT(uploadEntity()));
addDisabledActionAndSeparator(fileMenu, "Settings");
addActionToQMenuAndActionHash(fileMenu, MenuOption::SettingsImport, 0, this, SLOT(importSettings()));
addActionToQMenuAndActionHash(fileMenu, MenuOption::SettingsExport, 0, this, SLOT(exportSettings()));
@ -243,7 +252,7 @@ Menu::Menu() :
connect(&xmppClient, &QXmppClient::connected, this, &Menu::toggleChat);
connect(&xmppClient, &QXmppClient::disconnected, this, &Menu::toggleChat);
QDir::setCurrent(Application::resourcesPath());
QDir::setCurrent(PathUtils::resourcesPath());
// init chat window to listen chat
_chatWindow = new ChatWindow(Application::getInstance()->getWindow());
#endif
@ -421,7 +430,8 @@ Menu::Menu() :
true,
appInstance,
SLOT(setRenderVoxels(bool)));
addCheckableActionToQMenuAndActionHash(renderOptionsMenu, MenuOption::EnableGlowEffect, 0, true);
addCheckableActionToQMenuAndActionHash(renderOptionsMenu, MenuOption::EnableGlowEffect, 0, true,
DependencyManager::get<GlowEffect>().data(), SLOT(toggleGlowEffect(bool)));
addCheckableActionToQMenuAndActionHash(renderOptionsMenu, MenuOption::Wireframe, Qt::ALT | Qt::Key_W, false);
addActionToQMenuAndActionHash(renderOptionsMenu, MenuOption::LodTools, Qt::SHIFT | Qt::Key_L, this, SLOT(lodTools()));
@ -432,12 +442,12 @@ Menu::Menu() :
MenuOption::Faceshift,
0,
true,
appInstance->getFaceshift(),
DependencyManager::get<Faceshift>().data(),
SLOT(setTCPEnabled(bool)));
#endif
#ifdef HAVE_VISAGE
addCheckableActionToQMenuAndActionHash(avatarDebugMenu, MenuOption::Visage, 0, false,
appInstance->getVisage(), SLOT(updateEnabled()));
DependencyManager::get<Visage>().data(), SLOT(updateEnabled()));
#endif
addCheckableActionToQMenuAndActionHash(avatarDebugMenu, MenuOption::RenderSkeletonCollisionShapes);
@ -1046,11 +1056,11 @@ void Menu::bumpSettings() {
void sendFakeEnterEvent() {
QPoint lastCursorPosition = QCursor::pos();
QGLWidget* glWidget = Application::getInstance()->getGLWidget();
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
QPoint windowPosition = glWidget->mapFromGlobal(lastCursorPosition);
QPoint windowPosition = glCanvas->mapFromGlobal(lastCursorPosition);
QEnterEvent enterEvent = QEnterEvent(windowPosition, windowPosition, lastCursorPosition);
QCoreApplication::sendEvent(glWidget, &enterEvent);
QCoreApplication::sendEvent(glCanvas.data(), &enterEvent);
}
const float DIALOG_RATIO_OF_WINDOW = 0.30f;
@ -1298,11 +1308,15 @@ void Menu::toggleLoginMenuItem() {
void Menu::bandwidthDetails() {
if (! _bandwidthDialog) {
_bandwidthDialog = new BandwidthDialog(Application::getInstance()->getGLWidget(),
_bandwidthDialog = new BandwidthDialog(DependencyManager::get<GLCanvas>().data(),
Application::getInstance()->getBandwidthMeter());
connect(_bandwidthDialog, SIGNAL(closed()), SLOT(bandwidthDetailsClosed()));
_bandwidthDialog->show();
if (_hmdToolsDialog) {
_hmdToolsDialog->watchWindow(_bandwidthDialog->windowHandle());
}
}
_bandwidthDialog->raise();
}
@ -1384,6 +1398,9 @@ void Menu::toggleConsole() {
void Menu::toggleToolWindow() {
QMainWindow* toolWindow = Application::getInstance()->getToolWindow();
toolWindow->setVisible(!toolWindow->isVisible());
if (_hmdToolsDialog) {
_hmdToolsDialog->watchWindow(toolWindow->windowHandle());
}
}
void Menu::audioMuteToggled() {
@ -1402,10 +1419,13 @@ void Menu::bandwidthDetailsClosed() {
void Menu::octreeStatsDetails() {
if (!_octreeStatsDialog) {
_octreeStatsDialog = new OctreeStatsDialog(Application::getInstance()->getGLWidget(),
_octreeStatsDialog = new OctreeStatsDialog(DependencyManager::get<GLCanvas>().data(),
Application::getInstance()->getOcteeSceneStats());
connect(_octreeStatsDialog, SIGNAL(closed()), SLOT(octreeStatsDetailsClosed()));
_octreeStatsDialog->show();
if (_hmdToolsDialog) {
_hmdToolsDialog->watchWindow(_octreeStatsDialog->windowHandle());
}
}
_octreeStatsDialog->raise();
}
@ -1583,9 +1603,12 @@ bool Menu::shouldRenderMesh(float largestDimension, float distanceToCamera) {
void Menu::lodTools() {
if (!_lodToolsDialog) {
_lodToolsDialog = new LodToolsDialog(Application::getInstance()->getGLWidget());
_lodToolsDialog = new LodToolsDialog(DependencyManager::get<GLCanvas>().data());
connect(_lodToolsDialog, SIGNAL(closed()), SLOT(lodToolsClosed()));
_lodToolsDialog->show();
if (_hmdToolsDialog) {
_hmdToolsDialog->watchWindow(_lodToolsDialog->windowHandle());
}
}
_lodToolsDialog->raise();
}
@ -1600,7 +1623,7 @@ void Menu::lodToolsClosed() {
void Menu::hmdTools(bool showTools) {
if (showTools) {
if (!_hmdToolsDialog) {
_hmdToolsDialog = new HMDToolsDialog(Application::getInstance()->getGLWidget());
_hmdToolsDialog = new HMDToolsDialog(DependencyManager::get<GLCanvas>().data());
connect(_hmdToolsDialog, SIGNAL(closed()), SLOT(hmdToolsClosed()));
}
_hmdToolsDialog->show();

@ -123,7 +123,6 @@ public:
LodToolsDialog* getLodToolsDialog() const { return _lodToolsDialog; }
HMDToolsDialog* getHMDToolsDialog() const { return _hmdToolsDialog; }
int getMaxVoxels() const { return _maxVoxels; }
QAction* getUseVoxelShader() const { return _useVoxelShader; }
bool getShadowsEnabled() const;
@ -304,7 +303,6 @@ private:
float _avatarLODIncreaseFPS;
float _avatarLODDistanceMultiplier;
int _boundaryLevelAdjust;
QAction* _useVoxelShader;
int _maxVoxelPacketsPerSecond;
QString replaceLastOccurrence(QChar search, QChar replace, QString string);
quint64 _lastAdjust;
@ -363,9 +361,6 @@ namespace MenuOption {
const QString Collisions = "Collisions";
const QString Console = "Console...";
const QString ControlWithSpeech = "Control With Speech";
const QString DontCullOutOfViewMeshParts = "Don't Cull Out Of View Mesh Parts";
const QString DontCullTooSmallMeshParts = "Don't Cull Too Small Mesh Parts";
const QString DontReduceMaterialSwitches = "Don't Attempt to Reduce Material Switches";
const QString DontRenderEntitiesAsScene = "Don't Render Entities as Scene";
const QString DontDoPrecisionPicking = "Don't Do Precision Picking";
const QString DecreaseAvatarSize = "Decrease Avatar Size";
@ -487,6 +482,7 @@ namespace MenuOption {
const QString TransmitterDrive = "Transmitter Drive";
const QString TurnWithHead = "Turn using Head";
const QString UploadAttachment = "Upload Attachment Model";
const QString UploadEntity = "Upload Entity Model";
const QString UploadHead = "Upload Head Model";
const QString UploadSkeleton = "Upload Skeleton Model";
const QString UserInterface = "User Interface";

@ -21,17 +21,18 @@
#include <glm/gtx/transform.hpp>
#include <DeferredLightingEffect.h>
#include <GeometryUtil.h>
#include <Model.h>
#include <SharedUtil.h>
#include <MetavoxelMessages.h>
#include <MetavoxelUtil.h>
#include <PathUtils.h>
#include <ScriptCache.h>
#include "Application.h"
#include "MetavoxelSystem.h"
#include "renderer/Model.h"
#include "renderer/RenderUtil.h"
REGISTER_META_OBJECT(DefaultMetavoxelRendererImplementation)
REGISTER_META_OBJECT(SphereRenderer)
@ -63,9 +64,9 @@ void MetavoxelSystem::init() {
_voxelBufferAttribute->setLODThresholdMultiplier(
AttributeRegistry::getInstance()->getVoxelColorAttribute()->getLODThresholdMultiplier());
_baseHeightfieldProgram.addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() +
_baseHeightfieldProgram.addShaderFromSourceFile(QGLShader::Vertex, PathUtils::resourcesPath() +
"shaders/metavoxel_heightfield_base.vert");
_baseHeightfieldProgram.addShaderFromSourceFile(QGLShader::Fragment, Application::resourcesPath() +
_baseHeightfieldProgram.addShaderFromSourceFile(QGLShader::Fragment, PathUtils::resourcesPath() +
"shaders/metavoxel_heightfield_base.frag");
_baseHeightfieldProgram.link();
@ -78,9 +79,9 @@ void MetavoxelSystem::init() {
loadSplatProgram("heightfield", _splatHeightfieldProgram, _splatHeightfieldLocations);
_heightfieldCursorProgram.addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() +
_heightfieldCursorProgram.addShaderFromSourceFile(QGLShader::Vertex, PathUtils::resourcesPath() +
"shaders/metavoxel_heightfield_cursor.vert");
_heightfieldCursorProgram.addShaderFromSourceFile(QGLShader::Fragment, Application::resourcesPath() +
_heightfieldCursorProgram.addShaderFromSourceFile(QGLShader::Fragment, PathUtils::resourcesPath() +
"shaders/metavoxel_cursor.frag");
_heightfieldCursorProgram.link();
@ -88,17 +89,17 @@ void MetavoxelSystem::init() {
_heightfieldCursorProgram.setUniformValue("heightMap", 0);
_heightfieldCursorProgram.release();
_baseVoxelProgram.addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() +
_baseVoxelProgram.addShaderFromSourceFile(QGLShader::Vertex, PathUtils::resourcesPath() +
"shaders/metavoxel_voxel_base.vert");
_baseVoxelProgram.addShaderFromSourceFile(QGLShader::Fragment, Application::resourcesPath() +
_baseVoxelProgram.addShaderFromSourceFile(QGLShader::Fragment, PathUtils::resourcesPath() +
"shaders/metavoxel_voxel_base.frag");
_baseVoxelProgram.link();
loadSplatProgram("voxel", _splatVoxelProgram, _splatVoxelLocations);
_voxelCursorProgram.addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() +
_voxelCursorProgram.addShaderFromSourceFile(QGLShader::Vertex, PathUtils::resourcesPath() +
"shaders/metavoxel_voxel_cursor.vert");
_voxelCursorProgram.addShaderFromSourceFile(QGLShader::Fragment, Application::resourcesPath() +
_voxelCursorProgram.addShaderFromSourceFile(QGLShader::Fragment, PathUtils::resourcesPath() +
"shaders/metavoxel_cursor.frag");
_voxelCursorProgram.link();
}
@ -205,7 +206,7 @@ void MetavoxelSystem::render() {
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
Application::getInstance()->getTextureCache()->setPrimaryDrawBuffers(true, true);
DependencyManager::get<TextureCache>()->setPrimaryDrawBuffers(true, true);
glDisable(GL_BLEND);
glEnable(GL_CULL_FACE);
@ -251,7 +252,7 @@ void MetavoxelSystem::render() {
glPopMatrix();
}
Application::getInstance()->getTextureCache()->setPrimaryDrawBuffers(true, false);
DependencyManager::get<TextureCache>()->setPrimaryDrawBuffers(true, false);
_baseHeightfieldProgram.release();
@ -348,7 +349,7 @@ void MetavoxelSystem::render() {
}
if (!_voxelBaseBatches.isEmpty()) {
Application::getInstance()->getTextureCache()->setPrimaryDrawBuffers(true, true);
DependencyManager::get<TextureCache>()->setPrimaryDrawBuffers(true, true);
glEnableClientState(GL_VERTEX_ARRAY);
glDisable(GL_BLEND);
@ -383,7 +384,7 @@ void MetavoxelSystem::render() {
glDisable(GL_ALPHA_TEST);
glEnable(GL_BLEND);
Application::getInstance()->getTextureCache()->setPrimaryDrawBuffers(true, false);
DependencyManager::get<TextureCache>()->setPrimaryDrawBuffers(true, false);
if (!_voxelSplatBatches.isEmpty()) {
glDepthFunc(GL_LEQUAL);
@ -463,14 +464,14 @@ void MetavoxelSystem::render() {
}
if (!_hermiteBatches.isEmpty() && Menu::getInstance()->isOptionChecked(MenuOption::DisplayHermiteData)) {
Application::getInstance()->getTextureCache()->setPrimaryDrawBuffers(true, true);
DependencyManager::get<TextureCache>()->setPrimaryDrawBuffers(true, true);
glEnableClientState(GL_VERTEX_ARRAY);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glNormal3f(0.0f, 1.0f, 0.0f);
Application::getInstance()->getDeferredLightingEffect()->bindSimpleProgram();
DependencyManager::get<DeferredLightingEffect>()->bindSimpleProgram();
foreach (const HermiteBatch& batch, _hermiteBatches) {
batch.vertexBuffer->bind();
@ -482,11 +483,11 @@ void MetavoxelSystem::render() {
batch.vertexBuffer->release();
}
Application::getInstance()->getDeferredLightingEffect()->releaseSimpleProgram();
DependencyManager::get<DeferredLightingEffect>()->releaseSimpleProgram();
glDisableClientState(GL_VERTEX_ARRAY);
Application::getInstance()->getTextureCache()->setPrimaryDrawBuffers(true, false);
DependencyManager::get<TextureCache>()->setPrimaryDrawBuffers(true, false);
}
_hermiteBatches.clear();
@ -495,7 +496,7 @@ void MetavoxelSystem::render() {
}
void MetavoxelSystem::refreshVoxelData() {
foreach (const SharedNodePointer& node, NodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([](const SharedNodePointer& node){
if (node->getType() == NodeType::MetavoxelServer) {
QMutexLocker locker(&node->getMutex());
MetavoxelSystemClient* client = static_cast<MetavoxelSystemClient*>(node->getLinkedData());
@@ -503,7 +504,7 @@ void MetavoxelSystem::refreshVoxelData() {
QMetaObject::invokeMethod(client, "refreshVoxelData");
}
}
}
});
}
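`refreshVoxelData()` (and `guideToAugmented()` below) stop iterating `getNodeHash()` directly and instead hand a lambda to `NodeList::eachNode()`, which can keep the node container locked for the whole traversal. A sketch of the shape of such a method, with hypothetical locking details (the real `eachNode()` lives in the networking library):

    #include <QHash>
    #include <QReadLocker>
    #include <QReadWriteLock>
    #include <QSharedPointer>
    #include <QUuid>
    #include <functional>

    class Node { /* ... */ };
    typedef QSharedPointer<Node> SharedNodePointer;

    class NodeListSketch {
    public:
        // Hypothetical signature; sketched from the call sites in this diff.
        void eachNode(std::function<void(const SharedNodePointer&)> functor) {
            QReadLocker locker(&_nodeMutex);   // hold the lock across the traversal
            foreach (const SharedNodePointer& node, _nodeHash) {
                functor(node);
            }
        }
    private:
        QReadWriteLock _nodeMutex;
        QHash<QUuid, SharedNodePointer> _nodeHash;
    };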
class RayVoxelIntersectionVisitor : public RayIntersectionVisitor {
@@ -797,7 +798,7 @@ void MetavoxelSystem::applyMaterialEdit(const MetavoxelEditMessage& message, boo
Q_ARG(bool, reliable));
return;
}
QSharedPointer<NetworkTexture> texture = Application::getInstance()->getTextureCache()->getTexture(
QSharedPointer<NetworkTexture> texture = DependencyManager::get<TextureCache>()->getTexture(
material->getDiffuse(), SPLAT_TEXTURE);
if (texture->isLoaded()) {
MetavoxelEditMessage newMessage = message;
@@ -818,7 +819,7 @@ MetavoxelClient* MetavoxelSystem::createClient(const SharedNodePointer& node) {
}
void MetavoxelSystem::guideToAugmented(MetavoxelVisitor& visitor, bool render) {
foreach (const SharedNodePointer& node, NodeList::getInstance()->getNodeHash()) {
NodeList::getInstance()->eachNode([&visitor, &render](const SharedNodePointer& node){
if (node->getType() == NodeType::MetavoxelServer) {
QMutexLocker locker(&node->getMutex());
MetavoxelSystemClient* client = static_cast<MetavoxelSystemClient*>(node->getLinkedData());
@@ -832,13 +833,13 @@ void MetavoxelSystem::guideToAugmented(MetavoxelVisitor& visitor, bool render) {
}
}
}
}
});
}
void MetavoxelSystem::loadSplatProgram(const char* type, ProgramObject& program, SplatLocations& locations) {
program.addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() +
program.addShaderFromSourceFile(QGLShader::Vertex, PathUtils::resourcesPath() +
"shaders/metavoxel_" + type + "_splat.vert");
program.addShaderFromSourceFile(QGLShader::Fragment, Application::resourcesPath() +
program.addShaderFromSourceFile(QGLShader::Fragment, PathUtils::resourcesPath() +
"shaders/metavoxel_" + type + "_splat.frag");
program.link();
@@ -1177,10 +1178,11 @@ void VoxelBuffer::render(bool cursor) {
if (!_materials.isEmpty()) {
_networkTextures.resize(_materials.size());
TextureCache::SharedPointer textureCache = DependencyManager::get<TextureCache>();
for (int i = 0; i < _materials.size(); i++) {
const SharedObjectPointer material = _materials.at(i);
if (material) {
_networkTextures[i] = Application::getInstance()->getTextureCache()->getTexture(
_networkTextures[i] = textureCache->getTexture(
static_cast<MaterialObject*>(material.data())->getDiffuse(), SPLAT_TEXTURE);
}
}
@@ -1980,7 +1982,7 @@ void SphereRenderer::render(const MetavoxelLOD& lod, bool contained, bool cursor
glm::vec3 axis = glm::axis(rotation);
glRotatef(glm::degrees(glm::angle(rotation)), axis.x, axis.y, axis.z);
Application::getInstance()->getDeferredLightingEffect()->renderSolidSphere(sphere->getScale(), 32, 32);
DependencyManager::get<DeferredLightingEffect>()->renderSolidSphere(sphere->getScale(), 32, 32);
glPopMatrix();
}
@@ -2001,7 +2003,7 @@ void CuboidRenderer::render(const MetavoxelLOD& lod, bool contained, bool cursor
glRotatef(glm::degrees(glm::angle(rotation)), axis.x, axis.y, axis.z);
glScalef(1.0f, cuboid->getAspectY(), cuboid->getAspectZ());
Application::getInstance()->getDeferredLightingEffect()->renderSolidCube(cuboid->getScale() * 2.0f);
DependencyManager::get<DeferredLightingEffect>()->renderSolidCube(cuboid->getScale() * 2.0f);
glPopMatrix();
}
@@ -2188,13 +2190,15 @@ void HeightfieldNodeRenderer::render(const HeightfieldNodePointer& node, const g
bufferPair.second.release();
}
if (_heightTextureID == 0) {
// we use non-aligned data for the various layers
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &_heightTextureID);
glBindTexture(GL_TEXTURE_2D, _heightTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
const QVector<quint16>& heightContents = node->getHeight()->getContents();
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, width, height, 0,
GL_RED, GL_UNSIGNED_SHORT, heightContents.constData());
@@ -2229,10 +2233,11 @@ void HeightfieldNodeRenderer::render(const HeightfieldNodePointer& node, const g
const QVector<SharedObjectPointer>& materials = node->getMaterial()->getMaterials();
_networkTextures.resize(materials.size());
TextureCache::SharedPointer textureCache = DependencyManager::get<TextureCache>();
for (int i = 0; i < materials.size(); i++) {
const SharedObjectPointer& material = materials.at(i);
if (material) {
_networkTextures[i] = Application::getInstance()->getTextureCache()->getTexture(
_networkTextures[i] = textureCache->getTexture(
static_cast<MaterialObject*>(material.data())->getDiffuse(), SPLAT_TEXTURE);
}
}
@@ -2241,6 +2246,9 @@ void HeightfieldNodeRenderer::render(const HeightfieldNodePointer& node, const g
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 1, 1, 0, GL_RED, GL_UNSIGNED_BYTE, &ZERO_VALUE);
}
glBindTexture(GL_TEXTURE_2D, 0);
// restore the default alignment; it's what Qt uses for image storage
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
}
if (cursor) {
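The alignment calls added in these hunks matter because the heightfield layers upload byte-packed rows: with the default `GL_UNPACK_ALIGNMENT` of 4, any row whose byte width is not a multiple of 4 would be read with phantom padding and shear the texture. A small sketch of the pattern, assuming a current GL context:

    // Upload a tightly packed single-channel image, then restore the default
    // alignment, which Qt's image paths rely on.
    void uploadPackedTexture(GLuint texture, int width, int height, const unsigned char* pixels) {
        glBindTexture(GL_TEXTURE_2D, texture);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows may not be 4-byte multiples
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
            GL_RED, GL_UNSIGNED_BYTE, pixels);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);   // back to the GL (and Qt) default
        glBindTexture(GL_TEXTURE_2D, 0);
    }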

View file

@@ -20,8 +20,8 @@
#include <glm/glm.hpp>
#include <MetavoxelClientManager.h>
#include "renderer/ProgramObject.h"
#include <ProgramObject.h>
#include <TextureCache.h>
class HeightfieldBaseLayerBatch;
class HeightfieldSplatBatch;

View file

@@ -55,7 +55,8 @@ static const QString MODEL_URL = "/api/v1/models";
static const QString SETTING_NAME = "LastModelUploadLocation";
static const int MAX_SIZE = 10 * 1024 * 1024; // 10 MB
static const int BYTES_PER_MEGABYTES = 1024 * 1024;
static const unsigned long MAX_SIZE = 50 * 1024 * BYTES_PER_MEGABYTES; // 50 GB (virtually removes the limit)
static const int MAX_TEXTURE_SIZE = 1024;
static const int TIMEOUT = 1000;
static const int MAX_CHECK = 30;
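For the new limit, `50 * 1024 * BYTES_PER_MEGABYTES` works out to 53,687,091,200 bytes (about 50 GiB), which cannot fit in a 32-bit `int` (maximum 2,147,483,647); that is why both `MAX_SIZE` here and `_totalSize` in the header move off `int`. Note that plain `unsigned long` is still 32 bits on Windows, so a 64-bit spelling would be the portable choice; a quick check:

    #include <cstdio>

    static const int BYTES_PER_MEGABYTES = 1024 * 1024;

    int main() {
        // The ULL suffix keeps the multiplication in 64 bits on every platform.
        unsigned long long maxSize = 50ULL * 1024 * BYTES_PER_MEGABYTES;
        std::printf("%llu bytes = %llu MB\n", maxSize, maxSize / BYTES_PER_MEGABYTES);
        return 0;   // prints: 53687091200 bytes = 51200 MB
    }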
@@ -63,6 +64,32 @@ static const int MAX_CHECK = 30;
static const int QCOMPRESS_HEADER_POSITION = 0;
static const int QCOMPRESS_HEADER_SIZE = 4;
void ModelUploader::uploadModel(ModelType modelType) {
ModelUploader* uploader = new ModelUploader(modelType);
QThread* thread = new QThread();
thread->connect(uploader, SIGNAL(destroyed()), SLOT(quit()));
thread->connect(thread, SIGNAL(finished()), SLOT(deleteLater()));
uploader->connect(thread, SIGNAL(started()), SLOT(send()));
thread->start();
}
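The wiring above follows Qt's worker-object idiom: destroying the uploader quits the thread, a finished thread deletes itself, and `started()` kicks off `send()`. A self-contained sketch of the same chain; the `moveToThread()` step is an assumption here, since the hunk as rendered does not show one:

    #include <QCoreApplication>
    #include <QDebug>
    #include <QThread>

    class Worker : public QObject {
        Q_OBJECT
    public slots:
        void send() {
            qDebug() << "working on" << QThread::currentThread();
            deleteLater();   // destroying the worker quits the thread (see connects below)
        }
    };

    int main(int argc, char** argv) {
        QCoreApplication app(argc, argv);
        Worker* worker = new Worker();
        QThread* thread = new QThread();
        worker->moveToThread(thread);   // assumption: not visible in the hunk above
        QObject::connect(worker, SIGNAL(destroyed()), thread, SLOT(quit()));
        QObject::connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
        QObject::connect(thread, SIGNAL(started()), worker, SLOT(send()));
        QObject::connect(thread, SIGNAL(finished()), &app, SLOT(quit()));
        thread->start();
        return app.exec();
    }

    #include "main.moc"   // assumes the file is main.cpp, built with moc/AUTOMOC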
void ModelUploader::uploadHead() {
uploadModel(HEAD_MODEL);
}
void ModelUploader::uploadSkeleton() {
uploadModel(SKELETON_MODEL);
}
void ModelUploader::uploadAttachment() {
uploadModel(ATTACHMENT_MODEL);
}
void ModelUploader::uploadEntity() {
uploadModel(ENTITY_MODEL);
}
ModelUploader::ModelUploader(ModelType modelType) :
_lodCount(-1),
_texturesCount(-1),
@@ -148,6 +175,91 @@ bool ModelUploader::zip() {
FBXGeometry geometry = readFBX(fbxContents, QVariantHash());
// make sure we have some basic mappings
populateBasicMapping(mapping, filename, geometry);
// open the dialog to configure the rest
ModelPropertiesDialog properties(_modelType, mapping, basePath, geometry);
if (properties.exec() == QDialog::Rejected) {
return false;
}
mapping = properties.getMapping();
QByteArray nameField = mapping.value(NAME_FIELD).toByteArray();
QString urlBase;
if (!nameField.isEmpty()) {
QHttpPart textPart;
textPart.setHeader(QNetworkRequest::ContentDispositionHeader, "form-data; name=\"model_name\"");
textPart.setBody(nameField);
_dataMultiPart->append(textPart);
urlBase = S3_URL + "/models/" + MODEL_TYPE_NAMES[_modelType] + "/" + nameField;
_url = urlBase + ".fst";
} else {
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("Model name is missing in the .fst file."),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("Model name is missing in the .fst file.");
return false;
}
QByteArray texdirField = mapping.value(TEXDIR_FIELD).toByteArray();
QString texDir;
_textureBase = urlBase + "/textures/";
if (!texdirField.isEmpty()) {
texDir = basePath + "/" + texdirField;
QFileInfo texInfo(texDir);
if (!texInfo.exists() || !texInfo.isDir()) {
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("Texture directory could not be found."),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("Texture directory could not be found.");
return false;
}
}
QVariantHash lodField = mapping.value(LOD_FIELD).toHash();
for (QVariantHash::const_iterator it = lodField.constBegin(); it != lodField.constEnd(); it++) {
QFileInfo lod(basePath + "/" + it.key());
if (!lod.exists() || !lod.isFile()) { // Check existence
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("LOD file %1 could not be found.").arg(lod.fileName()),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("FBX file %1 could not be found.").arg(lod.fileName());
}
// Compress and copy
if (!addPart(lod.filePath(), QString("lod%1").arg(++_lodCount))) {
return false;
}
}
// Write out, compress and copy the fst
if (!addPart(*fst, writeMapping(mapping), QString("fst"))) {
return false;
}
// Compress and copy the fbx
if (!addPart(fbx, fbxContents, "fbx")) {
return false;
}
if (!addTextures(texDir, geometry)) {
return false;
}
QHttpPart textPart;
textPart.setHeader(QNetworkRequest::ContentDispositionHeader, "form-data;"
" name=\"model_category\"");
textPart.setBody(MODEL_TYPE_NAMES[_modelType]);
_dataMultiPart->append(textPart);
_readyToSend = true;
return true;
}
void ModelUploader::populateBasicMapping(QVariantHash& mapping, QString filename, FBXGeometry geometry) {
if (!mapping.contains(NAME_FIELD)) {
mapping.insert(NAME_FIELD, QFileInfo(filename).baseName());
}
@@ -162,11 +274,11 @@ bool ModelUploader::zip() {
QVariantHash joints = mapping.value(JOINT_FIELD).toHash();
if (!joints.contains("jointEyeLeft")) {
joints.insert("jointEyeLeft", geometry.jointIndices.contains("jointEyeLeft") ? "jointEyeLeft" :
(geometry.jointIndices.contains("EyeLeft") ? "EyeLeft" : "LeftEye"));
(geometry.jointIndices.contains("EyeLeft") ? "EyeLeft" : "LeftEye"));
}
if (!joints.contains("jointEyeRight")) {
joints.insert("jointEyeRight", geometry.jointIndices.contains("jointEyeRight") ? "jointEyeRight" :
geometry.jointIndices.contains("EyeRight") ? "EyeRight" : "RightEye");
geometry.jointIndices.contains("EyeRight") ? "EyeRight" : "RightEye");
}
if (!joints.contains("jointNeck")) {
joints.insert("jointNeck", geometry.jointIndices.contains("jointNeck") ? "jointNeck" : "Neck");
@@ -250,87 +362,6 @@ bool ModelUploader::zip() {
blendshapes.insertMulti("Sneer", QVariantList() << "Squint_Right" << 0.5);
mapping.insert(BLENDSHAPE_FIELD, blendshapes);
}
// open the dialog to configure the rest
ModelPropertiesDialog properties(_modelType, mapping, basePath, geometry);
if (properties.exec() == QDialog::Rejected) {
return false;
}
mapping = properties.getMapping();
QByteArray nameField = mapping.value(NAME_FIELD).toByteArray();
QString urlBase;
if (!nameField.isEmpty()) {
QHttpPart textPart;
textPart.setHeader(QNetworkRequest::ContentDispositionHeader, "form-data; name=\"model_name\"");
textPart.setBody(nameField);
_dataMultiPart->append(textPart);
urlBase = S3_URL + "/models/" + MODEL_TYPE_NAMES[_modelType] + "/" + nameField;
_url = urlBase + ".fst";
} else {
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("Model name is missing in the .fst file."),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("Model name is missing in the .fst file.");
return false;
}
QByteArray texdirField = mapping.value(TEXDIR_FIELD).toByteArray();
QString texDir;
_textureBase = urlBase + "/textures/";
if (!texdirField.isEmpty()) {
texDir = basePath + "/" + texdirField;
QFileInfo texInfo(texDir);
if (!texInfo.exists() || !texInfo.isDir()) {
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("Texture directory could not be found."),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("Texture directory could not be found.");
return false;
}
}
QVariantHash lodField = mapping.value(LOD_FIELD).toHash();
for (QVariantHash::const_iterator it = lodField.constBegin(); it != lodField.constEnd(); it++) {
QFileInfo lod(basePath + "/" + it.key());
if (!lod.exists() || !lod.isFile()) { // Check existence
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("LOD file %1 could not be found.").arg(lod.fileName()),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("FBX file %1 could not be found.").arg(lod.fileName());
}
// Compress and copy
if (!addPart(lod.filePath(), QString("lod%1").arg(++_lodCount))) {
return false;
}
}
// Write out, compress and copy the fst
if (!addPart(*fst, writeMapping(mapping), QString("fst"))) {
return false;
}
// Compress and copy the fbx
if (!addPart(fbx, fbxContents, "fbx")) {
return false;
}
if (!addTextures(texDir, geometry)) {
return false;
}
QHttpPart textPart;
textPart.setHeader(QNetworkRequest::ContentDispositionHeader, "form-data;"
" name=\"model_category\"");
textPart.setBody(MODEL_TYPE_NAMES[_modelType]);
_dataMultiPart->append(textPart);
_readyToSend = true;
return true;
}
void ModelUploader::send() {
@@ -466,17 +497,19 @@ void ModelUploader::processCheck() {
_timer.stop();
switch (reply->error()) {
case QNetworkReply::NoError:
case QNetworkReply::NoError: {
QMessageBox::information(NULL,
QString("ModelUploader::processCheck()"),
QString("Your model is now available in the browser."),
QMessageBox::Ok);
Application::getInstance()->getGeometryCache()->refresh(_url);
DependencyManager::get<GeometryCache>()->refresh(_url);
TextureCache::SharedPointer textureCache = DependencyManager::get<TextureCache>();
foreach (const QByteArray& filename, _textureFilenames) {
Application::getInstance()->getTextureCache()->refresh(_textureBase + filename);
textureCache->refresh(_textureBase + filename);
}
deleteLater();
break;
}
case QNetworkReply::ContentNotFoundError:
if (--_numberOfChecks) {
_timer.start(TIMEOUT);
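The braces added to the `NoError` case are required, not stylistic: the case now declares and initializes locals (`textureCache`), and C++ forbids jumping past an initialized declaration to a later `case` label unless the declaration's scope is closed first. A minimal illustration with hypothetical names:

    #include <cstdio>

    enum Error { NoError, NotFound };

    void process(Error error) {
        switch (error) {
            case NoError: {              // braces open a scope for the new local
                int checks = 3;          // initialized declaration
                std::printf("ok, %d checks left\n", checks);
                break;
            }                            // scope closes, so the next label is legal
            case NotFound:
                std::printf("retrying\n");
                break;
        }
    }

    int main() {
        process(NoError);
        process(NotFound);
        return 0;
    }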
@@ -588,9 +621,9 @@ bool ModelUploader::addPart(const QFile& file, const QByteArray& contents, const
if (_totalSize > MAX_SIZE) {
QMessageBox::warning(NULL,
QString("ModelUploader::zip()"),
QString("Model too big, over %1 Bytes.").arg(MAX_SIZE),
QString("Model too big, over %1 MB.").arg(MAX_SIZE / BYTES_PER_MEGABYTES),
QMessageBox::Ok);
qDebug() << "[Warning] " << QString("Model too big, over %1 Bytes.").arg(MAX_SIZE);
qDebug() << "[Warning] " << QString("Model too big, over %1 MB.").arg(MAX_SIZE / BYTES_PER_MEGABYTES);
return false;
}
qDebug() << "Current model size: " << _totalSize;
@@ -611,8 +644,8 @@ ModelPropertiesDialog::ModelPropertiesDialog(ModelType modelType, const QVariant
_modelType(modelType),
_originalMapping(originalMapping),
_basePath(basePath),
_geometry(geometry) {
_geometry(geometry)
{
setWindowTitle("Set Model Properties");
QFormLayout* form = new QFormLayout();
@@ -627,33 +660,35 @@ ModelPropertiesDialog::ModelPropertiesDialog(ModelType modelType, const QVariant
_scale->setMaximum(FLT_MAX);
_scale->setSingleStep(0.01);
if (_modelType == ATTACHMENT_MODEL) {
QHBoxLayout* translation = new QHBoxLayout();
form->addRow("Translation:", translation);
translation->addWidget(_translationX = createTranslationBox());
translation->addWidget(_translationY = createTranslationBox());
translation->addWidget(_translationZ = createTranslationBox());
form->addRow("Pivot About Center:", _pivotAboutCenter = new QCheckBox());
form->addRow("Pivot Joint:", _pivotJoint = createJointBox());
connect(_pivotAboutCenter, SIGNAL(toggled(bool)), SLOT(updatePivotJoint()));
_pivotAboutCenter->setChecked(true);
} else {
form->addRow("Left Eye Joint:", _leftEyeJoint = createJointBox());
form->addRow("Right Eye Joint:", _rightEyeJoint = createJointBox());
form->addRow("Neck Joint:", _neckJoint = createJointBox());
}
if (_modelType == SKELETON_MODEL) {
form->addRow("Root Joint:", _rootJoint = createJointBox());
form->addRow("Lean Joint:", _leanJoint = createJointBox());
form->addRow("Head Joint:", _headJoint = createJointBox());
form->addRow("Left Hand Joint:", _leftHandJoint = createJointBox());
form->addRow("Right Hand Joint:", _rightHandJoint = createJointBox());
form->addRow("Free Joints:", _freeJoints = new QVBoxLayout());
QPushButton* newFreeJoint = new QPushButton("New Free Joint");
_freeJoints->addWidget(newFreeJoint);
connect(newFreeJoint, SIGNAL(clicked(bool)), SLOT(createNewFreeJoint()));
if (_modelType != ENTITY_MODEL) {
if (_modelType == ATTACHMENT_MODEL) {
QHBoxLayout* translation = new QHBoxLayout();
form->addRow("Translation:", translation);
translation->addWidget(_translationX = createTranslationBox());
translation->addWidget(_translationY = createTranslationBox());
translation->addWidget(_translationZ = createTranslationBox());
form->addRow("Pivot About Center:", _pivotAboutCenter = new QCheckBox());
form->addRow("Pivot Joint:", _pivotJoint = createJointBox());
connect(_pivotAboutCenter, SIGNAL(toggled(bool)), SLOT(updatePivotJoint()));
_pivotAboutCenter->setChecked(true);
} else {
form->addRow("Left Eye Joint:", _leftEyeJoint = createJointBox());
form->addRow("Right Eye Joint:", _rightEyeJoint = createJointBox());
form->addRow("Neck Joint:", _neckJoint = createJointBox());
}
if (_modelType == SKELETON_MODEL) {
form->addRow("Root Joint:", _rootJoint = createJointBox());
form->addRow("Lean Joint:", _leanJoint = createJointBox());
form->addRow("Head Joint:", _headJoint = createJointBox());
form->addRow("Left Hand Joint:", _leftHandJoint = createJointBox());
form->addRow("Right Hand Joint:", _rightHandJoint = createJointBox());
form->addRow("Free Joints:", _freeJoints = new QVBoxLayout());
QPushButton* newFreeJoint = new QPushButton("New Free Joint");
_freeJoints->addWidget(newFreeJoint);
connect(newFreeJoint, SIGNAL(clicked(bool)), SLOT(createNewFreeJoint()));
}
}
QDialogButtonBox* buttons = new QDialogButtonBox(QDialogButtonBox::Ok |
@@ -681,38 +716,40 @@ QVariantHash ModelPropertiesDialog::getMapping() const {
}
mapping.insert(JOINT_INDEX_FIELD, jointIndices);
QVariantHash joints = mapping.value(JOINT_FIELD).toHash();
if (_modelType == ATTACHMENT_MODEL) {
glm::vec3 pivot;
if (_pivotAboutCenter->isChecked()) {
pivot = (_geometry.meshExtents.minimum + _geometry.meshExtents.maximum) * 0.5f;
} else if (_pivotJoint->currentIndex() != 0) {
pivot = extractTranslation(_geometry.joints.at(_pivotJoint->currentIndex() - 1).transform);
if (_modelType != ENTITY_MODEL) {
QVariantHash joints = mapping.value(JOINT_FIELD).toHash();
if (_modelType == ATTACHMENT_MODEL) {
glm::vec3 pivot;
if (_pivotAboutCenter->isChecked()) {
pivot = (_geometry.meshExtents.minimum + _geometry.meshExtents.maximum) * 0.5f;
} else if (_pivotJoint->currentIndex() != 0) {
pivot = extractTranslation(_geometry.joints.at(_pivotJoint->currentIndex() - 1).transform);
}
mapping.insert(TRANSLATION_X_FIELD, -pivot.x * _scale->value() + _translationX->value());
mapping.insert(TRANSLATION_Y_FIELD, -pivot.y * _scale->value() + _translationY->value());
mapping.insert(TRANSLATION_Z_FIELD, -pivot.z * _scale->value() + _translationZ->value());
} else {
insertJointMapping(joints, "jointEyeLeft", _leftEyeJoint->currentText());
insertJointMapping(joints, "jointEyeRight", _rightEyeJoint->currentText());
insertJointMapping(joints, "jointNeck", _neckJoint->currentText());
}
mapping.insert(TRANSLATION_X_FIELD, -pivot.x * _scale->value() + _translationX->value());
mapping.insert(TRANSLATION_Y_FIELD, -pivot.y * _scale->value() + _translationY->value());
mapping.insert(TRANSLATION_Z_FIELD, -pivot.z * _scale->value() + _translationZ->value());
} else {
insertJointMapping(joints, "jointEyeLeft", _leftEyeJoint->currentText());
insertJointMapping(joints, "jointEyeRight", _rightEyeJoint->currentText());
insertJointMapping(joints, "jointNeck", _neckJoint->currentText());
}
if (_modelType == SKELETON_MODEL) {
insertJointMapping(joints, "jointRoot", _rootJoint->currentText());
insertJointMapping(joints, "jointLean", _leanJoint->currentText());
insertJointMapping(joints, "jointHead", _headJoint->currentText());
insertJointMapping(joints, "jointLeftHand", _leftHandJoint->currentText());
insertJointMapping(joints, "jointRightHand", _rightHandJoint->currentText());
mapping.remove(FREE_JOINT_FIELD);
for (int i = 0; i < _freeJoints->count() - 1; i++) {
QComboBox* box = static_cast<QComboBox*>(_freeJoints->itemAt(i)->widget()->layout()->itemAt(0)->widget());
mapping.insertMulti(FREE_JOINT_FIELD, box->currentText());
if (_modelType == SKELETON_MODEL) {
insertJointMapping(joints, "jointRoot", _rootJoint->currentText());
insertJointMapping(joints, "jointLean", _leanJoint->currentText());
insertJointMapping(joints, "jointHead", _headJoint->currentText());
insertJointMapping(joints, "jointLeftHand", _leftHandJoint->currentText());
insertJointMapping(joints, "jointRightHand", _rightHandJoint->currentText());
mapping.remove(FREE_JOINT_FIELD);
for (int i = 0; i < _freeJoints->count() - 1; i++) {
QComboBox* box = static_cast<QComboBox*>(_freeJoints->itemAt(i)->widget()->layout()->itemAt(0)->widget());
mapping.insertMulti(FREE_JOINT_FIELD, box->currentText());
}
}
mapping.insert(JOINT_FIELD, joints);
}
mapping.insert(JOINT_FIELD, joints);
return mapping;
}
@@ -727,32 +764,35 @@ void ModelPropertiesDialog::reset() {
_scale->setValue(_originalMapping.value(SCALE_FIELD).toDouble());
QVariantHash jointHash = _originalMapping.value(JOINT_FIELD).toHash();
if (_modelType == ATTACHMENT_MODEL) {
_translationX->setValue(_originalMapping.value(TRANSLATION_X_FIELD).toDouble());
_translationY->setValue(_originalMapping.value(TRANSLATION_Y_FIELD).toDouble());
_translationZ->setValue(_originalMapping.value(TRANSLATION_Z_FIELD).toDouble());
_pivotAboutCenter->setChecked(true);
_pivotJoint->setCurrentIndex(0);
} else {
setJointText(_leftEyeJoint, jointHash.value("jointEyeLeft").toString());
setJointText(_rightEyeJoint, jointHash.value("jointEyeRight").toString());
setJointText(_neckJoint, jointHash.value("jointNeck").toString());
}
if (_modelType == SKELETON_MODEL) {
setJointText(_rootJoint, jointHash.value("jointRoot").toString());
setJointText(_leanJoint, jointHash.value("jointLean").toString());
setJointText(_headJoint, jointHash.value("jointHead").toString());
setJointText(_leftHandJoint, jointHash.value("jointLeftHand").toString());
setJointText(_rightHandJoint, jointHash.value("jointRightHand").toString());
while (_freeJoints->count() > 1) {
delete _freeJoints->itemAt(0)->widget();
if (_modelType != ENTITY_MODEL) {
if (_modelType == ATTACHMENT_MODEL) {
_translationX->setValue(_originalMapping.value(TRANSLATION_X_FIELD).toDouble());
_translationY->setValue(_originalMapping.value(TRANSLATION_Y_FIELD).toDouble());
_translationZ->setValue(_originalMapping.value(TRANSLATION_Z_FIELD).toDouble());
_pivotAboutCenter->setChecked(true);
_pivotJoint->setCurrentIndex(0);
} else {
setJointText(_leftEyeJoint, jointHash.value("jointEyeLeft").toString());
setJointText(_rightEyeJoint, jointHash.value("jointEyeRight").toString());
setJointText(_neckJoint, jointHash.value("jointNeck").toString());
}
foreach (const QVariant& joint, _originalMapping.values(FREE_JOINT_FIELD)) {
QString jointName = joint.toString();
if (_geometry.jointIndices.contains(jointName)) {
createNewFreeJoint(jointName);
if (_modelType == SKELETON_MODEL) {
setJointText(_rootJoint, jointHash.value("jointRoot").toString());
setJointText(_leanJoint, jointHash.value("jointLean").toString());
setJointText(_headJoint, jointHash.value("jointHead").toString());
setJointText(_leftHandJoint, jointHash.value("jointLeftHand").toString());
setJointText(_rightHandJoint, jointHash.value("jointRightHand").toString());
while (_freeJoints->count() > 1) {
delete _freeJoints->itemAt(0)->widget();
}
foreach (const QVariant& joint, _originalMapping.values(FREE_JOINT_FIELD)) {
QString jointName = joint.toString();
if (_geometry.jointIndices.contains(jointName)) {
createNewFreeJoint(jointName);
}
}
}
}

View file

@@ -33,13 +33,15 @@ class ModelUploader : public QObject {
Q_OBJECT
public:
ModelUploader(ModelType type);
~ModelUploader();
static void uploadModel(ModelType modelType);
public slots:
void send();
static void uploadHead();
static void uploadSkeleton();
static void uploadAttachment();
static void uploadEntity();
private slots:
void send();
void checkJSON(QNetworkReply& requestReply);
void uploadUpdate(qint64 bytesSent, qint64 bytesTotal);
void uploadSuccess(QNetworkReply& requestReply);
@@ -48,12 +50,21 @@ private slots:
void processCheck();
private:
ModelUploader(ModelType type);
~ModelUploader();
void populateBasicMapping(QVariantHash& mapping, QString filename, FBXGeometry geometry);
bool zip();
bool addTextures(const QString& texdir, const FBXGeometry& geometry);
bool addPart(const QString& path, const QString& name, bool isTexture = false);
bool addPart(const QFile& file, const QByteArray& contents, const QString& name, bool isTexture = false);
QString _url;
QString _textureBase;
QSet<QByteArray> _textureFilenames;
int _lodCount;
int _texturesCount;
int _totalSize;
unsigned long _totalSize;
ModelType _modelType;
bool _readyToSend;
@@ -64,12 +75,6 @@ private:
QDialog* _progressDialog;
QProgressBar* _progressBar;
bool zip();
bool addTextures(const QString& texdir, const FBXGeometry& geometry);
bool addPart(const QString& path, const QString& name, bool isTexture = false);
bool addPart(const QFile& file, const QByteArray& contents, const QString& name, bool isTexture = false);
};
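The header changes above make the constructor, destructor, `send()`, and the zip/addPart helpers private while exposing static entry points, so the only way to start an upload is through `uploadModel()`, which owns the thread wiring. The shape of that pattern, sketched with hypothetical names:

    #include <QDebug>

    enum ModelType { HEAD_MODEL, SKELETON_MODEL, ATTACHMENT_MODEL, ENTITY_MODEL };

    class UploaderSketch {
    public:
        static void uploadModel(ModelType type) {
            UploaderSketch* uploader = new UploaderSketch(type);   // only the factory constructs
            uploader->send();
        }
        static void uploadHead() { uploadModel(HEAD_MODEL); }

    private:
        UploaderSketch(ModelType type) : _type(type) {}   // private: callers must use the factory
        ~UploaderSketch() {}
        void send() {
            qDebug() << "uploading model type" << _type;
            delete this;   // stands in for the deleteLater()-driven teardown
        }
        ModelType _type;
    };

    int main() {
        UploaderSketch::uploadHead();
        return 0;
    }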
/// A dialog that allows customization of various model properties.

View file

@@ -19,12 +19,12 @@
#include <glm/gtx/quaternion.hpp>
#include <glm/detail/func_common.hpp>
#include <SharedUtil.h>
#include <QThread>
#include <SharedUtil.h>
#include <TextRenderer.h>
#include "InterfaceConfig.h"
#include "ui/TextRenderer.h"
#include "VoxelConstants.h"
#include "world.h"
#include "Application.h"
@@ -33,12 +33,6 @@
using namespace std;
// no clue which versions are affected...
#define WORKAROUND_BROKEN_GLUT_STROKES
// see http://www.opengl.org/resources/libraries/glut/spec3/node78.html
void renderWorldBox() {
// Show edge of world
float red[] = {1, 0, 0};
@@ -71,22 +65,23 @@ void renderWorldBox() {
glPushMatrix();
glTranslatef(MARKER_DISTANCE, 0, 0);
glColor3fv(red);
Application::getInstance()->getGeometryCache()->renderSphere(MARKER_RADIUS, 10, 10);
GeometryCache::SharedPointer geometryCache = DependencyManager::get<GeometryCache>();
geometryCache->renderSphere(MARKER_RADIUS, 10, 10);
glPopMatrix();
glPushMatrix();
glTranslatef(0, MARKER_DISTANCE, 0);
glColor3fv(green);
Application::getInstance()->getGeometryCache()->renderSphere(MARKER_RADIUS, 10, 10);
geometryCache->renderSphere(MARKER_RADIUS, 10, 10);
glPopMatrix();
glPushMatrix();
glTranslatef(0, 0, MARKER_DISTANCE);
glColor3fv(blue);
Application::getInstance()->getGeometryCache()->renderSphere(MARKER_RADIUS, 10, 10);
geometryCache->renderSphere(MARKER_RADIUS, 10, 10);
glPopMatrix();
glPushMatrix();
glColor3fv(gray);
glTranslatef(MARKER_DISTANCE, 0, MARKER_DISTANCE);
Application::getInstance()->getGeometryCache()->renderSphere(MARKER_RADIUS, 10, 10);
geometryCache->renderSphere(MARKER_RADIUS, 10, 10);
glPopMatrix();
}

View file

@@ -11,6 +11,8 @@
#include <vector>
#include <gpu/GPUConfig.h>
#include <QDesktopWidget>
#include <QWindow>
@@ -20,11 +22,16 @@
#include <glm/gtc/type_ptr.hpp>
#include <glm/gtx/vector_query.hpp>
#include <DeferredLightingEffect.h>
#include <GeometryUtil.h>
#include <GlowEffect.h>
#include <NodeList.h>
#include <PacketHeaders.h>
#include <PathUtils.h>
#include <PerfStat.h>
#include <SharedUtil.h>
#include <TextRenderer.h>
#include <TextureCache.h>
#include "Application.h"
#include "Avatar.h"
@@ -36,8 +43,6 @@
#include "Recorder.h"
#include "world.h"
#include "devices/OculusManager.h"
#include "renderer/TextureCache.h"
#include "ui/TextRenderer.h"
using namespace std;
@@ -277,43 +282,64 @@ void Avatar::render(const glm::vec3& cameraPosition, RenderMode renderMode, bool
// render pointing lasers
glm::vec3 laserColor = glm::vec3(1.0f, 0.0f, 1.0f);
float laserLength = 50.0f;
if (_handState == HAND_STATE_LEFT_POINTING ||
_handState == HAND_STATE_BOTH_POINTING) {
int leftIndex = _skeletonModel.getLeftHandJointIndex();
glm::vec3 leftPosition;
glm::quat leftRotation;
_skeletonModel.getJointPositionInWorldFrame(leftIndex, leftPosition);
_skeletonModel.getJointRotationInWorldFrame(leftIndex, leftRotation);
glPushMatrix(); {
glTranslatef(leftPosition.x, leftPosition.y, leftPosition.z);
float angle = glm::degrees(glm::angle(leftRotation));
glm::vec3 axis = glm::axis(leftRotation);
glRotatef(angle, axis.x, axis.y, axis.z);
glBegin(GL_LINES);
glColor3f(laserColor.x, laserColor.y, laserColor.z);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, laserLength, 0.0f);
glEnd();
} glPopMatrix();
glm::vec3 position;
glm::quat rotation;
bool havePosition, haveRotation;
if (_handState & LEFT_HAND_POINTING_FLAG) {
if (_handState & IS_FINGER_POINTING_FLAG) {
int leftIndexTip = getJointIndex("LeftHandIndex4");
int leftIndexTipJoint = getJointIndex("LeftHandIndex3");
havePosition = _skeletonModel.getJointPositionInWorldFrame(leftIndexTip, position);
haveRotation = _skeletonModel.getJointRotationInWorldFrame(leftIndexTipJoint, rotation);
} else {
int leftHand = _skeletonModel.getLeftHandJointIndex();
havePosition = _skeletonModel.getJointPositionInWorldFrame(leftHand, position);
haveRotation = _skeletonModel.getJointRotationInWorldFrame(leftHand, rotation);
}
if (havePosition && haveRotation) {
glPushMatrix(); {
glTranslatef(position.x, position.y, position.z);
float angle = glm::degrees(glm::angle(rotation));
glm::vec3 axis = glm::axis(rotation);
glRotatef(angle, axis.x, axis.y, axis.z);
glBegin(GL_LINES);
glColor3f(laserColor.x, laserColor.y, laserColor.z);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, laserLength, 0.0f);
glEnd();
} glPopMatrix();
}
}
if (_handState == HAND_STATE_RIGHT_POINTING ||
_handState == HAND_STATE_BOTH_POINTING) {
int rightIndex = _skeletonModel.getRightHandJointIndex();
glm::vec3 rightPosition;
glm::quat rightRotation;
_skeletonModel.getJointPositionInWorldFrame(rightIndex, rightPosition);
_skeletonModel.getJointRotationInWorldFrame(rightIndex, rightRotation);
glPushMatrix(); {
glTranslatef(rightPosition.x, rightPosition.y, rightPosition.z);
float angle = glm::degrees(glm::angle(rightRotation));
glm::vec3 axis = glm::axis(rightRotation);
glRotatef(angle, axis.x, axis.y, axis.z);
glBegin(GL_LINES);
glColor3f(laserColor.x, laserColor.y, laserColor.z);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, laserLength, 0.0f);
glEnd();
} glPopMatrix();
if (_handState & RIGHT_HAND_POINTING_FLAG) {
if (_handState & IS_FINGER_POINTING_FLAG) {
int rightIndexTip = getJointIndex("RightHandIndex4");
int rightIndexTipJoint = getJointIndex("RightHandIndex3");
havePosition = _skeletonModel.getJointPositionInWorldFrame(rightIndexTip, position);
haveRotation = _skeletonModel.getJointRotationInWorldFrame(rightIndexTipJoint, rotation);
} else {
int rightHand = _skeletonModel.getRightHandJointIndex();
havePosition = _skeletonModel.getJointPositionInWorldFrame(rightHand, position);
haveRotation = _skeletonModel.getJointRotationInWorldFrame(rightHand, rotation);
}
if (havePosition && haveRotation) {
glPushMatrix(); {
glTranslatef(position.x, position.y, position.z);
float angle = glm::degrees(glm::angle(rotation));
glm::vec3 axis = glm::axis(rotation);
glRotatef(angle, axis.x, axis.y, axis.z);
glBegin(GL_LINES);
glColor3f(laserColor.x, laserColor.y, laserColor.z);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, laserLength, 0.0f);
glEnd();
} glPopMatrix();
}
}
}
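This rewrite replaces the old four-value `AvatarHandState` enum (removed from MyAvatar.h later in this commit) with independent bit flags, so left and right pointing can combine and an extra bit selects the index fingertip rather than the palm as the laser origin. The flag values below are hypothetical; only the names appear in the diff:

    #include <cstdio>

    // Hypothetical encoding; the real definitions live elsewhere in the tree.
    const char LEFT_HAND_POINTING_FLAG  = 1;   // bit 0
    const char RIGHT_HAND_POINTING_FLAG = 2;   // bit 1
    const char IS_FINGER_POINTING_FLAG  = 4;   // bit 2: aim from the index fingertip

    int main() {
        char handState = LEFT_HAND_POINTING_FLAG | IS_FINGER_POINTING_FLAG;
        if (handState & LEFT_HAND_POINTING_FLAG) {
            std::printf("left laser from the %s\n",
                (handState & IS_FINGER_POINTING_FLAG) ? "fingertip" : "palm");
        }
        if (handState & RIGHT_HAND_POINTING_FLAG) {
            std::printf("right laser\n");
        }
        return 0;
    }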
@@ -360,7 +386,7 @@ void Avatar::render(const glm::vec3& cameraPosition, RenderMode renderMode, bool
glm::quat orientation = getOrientation();
foreach (const AvatarManager::LocalLight& light, Application::getInstance()->getAvatarManager().getLocalLights()) {
glm::vec3 direction = orientation * light.direction;
Application::getInstance()->getDeferredLightingEffect()->addSpotLight(position - direction * distance,
DependencyManager::get<DeferredLightingEffect>()->addSpotLight(position - direction * distance,
distance * 2.0f, glm::vec3(), light.color, light.color, 1.0f, 0.5f, 0.0f, direction,
LIGHT_EXPONENT, LIGHT_CUTOFF);
}
@@ -394,7 +420,7 @@ void Avatar::render(const glm::vec3& cameraPosition, RenderMode renderMode, bool
} else {
glTranslatef(_position.x, getDisplayNamePosition().y + LOOK_AT_INDICATOR_OFFSET, _position.z);
}
Application::getInstance()->getGeometryCache()->renderSphere(LOOK_AT_INDICATOR_RADIUS, 15, 15);
DependencyManager::get<GeometryCache>()->renderSphere(LOOK_AT_INDICATOR_RADIUS, 15, 15);
glPopMatrix();
}
}
@@ -422,7 +448,7 @@ void Avatar::render(const glm::vec3& cameraPosition, RenderMode renderMode, bool
glPushMatrix();
glTranslatef(_position.x, _position.y, _position.z);
glScalef(height, height, height);
Application::getInstance()->getGeometryCache()->renderSphere(sphereRadius, 15, 15);
DependencyManager::get<GeometryCache>()->renderSphere(sphereRadius, 15, 15);
glPopMatrix();
}
@@ -643,6 +669,49 @@ glm::vec3 Avatar::getDisplayNamePosition() {
return namePosition;
}
float Avatar::calculateDisplayNameScaleFactor(const glm::vec3& textPosition, bool inHMD) {
// We need to compute the scale factor such that the text remains at a fixed size in window coordinates.
// We project a unit up vector and compare the two projected points in screen coordinates to find
// the correction scale needed.
// The up vector must be relative to the current rotation matrix: we start from the identity up vector.
glm::vec3 testPoint0 = textPosition;
glm::vec3 testPoint1 = textPosition + (Application::getInstance()->getCamera()->getRotation() * IDENTITY_UP);
double textWindowHeight;
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
float windowSizeX = glCanvas->getDeviceWidth();
float windowSizeY = glCanvas->getDeviceHeight();
glm::dmat4 modelViewMatrix;
glm::dmat4 projectionMatrix;
Application::getInstance()->getModelViewMatrix(&modelViewMatrix);
Application::getInstance()->getProjectionMatrix(&projectionMatrix);
glm::dvec4 p0 = modelViewMatrix * glm::dvec4(testPoint0, 1.0);
p0 = projectionMatrix * p0;
glm::dvec2 result0 = glm::vec2(windowSizeX * (p0.x / p0.w + 1.0f) * 0.5f, windowSizeY * (p0.y / p0.w + 1.0f) * 0.5f);
glm::dvec4 p1 = modelViewMatrix * glm::dvec4(testPoint1, 1.0);
p1 = projectionMatrix * p1;
glm::vec2 result1 = glm::vec2(windowSizeX * (p1.x / p1.w + 1.0f) * 0.5f, windowSizeY * (p1.y / p1.w + 1.0f) * 0.5f);
textWindowHeight = abs(result1.y - result0.y);
// need to scale to compensate for the device's font resolution (devicePixelRatio)
float scaleFactor = QApplication::desktop()->windowHandle()->devicePixelRatio() *
((textWindowHeight > EPSILON) ? 1.0f / textWindowHeight : 1.0f);
if (inHMD) {
const float HMDMODE_NAME_SCALE = 0.65f;
scaleFactor *= HMDMODE_NAME_SCALE;
} else {
scaleFactor *= Application::getInstance()->getRenderResolutionScale();
}
return scaleFactor;
}
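The new helper does by hand what `gluProject` used to do: transform a point by the model-view and projection matrices, divide by w, and map normalized device coordinates to window pixels; the scale factor is then the reciprocal of the projected height of a one-unit up vector. The core math, restated as a compact sketch:

    #include <glm/glm.hpp>
    #include <cmath>

    // Window-space height of a unit up vector anchored at 'base'.
    double projectedHeight(const glm::dmat4& modelView, const glm::dmat4& projection,
            const glm::dvec3& base, const glm::dvec3& up, double windowHeight) {
        glm::dvec4 p0 = projection * (modelView * glm::dvec4(base, 1.0));
        glm::dvec4 p1 = projection * (modelView * glm::dvec4(base + up, 1.0));
        double y0 = windowHeight * (p0.y / p0.w + 1.0) * 0.5;   // NDC -> window
        double y1 = windowHeight * (p1.y / p1.w + 1.0) * 0.5;
        return std::abs(y1 - y0);
    }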
void Avatar::renderDisplayName() {
if (_displayName.isEmpty() || _displayNameAlpha == 0.0f) {
@@ -674,78 +743,39 @@ void Avatar::renderDisplayName() {
frontAxis = glm::normalize(glm::vec3(frontAxis.z, 0.0f, -frontAxis.x));
float angle = acos(frontAxis.x) * ((frontAxis.z < 0) ? 1.0f : -1.0f);
glRotatef(glm::degrees(angle), 0.0f, 1.0f, 0.0f);
// We need to compute the scale factor such as the text remains with fixed size respect to window coordinates
// We project a unit vector and check the difference in screen coordinates, to check which is the
// correction scale needed
// save the matrices for later scale correction factor
glm::dmat4 modelViewMatrix;
glm::dmat4 projectionMatrix;
GLint viewportMatrix[4];
Application::getInstance()->getModelViewMatrix(&modelViewMatrix);
Application::getInstance()->getProjectionMatrix(&projectionMatrix);
glGetIntegerv(GL_VIEWPORT, viewportMatrix);
GLdouble result0[3], result1[3];
// The up vector must be relative to the rotation current rotation matrix:
// we set the identity
glm::dvec3 testPoint0 = glm::dvec3(textPosition);
glm::dvec3 testPoint1 = glm::dvec3(textPosition) + glm::dvec3(Application::getInstance()->getCamera()->getRotation() * IDENTITY_UP);
bool success;
success = gluProject(testPoint0.x, testPoint0.y, testPoint0.z,
(GLdouble*)&modelViewMatrix, (GLdouble*)&projectionMatrix, viewportMatrix,
&result0[0], &result0[1], &result0[2]);
success = success &&
gluProject(testPoint1.x, testPoint1.y, testPoint1.z,
(GLdouble*)&modelViewMatrix, (GLdouble*)&projectionMatrix, viewportMatrix,
&result1[0], &result1[1], &result1[2]);
float scaleFactor = calculateDisplayNameScaleFactor(textPosition, inHMD);
glScalef(scaleFactor, scaleFactor, 1.0);
glScalef(1.0f, -1.0f, 1.0f); // TextRenderer::draw paints the text upside down in y axis
if (success) {
double textWindowHeight = abs(result1[1] - result0[1]);
// need to scale to compensate for the font resolution due to the device
float scaleFactor = QApplication::desktop()->windowHandle()->devicePixelRatio() *
((textWindowHeight > EPSILON) ? 1.0f / textWindowHeight : 1.0f);
if (inHMD) {
const float HMDMODE_NAME_SCALE = 0.65f;
scaleFactor *= HMDMODE_NAME_SCALE;
} else {
scaleFactor *= Application::getInstance()->getRenderResolutionScale();
}
glScalef(scaleFactor, scaleFactor, 1.0);
glScalef(1.0f, -1.0f, 1.0f); // TextRenderer::draw paints the text upside down in y axis
int text_x = -_displayNameBoundingRect.width() / 2;
int text_y = -_displayNameBoundingRect.height() / 2;
int text_x = -_displayNameBoundingRect.width() / 2;
int text_y = -_displayNameBoundingRect.height() / 2;
// draw a gray background
int left = text_x + _displayNameBoundingRect.x();
int right = left + _displayNameBoundingRect.width();
int bottom = text_y + _displayNameBoundingRect.y();
int top = bottom + _displayNameBoundingRect.height();
const int border = 8;
bottom -= border;
left -= border;
top += border;
right += border;
// draw a gray background
int left = text_x + _displayNameBoundingRect.x();
int right = left + _displayNameBoundingRect.width();
int bottom = text_y + _displayNameBoundingRect.y();
int top = bottom + _displayNameBoundingRect.height();
const int border = 8;
bottom -= border;
left -= border;
top += border;
right += border;
// We are drawing coplanar textures with depth: need the polygon offset
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
// We are drawing coplanar textures with depth: need the polygon offset
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
glColor4f(0.2f, 0.2f, 0.2f, _displayNameAlpha * DISPLAYNAME_BACKGROUND_ALPHA / DISPLAYNAME_ALPHA);
renderBevelCornersRect(left, bottom, right - left, top - bottom, 3);
glColor4f(0.93f, 0.93f, 0.93f, _displayNameAlpha);
QByteArray ba = _displayName.toLocal8Bit();
const char* text = ba.data();
glDisable(GL_POLYGON_OFFSET_FILL);
textRenderer(DISPLAYNAME)->draw(text_x, text_y, text);
}
glColor4f(0.2f, 0.2f, 0.2f, _displayNameAlpha * DISPLAYNAME_BACKGROUND_ALPHA / DISPLAYNAME_ALPHA);
renderBevelCornersRect(left, bottom, right - left, top - bottom, 3);
glColor4f(0.93f, 0.93f, 0.93f, _displayNameAlpha);
QByteArray ba = _displayName.toLocal8Bit();
const char* text = ba.data();
glDisable(GL_POLYGON_OFFSET_FILL);
textRenderer(DISPLAYNAME)->draw(text_x, text_y, text);
glPopMatrix();
@@ -913,13 +943,13 @@ void Avatar::scaleVectorRelativeToPosition(glm::vec3 &positionToScale) const {
void Avatar::setFaceModelURL(const QUrl& faceModelURL) {
AvatarData::setFaceModelURL(faceModelURL);
const QUrl DEFAULT_FACE_MODEL_URL = QUrl::fromLocalFile(Application::resourcesPath() + "meshes/defaultAvatar_head.fst");
const QUrl DEFAULT_FACE_MODEL_URL = QUrl::fromLocalFile(PathUtils::resourcesPath() + "meshes/defaultAvatar_head.fst");
getHead()->getFaceModel().setURL(_faceModelURL, DEFAULT_FACE_MODEL_URL, true, !isMyAvatar());
}
void Avatar::setSkeletonModelURL(const QUrl& skeletonModelURL) {
AvatarData::setSkeletonModelURL(skeletonModelURL);
const QUrl DEFAULT_SKELETON_MODEL_URL = QUrl::fromLocalFile(Application::resourcesPath() + "meshes/defaultAvatar_body.fst");
const QUrl DEFAULT_SKELETON_MODEL_URL = QUrl::fromLocalFile(PathUtils::resourcesPath() + "meshes/defaultAvatar_body.fst");
_skeletonModel.setURL(_skeletonModelURL, DEFAULT_SKELETON_MODEL_URL, true, !isMyAvatar());
}

View file

@@ -230,6 +230,7 @@ protected:
float getPelvisFloatingHeight() const;
glm::vec3 getDisplayNamePosition();
float calculateDisplayNameScaleFactor(const glm::vec3& textPosition, bool inHMD);
void renderDisplayName();
virtual void renderBody(RenderMode renderMode, bool postLighting, float glowLevel = 0.0f);
virtual bool shouldRenderHead(const glm::vec3& cameraPosition, RenderMode renderMode) const;

View file

@@ -15,16 +15,17 @@
#include <glm/gtx/string_cast.hpp>
#include <GlowEffect.h>
#include <PerfStat.h>
#include <RegisteredMetaTypes.h>
#include <UUID.h>
#include "Application.h"
#include "Avatar.h"
#include "AvatarManager.h"
#include "Menu.h"
#include "MyAvatar.h"
#include "AvatarManager.h"
// We add _myAvatar into the hash with all the other AvatarData, and we use the default NULL QUuid as the key.
const QUuid MY_AVATAR_KEY; // NULL key

View file

@@ -12,7 +12,7 @@
#ifndef hifi_FaceModel_h
#define hifi_FaceModel_h
#include "renderer/Model.h"
#include <Model.h>
class Head;

View file

@@ -8,18 +8,20 @@
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#include <gpu/GPUConfig.h> // hack to get windows to build
#include <QImage>
#include <NodeList.h>
#include <GeometryUtil.h>
#include <ProgramObject.h>
#include "Application.h"
#include "Avatar.h"
#include "Hand.h"
#include "Menu.h"
#include "Util.h"
#include "renderer/ProgramObject.h"
using namespace std;
@@ -114,7 +116,7 @@ void Hand::render(bool isMine, Model::RenderMode renderMode) {
glPushMatrix();
glTranslatef(position.x, position.y, position.z);
glColor3f(0.0f, 1.0f, 0.0f);
Application::getInstance()->getGeometryCache()->renderSphere(PALM_COLLISION_RADIUS * _owningAvatar->getScale(), 10, 10);
DependencyManager::get<GeometryCache>()->renderSphere(PALM_COLLISION_RADIUS * _owningAvatar->getScale(), 10, 10);
glPopMatrix();
}
}
@@ -151,7 +153,7 @@ void Hand::renderHandTargets(bool isMine) {
const float collisionRadius = 0.05f;
glColor4f(0.5f,0.5f,0.5f, alpha);
glutWireSphere(collisionRadius, 10.0f, 10.0f);
DependencyManager::get<GeometryCache>()->renderSphere(collisionRadius, 10, 10, false);
glPopMatrix();
}
}
@@ -179,7 +181,7 @@ void Hand::renderHandTargets(bool isMine) {
Avatar::renderJointConnectingCone(root, offsetFromPalm, PALM_DISK_RADIUS, 0.0f);
glPushMatrix();
glTranslatef(root.x, root.y, root.z);
Application::getInstance()->getGeometryCache()->renderSphere(PALM_BALL_RADIUS, 20.0f, 20.0f);
DependencyManager::get<GeometryCache>()->renderSphere(PALM_BALL_RADIUS, 20.0f, 20.0f);
glPopMatrix();
}
}

View file

@@ -11,6 +11,8 @@
#ifndef hifi_Hand_h
#define hifi_Hand_h
#include "InterfaceConfig.h"
#include <vector>
#include <QAction>
@@ -22,9 +24,8 @@
#include <AvatarData.h>
#include <AudioScriptingInterface.h>
#include <HandData.h>
#include <Model.h>
#include "InterfaceConfig.h"
#include "renderer/Model.h"
#include "world.h"

View file

@@ -10,6 +10,8 @@
#include <glm/gtx/quaternion.hpp>
#include <DependencyManager.h>
#include <GlowEffect.h>
#include <NodeList.h>
#include "Application.h"
@@ -18,6 +20,8 @@
#include "Head.h"
#include "Menu.h"
#include "Util.h"
#include "devices/DdeFaceTracker.h"
#include "devices/Faceshift.h"
#include "devices/OculusManager.h"
using namespace std;
@@ -68,18 +72,19 @@ void Head::reset() {
}
void Head::simulate(float deltaTime, bool isMine, bool billboard) {
if (isMine) {
MyAvatar* myAvatar = static_cast<MyAvatar*>(_owningAvatar);
// Only use face trackers when not playing back a recording.
if (!myAvatar->isPlaying()) {
FaceTracker* faceTracker = Application::getInstance()->getActiveFaceTracker();
if ((_isFaceshiftConnected = faceTracker)) {
DdeFaceTracker::SharedPointer dde = DependencyManager::get<DdeFaceTracker>();
Faceshift::SharedPointer faceshift = DependencyManager::get<Faceshift>();
if ((_isFaceshiftConnected = (faceshift == faceTracker))) {
_blendshapeCoefficients = faceTracker->getBlendshapeCoefficients();
_isFaceshiftConnected = true;
} else if (Application::getInstance()->getDDE()->isActive()) {
faceTracker = Application::getInstance()->getDDE();
} else if (dde->isActive()) {
faceTracker = dde.data();
_blendshapeCoefficients = faceTracker->getBlendshapeCoefficients();
}
}
@@ -196,14 +201,14 @@ void Head::simulate(float deltaTime, bool isMine, bool billboard) {
_mouth2 = glm::mix(_audioJawOpen * MMMM_POWER, _mouth2, MMMM_PERIOD + randFloat() * MMMM_RANDOM_PERIOD);
_mouth4 = glm::mix(_audioJawOpen, _mouth4, SMILE_PERIOD + randFloat() * SMILE_RANDOM_PERIOD);
Application::getInstance()->getFaceshift()->updateFakeCoefficients(_leftEyeBlink,
_rightEyeBlink,
_browAudioLift,
_audioJawOpen,
_mouth2,
_mouth3,
_mouth4,
_blendshapeCoefficients);
DependencyManager::get<Faceshift>()->updateFakeCoefficients(_leftEyeBlink,
_rightEyeBlink,
_browAudioLift,
_audioJawOpen,
_mouth2,
_mouth3,
_mouth4,
_blendshapeCoefficients);
} else {
_saccade = glm::vec3();
}
@@ -326,7 +331,7 @@ void Head::addLeanDeltas(float sideways, float forward) {
void Head::renderLookatVectors(glm::vec3 leftEyePosition, glm::vec3 rightEyePosition, glm::vec3 lookatPosition) {
Application::getInstance()->getGlowEffect()->begin();
DependencyManager::get<GlowEffect>()->begin();
glLineWidth(2.0);
glBegin(GL_LINES);
@@ -340,7 +345,7 @@ void Head::renderLookatVectors(glm::vec3 leftEyePosition, glm::vec3 rightEyePosi
glVertex3f(lookatPosition.x, lookatPosition.y, lookatPosition.z);
glEnd();
Application::getInstance()->getGlowEffect()->end();
DependencyManager::get<GlowEffect>()->end();
}

View file

@@ -10,9 +10,8 @@
//
#include <AvatarData.h>
#include <EntityTree.h>
#include "../renderer/Model.h"
#include <Model.h>
#include "ModelReferential.h"

View file

@@ -22,12 +22,15 @@
#include <AccountManager.h>
#include <AddressManager.h>
#include <AnimationHandle.h>
#include <DependencyManager.h>
#include <GeometryUtil.h>
#include <NodeList.h>
#include <PacketHeaders.h>
#include <PerfStat.h>
#include <ShapeCollider.h>
#include <SharedUtil.h>
#include <TextRenderer.h>
#include "Application.h"
#include "Audio.h"
@@ -39,8 +42,6 @@
#include "Recorder.h"
#include "devices/Faceshift.h"
#include "devices/OculusManager.h"
#include "renderer/AnimationHandle.h"
#include "ui/TextRenderer.h"
using namespace std;
@@ -393,7 +394,7 @@ void MyAvatar::renderDebugBodyPoints() {
glPushMatrix();
glColor4f(0, 1, 0, .5f);
glTranslatef(position.x, position.y, position.z);
Application::getInstance()->getGeometryCache()->renderSphere(0.2f, 10.0f, 10.0f);
DependencyManager::get<GeometryCache>()->renderSphere(0.2f, 10.0f, 10.0f);
glPopMatrix();
// Head Sphere
@@ -401,7 +402,7 @@ void MyAvatar::renderDebugBodyPoints() {
glPushMatrix();
glColor4f(0, 1, 0, .5f);
glTranslatef(position.x, position.y, position.z);
Application::getInstance()->getGeometryCache()->renderSphere(0.15f, 10.0f, 10.0f);
DependencyManager::get<GeometryCache>()->renderSphere(0.15f, 10.0f, 10.0f);
glPopMatrix();
}
@@ -421,8 +422,7 @@ void MyAvatar::render(const glm::vec3& cameraPosition, RenderMode renderMode, bo
}
void MyAvatar::renderHeadMouse(int screenWidth, int screenHeight) const {
Faceshift* faceshift = Application::getInstance()->getFaceshift();
Faceshift::SharedPointer faceshift = DependencyManager::get<Faceshift>();
float pixelsPerDegree = screenHeight / Menu::getInstance()->getFieldOfView();

View file

@@ -21,15 +21,6 @@
class ModelItemID;
enum AvatarHandState
{
HAND_STATE_NULL = 0,
HAND_STATE_LEFT_POINTING,
HAND_STATE_RIGHT_POINTING,
HAND_STATE_BOTH_POINTING,
NUM_HAND_STATES
};
class MyAvatar : public Avatar {
Q_OBJECT
Q_PROPERTY(bool shouldRenderLocally READ getShouldRenderLocally WRITE setShouldRenderLocally)

View file

@@ -554,6 +554,7 @@ void SkeletonModel::renderRagdoll() {
float radius1 = 0.008f;
float radius2 = 0.01f;
glm::vec3 simulationTranslation = _ragdoll->getTranslationInSimulationFrame();
GeometryCache::SharedPointer geometryCache = DependencyManager::get<GeometryCache>();
for (int i = 0; i < numPoints; ++i) {
glPushMatrix();
// NOTE: ragdollPoints are in simulation-frame but we want them to be model-relative
@@ -561,9 +562,9 @@ void SkeletonModel::renderRagdoll() {
glTranslatef(position.x, position.y, position.z);
// draw each point as a yellow hexagon with black border
glColor4f(0.0f, 0.0f, 0.0f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(radius2, BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
geometryCache->renderSphere(radius2, BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
glColor4f(1.0f, 1.0f, 0.0f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(radius1, BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
geometryCache->renderSphere(radius1, BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
glPopMatrix();
}
glPopMatrix();
@@ -913,7 +914,8 @@ void SkeletonModel::renderBoundingCollisionShapes(float alpha) {
endPoint = endPoint - _translation;
glTranslatef(endPoint.x, endPoint.y, endPoint.z);
glColor4f(0.6f, 0.6f, 0.8f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(_boundingShape.getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
GeometryCache::SharedPointer geometryCache = DependencyManager::get<GeometryCache>();
geometryCache->renderSphere(_boundingShape.getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
// draw a yellow sphere at the capsule startpoint
glm::vec3 startPoint;
@@ -922,7 +924,7 @@ void SkeletonModel::renderBoundingCollisionShapes(float alpha) {
glm::vec3 axis = endPoint - startPoint;
glTranslatef(-axis.x, -axis.y, -axis.z);
glColor4f(0.8f, 0.8f, 0.6f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(_boundingShape.getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
geometryCache->renderSphere(_boundingShape.getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
// draw a green cylinder between the two points
glm::vec3 origin(0.0f);
@@ -948,6 +950,8 @@ void SkeletonModel::renderJointCollisionShapes(float alpha) {
continue;
}
GeometryCache::SharedPointer geometryCache = DependencyManager::get<GeometryCache>();
glPushMatrix();
// shapes are stored in simulation-frame but we want position to be model-relative
if (shape->getType() == SPHERE_SHAPE) {
@@ -955,7 +959,7 @@ void SkeletonModel::renderJointCollisionShapes(float alpha) {
glTranslatef(position.x, position.y, position.z);
// draw a grey sphere at shape position
glColor4f(0.75f, 0.75f, 0.75f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(shape->getBoundingRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
geometryCache->renderSphere(shape->getBoundingRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
} else if (shape->getType() == CAPSULE_SHAPE) {
CapsuleShape* capsule = static_cast<CapsuleShape*>(shape);
@@ -965,7 +969,7 @@ void SkeletonModel::renderJointCollisionShapes(float alpha) {
endPoint = endPoint - simulationTranslation;
glTranslatef(endPoint.x, endPoint.y, endPoint.z);
glColor4f(0.6f, 0.6f, 0.8f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(capsule->getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
geometryCache->renderSphere(capsule->getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
// draw a yellow sphere at the capsule startpoint
glm::vec3 startPoint;
@@ -974,7 +978,7 @@ void SkeletonModel::renderJointCollisionShapes(float alpha) {
glm::vec3 axis = endPoint - startPoint;
glTranslatef(-axis.x, -axis.y, -axis.z);
glColor4f(0.8f, 0.8f, 0.6f, alpha);
Application::getInstance()->getGeometryCache()->renderSphere(capsule->getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
geometryCache->renderSphere(capsule->getRadius(), BALL_SUBDIVISIONS, BALL_SUBDIVISIONS);
// draw a green cylinder between the two points
glm::vec3 origin(0.0f);

View file

@@ -12,9 +12,10 @@
#ifndef hifi_SkeletonModel_h
#define hifi_SkeletonModel_h
#include "renderer/Model.h"
#include <CapsuleShape.h>
#include <Model.h>
#include "SkeletonRagdoll.h"
class Avatar;

View file

@@ -11,10 +11,10 @@
#include <DistanceConstraint.h>
#include <FixedConstraint.h>
#include <Model.h>
#include "SkeletonRagdoll.h"
#include "MuscleConstraint.h"
#include "../renderer/Model.h"
SkeletonRagdoll::SkeletonRagdoll(Model* model) : Ragdoll(), _model(model) {
assert(_model);

View file

@@ -14,10 +14,9 @@
#include <QVector>
#include <JointState.h>
#include <Ragdoll.h>
#include "../renderer/JointState.h"
class MuscleConstraint;
class Model;

View file

@@ -14,16 +14,15 @@
#include <QUdpSocket>
#include <DependencyManager.h>
#include "FaceTracker.h"
class DdeFaceTracker : public FaceTracker {
Q_OBJECT
SINGLETON_DEPENDENCY(DdeFaceTracker)
public:
DdeFaceTracker();
DdeFaceTracker(const QHostAddress& host, quint16 port);
~DdeFaceTracker();
//initialization
void init();
void reset();
@@ -57,6 +56,10 @@ private slots:
void socketStateChanged(QAbstractSocket::SocketState socketState);
private:
DdeFaceTracker();
DdeFaceTracker(const QHostAddress& host, quint16 port);
~DdeFaceTracker();
float getBlendshapeCoefficient(int index) const;
void decodePacket(const QByteArray& buffer);

View file

@@ -23,8 +23,8 @@ class FaceTracker : public QObject {
Q_OBJECT
public:
FaceTracker();
virtual ~FaceTracker() {}
const glm::vec3& getHeadTranslation() const { return _headTranslation; }
const glm::quat& getHeadRotation() const { return _headRotation; }

View file

@@ -14,7 +14,6 @@
#include <PerfStat.h>
#include <SharedUtil.h>
#include "Application.h"
#include "Faceshift.h"
#include "Menu.h"
#include "Util.h"

View file

@@ -19,15 +19,16 @@
#include <fsbinarystream.h>
#endif
#include <DependencyManager.h>
#include "FaceTracker.h"
/// Handles interaction with the Faceshift software, which provides head position/orientation and facial features.
class Faceshift : public FaceTracker {
Q_OBJECT
SINGLETON_DEPENDENCY(Faceshift)
public:
Faceshift();
void init();
bool isConnectedOrConnecting() const;
@@ -87,6 +88,8 @@ private slots:
void readFromSocket();
private:
Faceshift();
virtual ~Faceshift() {}
float getBlendshapeCoefficient(int index) const;
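Like `DdeFaceTracker` above, `Faceshift` moves its constructor and destructor into the private section and adds a `SINGLETON_DEPENDENCY` marker, so instances can only be obtained through `DependencyManager::get<Faceshift>()`. Presumably the macro grants the manager access to the private constructor, along the lines of this sketch (the real macro is part of the shared library and may do more):

    class DependencyManager;   // the only class allowed to construct the singleton

    class FaceshiftSketch {
    private:
        FaceshiftSketch() {}                // private: no ad-hoc instances
        virtual ~FaceshiftSketch() {}
        friend class DependencyManager;     // assumed effect of SINGLETON_DEPENDENCY
    };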

View file

@@ -21,6 +21,8 @@
#include <glm/glm.hpp>
#include <GlowEffect.h>
#include <PathUtils.h>
#include <SharedUtil.h>
#include <UserActivityLogger.h>
@@ -136,8 +138,8 @@ void OculusManager::connect() {
if (!_programInitialized) {
// Shader program
_programInitialized = true;
_program.addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() + "shaders/oculus.vert");
_program.addShaderFromSourceFile(QGLShader::Fragment, Application::resourcesPath() + "shaders/oculus.frag");
_program.addShaderFromSourceFile(QGLShader::Vertex, PathUtils::resourcesPath() + "shaders/oculus.vert");
_program.addShaderFromSourceFile(QGLShader::Fragment, PathUtils::resourcesPath() + "shaders/oculus.frag");
_program.link();
// Uniforms
@@ -447,9 +449,9 @@ void OculusManager::display(const glm::quat &bodyOrientation, const glm::vec3 &p
//Bind our framebuffer object. If we are rendering the glow effect, we let the glow effect shader take care of it
if (Menu::getInstance()->isOptionChecked(MenuOption::EnableGlowEffect)) {
Application::getInstance()->getGlowEffect()->prepare();
DependencyManager::get<GlowEffect>()->prepare();
} else {
Application::getInstance()->getTextureCache()->getPrimaryFramebufferObject()->bind();
DependencyManager::get<TextureCache>()->getPrimaryFramebufferObject()->bind();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
@@ -552,16 +554,16 @@ void OculusManager::display(const glm::quat &bodyOrientation, const glm::vec3 &p
//Bind the output texture from the glow shader. If glow effect is disabled, we just grab the texture
if (Menu::getInstance()->isOptionChecked(MenuOption::EnableGlowEffect)) {
QOpenGLFramebufferObject* fbo = Application::getInstance()->getGlowEffect()->render(true);
QOpenGLFramebufferObject* fbo = DependencyManager::get<GlowEffect>()->render(true);
glBindTexture(GL_TEXTURE_2D, fbo->texture());
} else {
Application::getInstance()->getTextureCache()->getPrimaryFramebufferObject()->release();
glBindTexture(GL_TEXTURE_2D, Application::getInstance()->getTextureCache()->getPrimaryFramebufferObject()->texture());
DependencyManager::get<TextureCache>()->getPrimaryFramebufferObject()->release();
glBindTexture(GL_TEXTURE_2D, DependencyManager::get<TextureCache>()->getPrimaryFramebufferObject()->texture());
}
// restore our normal viewport
glViewport(0, 0, Application::getInstance()->getGLWidget()->getDeviceWidth(),
Application::getInstance()->getGLWidget()->getDeviceHeight());
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
glViewport(0, 0, glCanvas->getDeviceWidth(), glCanvas->getDeviceHeight());
glMatrixMode(GL_PROJECTION);
glPopMatrix();
@@ -579,8 +581,8 @@ void OculusManager::display(const glm::quat &bodyOrientation, const glm::vec3 &p
void OculusManager::renderDistortionMesh(ovrPosef eyeRenderPose[ovrEye_Count]) {
glLoadIdentity();
gluOrtho2D(0, Application::getInstance()->getGLWidget()->getDeviceWidth(), 0,
Application::getInstance()->getGLWidget()->getDeviceHeight());
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
glOrtho(0, glCanvas->getDeviceWidth(), 0, glCanvas->getDeviceHeight(), -1.0, 1.0);
glDisable(GL_DEPTH_TEST);
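The gluOrtho2D-to-glOrtho swap is behavior-preserving: the GLU spec defines gluOrtho2D(l, r, b, t) as glOrtho with near and far planes of -1 and 1, so spelling it out drops the GLU dependency without changing the projection. In code (assuming an OpenGL header is already pulled in, e.g. via InterfaceConfig.h):

    // Exact replacement for gluOrtho2D(left, right, bottom, top):
    void ortho2D(double left, double right, double bottom, double top) {
        glOrtho(left, right, bottom, top, -1.0, 1.0);
    }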


@ -17,7 +17,8 @@
#include <OVR.h>
#endif
#include "renderer/ProgramObject.h"
#include <ProgramObject.h>
#include "ui/overlays/Text3DOverlay.h"
const float DEFAULT_OCULUS_UI_ANGULAR_SIZE = 72.0f;


@ -14,11 +14,11 @@
#include <FBXReader.h>
#include <PerfStat.h>
#include <TextRenderer.h>
#include "Application.h"
#include "PrioVR.h"
#include "scripting/JoystickScriptingInterface.h"
#include "ui/TextRenderer.h"
#ifdef HAVE_PRIOVR
const unsigned int SERIAL_LIST[] = { 0x00000001, 0x00000000, 0x00000008, 0x00000009, 0x0000000A,
@ -215,8 +215,9 @@ void PrioVR::renderCalibrationCountdown() {
static TextRenderer* textRenderer = TextRenderer::getInstance(MONO_FONT_FAMILY, 18, QFont::Bold,
false, TextRenderer::OUTLINE_EFFECT, 2);
QByteArray text = "Assume T-Pose in " + QByteArray::number(secondsRemaining) + "...";
textRenderer->draw((Application::getInstance()->getGLWidget()->width() -
textRenderer->computeWidth(text.constData())) / 2, Application::getInstance()->getGLWidget()->height() / 2,
text);
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
textRenderer->draw((glCanvas->width() - textRenderer->computeWidth(text.constData())) / 2,
glCanvas->height() / 2,
text);
#endif
}


@ -461,7 +461,7 @@ void SixenseManager::updateCalibration(const sixenseControllerData* controllers)
void SixenseManager::emulateMouse(PalmData* palm, int index) {
Application* application = Application::getInstance();
MyAvatar* avatar = application->getAvatar();
GLCanvas* widget = application->getGLWidget();
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
QPoint pos;
Qt::MouseButton bumperButton;
@ -489,10 +489,10 @@ void SixenseManager::emulateMouse(PalmData* palm, int index) {
float yAngle = 0.5f - ((atan2(direction.z, direction.y) + M_PI_2));
// Get the pixel range over which the xAngle and yAngle are scaled
float cursorRange = widget->width() * getCursorPixelRangeMult();
float cursorRange = glCanvas->width() * getCursorPixelRangeMult();
pos.setX(widget->width() / 2.0f + cursorRange * xAngle);
pos.setY(widget->height() / 2.0f + cursorRange * yAngle);
pos.setX(glCanvas->width() / 2.0f + cursorRange * xAngle);
pos.setY(glCanvas->height() / 2.0f + cursorRange * yAngle);
}
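The mouse-emulation math is unchanged by the refactor; only the source of the canvas size moves. The cursor starts at the canvas center and is deflected by the hand angle times a pixel range derived from the canvas width. Restated as a standalone helper (hypothetical names, mirroring the hunk above):

    #include <QPoint>

    // Map normalized hand angles (roughly -0.5..0.5) to a cursor position
    // centered on the canvas; the multiplier scales deflection into pixels.
    QPoint emulatedCursor(int canvasW, int canvasH,
                          float xAngle, float yAngle, float rangeMult) {
        float cursorRange = canvasW * rangeMult;  // as in the hunk above
        return QPoint(canvasW / 2.0f + cursorRange * xAngle,
                      canvasH / 2.0f + cursorRange * yAngle);
    }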


@ -15,6 +15,8 @@
#include <glm/glm.hpp>
#include <GlowEffect.h>
#include "Application.h"
#include "TV3DManager.h"
@ -33,10 +35,10 @@ bool TV3DManager::isConnected() {
}
void TV3DManager::connect() {
Application* app = Application::getInstance();
int width = app->getGLWidget()->getDeviceWidth();
int height = app->getGLWidget()->getDeviceHeight();
Camera& camera = *app->getCamera();
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
int width = glCanvas->getDeviceWidth();
int height = glCanvas->getDeviceHeight();
Camera& camera = *Application::getInstance()->getCamera();
configureCamera(camera, width, height);
}
@ -91,7 +93,8 @@ void TV3DManager::display(Camera& whichCamera) {
// left eye portal
int portalX = 0;
int portalY = 0;
QSize deviceSize = Application::getInstance()->getGLWidget()->getDeviceSize() *
GLCanvas::SharedPointer glCanvas = DependencyManager::get<GLCanvas>();
QSize deviceSize = glCanvas->getDeviceSize() *
Application::getInstance()->getRenderResolutionScale();
int portalW = deviceSize.width() / 2;
int portalH = deviceSize.height();
@ -103,7 +106,7 @@ void TV3DManager::display(Camera& whichCamera) {
applicationOverlay.renderOverlay(true);
const bool displayOverlays = Menu::getInstance()->isOptionChecked(MenuOption::UserInterface);
Application::getInstance()->getGlowEffect()->prepare();
DependencyManager::get<GlowEffect>()->prepare();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
@ -169,7 +172,7 @@ void TV3DManager::display(Camera& whichCamera) {
// reset the viewport to how we started
glViewport(0, 0, deviceSize.width(), deviceSize.height());
Application::getInstance()->getGlowEffect()->render();
DependencyManager::get<GlowEffect>()->render();
}
void TV3DManager::overrideOffAxisFrustum(float& left, float& right, float& bottom, float& top, float& nearVal,
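The portal arithmetic here is plain side-by-side stereo: each eye receives half of the (resolution-scaled) device width and the full height. A minimal sketch under that assumption (illustrative struct, not engine code):

    struct Portal { int x, y, width, height; };

    // Left eye renders into the left half of the surface, right eye into
    // the right half; both portals share the full device height.
    Portal leftPortal(int deviceW, int deviceH) {
        return Portal{0, 0, deviceW / 2, deviceH};
    }

    Portal rightPortal(int deviceW, int deviceH) {
        return Portal{deviceW / 2, 0, deviceW / 2, deviceH};
    }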


@ -11,12 +11,15 @@
#include <QHash>
#include <DependencyManager.h>
#include <FBXReader.h>
#include <PathUtils.h>
#include <PerfStat.h>
#include <SharedUtil.h>
#include <FBXReader.h>
#include "Application.h"
#include "Faceshift.h"
#include "Visage.h"
// this has to go after our normal includes, because its definition of HANDLE conflicts with Qt's
@ -43,12 +46,12 @@ Visage::Visage() :
#ifdef HAVE_VISAGE
#ifdef WIN32
QByteArray licensePath = Application::resourcesPath().toLatin1() + "visage";
QByteArray licensePath = PathUtils::resourcesPath().toLatin1() + "visage";
#else
QByteArray licensePath = Application::resourcesPath().toLatin1() + "visage/license.vlc";
QByteArray licensePath = PathUtils::resourcesPath().toLatin1() + "visage/license.vlc";
#endif
initializeLicenseManager(licensePath.data());
_tracker = new VisageTracker2(Application::resourcesPath().toLatin1() + "visage/tracker.cfg");
_tracker = new VisageTracker2(PathUtils::resourcesPath().toLatin1() + "visage/tracker.cfg");
_data = new FaceData();
#endif
}
@ -119,7 +122,7 @@ static const QMultiHash<QByteArray, QPair<int, float> >& getActionUnitNameMap()
const float TRANSLATION_SCALE = 20.0f;
void Visage::init() {
connect(Application::getInstance()->getFaceshift(), SIGNAL(connectionStateChanged()), SLOT(updateEnabled()));
connect(DependencyManager::get<Faceshift>().data(), SIGNAL(connectionStateChanged()), SLOT(updateEnabled()));
updateEnabled();
}
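Since DependencyManager::get returns a QSharedPointer rather than a raw pointer, the connect call has to unwrap it with .data() to obtain the QObject* that Qt expects. The call shape, as it appears inside Visage::init() (a sketch; assumes Faceshift declares the connectionStateChanged signal, as the diff indicates):

    // QObject::connect wants a raw QObject*; .data() unwraps the shared
    // pointer without affecting its reference count.
    QObject::connect(DependencyManager::get<Faceshift>().data(),
                     SIGNAL(connectionStateChanged()),
                     this, SLOT(updateEnabled()));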
@ -171,7 +174,7 @@ void Visage::reset() {
void Visage::updateEnabled() {
setEnabled(Menu::getInstance()->isOptionChecked(MenuOption::Visage) &&
!(Menu::getInstance()->isOptionChecked(MenuOption::Faceshift) &&
Application::getInstance()->getFaceshift()->isConnectedOrConnecting()));
DependencyManager::get<Faceshift>()->isConnectedOrConnecting()));
}
void Visage::setEnabled(bool enabled) {


@ -16,6 +16,8 @@
#include <QPair>
#include <QVector>
#include <DependencyManager.h>
#include "FaceTracker.h"
namespace VisageSDK {
@ -24,14 +26,11 @@ namespace VisageSDK {
}
/// Handles input from the Visage webcam feature tracking software.
class Visage : public FaceTracker {
Q_OBJECT
SINGLETON_DEPENDENCY(Visage)
public:
Visage();
virtual ~Visage();
void init();
bool isActive() const { return _active; }
@ -44,6 +43,8 @@ public slots:
void updateEnabled();
private:
Visage();
virtual ~Visage();
#ifdef HAVE_VISAGE
VisageSDK::VisageTracker2* _tracker;


@ -1,113 +0,0 @@
//
// RenderableBoxEntityItem.cpp
// interface/src
//
// Created by Brad Hefta-Gaub on 8/6/14.
// Copyright 2014 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#include <glm/gtx/quaternion.hpp>
#include <FBXReader.h>
#include "InterfaceConfig.h"
#include <BoxEntityItem.h>
#include <ModelEntityItem.h>
#include <PerfStat.h>
#include "Menu.h"
#include "EntityTreeRenderer.h"
#include "RenderableBoxEntityItem.h"
EntityItem* RenderableBoxEntityItem::factory(const EntityItemID& entityID, const EntityItemProperties& properties) {
return new RenderableBoxEntityItem(entityID, properties);
}
void RenderableBoxEntityItem::render(RenderArgs* args) {
PerformanceTimer perfTimer("RenderableBoxEntityItem::render");
assert(getType() == EntityTypes::Box);
glm::vec3 position = getPositionInMeters();
glm::vec3 center = getCenter() * (float)TREE_SCALE;
glm::vec3 dimensions = getDimensions() * (float)TREE_SCALE;
glm::vec3 halfDimensions = dimensions / 2.0f;
glm::quat rotation = getRotation();
const bool useGlutCube = true;
const float MAX_COLOR = 255.0f;
if (useGlutCube) {
glColor4f(getColor()[RED_INDEX] / MAX_COLOR, getColor()[GREEN_INDEX] / MAX_COLOR,
getColor()[BLUE_INDEX] / MAX_COLOR, getLocalRenderAlpha());
glPushMatrix();
glTranslatef(position.x, position.y, position.z);
glm::vec3 axis = glm::axis(rotation);
glRotatef(glm::degrees(glm::angle(rotation)), axis.x, axis.y, axis.z);
glPushMatrix();
glm::vec3 positionToCenter = center - position;
glTranslatef(positionToCenter.x, positionToCenter.y, positionToCenter.z);
glScalef(dimensions.x, dimensions.y, dimensions.z);
Application::getInstance()->getDeferredLightingEffect()->renderSolidCube(1.0f);
glPopMatrix();
glPopMatrix();
} else {
static GLfloat vertices[] = { 1, 1, 1, -1, 1, 1, -1,-1, 1, 1,-1, 1, // v0,v1,v2,v3 (front)
1, 1, 1, 1,-1, 1, 1,-1,-1, 1, 1,-1, // v0,v3,v4,v5 (right)
1, 1, 1, 1, 1,-1, -1, 1,-1, -1, 1, 1, // v0,v5,v6,v1 (top)
-1, 1, 1, -1, 1,-1, -1,-1,-1, -1,-1, 1, // v1,v6,v7,v2 (left)
-1,-1,-1, 1,-1,-1, 1,-1, 1, -1,-1, 1, // v7,v4,v3,v2 (bottom)
1,-1,-1, -1,-1,-1, -1, 1,-1, 1, 1,-1 }; // v4,v7,v6,v5 (back)
// normal array
static GLfloat normals[] = { 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, // v0,v1,v2,v3 (front)
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, // v0,v3,v4,v5 (right)
0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, // v0,v5,v6,v1 (top)
-1, 0, 0, -1, 0, 0, -1, 0, 0, -1, 0, 0, // v1,v6,v7,v2 (left)
0,-1, 0, 0,-1, 0, 0,-1, 0, 0,-1, 0, // v7,v4,v3,v2 (bottom)
0, 0,-1, 0, 0,-1, 0, 0,-1, 0, 0,-1 }; // v4,v7,v6,v5 (back)
// index array of vertex array for glDrawElements() & glDrawRangeElement()
static GLubyte indices[] = { 0, 1, 2, 2, 3, 0, // front
4, 5, 6, 6, 7, 4, // right
8, 9,10, 10,11, 8, // top
12,13,14, 14,15,12, // left
16,17,18, 18,19,16, // bottom
20,21,22, 22,23,20 }; // back
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glNormalPointer(GL_FLOAT, 0, normals);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColor4f(getColor()[RED_INDEX] / MAX_COLOR, getColor()[GREEN_INDEX] / MAX_COLOR,
getColor()[BLUE_INDEX] / MAX_COLOR, getLocalRenderAlpha());
Application::getInstance()->getDeferredLightingEffect()->bindSimpleProgram();
glPushMatrix();
glTranslatef(position.x, position.y, position.z);
glm::vec3 axis = glm::axis(rotation);
glRotatef(glm::degrees(glm::angle(rotation)), axis.x, axis.y, axis.z);
glPushMatrix();
glm::vec3 positionToCenter = center - position;
glTranslatef(positionToCenter.x, positionToCenter.y, positionToCenter.z);
// we need to do half the size because the geometry in the VBOs are from -1,-1,-1 to 1,1,1
glScalef(halfDimensions.x, halfDimensions.y, halfDimensions.z);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);
glPopMatrix();
glPopMatrix();
Application::getInstance()->getDeferredLightingEffect()->releaseSimpleProgram();
glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays
glDisableClientState(GL_NORMAL_ARRAY);
}
};
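The comment in the deleted else-branch is the detail worth preserving: the shared cube geometry spans -1..1 on each axis (edge length 2), so scaling by half the desired dimensions yields a box of exactly the requested size. As a one-line check:

    #include <glm/glm.hpp>

    // Unit-cube vertices span [-1, 1], an edge length of 2, so a scale of
    // dimensions / 2 renders an edge of 2 * (d / 2) == d on each axis.
    glm::vec3 scaleForUnitCube(const glm::vec3& dimensions) {
        return dimensions / 2.0f;
    }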


@ -1,80 +0,0 @@
//
// PointShader.cpp
// interface/src/renderer
//
// Created by Brad Hefta-Gaub on 10/30/13.
// Copyright 2013 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
// include this before QOpenGLFramebufferObject, which includes an earlier version of OpenGL
#include "InterfaceConfig.h"
#include <QOpenGLFramebufferObject>
#include "Application.h"
#include "PointShader.h"
#include "ProgramObject.h"
#include "RenderUtil.h"
PointShader::PointShader()
: _initialized(false)
{
_program = NULL;
}
PointShader::~PointShader() {
if (_initialized) {
delete _program;
}
}
ProgramObject* PointShader::createPointShaderProgram(const QString& name) {
ProgramObject* program = new ProgramObject();
program->addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() + "shaders/" + name + ".vert" );
program->link();
return program;
}
void PointShader::init() {
if (_initialized) {
qDebug("[ERROR] PointShader is already initialized.");
return;
}
_program = createPointShaderProgram("point_size");
_initialized = true;
}
void PointShader::begin() {
_program->bind();
}
void PointShader::end() {
_program->release();
}
int PointShader::attributeLocation(const char* name) const {
if (_program) {
return _program->attributeLocation(name);
} else {
return -1;
}
}
int PointShader::uniformLocation(const char* name) const {
if (_program) {
return _program->uniformLocation(name);
} else {
return -1;
}
}
void PointShader::setUniformValue(int uniformLocation, float value) {
_program->setUniformValue(uniformLocation, value);
}
void PointShader::setUniformValue(int uniformLocation, const glm::vec3& value) {
_program->setUniformValue(uniformLocation, value.x, value.y, value.z);
}


@ -1,48 +0,0 @@
//
// PointShader.h
// interface/src/renderer
//
// Created by Brad Hefta-Gaub on 10/30/13.
// Copyright 2013 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
#ifndef hifi_PointShader_h
#define hifi_PointShader_h
#include <QObject>
class ProgramObject;
/// A shader program that draws voxels as points with variable sizes
class PointShader : public QObject {
Q_OBJECT
public:
PointShader();
~PointShader();
void init();
/// Starts using the voxel point shader program.
void begin();
/// Stops using the voxel point shader program.
void end();
/// Gets access to attributes from the shader program
int attributeLocation(const char* name) const;
int uniformLocation(const char* name) const;
void setUniformValue(int uniformLocation, float value);
void setUniformValue(int uniformLocation, const glm::vec3& value);
static ProgramObject* createPointShaderProgram(const QString& name);
private:
bool _initialized;
ProgramObject* _program;
};
#endif // hifi_PointShader_h
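For the record, the deleted wrapper followed a simple init/begin/set/end lifecycle around a single program object. A sketch of how a call site would have used it, per the header above (the uniform name is hypothetical):

    PointShader shader;
    shader.init();                       // compiles and links "point_size"
    shader.begin();                      // binds the program
    int loc = shader.uniformLocation("pointScale");  // hypothetical uniform
    if (loc != -1) {
        shader.setUniformValue(loc, 4.0f);
    }
    // ... issue GL_POINTS draw calls here ...
    shader.end();                        // releases the program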


@ -1,73 +0,0 @@
//
// VoxelShader.cpp
// interface/src/renderer
//
// Created by Brad Hefta-Gaub on 9/22/13.
// Copyright 2013 High Fidelity, Inc.
//
// Distributed under the Apache License, Version 2.0.
// See the accompanying file LICENSE or http://www.apache.org/licenses/LICENSE-2.0.html
//
// include this before QOpenGLFramebufferObject, which includes an earlier version of OpenGL
#include "InterfaceConfig.h"
#include <QOpenGLFramebufferObject>
#include "Application.h"
#include "VoxelShader.h"
#include "ProgramObject.h"
#include "RenderUtil.h"
VoxelShader::VoxelShader()
: _initialized(false)
{
_program = NULL;
}
VoxelShader::~VoxelShader() {
if (_initialized) {
delete _program;
}
}
ProgramObject* VoxelShader::createGeometryShaderProgram(const QString& name) {
ProgramObject* program = new ProgramObject();
program->addShaderFromSourceFile(QGLShader::Vertex, Application::resourcesPath() + "shaders/passthrough.vert" );
program->addShaderFromSourceFile(QGLShader::Geometry, Application::resourcesPath() + "shaders/" + name + ".geom" );
program->setGeometryInputType(GL_POINTS);
program->setGeometryOutputType(GL_TRIANGLE_STRIP);
const int VERTICES_PER_FACE = 4;
const int FACES_PER_VOXEL = 6;
const int VERTICES_PER_VOXEL = VERTICES_PER_FACE * FACES_PER_VOXEL;
program->setGeometryOutputVertexCount(VERTICES_PER_VOXEL);
program->link();
return program;
}
void VoxelShader::init() {
if (_initialized) {
qDebug("[ERROR] TestProgram is already initialized.");
return;
}
_program = createGeometryShaderProgram("voxel");
_initialized = true;
}
void VoxelShader::begin() {
_program->bind();
}
void VoxelShader::end() {
_program->release();
}
int VoxelShader::attributeLocation(const char * name) const {
if (_program) {
return _program->attributeLocation(name);
} else {
return -1;
}
}

Some files were not shown because too many files have changed in this diff.