RVSVulkan
This README explains the utility of the View Synthesis module called RVSVulkan, how to build it, and how to use it.
The code present in this repository is a reimplementation of the Reference View Synthesis (RVS) [1-3] software (available at https://gitlab.com/mpeg-i-visual/rvs) using the Vulkan library. The inner workings of RVS are described in more detail in the IntechOpen open-access article [3].
Utility
The task of the View Synthesis module is to produce virtual views at the position requested by the headset.
To communicate with the DLL, a C interface defined by the HoviTron project (SourcesVulkan/HvtStreamingAPI.h) is used.
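As an illustration only, loading such a DLL and resolving a C entry point could look like the sketch below. The actual function names are defined in SourcesVulkan/HvtStreamingAPI.h; `hvtGetStreamerInterface` is a hypothetical placeholder, not the real symbol.

```cpp
// Hypothetical sketch: load the streaming DLL given on the command line and
// resolve a C entry point. The symbol name below is a placeholder; the real
// names are declared in SourcesVulkan/HvtStreamingAPI.h.
#include <windows.h>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: loader <PathToDLL>\n"); return 1; }

    HMODULE dll = LoadLibraryA(argv[1]);   // the DLL that provides the inputs
    if (!dll) { std::fprintf(stderr, "failed to load %s\n", argv[1]); return 1; }

    // Placeholder entry point, resolved through the C interface of the DLL.
    using EntryFn = void* (*)();
    auto entry = reinterpret_cast<EntryFn>(GetProcAddress(dll, "hvtGetStreamerInterface"));
    if (!entry) { std::fprintf(stderr, "entry point not found\n"); FreeLibrary(dll); return 1; }

    void* api = entry();                   // subsequent calls go through the C interface
    (void)api;
    FreeLibrary(dll);
    return 0;
}
```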
Colour and depth formats are listed in SourcesVulkan/HvtStreamingAPI.h. Depth can either be encoded in meters (only available with the float32 format) or in the MPEG Immersive Video (MIV) depth format.
In the case of the MIV depth format, the equation is the following:
$$
d_{mpeg} = \frac{(d_{max} - d_{min}) \cdot D_{real}}{\text{maxValue}(D_{real})}
$$
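A minimal sketch that transcribes the formula above literally is shown below; the assumption that D_real is stored as an unsigned 16-bit sample and that maxValue(D_real) is the maximum representable value of that type is ours, not something stated by the API.

```cpp
// Sketch only: literal transcription of the formula above, assuming D_real is
// an unsigned 16-bit depth sample and maxValue(D_real) is the largest value
// representable by that type.
#include <cstdint>
#include <limits>

float mivDepth(std::uint16_t dReal, float dMin, float dMax)
{
    const float maxValue = static_cast<float>(std::numeric_limits<std::uint16_t>::max());
    return (dMax - dMin) * static_cast<float>(dReal) / maxValue;
}
```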
This reimplementation of the original RVS adds headset support using the OpenXR standard and can load data from different sources. The RVSVulkan application also supports two extensions of the OpenXR standard that are necessary for the generation of the light field in the HMD (XR_VARJO_quad_views, XR_KHR_composition_layer_depth).
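For illustration, requesting these two extensions at instance creation could look like the following sketch. This is not RVSVulkan's actual startup code; in practice a Vulkan graphics-binding extension is also required.

```cpp
// Sketch: request the two OpenXR extensions mentioned above when creating the
// XrInstance. Error handling and the Vulkan graphics-binding extension are omitted.
#include <cstring>
#include <openxr/openxr.h>

XrInstance createInstanceWithQuadViews()
{
    const char* extensions[] = {
        XR_VARJO_QUAD_VIEWS_EXTENSION_NAME,            // "XR_VARJO_quad_views"
        XR_KHR_COMPOSITION_LAYER_DEPTH_EXTENSION_NAME, // "XR_KHR_composition_layer_depth"
    };

    XrInstanceCreateInfo createInfo{XR_TYPE_INSTANCE_CREATE_INFO};
    std::strcpy(createInfo.applicationInfo.applicationName, "RVSVulkan");
    createInfo.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
    createInfo.enabledExtensionCount = 2;
    createInfo.enabledExtensionNames = extensions;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&createInfo, &instance)))
        return XR_NULL_HANDLE; // the runtime does not expose the requested extensions
    return instance;
}
```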
Further developments of the code aim to limit elongated triangles, since these appear at object boundaries, which tend to be noisy for most of the depth-sensing devices used in this project.
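The basic idea can be sketched as follows; this is a conceptual illustration, not RVSVulkan's exact test, and the threshold corresponds loosely to the --triangleThreshold option described in the Usage section.

```cpp
// Conceptual sketch: reject triangles whose longest edge exceeds a threshold,
// since strongly stretched triangles typically span depth discontinuities at
// noisy object boundaries. Uses glm, which is already a project dependency.
#include <algorithm>
#include <glm/glm.hpp>

bool keepTriangle(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                  float triangleThreshold)
{
    const float longestEdge = std::max({glm::length(b - a),
                                        glm::length(c - b),
                                        glm::length(a - c)});
    return longestEdge <= triangleThreshold;
}
```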
Finally, additional features have been added to the project to facilitate the user tests, such as the ability to control certain options via command-line arguments and via UDP messages.
Dependencies
The following third-party libraries are used by the project:
- GLFW
- glm
- Vulkan SDK
- JSON for Modern C++ (nlohmann/json)
- OpenXR SDK (only needed if the project is built with HMD support)
Build
RVSVulkan can be built on Windows and on Ubuntu (but only with basic features) using CMake.
Multiple variables in the CMakeLists.txt can be used to adjust the compilation of RVSVulkan.
Variable | Default | Description |
---|---|---|
HVT_UDP_CONTROL | OFF | Add control of the virtual view using UDP messages (needed by the user tests) |
OPENXR | OFF | Add HMD support |
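For example, both options can be enabled from the command line when configuring (the build directory name below is only illustrative):

cmake -S . -B build -DHVT_UDP_CONTROL=ON -DOPENXR=ON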
Specific build instructions related to the platform are described below.
Warning
Please note that not all DLL modules will work properly with RVSVulkan on the Ubuntu platform.
Due to the limitations of the proprietary APIs for Azure Kinect cameras and Raytrix cameras, neither the KiRT module nor the Raytrix Streaming DLL can be used with RVSVulkan on Ubuntu.
Windows
Before starting, make sure that you have:
* The Vulkan SDK installed (https://vulkan.lunarg.com/sdk/home#windows, version above 1.2).
* The GLFW binaries downloaded, or compiled from source (https://www.glfw.org/download.html). Remember the location of the binary files.
Visual Studio is the recommended IDE for building this module. VS is able to handle CMake projects directly (more info here); in that case, simply open the folder in VS.
If you prefer to convert the CMakeLists.txt into a .sln, you can use cmake-gui (installed along with CMake).
Inside the CMakeSettings file (in VS, right-click on the top-most CMakeLists.txt > CMake Parameters for ...), or in cmake-gui:
* Set GLFW_PATH to the correct location (e.g. "C:/Users/Administrateur/Documents/libs/glfw-3.3.6.bin.WIN64")
If you want to build other modules, please install the libraries needed and set the appropriate variables for these modules.
Now you should be able to build your code:
- In VS, go to Project > Configure ProjectName, then Build > Build All.
- If cmake-gui was used, select a build folder, then press the Configure and Generate buttons. Open the generated .sln with VS and build the project.
Ubuntu 
Tested for Ubuntu 20.04 LTS
Before building the project, please enter the following commands to download the different dependencies.
The project needs GCC (>= 10) to build. If your distribution doesn't provide it, install it using:
sudo apt install g++-10
Other libraries:
sudo apt-get install build-essential
sudo apt install cmake
sudo apt-get install libglfw3-dev
To install Vulkan (for Ubuntu 20.04; for other versions, check https://vulkan.lunarg.com/doc/view/latest/linux/getting_started_ubuntu.html):
wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-focal.list http://packages.lunarg.com/vulkan/lunarg-vulkan-focal.list
sudo apt update
sudo apt install vulkan-sdk
According to the OpenXR sources, the following dependencies are also needed if you wish to build OpenXR support:
libgl1-mesa-dev
libvulkan-dev
libx11-xcb-dev
libxcb-dri2-0-dev
libxcb-glx0-dev
libxcb-icccm4-dev
libxcb-keysyms1-dev
libxcb-randr0-dev
libxrandr-dev
libxxf86vm-dev
mesa-common-dev
Usage
Usage:
> RVSVulkan.exe PathToDLL [--glfw || --openxr] [--mirror] [--compute] [--blendingFactor value] [--start value] [--triangleThreshold value] [--port value]
Argument | Description |
---|---|
PathToDLL | The path to the DLL that will provide the inputs |
--glfw | Display the result of the view synthesis on the screen |
--openxr | Trigger HMD mode (tested with the Oculus Quest 2 and the CREAL headset). Only available if the project was compiled with OpenXR support |
--mirror | Add a mirror to see the image synthesized by the application for the left eye (has to be used together with --openxr) |
--compute | Trigger an alternative rendering mode based on compute shaders (faster but tends to be less stable) |
--blendingFactor value | Specify the blending factor for RVS |
--start zero/average | Control where the virtual view starts: either at the zero of the camera array or at the average of the camera positions |
--triangleThreshold value | Set the maximum size of the triangles; the higher this value, the more the triangles can be elongated |
--port value | Use the port number specified by value to receive UDP messages. If not set, this value will be 60392. (Only available if HVT_UDP_CONTROL was activated in the CMake settings) |
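For example, running the sample streamer in a desktop window (the DLL path matches the launch.vs.json example below) could look like:

> RVSVulkan.exe ../SampleStreamer/SampleStreamer.dll --glfw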
Configuration files for the dll often need to be specified using an environment variable. Refer to the documentation of the modules for further instructions.
In Visual Studio with a CMake project, you can specify the arguments and environment variables easily with a launch.vs.json file.
Example of the content of a launch.vs.json file:
{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "CMakeLists.txt",
      "projectTarget": "",
      "name": "test glfw",
      "args": [
        "../SampleStreamer/SampleStreamer.dll",
        "--glfw"
      ],
      "env": {
        "HVT_JSON_PATH": "C:\\Users\\Username\\Documents\\dataset\\ULBUnicornV4.json"
      }
    }
  ]
}
If the code has been built with UDP support, the app will open the port specified in the argument (port 60392 if not specified) and will listen for UDP messages.
The UDP messages are composed of 18 floats and are organised as follows:
- The magic number 456123: to ensure that the message was not corrupted
- The mode (0, 1, 2): either the virtual viewpoint is fixed by the headset (mode 0) or by the next two fields (mode 1 or 2)
- The fixed view position (3 floats --> x, y, z): determines the position of the fixed view if mode 1 or 2 is requested
- The fixed view rotation (4 floats --> x, y, z, w): determines the rotation (in the form of a quaternion) of the fixed view if mode 1 or 2 is requested
- The camera activation number (1 float): determines which cameras are used to synthesize the virtual views
- Reset (1 float; 1 if the view must be reset and 0 otherwise): resets the virtual viewpoint to the position and rotation defined by the following fields
- The reset view position (3 floats --> x, y, z): determines the position of the view if reset is triggered
- The reset view rotation (4 floats --> x, y, z, w): determines the rotation (in the form of a quaternion) of the view if reset is triggered
An example UDP message sender for this feature is provided in the testScripts/ControlUDP.py file.
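For illustration, a minimal C++ sketch that packs the fields in the order listed above might look like this; byte order and the exact value conventions should be checked against testScripts/ControlUDP.py, and the helper name is ours.

```cpp
// Sketch only: pack the 18-float control message in the order described above.
// Sending the resulting 72 bytes as a single UDP datagram is left out.
#include <array>
#include <cstring>

std::array<float, 18> makeControlMessage(int mode,
                                         const float fixedPos[3], const float fixedRot[4],
                                         float cameraActivation, bool reset,
                                         const float resetPos[3], const float resetRot[4])
{
    std::array<float, 18> msg{};
    msg[0] = 456123.0f;                                  // magic number
    msg[1] = static_cast<float>(mode);                   // 0: headset, 1/2: fixed view
    std::memcpy(&msg[2],  fixedPos, 3 * sizeof(float));  // fixed view position x, y, z
    std::memcpy(&msg[5],  fixedRot, 4 * sizeof(float));  // fixed view rotation x, y, z, w
    msg[9]  = cameraActivation;                          // camera activation number
    msg[10] = reset ? 1.0f : 0.0f;                       // reset flag
    std::memcpy(&msg[11], resetPos, 3 * sizeof(float));  // reset position x, y, z
    std::memcpy(&msg[14], resetRot, 4 * sizeof(float));  // reset rotation x, y, z, w
    return msg;
}
```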
Known issues
Warning
The 10-bit YUV format is unsupported by RVSVulkan despite being offered by the HoviTron Streaming API; please do not use it.
References
[1] Bonatto, D., Fachada, S., & Lafruit, G. (2020). RaViS: Real-time accelerated View Synthesizer for immersive video 6DoF VR. Electronic Imaging, 2020(13), 382-1. https://dipot.ulb.ac.be/dspace/bitstream/2013/295062/3/EI2020_RaVIS_Bonatto_Fachada_Lafruit.pdf
[2] Bonatto, D., Fachada, S., Rogge, S., Munteanu, A., & Lafruit, G. (2021). Real-Time Depth Video-Based Rendering for 6-DoF HMD Navigation and Light Field Displays. IEEE Access, 9, 146868-146887. https://ieeexplore.ieee.org/iel7/6287639/9312710/09590541.pdf
[3] Fachada, S., Bonatto, D., Teratani, M., & Lafruit, G. (2022). View Synthesis Tool for VR Immersive Video. https://www.intechopen.com/online-first/80515