OpenCVStreamer
The OpenCVStreamer DLL is an acquisition and refinement module of the Hovitron Video Pipeline. This README explains the purpose of the DLL and how to build and use it with the Hovitron project.
This DLL targets Windows but also compiles on Ubuntu.
Utility
The SampleStreamer and OpenCVStreamer DLLs read datasets in the MIV (MPEG Immersive Video) format stored on disk. They were developed in the project for debugging purposes and as examples for the other acquisition and refinement modules.
SampleStreamer reads YUV datasets without any conversion, while OpenCVStreamer uses the OpenCV library to convert them to RGB. The code of the two modules is very similar.
Warning
RVSVulkan does not support 10-bit YUV.
Therefore, if your dataset uses 10-bit YUV (or a format other than YUV), the recommended way to read it is with OpenCVStreamer.dll, which is unfortunately slower at reading the dataset.
How to build
The OpenCVStreamer DLL depends on OpenCV and the Vulkan SDK (see the platform-specific instructions below).
This code can be used on Windows (10 and 11) and has been tested on Ubuntu 20.04 LTS.
Windows
To build this module, Visual Studio with the C++ desktop development workload is the recommended IDE. Visual Studio supports CMake projects natively (more info here); in that case, open the folder in VS.
If you prefer to convert the CMakeLists.txt into a .sln, you can use CMake (cmake-gui), which is installed along with CMake.
Once CMake is open, set the source field to the path of the folder containing the top-most CMakeLists.txt.
Before continuing, make sure that you have:
- Downloaded the OpenCV binaries (https://opencv.org/releases/) and noted the location of the files.
- Installed the Vulkan SDK (https://vulkan.lunarg.com/sdk/home#windows; version 1.2 or above is necessary).
If CMake cannot locate the OpenCV directory, please specify its path using the variable OPENCV_DIR (for example: C:/Users/Username/Documents/libs/opencv/build).
This variable can be set using cmake-gui or the CMakeSettings.json file generated by Visual Studio, as in the sketch below.
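A minimal sketch of such a CMakeSettings.json, assuming the default configuration layout generated by Visual Studio (the configuration name, generator and install path are only examples):
{
  "configurations": [
    {
      "name": "x64-Release",
      "generator": "Ninja",
      "configurationType": "Release",
      "buildRoot": "${projectDir}\\out\\build\\${name}",
      "variables": [
        {
          "name": "OPENCV_DIR",
          "value": "C:/Users/Username/Documents/libs/opencv/build",
          "type": "PATH"
        }
      ]
    }
  ]
}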
If you want to build other modules, please install the libraries needed and set the appropriate variables for these modules.
Now you should be able to build your code:
- In VS, go to Project > Configure ProjectName, then Build > Build All.
- If cmake-gui was used, select a build folder, then press the Configure and Generate buttons. Open the generated .sln with VS and build the project.
Ubuntu
Tested on Ubuntu 20.04 LTS.
Before building the project, please enter the following commands to install the different dependencies.
The project requires gcc (>= 10) to build. If your distribution doesn't provide it, install it using:
sudo apt install g++-10
Other libraries:
sudo apt-get install build-essential
sudo apt install cmake
sudo apt-get install libopencv-dev
To install Vulkan (for Ubuntu 20.04; for other versions check https://vulkan.lunarg.com/doc/view/latest/linux/getting_started_ubuntu.html):
wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-focal.list http://packages.lunarg.com/vulkan/lunarg-vulkan-focal.list
sudo apt update
sudo apt install vulkan-sdk
Then use the following commands:
cmake .
make
Usage
This DLL is intended to be used with Hovitron's view synthesis module, RVSVulkan. RVSVulkan needs the path of the DLL to load passed as an argument, so the command looks like this:
RVSVulkan.exe ../OpenCVStreamer/OpenCVStreamer.dll --glfw
In addition, OpenCVStreamer needs an environment variable HVT_JSON_PATH set to the location of a JSON file containing the settings for this module.
In Visual Studio, this can easily be set using the "env" field of a launch.vs.json file.
Example of the content of a launch.vs.json file:
{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "CMakeLists.txt",
      "projectTarget": "",
      "name": "test glfw",
      "args": [
        "../OpenCVStreamer/OpenCVStreamer.dll",
        "--glfw"
      ],
      "env": {
        "HVT_JSON_PATH": "C:\\Users\\Username\\Documents\\dataset\\ULBUnicornV4.json"
      }
    }
  ]
}
The configuration file pointed to by HVT_JSON_PATH follows the same structure as the one used by the original RVS implementation.
The major difference is that only the fields related to input are taken into account.
Field | Type | Description |
---|---|---|
Version | String | Indicates the version of the configuration file |
InputCameraParameterFile | String | Path to a JSON file containing the parameters of the reference cameras (camera JSON file) |
InputCameraNames | List of String | Names of the reference cameras |
ViewImageNames | List of String | Paths to the colour images of the cameras |
DepthMapNames | List of String | Paths to the depth maps of the cameras |
StartFrame | Int | Starting frame |
NumberOfFrames | Int | Number of frames |
"Precision","VirtualCameraParameterFile", "VirtualCameraNames", "OutputFiles", "Precision", "ColorSpace", "ViewSynthesisMethod", "BlendingMethod", "BlendingFactor" are unused field.
The camera configuration file follows the structure used across MIV datasets. Here is the list of the elements taken into account by the application:
Field | Type | Description |
---|---|---|
Version | string | Version of the camera file |
Content_name | string | Name of the dataset |
Fps | Int | Number of frames that should be displayed per second |
Frames_number | Int | Number of frames present in this dataset |
sourceCameraNames | list of string | list of camera names |
cameras | list of dict | list of the cameras parameters |
Each camera is a dictionary that contains the following fields:
Field | Type | Description |
---|---|---|
Name | String | Name of the camera |
Position | Array of 3 floats | Position of the camera |
Rotation | Array of 3 floats | Rotation of the camera (yaw, pitch, roll) in degrees |
Depth_range | Array of 2 floats | Near and far depth values |
Resolution | Array of 2 ints | Resolution used by this camera |
Projection | "Perspective" or "Equirectangular" |
Type of projection used |
Focal (for perspective) or Hor_range (for equirectangular) |
Array of 2 int | Value for focal length \(f_x\) and \(f_y\) or Visible Horizontal range |
Principle_point (for perspective) or Ver_range (for equirectangular) |
Array of 2 int | or |
BitDepthColor | Int | Bit depth of the colour component |
BitDepthDepth | Int | Bit depth of the depth map |
ColorSpace | String | Color space used (often "YUV420") |
DepthColorSpace | String | Color space for the depth map (often "YUV420") |
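For illustration, a sketch of a single entry in the "cameras" list; all values below are placeholders chosen to show the field layout, not taken from any actual dataset:
{
  "Name": "v0",
  "Position": [ 0.0, 0.0, 0.0 ],
  "Rotation": [ 0.0, 0.0, 0.0 ],
  "Depth_range": [ 0.5, 10.0 ],
  "Resolution": [ 1920, 1080 ],
  "Projection": "Perspective",
  "Focal": [ 1546, 1546 ],
  "Principle_point": [ 960, 540 ],
  "BitDepthColor": 8,
  "BitDepthDepth": 16,
  "ColorSpace": "YUV420",
  "DepthColorSpace": "YUV420"
}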
Available online datasets
List of some MIV datasets:
- Toystable ULB https://zenodo.org/record/5055543#.YzvzyEzP2Uk
- ULB ChocoFountainBxl https://zenodo.org/record/5960227#.Yzv1k0zP2Uk (contains 10-bit YUV, so it needs to be converted or used with OpenCVStreamer)
- MIV dataset https://mpeg-miv.org/index.php/content-database-2/ (contains 10-bit YUV, so it needs to be converted or used with OpenCVStreamer)