Drawing Point Cloud retrieved from Kinect v2 using OpenCV Viz module

What is OpenCV Viz module?

The Viz module (opencv_viz) is a visualization module for three-dimensional data, included in OpenCV 2.4.9 and later.
It is implemented on top of VTK (Visualization Toolkit).
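For a first feel of the API, here is a minimal sketch (independent of Kinect, and not part of the sample below) that opens a Viz3d window and shows a coordinate-system widget.

#include <opencv2/viz.hpp>

int main()
{
    // Create a visualization window
    cv::viz::Viz3d window( "Viz Demo" );

    // Show a coordinate-system widget (x, y, z axes)
    window.showWidget( "Axes", cv::viz::WCoordinateSystem() );

    // Run the interactive event loop until the window is closed
    window.spin();

    return 0;
}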

How to Build OpenCV Viz module?

However, the Viz module (opencv_viz) is not included in the officially distributed pre-built packages.
Therefore, you need to build OpenCV yourself to use the Viz module.

The CMake settings for enabling the Viz module are as follows.

  • VTK_DIR: C:\Program Files\VTK\lib\cmake\vtk-7.1
  • BUILD_opencv_viz: ☑ (checked)
  • WITH_VTK: ☑ (checked)

Drawing Point Cloud retrieved from Kinect v2 using OpenCV Viz module

This sample program is published in the following.
It requires Kinect SDK v2 and OpenCV built with the Viz module (opencv_viz).

You need to include opencv2/viz.hpp in addition to opencv2/opencv.hpp to use the Viz module.

app.h#L6-L7

#include <opencv2/opencv.hpp>
#include <opencv2/viz.hpp>

Initialize

Prepare a cv::viz::Viz3d viewer and a cv::Mat to store the Point Cloud data.

cv::viz::Viz3d is the viewer for visualizing three-dimensional data.
Create the viewer by giving it a window name.

app.h#L41-L42

cv::viz::Viz3d viewer;
cv::Mat cloudMat;

app.cpp#L121-L131

inline void Kinect::initializePointCloud()
{
    // Create Window
    viewer = cv::viz::Viz3d( "Point Cloud" );
}
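As an optional extra (not part of the published sample), the viewer could also be configured at this point, for example by setting the background color and registering a keyboard callback that closes the window with the Esc key. Both calls are part of the cv::viz::Viz3d API.

// Optional viewer setup (sketch, not in the sample)
viewer.setBackgroundColor( cv::viz::Color::black() );

// Close the viewer when the Esc key is pressed
viewer.registerKeyboardCallback( []( const cv::viz::KeyboardEvent& event, void* cookie ){
    if( event.code == 27 && event.action == cv::viz::KeyboardEvent::KEY_DOWN ){
        static_cast<cv::viz::Viz3d*>( cookie )->close();
    }
}, &viewer );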

Draw

Prepare a buffer of float-type 3-channel elements (cv::Vec3f) for the Point Cloud data.
Then, convert the Depth data to the camera coordinate system using ICoordinateMapper::MapDepthFrameToCameraSpace() of Kinect SDK v2 and store the result in the buffer.
If a point does not exist, set NaN (Not a Number) in the buffer.

Create a cv::Mat from the buffer that contains the Point Cloud data.
This cv::Mat is a float-type 3-channel matrix (CV_32FC3) that stores the Point Cloud data.

The float type is used here.
If you need more precision, consider using the double type instead; a sketch of that variant follows the code below.

app.cpp#L233-L256

inline void Kinect::drawPointCloud()
{
    // Retrieve Mapped Coordinates
    std::vector<CameraSpacePoint> cameraSpacePoints( depthWidth * depthHeight );
    ERROR_CHECK( coordinateMapper->MapDepthFrameToCameraSpace( depthBuffer.size(), &depthBuffer[0], cameraSpacePoints.size(), &cameraSpacePoints[0] ) );

    // Create Point Cloud Buffer (initialized with NaN)
    std::vector<cv::Vec3f> buffer( depthWidth * depthHeight, cv::Vec3f::all( std::numeric_limits<float>::quiet_NaN() ) );

    Concurrency::parallel_for( 0, depthHeight, [&]( const int depthY ){
        const unsigned int depthOffset = depthY * depthWidth;
        for( int depthX = 0; depthX < depthWidth; depthX++ ){
            unsigned int depthIndex = depthOffset + depthX;
            UINT16 depth = depthBuffer[depthIndex];
            if( 500 <= depth && depth < 8000 ){
                CameraSpacePoint cameraSpacePoint = cameraSpacePoints[depthIndex];
                buffer[depthIndex] = cv::Vec3f( cameraSpacePoint.X, cameraSpacePoint.Y, cameraSpacePoint.Z );
            }
        }
    } );

    // Create cv::Mat from Coordinate Buffer
    cloudMat = cv::Mat( depthHeight, depthWidth, CV_32FC3, &buffer[0] ).clone();
}
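If double precision is required, the same buffer could instead be built with cv::Vec3d and stored as a double-type 3-channel matrix (CV_64FC3). A sketch of the changed lines (an assumption, not part of the published sample):

// Double-precision Point Cloud buffer
std::vector<cv::Vec3d> buffer( depthWidth * depthHeight, cv::Vec3d::all( std::numeric_limits<double>::quiet_NaN() ) );

// ... fill buffer with cv::Vec3d( cameraSpacePoint.X, cameraSpacePoint.Y, cameraSpacePoint.Z ) ...

// Create a double-type 3-channel matrix (CV_64FC3)
cloudMat = cv::Mat( depthHeight, depthWidth, CV_64FC3, &buffer[0] ).clone();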

Show

Create a cv::viz::WCloud widget for the Point Cloud data by giving it the cv::Mat that contains the Point Cloud data and the cv::Mat that contains the Color data.

The Color data must be the same size (width, height) as the Point Cloud data.
Beforehand, you need to store the color data converted to the depth resolution into a cv::Mat using ICoordinateMapper::MapDepthFrameToColorSpace() of Kinect SDK v2, as sketched below.
(If you want to attach a single color to all points, specify a cv::viz::Color instead.)
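A sketch of that color mapping, assuming a BGRA color buffer (colorBuffer) and its resolution (colorWidth, colorHeight) have been retrieved elsewhere, might look like this:

// Map Color to Depth Resolution (sketch; colorBuffer, colorWidth, colorHeight are assumptions)
std::vector<ColorSpacePoint> colorSpacePoints( depthWidth * depthHeight );
ERROR_CHECK( coordinateMapper->MapDepthFrameToColorSpace( depthBuffer.size(), &depthBuffer[0], colorSpacePoints.size(), &colorSpacePoints[0] ) );

colorMat = cv::Mat::zeros( depthHeight, depthWidth, CV_8UC4 );
for( int depthY = 0; depthY < depthHeight; depthY++ ){
    for( int depthX = 0; depthX < depthWidth; depthX++ ){
        const unsigned int depthIndex = depthY * depthWidth + depthX;
        const int colorX = static_cast<int>( colorSpacePoints[depthIndex].X + 0.5f );
        const int colorY = static_cast<int>( colorSpacePoints[depthIndex].Y + 0.5f );
        if( ( 0 <= colorX ) && ( colorX < colorWidth ) && ( 0 <= colorY ) && ( colorY < colorHeight ) ){
            const unsigned int colorIndex = ( colorY * colorWidth + colorX ) * 4; // BGRA
            colorMat.at<cv::Vec4b>( depthY, depthX ) = cv::Vec4b( colorBuffer[colorIndex + 0], colorBuffer[colorIndex + 1], colorBuffer[colorIndex + 2], colorBuffer[colorIndex + 3] );
        }
    }
}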

Then, pass the widget name and the cv::viz::WCloud widget to cv::viz::Viz3d::showWidget().
Finally, draw the scene by calling cv::viz::Viz3d::spinOnce().

app.cpp#L266-L282

inline void Kinect::showPointCloud()
{
    if( colorMat.empty() ){
        return;
    }

    if( cloudMat.empty() ){
        return;
    }

    // Create Point Cloud
    cv::viz::WCloud cloud( cloudMat, colorMat );
    //cv::viz::WCloud cloud( cloudMat, cv::viz::Color::white() );

    // Show Point Cloud
    viewer.showWidget( "Cloud", cloud );
    viewer.spinOnce();
}
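Because spinOnce() processes the viewer events only once, the application's main loop typically keeps updating and redrawing until the window is closed. A sketch of such a loop, assuming hypothetical run() and update() members alongside the functions shown above:

// Main loop (sketch; run() and update() are hypothetical)
void Kinect::run()
{
    while( !viewer.wasStopped() ){
        update();          // retrieve new Depth (and Color) frames
        drawPointCloud();  // map Depth to the camera coordinate system
        showPointCloud();  // update the "Cloud" widget and redraw
    }
}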

Execution Result

When you run this sample program, the Point Cloud retrieved from Kinect v2 is displayed in real time.
