Perspective Dewarping

Skill level
3 Competent

At Barco Labs, in 2019, I was given the task of evaluating the image quality of projected images in a live video stream. To do this, I wrote Python code to locate the projected image in the scene using BRISK (Binary Robust Invariant Scalable Keypoints) feature detection, then applied perspective dewarping and scaling to the projected image so the software could run image quality assessment (IQA) against the original image and measure how much degradation the projection had introduced. From that, the software could automatically decide whether the projected image quality was good enough to keep as-is or whether it had to show the original image in a split view. I tried both the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) IQA algorithms, but neither provided the kind of results I was hoping for, mostly because of registration issues after dewarping and color shifts between the projected and original images. This was partially fixed by converting both images to grayscale and equalizing their histograms.
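For illustration, here is a minimal sketch of that pipeline in Python with OpenCV and scikit-image. The function name, parameters, and thresholds are placeholders of my choosing, not the actual Barco Labs code; it assumes `original` is the source image and `scene` is a frame from the live stream.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def dewarp_and_score(original, scene, min_matches=10):
    # Detect BRISK keypoints and descriptors in both images.
    brisk = cv2.BRISK_create()
    kp1, des1 = brisk.detectAndCompute(original, None)
    kp2, des2 = brisk.detectAndCompute(scene, None)

    # BRISK descriptors are binary, so match with Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # projected image not found in the scene

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the homography mapping the original into the scene,
    # then apply it inverted to dewarp the projected region back into the
    # original's pixel frame.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = original.shape[:2]
    dewarped = cv2.warpPerspective(
        scene, H, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # Reduce color-shift effects: compare in grayscale with equalized
    # histograms. cv2.PSNR(g1, g2) would give the analogous PSNR score.
    g1 = cv2.equalizeHist(cv2.cvtColor(original, cv2.COLOR_BGR2GRAY))
    g2 = cv2.equalizeHist(cv2.cvtColor(dewarped, cv2.COLOR_BGR2GRAY))
    return ssim(g1, g2)
```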

I soon abandoned this approach in favor of a machine-learning technique that measured the level of detail in the projected image based on the height of the text found within it. That approach yielded much better results.

I used a related technique in 2010 while at FlyteComm to dewarp groups of map tiles from, say, a Lambert Conformal Conic projection defined for a specific central meridian and standard parallels (since conic projections are only accurate over a small area of the Earth), and then rewarp them into, say, the World Mercator cylindrical projection. This was done on USGS topographic maps and FAA flight charts, for example.
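A hedged sketch of that tile rewarping is below, using pyproj for the projection math and OpenCV for resampling. The specific LCC parameters, tile bounds, and function names are illustrative, not FlyteComm's actual code. It inverse-maps each output (Mercator) pixel back to a source (LCC) pixel, which avoids the holes that forward warping would leave.

```python
import cv2
import numpy as np
from pyproj import Transformer

def rewarp_tile(src_img, src_bounds, dst_bounds, dst_size,
                src_crs="+proj=lcc +lat_1=33 +lat_2=45 +lat_0=39 +lon_0=-96",
                dst_crs="EPSG:3395"):  # World Mercator
    """Resample a raster tile from a Lambert Conformal Conic projection
    into World Mercator. Bounds are (xmin, ymin, xmax, ymax) in each CRS."""
    dst_w, dst_h = dst_size
    # Coordinates of every output pixel in the destination CRS (top row first).
    xs = np.linspace(dst_bounds[0], dst_bounds[2], dst_w)
    ys = np.linspace(dst_bounds[3], dst_bounds[1], dst_h)
    grid_x, grid_y = np.meshgrid(xs, ys)

    # Transform destination coordinates back into the source CRS.
    to_src = Transformer.from_crs(dst_crs, src_crs, always_xy=True)
    src_x, src_y = to_src.transform(grid_x, grid_y)

    # Convert source CRS coordinates to source pixel indices.
    sx0, sy0, sx1, sy1 = src_bounds
    h, w = src_img.shape[:2]
    map_x = ((src_x - sx0) / (sx1 - sx0) * (w - 1)).astype(np.float32)
    map_y = ((sy1 - src_y) / (sy1 - sy0) * (h - 1)).astype(np.float32)

    # Resample; pixels falling outside the source tile are left black.
    return cv2.remap(src_img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```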

Experiences using this skill are shown below:

Barco Labs Augmented Reality Prototype

Since I had already computed the homography and object segmentation for the text readability prototype, it was just a matter of using the OpenCV library to apply a perspective warp (derived from the homography matrix) to one video stream so it could be seamlessly inserted into the other, creating an augmented reality mashup. The result was a much clearer rendering of the projected content as seen by remote users in the composite video stream.
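A minimal sketch of that compositing step, assuming `H` is the already-computed homography mapping the clean content frame onto the projected region of the scene frame; the function and variable names are illustrative:

```python
import cv2
import numpy as np

def composite_frame(scene, insert, H):
    h, w = scene.shape[:2]
    # Warp the clean content into the scene's perspective.
    warped = cv2.warpPerspective(insert, H, (w, h))

    # Build a mask covering the warped quadrilateral, then paste the
    # warped pixels over the scene wherever the mask is set.
    mask = cv2.warpPerspective(
        np.full(insert.shape[:2], 255, np.uint8), H, (w, h))
    scene_out = scene.copy()
    scene_out[mask > 0] = warped[mask > 0]
    return scene_out
```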