After completing the Graphical Content Analysis server for the ISE Expo, the idea of a common operational picture came up: a single view for everything, in which the user can seamlessly zoom out to see the big picture or zoom in on a specific area or incident to see its details, without ever switching views. As it happened, the Antwerp Port Authority had recently suffered millions in damages when a large barge collided with one of the canal locks because no one was paying attention.
NetBeans description TBD...
Experiences using this skill are shown below:
Later on in the DNN anomaly-detection project, we had the opportunity to temporarily integrate our A.I. technology into a Barco control-room product called OpSpace that would be demoed at the 2018 ISE Expo. The OpSpace system contained a subsystem called the EDP Analytics Service (EAS), which used a graphical IoT flow builder called Node-RED.
During the greater part of 2017 and 2018, our team was doing research on Deep Neural Net (DNN) anomaly detection. The problem we were looking to solve was that control centers usually had far too many surveillance video cameras and control panels to monitor and not enough personnel to pay attention to them all. Why not apply machine-learning (ML) anomaly detection to camera and computer-monitor feeds to alert control-center personnel of abnormal events as soon as they occur?
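The core idea can be illustrated with a reconstruction-based detector: train a small autoencoder on footage of normal activity, then flag frames the model reconstructs poorly. The sketch below is a minimal illustration under assumed frame sizes and architecture, not our actual production model:

```python
# Minimal sketch: frame-level anomaly scoring with a convolutional
# autoencoder. The architecture, frame size (3x128x128), and threshold
# strategy are illustrative assumptions, not the production model.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses the frame; decoder reconstructs it.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, frame):
    """frame: 3x128x128 float tensor in [0, 1]. Returns the mean squared
    reconstruction error; high values suggest an abnormal frame the
    model never saw during training on normal footage."""
    model.eval()
    with torch.no_grad():
        batch = frame.unsqueeze(0)
        recon = model(batch)
        return torch.mean((recon - batch) ** 2).item()

# Usage: train on normal footage only, then alert when the score
# exceeds a threshold calibrated on held-out normal frames.
```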
This was a student intern project that I took over and turned into a useful application for internal company use. Even though it wasn't a research project, we thought it would be a good way to make Barco Labs better known throughout the company, as many employees viewed us as an "ivory tower" doing esoteric research of little practical value. The Smart Meeting Room App (SMRA) had the very practical benefit of finding and scheduling meeting rooms on the Barco campus.
By 2019, our machine-learning research project was integrating and managing multiple cameras and video sources, and as a result it was becoming increasingly difficult to configure using configuration files alone. I was given the task of creating a professional-looking desktop UI that could be accessed from a web browser on company locked-down PCs. It was decided that there would be no support for mobile devices and that the GUI would be packaged as a Docker image.
An earlier version of our machine-learning person-tracking software was too slow to keep up with every video frame. (This was before our team attempted to use GPU acceleration with more efficient person-tracking software.) This resulted in jumpy video transitions while tracking someone. I was given the task of finding a way to apply video motion smoothing so the resulting video framing would look smooth and professional.
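A lightweight way to achieve this, sketched below under an assumed tracker interface, is to low-pass filter the tracked bounding box with an exponential moving average so the virtual camera glides between detections instead of jumping (a Kalman filter would be the heavier-weight alternative):

```python
# Minimal sketch: exponential moving-average smoothing of a tracked
# bounding box so the camera framing moves smoothly even when the
# (slow) tracker only produces detections on some frames. The
# smoothing factor and (x, y, w, h) interface are assumptions.
import numpy as np

class BoxSmoother:
    def __init__(self, alpha=0.15):
        self.alpha = alpha   # lower = smoother motion, but more lag
        self.state = None    # last smoothed box (x, y, w, h)

    def update(self, box):
        """box: (x, y, w, h) from the tracker, or None on frames
        where no new detection arrived. Returns the smoothed box."""
        if box is not None:
            box = np.asarray(box, dtype=float)
            if self.state is None:
                self.state = box
            else:
                # Blend the new detection into the running estimate.
                self.state = self.alpha * box + (1 - self.alpha) * self.state
        return self.state    # hold the last estimate between detections
```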
Since I had already computed the homography and object segmentation for the text-readability prototype, it was just a matter of using the OpenCV library to apply perspective warping to that video stream (via a perspective transform derived from the homography matrix) so that one video stream could be seamlessly inserted into the other, creating an augmented-reality mashup. The result was a much clearer rendering of the projected content as seen by remote users in the composite video stream.
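The warp-and-insert step looks roughly like the following OpenCV sketch; the function names and corner coordinates are illustrative, with the destination quadrilateral in practice coming from the computed homography and segmentation:

```python
# Minimal sketch of warping one video frame onto a quadrilateral
# region of another, then compositing. Interfaces are illustrative.
import cv2
import numpy as np

def insert_stream(scene_frame, content_frame, dst_corners):
    """Warp content_frame onto the quadrilateral dst_corners
    (4x2 array, clockwise from top-left) inside scene_frame."""
    h, w = content_frame.shape[:2]
    src_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Perspective transform mapping the content onto the scene region.
    H = cv2.getPerspectiveTransform(src_corners, np.float32(dst_corners))
    warped = cv2.warpPerspective(
        content_frame, H, (scene_frame.shape[1], scene_frame.shape[0]))
    # Mask of the destination quadrilateral, used to composite.
    mask = np.zeros(scene_frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_corners), 255)
    composite = scene_frame.copy()
    composite[mask > 0] = warped[mask > 0]
    return composite
```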
I was given the task of researching automatic white-balance algorithms with the goal of calibrating multiple video cameras to the same color balance. The problem was that when switching between multiple cameras covering the same scene, a noticeable color shift was observed in the video stream, especially when the cameras were of different manufacture.
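One of the standard baselines in this space is the gray-world algorithm, which assumes the average color of a scene is neutral gray and scales each channel accordingly. A minimal sketch (my own illustration, not necessarily the algorithm we ultimately chose) follows:

```python
# Minimal sketch of gray-world white balance: scale each color
# channel so its mean matches the overall gray level of the frame.
import numpy as np

def gray_world(frame):
    """frame: HxWx3 uint8 image. Returns a white-balanced copy."""
    img = frame.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    gains = gray / channel_means      # per-channel correction gains
    balanced = img * gains            # broadcast gains across pixels
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Applying the same gains, computed once on a shared reference scene,
# to each camera is one way to pull them toward a common color balance.
```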
I was given the task of researching and prototyping various computer vision algorithms to determine whether the text appearing within a given video frame was readable or not.
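One common heuristic for this kind of readability check is the variance of the Laplacian, a standard focus measure: blurred or low-detail text regions score low. The sketch below is illustrative; the threshold and region interface are assumptions, and a real readability score would combine several cues (contrast, glyph size, noise):

```python
# Minimal sketch: Laplacian-variance sharpness as one readability cue.
import cv2

def sharpness_score(frame_bgr, region=None):
    """Return a focus measure; low values suggest blurred,
    hard-to-read text. region: optional (x, y, w, h) crop."""
    if region is not None:
        x, y, w, h = region
        frame_bgr = frame_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def looks_readable(frame_bgr, region=None, threshold=100.0):
    # Threshold is an illustrative assumption; calibrate per camera.
    return sharpness_score(frame_bgr, region) >= threshold
```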
Created several proof-of-concept webapps in 2017 to experiment with ideas that would make virtual meetings more immersive. Our approach was largely inspired by the Star Wars™ films: the Jedi Council held meetings in which remote participants appeared in their seats as holographic projections of themselves, and vice versa. I prototyped the same two-way immersive meeting idea using WebGL 3-D and WebRTC in web browsers.
Developed several browser-based video playback and video device management applications. Some examples (in reverse chronological order):