By 2019, our machine-learning research project was integrating and managing multiple cameras and video sources, and it was becoming increasingly difficult to configure through configuration files alone. I was given the task of creating a professional-looking desktop UI that could be accessed from a web browser on the company's locked-down PCs. It was decided that there would be no support for mobile devices and that the GUI would be packaged as a Docker image.
One of the more difficult parts of the UI was handling low-latency RTSP video streams, since web browsers do not support RTSP. Fortunately, the OpenTok developers had created a C++ beta library that would do the hard part for us: they had extracted the WebRTC source code from the open-source Chromium Git repo and repurposed it for use with OpenTok. Using it required a TokBox cloud subscription and a developer login to access the beta code. One of my colleagues had written a C++ wrapper around the OpenTok library for another project that converted RTSP to WebRTC and vice versa, so I simply leveraged his code for my project.
The result was a very modular ES6 plug-in architecture that dynamically adapted to varying numbers of video cameras of different types, along with other video streams. Here is a screenshot of the UI in Google Chrome showing three PTZ cameras and a remote video stream running in full auto mode.
Because the UI may reveal Barco trade-secret information, the image was reduced to make the text unreadable.
There were multiple versions of some of the plug-in modules so that I could experiment with different UI ideas, swapping them in and out as needed. Most modules have dummy equivalents with no UI at all. These dummy (or mock) plug-ins were used for the fourth video stream above, for example.
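The swap-in/swap-out idea can be sketched roughly as a plug-in registry that falls back to a UI-less mock when no real panel applies. All names here are hypothetical illustrations, not the actual Virtual Director code:

```javascript
// Hypothetical sketch of the plug-in registry idea. Each source type
// registers a factory; unknown or disabled sources fall back to a
// mock plug-in that renders nothing.
class PluginRegistry {
  constructor() {
    this.factories = new Map();
  }

  register(type, factory) {
    this.factories.set(type, factory);
  }

  create(source) {
    // Fall back to the mock factory when the type is unrecognized.
    const factory = this.factories.get(source.type) ?? this.factories.get('mock');
    return factory(source);
  }
}

// Example factories: a real PTZ panel and a UI-less mock stand-in.
const registry = new PluginRegistry();
registry.register('ptz', (src) => ({
  name: `PTZ panel for ${src.id}`,
  render: () => `<div class="ptz">${src.id}</div>`,
}));
registry.register('mock', (src) => ({
  name: `mock for ${src.id}`,
  render: () => '',
}));

// The UI adapts to however many sources the configuration lists.
const sources = [{ type: 'ptz', id: 'cam1' }, { type: 'other', id: 'stream4' }];
const panels = sources.map((s) => registry.create(s));
```

Swapping an experimental panel in or out is then just a matter of registering a different factory for that source type.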
I added an SVG drawing UI for drawing various configuration regions overlaid on the WebRTC videos. The color of the SVG lines matched the color of the edit buttons in the other panels used to configure those regions. The observer design pattern was used to maintain high cohesion with loose coupling: every reusable panel dynamically rendered its own HTML and behavior independently of all the others. A manual override could be invoked if the machine-learning code could not handle a particular situation. This is shown here with the operator selecting the second camera stream as a manual override, with the option to go back to auto mode at any time.
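In ES6 terms, the observer pattern between panels can be sketched with a simple event bus. The panel and event names below are illustrative assumptions, not the real code:

```javascript
// Minimal observer-pattern sketch: panels subscribe to shared events
// but never reference each other directly, which keeps coupling loose
// while each panel remains self-contained.
class EventBus {
  constructor() {
    this.listeners = new Map();
  }

  on(event, handler) {
    if (!this.listeners.has(event)) this.listeners.set(event, []);
    this.listeners.get(event).push(handler);
  }

  emit(event, payload) {
    (this.listeners.get(event) ?? []).forEach((h) => h(payload));
  }
}

const bus = new EventBus();
const log = [];

// Two independent panels react to the same manual-override event.
bus.on('override', ({ camera }) => log.push(`video panel: switching to ${camera}`));
bus.on('override', ({ camera }) => log.push(`status panel: manual override on ${camera}`));

// The operator selects the second camera stream.
bus.emit('override', { camera: 'camera-2' });
```

Because the panels only know about the event bus, any of them can be replaced by a mock without touching the others.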
Even though the UI is just a prototype, I went ahead and fully documented all the modules and classes with JSDoc. Clicking one of the buttons in the UI above brings up the JSDoc pages in another browser tab, like this.
While parts of the Virtual Director application are trade secrets with patents pending, the configuration UI design itself is not, since it doesn't break any new ground.
The JSDoc includes a lot of diagrams and images as well as explanatory text all taken from the source code comments. Here is a page describing one of the plug-in modules.
The whole point of all this was to make it easy for another developer to take ownership of this code once I moved on to something else.