During the greater part of 2017 and 2018, our team was doing research in Deep Neural Net (DNN) anomaly detection. The problem we were looking to solve was that control centers usually have far too many surveillance video cameras and control panels to monitor and not enough personnel to watch them all. Why not apply machine-learning (ML) anomaly detection to camera and computer monitor feeds to alert control center personnel of abnormal events as soon as they occur?
So, our team built a multi-video, multi-threaded DNN anomaly detection prototype, called "Alice", that could run a number of different anomaly detectors for OCR (Optical Character Recognition), image differences, numerics, and position/angle/color changes across multiple video streams simultaneously. I mostly worked on the web-browser GUI (built with the Angular-Meteor framework) and the recording system for playing back anomalies detected by Alice, packaged in a separate Docker image. The GUI container communicated with the Alice container via MQTT and WebRTC.
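As a rough illustration of that interface, here is a hedged sketch of the kind of anomaly event the GUI container might receive over MQTT. The topic name and every field name below are my assumptions for illustration only, not the actual Alice protocol:

```python
import json

# Hypothetical MQTT topic -- an assumption, not the real Alice topic name.
TOPIC = "alice/anomalies"

def make_anomaly_event(stream_id, detector, frame_ts, score):
    """Build a JSON payload describing one detected anomaly.
    All field names here are illustrative guesses."""
    return json.dumps({
        "stream": stream_id,    # which video feed triggered the event
        "detector": detector,   # e.g. "ocr", "color", "position"
        "timestamp": frame_ts,  # frame timestamp (seconds)
        "score": score,         # anomaly score reported by the detector
    })

payload = make_anomaly_event("cam-03", "ocr", 1525132800.5, 0.93)
print(json.loads(payload)["detector"])  # -> ocr
```

The GUI side would subscribe to such a topic, store the event, and place it on the anomaly timeline; the heavy video itself traveled over WebRTC rather than MQTT.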
The Alice DNNs could either be (1) trained using unlabeled video sources showing "normal" operations, or (2) set up manually via the GUI. When trained on "normal" operation videos, the DNNs could then detect anomalies whenever something occurred that they had not learned to accept as "normal." The training could take minutes to hours. However, after demoing this to potential customers, most requested some means of manually setting anomaly rules without going through the training process. That seemed counterproductive to us, but we implemented the feature anyway in the form of a Graphical Content Analysis (GCA) server. Here is a video showing how my OCR rules GUI worked on a "fake" SCADA control-panel video feed (used for testing purposes) along with how my anomaly timeline and playback GUI worked (with me doing the audio commentary).
SCADA stands for "Supervisory Control And Data Acquisition".
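To make the train-on-normal idea concrete, here is a minimal toy sketch of the general pattern. The real Alice detectors were DNNs; this stand-in just averages feature vectors seen during "normal" operation and flags frames whose mean-squared deviation exceeds a threshold (all names and numbers here are mine, purely for illustration):

```python
def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def learn_baseline(normal_frames):
    """'Train' by averaging features observed during normal operation.
    (A toy stand-in for the DNN training step.)"""
    n = len(normal_frames)
    dim = len(normal_frames[0])
    return [sum(f[i] for f in normal_frames) / n for i in range(dim)]

def is_anomaly(frame, baseline, threshold):
    """Flag a frame whose deviation from 'normal' exceeds the threshold."""
    return mse(frame, baseline) > threshold

normal = [[0.10, 0.20], [0.12, 0.19], [0.09, 0.21]]  # "normal" training data
baseline = learn_baseline(normal)
print(is_anomaly([0.11, 0.20], baseline, 0.05))  # close to normal -> False
print(is_anomaly([0.90, 0.80], baseline, 0.05))  # far from normal  -> True
```

The key property, which carried over to the real system, is that nothing abnormal ever needs to be labeled; only "normal" footage is required.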
What wasn't shown in this video is that the GUI also supports operator annotations: various text fields entered by the operator either at the time the anomaly occurred or later during playback. These annotations were stored in a MongoDB database (front-ended by Angular-Meteor), along with IDs linking them to the anomaly videos stored for playback, and the GUI could search on them. Here is a screenshot showing some operator annotations on an anomaly from a different "fake" SCADA control-panel video feed.
(I was being a little bit silly on the "Actions taken" comment.) In addition, anomalies can be marked as "ignored" which moves these anomalies to a different part of the timeline as was done for this particular anomaly.
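To illustrate the annotation idea, here is a hedged sketch of the kind of document that might be stored and searched. The field names are my guesses for illustration, not the actual Mongo schema, and the in-memory search stands in for a real MongoDB query:

```python
# Hypothetical annotation documents -- field names are illustrative only.
annotations = [
    {"anomaly_id": "a-101", "video_id": "v-7",
     "operator": "jdoe", "actions_taken": "restarted pump 2"},
    {"anomaly_id": "a-102", "video_id": "v-9",
     "operator": "jdoe", "actions_taken": "false alarm, ignored"},
]

def search_annotations(docs, field, term):
    """Case-insensitive substring search, standing in for a MongoDB
    text/regex query against the annotations collection."""
    term = term.lower()
    return [d for d in docs if term in d.get(field, "").lower()]

hits = search_annotations(annotations, "actions_taken", "pump")
print([d["anomaly_id"] for d in hits])  # -> ['a-101']
```

Each hit carries a video ID, which is what lets the GUI jump from a matching annotation straight to playback of the recorded anomaly.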
One of the more difficult anomaly rules to implement was color. The reason is that the color red, for example, in a video feed is very rarely exactly red (e.g., #FF0000) due to color shifts and various video compression artifacts. Furthermore, a given color will change slightly from one video frame to the next. That meant we had to provide a way of specifying a color tolerance, or difference threshold. Within Alice we implemented color difference thresholds based on these equations:
The problem with this is that it is difficult for humans to visualize what such a threshold really means. So, I had to come up with a color GUI control that visualized this RGB difference threshold in a way that made sense to humans. I decided to use an HSL (Hue-Saturation-Lightness) approximation, but the conversion from RGB difference thresholds into the HSL color space is non-trivial since the two spaces don't map directly. I ended up using some regression approximations that were close enough. Here is some of the math I worked out to solve this problem (formatted with LaTeX):
I then implemented the control using 2-D HTML canvas drawing. The control came out looking like this.
In the image above the color difference threshold is set to 40 (in the range 0 to 128). If the threshold is increased to 128, the effect looks like this.
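As a side note, the regression approximations above aren't reproduced here, but the exact per-color RGB-to-HSL conversion that any such threshold mapping starts from is available in Python's standard library. One wrinkle worth a comment: `colorsys` works in 0-1 channel values and returns hue-lightness-saturation order, not HSL order:

```python
import colorsys

def rgb255_to_hsl(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation %, lightness %).
    Note: colorsys expects 0-1 channels and returns (h, l, s) order."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return (round(h * 360), round(s * 100), round(l * 100))

print(rgb255_to_hsl(255, 0, 0))   # pure red -> (0, 100, 50)
print(rgb255_to_hsl(243, 12, 9))  # a compression-shifted red: hue stays near 0
```

This converts single colors exactly; the hard part described above was mapping a *difference threshold* between the spaces, which is where the regression approximations came in.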
Overall, this was a very interesting research project, but we were unable to bring in a partner corporation that would alpha-test our DNN technology in a real process control system and help offset our research costs. Therefore, the whole project was put on hold and I moved on to something else.