One of the other machine-learning researchers had created a deep neural net (DNN) using TensorFlow but didn't have enough labelled training images to train, test, and validate it. I can't say exactly what the DNN does because it is part of a pending patent related to eye gaze. Estimating eye gaze from real photographs of people's faces is notoriously error-prone, and we needed tens of thousands of accurately labelled eye-gaze images.
That's when we came across the idea of using a game engine to produce near-photographic-quality images of the human eye along with the facial features around it. It turns out researchers at Cambridge and Carnegie Mellon had figured out how to use the Unity game engine to do almost exactly what we needed, in the form of UnityEyes. So we downloaded the code and gave it a try. Here is an example of a UnityEyes-generated image annotated with location features.
Unfortunately, the software didn't quite do what we wanted. It was designed to produce a large set of random eye/face images, but what we needed was a step-wise progression of them. On the plus side, UnityEyes did provide accurately labelled gaze data, since the game software synthesized everything, including all the location features we needed, though in a separate set of metadata files. We also wanted the images and metadata written with filenames that would sort easily, rather than the sequentially numbered filenames of randomly generated images and metadata. And there was no API for controlling the eye/face angles short of hacking the UnityEyes code.
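To illustrate the kind of easy-sorting filename scheme we were after, here is a minimal sketch. The field layout (zero-padded index plus sign-shifted yaw and pitch) is hypothetical, not UnityEyes's own naming; the point is just that fixed-width, non-negative fields make lexicographic order match capture order.

```python
def gaze_filename(index, yaw_deg, pitch_deg, ext="jpg"):
    """Build a sortable filename for one synthesized eye image.

    Hypothetical scheme: shift angles by +90 so the fields are
    non-negative, then zero-pad every field to a fixed width so
    that plain lexicographic sorting reproduces capture order.
    """
    return (f"img_{index:06d}"
            f"_y{yaw_deg + 90:05.1f}"
            f"_p{pitch_deg + 90:05.1f}.{ext}")

# Example: index 1, yaw -10°, pitch +5°
# -> "img_000001_y080.0_p095.0.jpg"
```

Because every field has a fixed width, a directory listing sorted by name walks the images in the same order they were generated.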
However, UnityEyes did provide a way to set eye/face angles via the keyboard and mouse in what it called "interactive" mode. So I worked out a way to programmatically move the mouse in small, controlled increments and then send mouse clicks and keyboard events to UnityEyes to generate labelled images and metadata, renaming the files for easy sorting. I drew on my previous experience writing GUI apps for Windows to generate the required Windows messages. It took some trial and error to figure out the right mouse-move increments while keeping the cursor inside the UnityEyes app window so that it, and not some other app, received the mouse and keyboard events. And things had to run slowly enough for the Windows messages to take effect. As a result, it took days to create the 25,000 labelled images on a computer doing nothing else. In the end I succeeded, and we were able to train the DNN.
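The increment planning described above can be sketched as follows. This is an illustrative reconstruction, not the actual tool: it breaks one cursor move into small steps so each synthetic mouse event stays within the app window's travel budget. The real Win32 delivery (e.g. `SetCursorPos` or `SendInput`, plus a short sleep between steps so the messages take effect) is Windows-only and omitted here.

```python
import math

def cursor_steps(start, target, max_step_px):
    """Split a cursor move into intermediate positions.

    Linearly interpolates from `start` to `target` so that no
    single step moves more than roughly `max_step_px` pixels on
    either axis -- small, controlled increments rather than one
    big jump the app might mishandle.
    """
    x0, y0 = start
    tx, ty = target
    span = max(abs(tx - x0), abs(ty - y0))
    if span == 0:
        return []
    n = math.ceil(span / max_step_px)
    return [(x0 + round((tx - x0) * i / n),
             y0 + round((ty - y0) * i / n))
            for i in range(1, n + 1)]

# A driver on Windows would iterate these positions, calling the
# message-sending API for each one and sleeping briefly between
# steps, then issue the capture keypress once the target is reached.
```

In a sweep over the desired eye/face angle grid, each grid point maps to one `cursor_steps` call, which is what makes the run slow but keeps every event landing inside the UnityEyes window.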