Skill level
2 (Intermediate)

I haven't worked with OpenGL, or more specifically OpenGL ES 3, since 2018. That work was in the context of creating WebAssemblies for the Chrome and Firefox browsers, compiled with the Emscripten framework from source code written in C++. OpenGL ES (ES for Embedded Systems) is a subset of OpenGL that supports only two types of shaders: vertex and fragment shaders. The full OpenGL spec is not supported in WebAssemblies because OpenGL calls eventually go through the WebGL 2 API supported by the web browsers. And, WebGL 2 only supports OpenGL ES 3 plus a few extensions specific to web browsers.

I first started using OpenGL 1.0 back in the early 2000s while using Java3D on Sun Microsystems' experimental Looking Glass 3-D desktop. This was when I was consulting for Sun Microsystems and the Burke Institute on the K-Web project. Java3D had a low-level wrapper API around OpenGL 1.0, with a higher-level API provided on top of it. I mostly used the higher-level API but sometimes had to drop down to the OpenGL-level API. Note that OpenGL 1.0 and later versions of OpenGL use completely different APIs.

In 2016, while working for the Barco New Experiences division, I was given the task of creating a web app that could handle 180° and 360° panoramic video cameras. The cameras generated video frames in three different projections: equirectangular, dual-hemisphere, and a conventional image with heavy barrel distortion. The only way to crop and dewarp these into a standard rectilinear projection was to use WebGL shader programming. I used the JavaScript library three.js, as it already had a shader module to handle 360° equirectangular projections. But, three.js did not have built-in shader modules to handle the dual-hemisphere and highly barrel-distorted projections, so I had to learn how to program the WebGL shaders with custom UV texture maps myself. This was my first experience with WebGL 2 shaders (and by extension OpenGL ES 3).
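To give a feel for what that custom UV mapping involves, here is a minimal sketch of a dual-hemisphere lookup: given a unit view direction, it computes where in a side-by-side dual-fisheye frame the corresponding pixel lives. This is not the original shader code; it assumes an idealized equidistant fisheye model (radius proportional to the angle off the lens axis) with each hemisphere covering exactly 180°, and all names are illustrative. In the real pipeline this math runs per fragment in GLSL.

```javascript
// Map a unit view direction (x, y, z) to UV coordinates in a 2:1
// side-by-side dual-hemisphere texture. Assumes an equidistant fisheye
// model; the front lens looks down +z, the back lens down -z.
// Illustrative sketch only, not the production shader.
function dualFisheyeUV(x, y, z) {
  const front = z >= 0;
  // Angle between the view direction and the optical axis of the
  // lens that sees this direction (0 at the center of the fisheye).
  const axisZ = front ? z : -z;
  const theta = Math.acos(Math.min(1, Math.max(-1, axisZ)));
  // Equidistant model: normalized radius 0..1 as theta goes 0..90 deg.
  const r = theta / (Math.PI / 2);
  // Unit direction within the image plane of the lens.
  const len = Math.hypot(x, y);
  const dx = len > 0 ? x / len : 0;
  const dy = len > 0 ? y / len : 0;
  // Center of the appropriate half of the side-by-side texture.
  const cx = front ? 0.25 : 0.75;
  // Mirror x for the back lens so its image is not left-right flipped.
  const sx = front ? 1 : -1;
  return { u: cx + sx * dx * r * 0.25, v: 0.5 + dy * r * 0.5 };
}
```

A rectilinear output is then produced by casting a ray per output pixel through a virtual pinhole camera and sampling the source texture at the UV this function returns, which is what undoes the warp.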

In 2017, while at Barco Labs, I needed to determine whether WebAssemblies could handle live WebRTC videos as textures within OpenGL ES. The idea was to import live traffic-cam streams transcoded to WebRTC and then convert the video frames into OpenGL textures. The video textures would then be rendered onto "billboards" within a 3-D common-operational mashup, with fly-over animations and terrain/infrastructure models provided by a game engine. The entire 3-D mashup with videos would then be exported as a WebAssembly that could be rendered directly in the Chrome and Firefox web browsers without any special plug-ins.

Because OpenGL doesn't know anything about browser-specific WebRTC videos, it lacks any API to pull this off. But, WebGL does. Because the OpenGL API within WebAssemblies eventually calls the corresponding WebGL API, it seemed possible, with some low-level magic, to get around this problem. Without going into details, I eventually succeeded by using some undocumented low-level hooks within the Emscripten framework. In my proof-of-concept demo, I was able to display a live WebRTC video stream as a texture on the face of a rotating cube, with an Emscripten-compiled WebAssembly being executed by the Chrome web browser. That success cleared the way for exploring game engines as a means of doing 3-D common-operational mashups combined with WebRTC live videos.
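The WebGL side of this trick is standard and documented: WebGL (unlike OpenGL proper) accepts an HTMLVideoElement directly as the pixel source for texImage2D, so each call copies the video's current frame into the bound texture. The sketch below shows only that browser-side mechanism, not the undocumented Emscripten hooks mentioned above; function names are illustrative.

```javascript
// Create a texture configured for video frames. Video dimensions are
// rarely powers of two, so clamp the wrap modes and skip mipmaps.
function createVideoTexture(gl) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return texture;
}

// Called once per animation frame: re-upload the video's current frame
// into the texture. Returns false if no frame has been decoded yet.
function updateVideoTexture(gl, texture, video) {
  if (video.readyState < 2 /* HAVE_CURRENT_DATA */) return false;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // WebGL accepts the HTMLVideoElement itself as the pixel source.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return true;
}
```

The Emscripten-specific work was essentially in arranging for the WebAssembly's OpenGL texture handle to be the one these JavaScript-side calls update each frame.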

Experiences using this skill are shown below:

Barco Labs (research)

[I know, this section just echoes the same stuff as on the résumé. I plan to expand it later.] Worked with PhDs, staff, and university interns researching disruptive technologies. Barco Labs deliverables are research papers, patents, and demos. Any research that might become a viable product in 2 to 5 years is then passed off to one of the product divisions. (Due to the trade-secret nature of this research, some details cannot be revealed.) Accomplishments: