What's your background?
I have a mixed background in technology and media from a young age. Like other movie-obsessed kids I gravitated towards visual effects early on, but didn't really find a path for my creativity until the video revolution of the '80s allowed greater access to movie-making tools. In college I moved from my father's Super-8 camera to video production gear and immediately began hacking on it, which pushed me towards video engineering and my first real job at a large post-production house in downtown Boston. It was around that time that I started doing more serious hardware and software engineering, designing and constructing a realtime motion-control/performance-capture system.
After moving to Los Angeles in 1991 to pursue a career in animatronics and motion control, I watched the industry make a 90-degree turn technology-wise when the "Jurassic Park" go-motion stages were shut down in favor of CG dinosaurs. At that moment I decided to move into CG-based vfx.
How did you get to DD?
After moving to LA I took a job at Pacific Ocean Post in Santa Monica where I met Price Pethel's son Mike, a telecine colorist there. Price Pethel is the compositing/imaging expert who helped start up Digital Domain. I was hired as DD's video engineer in 1993 and spent the next year or so building the facility.
What did you do at DD pre Nuke?
Aside from video engineering early on, I've always worked in vfx production. My work on Nuke throughout my tenure at DD was almost exclusively a side project. I began learning production compositing at night during "Apollo 13" in 1995, then became a full-fledged compositor on "T23D" in 1996, handing off my video duties. After comping for a couple of years on various shows I was given the opportunity to be comp supe on "The Fifth Element" in 1997, then dfx supe for the first time on "Supernova" in 1999. I continued being a dfx/vfx supe until the time I left DD (the first time) in 2005.
What was your contribution to Nuke?
I was co-author of the architectural redesign of Nuke2 along with Bill Spitzak, and I created the 3D system from the ground up.
The need for the architectural redesign was born out of comping some of the extremely complex shots (for the time) on "Titanic", where a few key architectural limitations in Nuke2 were exposed: only four channels, no 3D visualization, and simplistic cache management.
For example, getting beyond the four-channel limit on one particular shot required an enormous amount of cloning and duplication of nodes, which really highlighted the limitations of restricting the channel set at all (all I needed was one more channel!). Incidentally, that script is the one reprinted in the back of Ron Brinkmann's compositing book.
That same script also highlighted the need for a 'real' 3D system - at least one that allowed the user to visualize 3D space from outside the taking camera. There were many groups of photographed extras placed on cards to fill in empty areas of Titanic's dock, and since there was no way to orbit around the scene or manipulate a 3D axis, everything had to be placed numerically.
The 3D system was quite a challenge for someone who had nearly zero 3D experience, and I attribute its success to my naivety about what I was really getting myself into... The renderer alone went through at least four major code/design revisions.
When did you start working on Nuke?
I started hacking on Nuke2 around 1998 to help fix a few bugs that were really annoying to me as a user. That gave me a taste for what software engineering on a production app could accomplish. When Bill came back to the software department to help kickstart the Nuke3 development, I decided to contribute further by designing and writing the Viewer code to make sure that the knob interface allowed support for real 3D manipulators. That led to designing and writing the entire 3D system.
My first attempt at this was a proof-of-concept app in the mid-'90s called 'Fluke', a pen-based OpenGL GUI (which looked suspiciously like Flame) that used Nuke2 as its back-end engine. This provided insight into how a scanline processing engine could be integrated with a 3D environment.
Did you look at any 3D package for inspiration?
Primarily Houdini as it seemed the closest in concept and execution to Nuke's procedural design. It was/is heavily used at DD and Nuke's camera was specifically made to closely mimic Houdini's to guarantee a close match when sharing camera data. Nuke's internal geometry definition is similar to Houdini's but differs in some important ways.
What is your favorite node/concept?
Well, this might seem silly, but I'm pretty darn proud of the NEAT - or "Numeric Entry Adjuster/Twiddler" (just kidding, we weren't that vain). That's the ability to increment/decrement a numeric digit and navigate through its decades using only the cursor keys. This may seem minor, but it had a huge impact on the speed of color adjustments and the ability to tweak in general. Coupled with the speed of the new caching system it allowed a user to easily 'walk' a parameter into place without having to look at the keyboard. In Nuke2 you had to select the value, type in the new value, then hit return - repetitively. In Flame you could scrub a value easily, but it lacked the ability to really fine-tune with tiny increments. This feature became so useful that I really miss it when I use other applications.
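The interaction described above can be sketched in a few lines. This is a hypothetical illustration of the digit/decade idea, not Nuke's actual implementation: up/down keys step the value by the currently selected power of ten, and left/right move between decades so you can go from coarse to fine adjustment without touching the keyboard's number row. All names here are invented for the example.

```python
class DigitTwiddler:
    """Toy model of decade-based numeric entry (illustrative only)."""

    def __init__(self, value=0.0, decade=0, min_decade=-4, max_decade=4):
        self.value = value
        self.decade = decade          # step size is 10**decade
        self.min_decade = min_decade  # finest allowed digit
        self.max_decade = max_decade  # coarsest allowed digit

    def step(self):
        return 10.0 ** self.decade

    def key(self, name):
        # Up/Down adjust the value by the selected decade's step;
        # Left/Right select a coarser or finer decade to adjust.
        if name == "Up":
            self.value += self.step()
        elif name == "Down":
            self.value -= self.step()
        elif name == "Left":
            self.decade = min(self.decade + 1, self.max_decade)
        elif name == "Right":
            self.decade = max(self.decade - 1, self.min_decade)
        return self.value
```

For example, starting at 1.0 with the tenths decade selected, pressing Up gives 1.1; pressing Right then Up nudges it to 1.11 - the 'walking a parameter into place' behavior the answer describes.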
Any particular part you are proud of?
Well, I'm pretty darned proud of the entire 3D system despite its limitations. Other than that, I think the state hashing mechanism is one of the reasons why Nuke is so fast, and it's really under-appreciated.
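A minimal sketch of the general idea behind state hashing, assuming the usual scheme for this kind of node graph (this is not Nuke's actual code): each node folds its own parameters and its inputs' hashes into a state hash, and cached results are keyed by that hash, so any subtree whose state is unchanged is never recomputed. All class and function names are invented for the example.

```python
import hashlib


class Node:
    """Toy graph node whose state hash covers params plus all upstream state."""

    def __init__(self, name, inputs=(), **params):
        self.name = name
        self.inputs = list(inputs)
        self.params = dict(params)

    def state_hash(self):
        h = hashlib.sha1(self.name.encode())
        for k in sorted(self.params):           # deterministic ordering
            h.update(f"{k}={self.params[k]}".encode())
        for node in self.inputs:
            h.update(node.state_hash().encode())  # upstream changes propagate
        return h.hexdigest()


cache = {}


def evaluate(node):
    # Cache results by state hash: identical state means a guaranteed hit,
    # while any parameter change anywhere upstream forces a fresh compute.
    key = node.state_hash()
    if key not in cache:
        cache[key] = f"render({node.name})"  # stand-in for real image work
    return cache[key]
```

The design point is that the hash is cheap to compute relative to image processing, so the engine can decide "nothing changed, reuse the cache" without re-running any pixels - which is why a fast cache plus state hashing makes interactive tweaking feel instantaneous.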
Is there a bug you would most like fixed?
File read error handling is still buggy and impacts our lighters every day. In the 3D system the inefficient memory management of the ScanlineRender node has caused many render memory problems - it needs some internal re-architecting.
Is there a particular feature you would like to see added?
Access to the Qt library in order to build custom knob interfaces, plus moving the existing knob definitions into DDImage so that existing knob types can be extended into larger knob structures.
Where are you now?
ImageMovers Digital for the rest of the year, then moving on to an animation studio in the SF bay area.
How custom is their (IMD's) Nuke install?
Very custom - asset management, custom 3D geometry I/O, geometry processors, shaders, renderer file support, custom renderers, etc. There are at least 20 plugins specifically developed to support our lighting pipeline in order to speed up a lighter's workflow. We use RenderMan heavily, so there are many plugins dedicated to helping integrate RenderMan into Nuke, including the ability to read and write .tex, .shad, .dshad, .ptc, and .bkm files. There's even a RenderMan-driven replacement for the ScanlineRender node.
Any plans to write any plugins?
I write many, many plugins for the facility I'm working for, but have no plans to market any plugins commercially.