Can you tell us a bit about yourself and Peregrine?
Peregrine is a small company, actually a few companies, located in Dundas, Ontario, just west of Toronto. Peregrine Visual Storytelling Ltd. is the parent company, focused on producing original content and helping others do the same; Peregrine Labs is the R&D division focused on building technology; and Peregrine Visual Effects helps make pictures. Peregrine isn't just a company but also a lifestyle experiment: the way the industry works is constantly changing, and we'd like to build a more sustainable model.
My focus within the company is to make sure we're solving problems well. If we need to build new technology we will, but it's also nice to be able to leverage existing solutions. There's no single right answer, just good decisions.
What is Deep Imaging?
Deep Compositing is probably a more accurate name for it, and the simplest description is the ability to access multiple layers of pixel data in a single image.
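To make that concrete: where a flat image stores one value per pixel, a deep pixel stores a list of samples at different depths, which can be flattened back to a flat pixel with the familiar over operator. This is an illustrative Python sketch, not tied to any particular file format or renderer:

```python
# A deep pixel is a list of samples, each with a depth and premultiplied
# colour + alpha, rather than a single flat value. The sample layout and
# the flatten() operation below are illustrative only.

def flatten(samples):
    """Composite a deep pixel's samples front-to-back with the over operator."""
    samples = sorted(samples, key=lambda s: s["z"])  # nearest first
    r = g = b = a = 0.0
    for s in samples:
        w = 1.0 - a                 # transparency remaining in front of this sample
        r += w * s["r"]
        g += w * s["g"]
        b += w * s["b"]
        a += w * s["a"]
    return r, g, b, a

# Two samples: a half-transparent red surface in front of an opaque blue one.
deep_pixel = [
    {"z": 10.0, "r": 0.5, "g": 0.0, "b": 0.0, "a": 0.5},  # premultiplied values
    {"z": 20.0, "r": 0.0, "g": 0.0, "b": 1.0, "a": 1.0},
]
print(flatten(deep_pixel))  # (0.5, 0.0, 0.5, 1.0)
```

The key point is that the per-sample depth information survives until you choose to flatten, which is what makes depth-aware merging and holdouts possible downstream.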
Where did it all start?
The underlying technology was developed years ago, when someone invented the frame buffer, and then in 2000 Eric Veach and Tom Lokovic from Pixar published a paper on deep shadow maps. The first use of deep compositing, as I know it, was at Weta Digital, using those deep shadow maps to generate holdouts on The Day the Earth Stood Still. A few different studios were converging on the same workflow at the same time, so there may have been earlier cases where it was used.
Who has a Deep pipeline now?
It's hard to say, and I'm probably legally obliged not to - there are a few studios using it in earnest, and quite a few selectively attempting deep compositing solutions. Because there's no standard format, and the data isn't easy to generate or use, it's not very accessible at this stage.
As a compositor, why should I care?
If you've ever had to deal with Z depth passes and the issues inherent in using them, you should care - and beyond that, there are some pretty interesting ways of using deep images to improve your compositing workflow.
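The classic Z-pass problem is that one depth value per pixel forces a depth merge to pick a single winner per pixel, which falls apart on antialiased or motion-blurred edges where a pixel is partly foreground and partly background. A tiny sketch with made-up values:

```python
# An edge pixel where the foreground covers 40% of the pixel area
# (premultiplied alpha 0.4). Values are invented for illustration.
fg = {"rgb": 0.4, "a": 0.4, "z": 5.0}   # antialiased foreground edge fragment
bg = {"rgb": 1.0, "a": 1.0, "z": 9.0}   # opaque background

# Flat Z merge: the foreground is nearer, so the whole pixel is taken as
# foreground and the background's 60% contribution at the edge is lost.
z_merged = fg["rgb"] if fg["z"] < bg["z"] else bg["rgb"]
print(z_merged)       # 0.4

# Deep-style merge: composite the fragments over each other instead.
deep_merged = fg["rgb"] + (1.0 - fg["a"]) * bg["rgb"]
print(deep_merged)    # 1.0 = 0.4 + 0.6 * 1.0, the correctly blended edge
```

With deep samples every fragment keeps its own depth and coverage, so the merge blends edges correctly instead of producing the hard, aliased cut a flat Z pass gives you.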
How much bigger are the files?
It depends on the exact format, but they can get quite large - there's no golden rule, but you're talking orders of magnitude larger. Be sure your network can cope with the extra traffic too.
What apps support it?
As far as using deep data for compositing goes, Nuke has embraced it - I believe Eyeon's Fusion has a form of deep compositing, but I don't know if it's the same as what we've been discussing. We currently write a few plugins that support deep data in and outside the context of Nuke, but don't plan on expanding on that just yet.
What will OpenEXR mean for Deep Images?
Hopefully it will level the playing field and make it more accessible - as it stands, every renderer needs to write its own format, which means users have to build their own readers for Nuke. I think it will help with adoption too - I get the sense a lot of software vendors want to implement support but are waiting for EXR 2.0 rather than rolling their own and muddying the waters even more.
Do you see a common misconception/misunderstanding about Deep Data?
You bet - it's not a silver bullet, and it comes with a few caveats. Depending on how the data is written, there's a good chance the data isn't as deep as you'd hope: a renderer typically stores samples only up to the first completely opaque one, so if an opaque object was rendered that way, you can't remove it to reveal the objects behind it. If you want to do that, you either need a custom display driver that saves out samples prior to hidden-surface removal (but you're looking at some massive files) or you render in layers and deep merge them together.
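That render-in-layers workflow works because a deep merge is conceptually simple: combining two deep pixels is just the union of their sample lists, with occlusion resolved later when the samples are flattened. A minimal sketch, using an invented sample layout:

```python
# A deep merge keeps every sample from both inputs, interleaved by depth.
# Occlusion is only resolved when the merged pixel is later flattened
# front-to-back. The sample layout here is illustrative only.

def deep_merge(a_samples, b_samples):
    """Merge two deep pixels by interleaving their samples by depth."""
    return sorted(a_samples + b_samples, key=lambda s: s["z"])

layer_a = [{"z": 12.0, "a": 1.0}]                        # an opaque object
layer_b = [{"z": 8.0, "a": 0.3}, {"z": 15.0, "a": 1.0}]  # two fragments

merged = deep_merge(layer_a, layer_b)
print([s["z"] for s in merged])  # [8.0, 12.0, 15.0]
```

Note that the sample at z = 15 sits behind the opaque one at z = 12 and would contribute nothing once flattened - but because each layer was rendered separately, that hidden sample still exists, which is exactly what merging separate layers buys you over a single render that culled everything behind its first opaque sample.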
There are also pixel filter issues, at least when generating them via PRMan - deep shadows require 2x2 box filters, and that's probably not what your hero frames were created with, so there's a chance you'll have ghosting issues depending on what you're doing to the data.
Lastly, in many cases, you're just storing deep alpha data - not deep colour.
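Deep alpha on its own is still useful, though - the classic case being holdouts: accumulate the alpha of every sample in front of a given depth and you have the matte that occludes an element inserted at that depth. An illustrative sketch, not any real file format:

```python
# Given only deep alpha samples, compute the accumulated coverage of
# everything nearer than depth z - i.e. the holdout matte for an element
# placed at that depth. Sample layout is illustrative only.

def holdout_at(samples, z):
    """Accumulated alpha of all samples strictly in front of depth z."""
    a = 0.0
    for s in sorted(samples, key=lambda s: s["z"]):
        if s["z"] >= z:
            break
        a += (1.0 - a) * s["a"]   # standard over accumulation
    return a

deep_alpha = [{"z": 4.0, "a": 0.5}, {"z": 6.0, "a": 0.5}, {"z": 9.0, "a": 1.0}]
print(holdout_at(deep_alpha, 7.0))  # 0.75: matte for an element at z = 7
```

No colour data is needed for this at all, which is why deep shadow maps - which store only opacity - were enough to bootstrap the holdout workflow in the first place.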
Are there any dos and donts with Deep Data handling in comp?
Do use it sparingly, don't use it all the time.
Where do you see Deep Compositing going?
Beyond EXR 2.0? It would be really interesting if there was deep support on the GPU. Otherwise, I think it will just take the next evolutionary step - I'm sure at some point there will be some interesting stereo conversion tools that utilize it, maybe a tool to convert RAW files from the plenoptic camera into a deep buffer, and eventually we'll be compositing with point clouds. :)
What role do you see yourself and Peregrine in in the future of Deep Data?
We'd like to make sure the tools we develop support deep data where it makes sense. My personal involvement in the bigger picture has been discussing implementation with other software vendors and helping studios understand what it is; we're not involved in EXR 2.0, as there are bigger fish swimming in that sea. We'd like to make sure we're a part of what's next, beyond deep compositing - it's much more fun to innovate.