Deep2VP v4.2
12.2, 12.1, 12.0, 11.3, 11.2, 11.1, 11.0, 10.5 or later
Linux, Mac, Windows
Video: https://youtu.be/BXArfKtTvEE
Video: https://youtu.be/zl6WVG4Mhag
The Deep2VP suite is a toolset that converts deep data to world-space position and provides all the tools you need around that deepPosition data: matte, relight and projection in deep.
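As a rough illustration of what that conversion involves, below is a minimal sketch of the usual deep-sample-to-world-position math, assuming the sample depth is stored in deep.front and a standard Nuke perspective camera. The function and parameter names are illustrative only; this is not the Deep2VP implementation itself.

```python
# Minimal sketch, not the Deep2VP internals: map one deep sample to a
# world-space position from the pixel, its depth, and the camera.
def sample_to_world(px, py, depth, width, height,
                    focal, haperture, world_matrix):
    """px, py        -- pixel coordinates
    depth         -- camera-space depth of the sample (assumed deep.front)
    focal         -- camera focal length in mm
    haperture     -- horizontal aperture in mm
    world_matrix  -- 4x4 camera-to-world matrix, row major nested lists
    """
    vaperture = haperture * height / float(width)   # square pixels assumed

    # Pixel centre in normalised device coordinates (-1..1).
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = (py + 0.5) / height * 2.0 - 1.0

    # Camera-space position; the camera looks down -Z.
    cam_x = ndc_x * (haperture / (2.0 * focal)) * depth
    cam_y = ndc_y * (vaperture / (2.0 * focal)) * depth
    cam = (cam_x, cam_y, -depth, 1.0)

    # Transform into world space with the camera's world matrix.
    return tuple(sum(world_matrix[r][c] * cam[c] for c in range(4))
                 for r in range(3))
```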
Deep2VP 4.0 added DVPColorCorrect for the DVP matte nodes, a shader system for relighting in deep, deep normal estimation from deep position, plus assorted fixes and enhancements.
* This toolset only works on Nuke 11+.
** Nuke 10 can only use Deep2VP and DVPToImage.
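The deep normal estimation mentioned above is, in spirit, the familiar trick of crossing two screen-space derivatives of the position data. A minimal sketch of that idea with illustrative names only (not the node's actual internals):

```python
import math

def normal_from_position(p, p_right, p_up):
    """Estimate a surface normal from neighbouring world positions.

    p, p_right, p_up -- (x, y, z) world positions of a pixel and its
                        neighbours one pixel to the right and one pixel up.
    """
    dx = [p_right[i] - p[i] for i in range(3)]
    dy = [p_up[i] - p[i] for i in range(3)]

    # Cross product of the two tangents gives the surface normal.
    n = [dx[1] * dy[2] - dx[2] * dy[1],
         dx[2] * dy[0] - dx[0] * dy[2],
         dx[0] * dy[1] - dx[1] * dy[0]]

    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return [c / length for c in n]
```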
changelog
4.2
DVPfresnel & DVPscene
- DVPfresnel rolled back to the 4.0 version.
If Nuke crashes when creating these nodes, please create another node first, such as Deep2VP or DVPsetLight. After that they will no longer crash.
4.1
DVPfresnel
- creating this node crashed in the previous version; this is fixed in this version, but without ToonShader support.
4.0
Deep2VPosition (Deep2VP)
- added camera setting to metadata
- removed bake and copy buttons
- renamed to Deep2VP
- generate/select/import deepNormal in this node
- added normal generation in deep, while keeping the previous method
DVPort (DVPortal)
- renamed to DVPortal
DVPmatte
- removed 'option' knob, 2D matte can use 'open matte' instead
- 'open matte' can show either matte or color
DVPattern
- internal setup same as DVPmatte
- added rotation knob
- support 'open matte'
DVProjection
- removed bake and copy buttons
- removes all metadata created by Deep2VP
DVPsetLight
- added shader setting and input shader
- no longer requires a linked camera
- removed deepNormal setup, moved to Deep2VP node
DVPscene
- added multiple output options
- removes all metadata created by the DVP lighting system
DVPrelight
- added specular setup
- added toon shade setup
- fixed pointcloud preview with effects
DVPrelightPT
- split the point light off from DVPrelight into its own node
- fixed the point light duplication algorithm
DVPfresnel
- no longer requires a linked camera
- fixed the unpremult process; the result was too dark
DVPToImage
- removes all metadata created by Deep2VP
- uses the same node color as Deep2VP
new nodes:
DVPColorCorrect
added shaders:
DVP Shader
DVP Toon Shader
3.8
Deep2VPosition
- added metadata setup for DVPmatte's multi matte color fix.
DVPmatte
- removed 2 impractical operations
- removed falloff type selection, use exponential setting instead
- added metadata setup to fix the multi-matte process
DVProjection
- linked camera now uses the world matrix instead of transformation knobs
- removed scale and skew
- outputs the deepNormal channel correctly (can be found on the Misc tab)
DVPsetLight
- fixed a conflict when the input already carries a deep normal pass in the deepNormal channel
- fixed 'generated normal' with unpremult; this fixes the normal output in 2D
- added 2D normal input with unpremult; this fixes the normal output in 2D
- input deep normal channel default changed to 'deepNormal'
DVPrelight
- removed unnecessary knobs under the different light types to make the interface clearer
- point light updated with a more accurate algorithm
- simplified the falloff option on point light and spot light
- pointcloud preview updated to show the input color
- added a world scale unit under point light and spot light; this relates to light intensity
- optimized the point light setup; the node is much lighter
DVPfresnel
- fixed fresnel output in screen space
- added unpremult before processing
- replaced gamma with exponential
DVPscene
- 'mix' knob name changed to 'light_shading'
DVPToImage
- due to the DVPsetLight update, it now outputs deepNormal correctly
- now removes the metadata created by Deep2VPosition
Comments
"DeepExpression 9: nothing named 'status_x' "
I edited the expressions from status_x to [value status_x] instead and got it working now.
Thank you for the report and the fix.
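For anyone hitting the same error, applying that substitution from the Script Editor could look roughly like the sketch below. The group name is a placeholder and looping over every knob is an assumption about the gizmo's internals; the only known part is the status_x to [value status_x] substitution.

```python
import nuke

# Placeholder name; point this at the affected Deep2VP/DVP group node.
group = nuke.toNode('Deep2VP1')

with group:
    for node in nuke.allNodes('DeepExpression'):
        for knob in node.allKnobs():
            val = knob.value()
            # Only touch string expressions that still reference the raw knob name.
            if isinstance(val, str) and 'status_x' in val and '[value status_x]' not in val:
                knob.setValue(val.replace('status_x', '[value status_x]'))
```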
Exactly what I needed for my CardToTrack;
I was looking for a way to convert deep to world position.
Would it be okay if I use part of your setup in my gizmo?
I will leave a huge credit backdrop, I promise!
Just updated to 2.0.
The font you use in your 'group' nodes is not supported on Linux Mint; maybe it is better to choose "Arial" or some other common font.
I have a question (in your video at 6:45):
after you use DVPmatte to cut out part of the image and grade this patch, in the next step you simply combine this graded patch with the original deep using a DeepMerge.
It did not work for me since the patch and the original have the same deep coordinates; as a workaround I had to deep-transform my patch slightly in 'z' to put it up front. I am wondering why you did not have the same issue?
Thank you very much!
Thanks for your feedback. I did that DVPmatte demo quick and dirty; the proper way would be to use the 'invert matte' checkbox and merge it with the matte area. There might be a gap issue between the mattes, so the size needs a slight adjustment, like what I did in the DVPattern demo.
As for why it works in my demo, I think it's because I have anti-aliasing across the whole deep image. If you check out DVPToImage in the video, you can see that I have multiple semi-transparent deep samples in each pixel, and that's why it works in the demo.
About the fonts, honestly I didn't do anything special, I just stuck with Nuke's default setting. I will take a look.
I see the gap you mentioned.
I think in this case it may indeed be a bit more efficient to transform the patch by 'Z' -0.001 before combining, and in the DeepMerge check 'Drop hidden samples' to get rid of the deep samples behind the patch.
(It will also allow wrapping it into a 'DeepCC masked' group.) Fun!
Ok will play with it, thank you Mark.
Cheers!
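A quick sketch of the workaround described above, built from stock Nuke deep nodes. The input node names are placeholders, and 'drop_hidden' is an assumed knob name for the 'Drop hidden samples' checkbox.

```python
import nuke

patch    = nuke.toNode('DeepColorCorrect1')   # the graded patch branch (placeholder name)
original = nuke.toNode('DeepRead1')           # the untouched deep plate (placeholder name)

# Nudge the patch slightly in Z, as in the comment above, so it sits in front.
offset = nuke.nodes.DeepTransform(inputs=[patch])
offset['translate'].setValue([0.0, 0.0, -0.001])

# Merge it back over the original and drop the samples it now hides.
merge = nuke.nodes.DeepMerge(inputs=[original, offset])
merge['drop_hidden'].setValue(True)           # assumed knob name for 'Drop hidden samples'
```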
I've got a new solution for handling color correction with a matte in 4.0: I added DVPColorCorrect. It takes the matte from those nodes to do the color correction, instead of two separate inputs and a merge, so it solves the gap between the two mattes.
I just came across this. This looks like an amazing tool. I work in VFX using Unreal. The #1 problem we have with integrating into a normal pipeline is our lack of a rendered deep pass. But you seem to have figured out how to convert deep to world space. Now I am curious: do you happen to know how I could convert world position to deep data for compositors? In Unreal I can render out scene depth and world position but no deep, and our compositors refuse to use anything that doesn't include deep data for blending. If I had a way to take my world-position pass from Unreal and convert it into deep data, it would be phenomenal. Any time and help is greatly appreciated.
http://www.nukepedia.com/gizmos/deep/deepfromposition/finishdown?mjv=1
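For what it's worth, the general idea behind any world-position-to-deep conversion is to move each pixel's world XYZ into camera space and use the camera-space distance as the sample's front/back depth. It only yields one opaque sample per pixel, so it is not a full replacement for a true deep render. A rough sketch, assuming a rigid camera-to-world matrix (rotation and translation only); the function name and layout are illustrative:

```python
def world_to_deep_depth(world_xyz, cam_world_matrix):
    """Camera-space depth for one pixel of a world-position pass.

    world_xyz        -- (x, y, z) sampled from the world-position AOV
    cam_world_matrix -- 4x4 camera-to-world matrix, row major nested lists,
                        assumed rigid (no scale or shear)
    """
    m = cam_world_matrix
    # Camera position is the translation column of the camera-to-world matrix.
    cam_pos = (m[0][3], m[1][3], m[2][3])
    rel = [world_xyz[i] - cam_pos[i] for i in range(3)]

    # For a rigid transform the inverse rotation is the transpose,
    # so camera-space Z is the dot product with the matrix's third column.
    cam_z = sum(m[i][2] * rel[i] for i in range(3))

    # Nuke cameras look down -Z, so depth (deep.front) is the negated Z.
    return -cam_z
```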