Working with deep images in Nuke 7

Written by Pradeep M.

Nuke's powerful deep compositing tool set gives you the ability to create high-quality digital images faster. Deep compositing is a way of compositing images that carry additional depth data. It helps eliminate artifacts around the edges of objects and reduces the need to re-render: you render the background once and can then move the foreground objects to different positions and depths in the scene. Deep images contain multiple samples per pixel at various depths, and each sample stores per-pixel information about color, opacity, and depth.

DeepRead Node
The DeepRead node is used to read deep images into the script. In Nuke, you can read deep images in two formats: DTEX (generated by Pixar's PhotoRealistic RenderMan Pro Server) and scanline OpenEXR 2.0.

Note: The tiled OpenEXR 2.0 files are not supported by Nuke.

The parameters in the DeepRead node properties panel are similar to those of the Read node.
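
If you prefer to build the script programmatically, the node can also be created through Nuke's Python API. The following is a minimal sketch; the file path is a placeholder.

import nuke

# Create a DeepRead node pointing at a deep scanline EXR (path is a placeholder)
deep_bg = nuke.nodes.DeepRead(file='/path/to/deep_bg.exr')

# Connect it to the first Viewer input, the scripted equivalent of pressing 1
# (viewer inputs are 0-indexed in the Python API)
nuke.connectViewer(0, deep_bg)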

DeepMerge Node
The DeepMerge node is used to merge multiple deep images. It has two inputs, A and B, to which you connect the deep images you want to merge. The options in the operation drop-down in the DeepMerge tab of the DeepMerge node properties panel specify the method for combining the deep images. By default, combine is selected in this drop-down; as a result, Nuke combines samples from the A and B inputs. The drop hidden samples check box is only available when combine is selected in the operation drop-down. When this check box is selected, all samples that have an alpha value of 1 and lie behind other samples are discarded. If you select holdout from the operation drop-down, the samples in the B input are held out by the samples in the A input. As a result, samples in the B input that are occluded by samples in the A input are removed or faded out.
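
For reference, here is a minimal Python sketch of the same setup. The input order and the operation knob name are assumptions based on Nuke's standard merge-style nodes; the file paths are placeholders.

import nuke

# Two deep sources (paths are placeholders)
fg = nuke.nodes.DeepRead(file='/path/to/deep_tree.exr')
bg = nuke.nodes.DeepRead(file='/path/to/deep_bg.exr')

# Merge them; input 1 is assumed to be A and input 0 to be B
merge = nuke.nodes.DeepMerge()
merge.setInput(1, fg)   # A input
merge.setInput(0, bg)   # B input

# Switch from the default 'combine' to 'holdout' (knob name assumed to be 'operation')
merge['operation'].setValue('holdout')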

DeepTransform Node
The DeepTransform node is used to reposition deep data along the x, y, and z axes. The x, y, and z fields corresponding to the translate parameter are used to move the deep data. The zscale parameter is used to scale the z depth values of the samples. If you want to limit the z translate and z scale effects to the non-black areas of a mask, connect an image to the mask input of the DeepTransform node.
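
A minimal Python sketch of the same controls, assuming the knobs are named 'translate' and 'zscale' after their labels; the file path is a placeholder.

import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_tree.exr')  # placeholder path

# Move the deep samples and scale their depth values
xform = nuke.nodes.DeepTransform()
xform.setInput(0, deep)
xform['translate'].setValue([0, 10, 0])  # shift 10 pixels up in y, no x/z shift
xform['zscale'].setValue(2.0)            # push the samples twice as deep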

DeepReformat Node
The DeepReformat node is the Reformat node for deep images.

DeepSample Node
The DeepSample node is used to sample a pixel in a deep image. When you add a DeepSample node in the Node Graph panel, a pos widget is displayed in the Viewer panel. Move the widget in the Viewer panel to display the sample information in the DeepSample node properties panel.
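
The pos widget can also be positioned from Python, assuming the widget is backed by a knob named 'pos' (a label-based guess); the file path and pixel coordinates below are placeholders.

import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_bg.exr')  # placeholder path

# Sample the deep pixel at x=360, y=243; the samples appear in the properties panel
sampler = nuke.nodes.DeepSample()
sampler.setInput(0, deep)
sampler['pos'].setValue([360, 243])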

DeepToImage Node
The DeepToImage node is used to flatten a deep image. It converts a deep image to a regular 2D image.

DeepWrite Node
The DeepWrite node is the Write node for deep images. It is used to render all upstream deep nodes to the OpenEXR 2.0 format. Tiled OpenEXR files are not supported by this node.
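
The sketch below contrasts the two output paths: a DeepWrite for the deep data itself, and a regular Write after a DeepToImage for a flattened 2D render. The paths and frame range are placeholders.

import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_bg.exr')  # placeholder path

# Write the deep data itself to scanline OpenEXR 2.0
deep_out = nuke.nodes.DeepWrite(file='/path/to/out_deep.%04d.exr')
deep_out.setInput(0, deep)

# Or flatten first and write a regular 2D image
flat = nuke.nodes.DeepToImage()
flat.setInput(0, deep)
flat_out = nuke.nodes.Write(file='/path/to/out_flat.%04d.exr')
flat_out.setInput(0, flat)

# Render both writers for frame 1 (frame range is a placeholder)
nuke.execute(deep_out, 1, 1)
nuke.execute(flat_out, 1, 1)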

DeepColorCorrect Node
The DeepColorCorrect node is the ColorCorrect node for deep images, with an additional Masking tab. The options in this tab are used to control the depth range in which the effect of the color correction is visible. Select the limit_z check box and then adjust the trapezoid curve; the values in the A, B, C, and D fields will change. The value in the A field indicates the depth where the color correction starts, the values in the B and C fields indicate the range where the color correction is at full effect, and the value in the D field indicates the depth where the color-correction effect stops. You can use the mix slider to blend between the color-corrected output and the original image.

Note: You can use the DeepSample node to know the precise depth values and then enter them in the A, B, C, and D fields.
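
As a rough Python sketch of the same controls: the knob names below (limit_z, A, B, C, D, mix) are guesses based on the labels in the Masking tab and may differ in your Nuke version, so check the node's properties panel; the depth values and file path are placeholders.

import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_bg.exr')  # placeholder path

cc = nuke.nodes.DeepColorCorrect()
cc.setInput(0, deep)

# Restrict the correction to a depth band (knob names are assumptions)
cc['limit_z'].setValue(True)
cc['A'].setValue(5.0)     # correction starts fading in at depth 5
cc['B'].setValue(10.0)    # full effect between depth 10...
cc['C'].setValue(40.0)    # ...and depth 40
cc['D'].setValue(60.0)    # effect fades out completely by depth 60
cc['mix'].setValue(0.75)  # blend 75% corrected with 25% original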

DeepToPoints Node
The DeepToPoints node is used to create a point cloud from deep data. You can use the points for position reference. To create the point cloud, connect the deep input of the DeepToPoints node to the deep image. If you want to view the point cloud through a camera, connect the camera input to a Camera node and then switch to the 3D view. In the properties panel of the DeepToPoints node, you can use the point detail and point size parameters to change the density and size of the points, respectively.
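
A minimal Python sketch of this setup; the input order on the DeepToPoints node is an assumption, and the file path is a placeholder.

import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_bg.exr')  # placeholder path
cam = nuke.nodes.Camera2()  # a Camera node to view the point cloud through

points = nuke.nodes.DeepToPoints()
points.setInput(0, deep)    # deep input (input order is an assumption)
points.setInput(1, cam)     # camera input

# Point detail and point size can then be adjusted in the properties panel
# (their exact knob names may vary between Nuke versions).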

TUTORIAL
Before you start the tutorial, navigate to the following link and download the file to your hard drive: http://www.mediafire.com/download/34h9mew93ff6izh/nt008.zip. Next, extract the contents of the zip file.

Step - 1
Create a new script in Nuke.

Step - 2
Open the Project Settings panel and then select NTSC 720x486 0.91 from the full size format drop-down.
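
If you are scripting the setup, the equivalent Python call is roughly the following, assuming the NTSC 720x486 0.91 preset is registered under the format name 'NTSC'.

import nuke

# Set the project's full size format to the NTSC 720x486 0.91 preset
nuke.root()['format'].setValue('NTSC')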

Step - 3
Choose the DeepRead option from the Deep menu; the Read File(s) dialog box will be displayed. In this dialog box, select the deep_bg.exr file. Next, choose the Open button; the DeepRead1 node will be inserted in the Node Graph panel.

Step - 4
Next, press 1; the output of the DeepRead1 node will be displayed in the Viewer1 panel, as shown in Figure nt8-1.

Figure nt8-1

Step - 5
Similarly, read in the deep_tree.exr file. Next, press 1; the output of the DeepRead2 node will be displayed in the Viewer1 panel, as shown in Figure nt8-2.

Figure nt8-2

Step - 6
Select the deep option from the Channel Sets drop-down; the deep data will be displayed in the Viewer1 panel, as shown in Figure nt8-3. Next, select rgba from the Channel Sets drop-down.

Figure nt8-3

Next, you will sample a pixel in the deep image.

Step - 7
Select the DeepRead1 node and then add a DeepSample node from the Deep menu; the DeepSample1 node will be connected to the DeepRead1 node. Make sure the DeepRead1 node is selected and then press 1 to connect it to the Viewer.

Step - 8
Move the pos widget in the Viewer. You will notice that the information about the pixel underneath the pos widget is displayed in the DeepSample1 node properties panel, refer to Figure nt8-4.

Figure nt8-4

Step - 9
Delete the DeepSample1 node from the Node Graph panel.

Step - 10
Select the DeepRead2 node and then choose DeepMerge from the Deep menu; the A input of the DeepMerge1 node will be connected with the DeepRead2 node.

Step - 11
Make sure the DeepMerge1 node is selected and then press 1 to connect it to the Viewer1 node.

Step - 12
Connect the B input of the DeepMerge1 node with the DeepRead1 node; the output of the DeepMerge1 node will be displayed in the Viewer, refer to Figure nt8-5.

Figure nt8-5

Next, we will move the result of the DeepRead2 node using the DeepTransform node.

Step - 13
Select the DeepRead2 node and then choose DeepTransform from the Deep menu; the DeepTransform1 node will be inserted between the DeepRead2 and DeepMerge1 nodes.

Step - 14
In the DeepTransform tab of the DeepTransform1 node properties panel, enter 10 in the y field corresponding to the translate parameter; the tree will move to a new position. Figures nt8-6 and nt8-7 display the position of the tree with the y value set to 10 and 50, respectively. Experiment with different values.

Figure nt8-6
Figure nt8-7

Notice in Figure nt8-7 that the bounding box is outside the frame size. Next, you will use the DeepCrop node to crop the result of the DeepTransform1 node.

Step - 15
Select the DeepTransform1 node and then choose DeepCrop from the Deep menu; the DeepCrop1 node will be inserted between the DeepTransform1 and DeepMerge1 nodes.

You will notice that the tree has disappeared. Next, you will fix it.

Step - 16
In the DeepCrop tab of the DeepCrop1 node, select the keep outside zrange check box.

Note: You can also adjust the size of the bounding box. To do so, adjust the crop box in the Viewer. Alternatively, enter values in the x, y, r, and t fields corresponding to the bbox parameter. Select the keep outside bbox check box to keep the samples outside the bounding box. You can use the znear and zfar parameters to crop samples in depth. Select the keep outside zrange check box if you want to keep the samples outside the range defined by the znear and zfar parameters.
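
A rough Python equivalent of the crop controls described above; the knob names follow the labels in the properties panel and are assumptions, and the file path and values are placeholders.

import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_tree.exr')  # placeholder path

crop = nuke.nodes.DeepCrop()
crop.setInput(0, deep)

# Limit the bounding box (knob name 'bbox' is an assumption)
crop['bbox'].setValue([0, 0, 720, 486])  # x, y, r, t

# Crop in depth (knob names 'znear'/'zfar' are assumptions)
crop['znear'].setValue(1.0)
crop['zfar'].setValue(100.0)

# The keep outside bbox / keep outside zrange check boxes invert which side of
# each range is kept; toggle them in the properties panel as in Step 16.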

Step - 17
In the DeepMerge tab of the DeepMerge1 node properties panel, select holdout from the operation drop-down; a holdout will be created. Figure nt8-8 displays the holdout in the alpha channel.

Figure nt8-8

Step - 18
Now, select combine from the operation drop-down.

Next, you will merge a standard image with the deep data. To do so, you first need to flatten the deep image using the DeepToImage node.

Step - 19
Select the DeepMerge1 node and then choose DeepToImage from the Deep menu; the DeepToImage1 node is connected with the DeepMerge1 node.

Note: In the DeepToImage tab of the DeepToImage1 node properties panel, the volumetric composition check box is selected by default. On clearing this check box, Nuke assumes that samples do not overlap and takes only the front depth of each pixel into consideration. This also reduces processing time. However, if you have overlapping pixels, the output may differ from what you expect.
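
If you want to toggle this from Python, the check box is presumably backed by a knob named after its label; the knob name below is an assumption.

import nuke

flat = nuke.toNode('DeepToImage1')

# Disable per-sample volumetric compositing so only the front depth of each
# pixel is considered (knob name 'volumetric_composition' is an assumption)
flat['volumetric_composition'].setValue(False)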

Step - 20
Read in the sky.jpg file using the Read node; the Read1 node will be added to the Node Graph panel. Next, press 1 to view its output in the Viewer1 panel.

Step - 21
Make sure the Read1 node is selected and then add a Reformat node from the Transform menu; the Reformat1 node is connected with the Read1 node.

Step - 22
In the Reformat tab of the Reformat1 node properties panel, select height from the resize type drop-down.

Step - 23
Make sure the DeepToImage1 node is selected and then press M; the A input of the Merge1 node is connected with the DeepToImage1 node.

Step - 24
Connect the B input of the Merge1 node with the Reformat1 node.

Step - 25
Select the Merge1 node and press 1 to view the output in the Viewer.

Step - 26
Select the Reformat1 node and then press T; the Transform1 node will be inserted between the Reformat1 and Merge1 nodes. Next, adjust the position of the clouds using the Transform widget in the Viewer.

Step - 27
In the Merge tab of the Merge1 node properties panel, select A from the set bbox to drop-down. Figure nt8-9 shows the output of the merge operation.

Figure nt8-9
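
For reference, steps 19 to 27 can be approximated with the Python sketch below. The file path, the Transform offset, and the knob names ('resize', 'bbox') are assumptions based on the standard nodes, and the DeepToImage1 node is assumed to exist from the earlier steps.

import nuke

# Flattened deep branch built in the earlier steps
flat = nuke.toNode('DeepToImage1')

# 2D background branch: sky plate, reformatted to project height, repositioned
sky = nuke.nodes.Read(file='/path/to/sky.jpg')  # placeholder path
reformat = nuke.nodes.Reformat()
reformat.setInput(0, sky)
reformat['resize'].setValue('height')           # resize type = height

xform = nuke.nodes.Transform()
xform.setInput(0, reformat)
xform['translate'].setValue([0, -50])           # example cloud offset

# Merge the flattened deep branch (A) over the sky (B)
merge = nuke.nodes.Merge2()
merge.setInput(1, flat)    # A input
merge.setInput(0, xform)   # B input
merge['bbox'].setValue('A')                     # set bbox to A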

Figure nt8-10 displays the network of nodes in the script.

Figure nt8-10

This tutorial is taken from my Nuke Book published by CADCIM Technologies.
