10 tips for optimising Nuke and creating efficient workflows

Written by Scott Chambers.


Keep every layer operation piping into the B pipe stream (your main branch). Among other benefits, this means you can disable a merge and the image stream will still flow. As for the include/exclude mask operations that were popular in Shake, 'inside' and 'outside', you will have to get used to using 'mask' and 'stencil' instead.

Make sure you are optimising the bounding box on every element in the comp. If the image is full frame, take care that it doesn't grow larger than the full format (from blurs, transforms etc), and if it is smaller than full frame, make sure the bounding box sits tightly around the element.

When merging, it is important to choose the 'set bbox to' option that is most optimised for what you are trying to achieve; keeping the bbox as small as possible is paramount.
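To see why the choice matters, here is a minimal pure-Python sketch (illustrative only, not Nuke code) of how the common 'union' and 'intersection' options change the area downstream nodes have to process:

```python
# Bounding boxes as (x, y, r, t): lower-left and upper-right corners,
# in the spirit of how Nuke reports bbox coordinates.

def bbox_union(a, b):
    """'set bbox to: union' -- covers both inputs."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def bbox_intersection(a, b):
    """'set bbox to: intersection' -- only where both inputs overlap."""
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def bbox_area(bb):
    # Clamp to zero so non-overlapping boxes report an empty area.
    return max(0, bb[2] - bb[0]) * max(0, bb[3] - bb[1])

# A small foreground element merged over a full-frame background:
fg = (800, 400, 1000, 600)   # 200 x 200 element
bg = (0, 0, 1920, 1080)      # full HD frame

print(bbox_area(bbox_union(fg, bg)))         # 2073600 -- whole frame processed
print(bbox_area(bbox_intersection(fg, bg)))  # 40000 -- just the element's pixels
```

Here picking 'union' (or leaving the bbox at the background's size) means processing roughly fifty times more pixels than the element actually occupies.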

If it is a CG pass, your 3D department should be rendering EXRs with bounding boxes built in, but if not, you can create them yourself. AutoCrop can analyse the zero-data pixels in a frame and draw a bounding box around the element. You will then need to copy the AutoCrop data into a Crop node to make use of it.
When rendering out an EXR file sequence, say a precomp of an element, you can check the autocrop option on the Write node. Bear in mind that this option only appears when you are rendering out EXRs. It is quite a slow process as it consumes a fair bit of memory, but the beauty is that if you do it once, you won't need to do it again: when you bring the sequence back in as a Read node, the bounding box will be baked in.
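For intuition, here is a small pure-Python sketch of the analysis idea (not the actual AutoCrop implementation): scan for non-zero pixels and draw the tightest box around them.

```python
def autocrop(image):
    """Return the tight bounding box (x, y, r, t) around non-zero pixels,
    in the spirit of what AutoCrop's per-frame analysis does.
    `image` is a list of rows of pixel values; returns None if the frame
    is entirely zero."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value != 0:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    # Right/top edges are exclusive, matching a width/height-style bbox.
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

# A tiny 6x4 'frame' with a 2x2 element in the middle:
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(autocrop(frame))  # (2, 1, 4, 3)
```

Scanning every pixel of every frame is exactly why the real thing is slow, and why it pays to do it once at write time and bake the result into the EXR.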


Geometric transforms should concatenate to retain the integrity of your plates and elements. Why? Whenever you filter pixels (transforms, convolves/blurs etc) you are approximating new pixels with filter algorithms that are essentially a visual cheat, and a cheat that degrades the image, albeit normally ever so slightly. But if these degradations pile up on top of each other, you start to see unwanted artifacts in your plates and elements. Concatenation means that the mathematics behind multiple transform nodes can be 'folded' into one operation. It is useful to have multiple transform nodes for the utmost control over transform operations, and the 3D environment in Nuke will also concatenate with 2D transforms.

Say you wanted to move an element around but have independent control over movement in X/Y, scale and rotation. By splitting these operations into three Transform nodes you have total control to adjust, remove or quickly disable each now-independent transform. If Nuke didn't concatenate the three transforms, you would be degrading the image with every transform. Luckily it does, but only if you follow the golden rule: keep transforms one after another and don't 'break' the chain by placing colour correction nodes or merges between them! In Shake, a handy green line would appear connecting transform nodes to give you visual feedback that your transforms were concatenating; alas, Nuke doesn't do this (yet? hopefully!).
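The 'folding' is just matrix multiplication. This pure-Python sketch (illustrative only, not Nuke's internals) shows three separate transforms collapsing into a single matrix, so a pixel only needs to be filtered once:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    """Apply a 3x3 homogeneous transform matrix to a 2D point."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(s):
    return [[s, 0, 0], [0, s, 0], [0, 0, 1]]

def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# Three Transform-node-style operations folded into one matrix:
# scale by 2, then rotate 90 degrees, then translate (100, 50).
combined = mat_mul(translate(100, 50), mat_mul(rotate(90), scale(2)))

x, y = apply(combined, 10, 0)
print(round(x, 6), round(y, 6))  # 100.0 70.0
```

Applying `combined` to a point gives exactly the same result as applying the three matrices one after another; the difference for images is that a concatenated chain only resamples (and therefore only degrades) the pixels once.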


Use a Card3D when you can instead of a card in a 3D setup with a ScanlineRender. It is much, much faster to render, and if you only have a single card you are essentially doing the same thing.


Although Nuke's Defocus node is pretty fast, a Blur beats it for speed, and you should only need the Defocus node when you want optical 'bokeh' effects (the blooming of highlights when defocused). Don't use Defocus nodes on mattes, or just to soften images when you aren't after that optical effect.

This isn't really an optimisation, but remember that the Exposure node is only an RGB multiply operation, like the multiply in a Grade node; the only difference is that the parameters are on an exposure scale. Handy if you are used to working in stops or printer lights, or if your supervisor/director has asked you to take it up or down a stop. There is no magic in there; you can just use a regular Grade node if you are colour correcting.
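As a quick sketch of the maths the node wraps: a change of n stops is just a multiply by 2 to the power n.

```python
def exposure_to_gain(stops):
    """Convert an exposure adjustment in stops to a linear RGB multiplier.
    One stop up doubles the light, one stop down halves it."""
    return 2.0 ** stops

print(exposure_to_gain(1))    # 2.0 -- 'up a stop' is just multiply by 2
print(exposure_to_gain(-1))   # 0.5
print(exposure_to_gain(0.5))  # ~1.414
```

So 'take it down a stop' and 'multiply by 0.5 in a Grade' are the same operation, just expressed on different scales.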


Preview at lower resolutions / frame rates. In the process of compositing a shot, it's not necessary to view all your frames at full res all the time. You can get away with working in proxy modes, and also rendering in proxy modes, just to check how things are going in the comp. Sure, when it comes time to submit, or you are getting pretty close, rendering full res is what's required, but for just getting comps up from scratch or problem-solving errors, rendering proxy sizes is more efficient.

The same applies to frames: in the early stages of your comp work you can get away with rendering, say, every fifth frame instead of every frame. The render will be five times faster (10 minutes instead of 50 minutes, for instance) and you will be able to spot most errors this way. I wouldn't recommend doing this all day, every day, as you will need to see the in-between frames, but it's a very quick way of getting up to speed on your comp and it really does save render farm time for you and the rest of the team! Most flipbook viewers can play back at various frame rates, so if you do render every fifth frame you can play back at 20% of the speed. Sure, this will look steppy, but you will get an idea of the timing. Even rendering every second frame is twice as fast, of course; something to think about.
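The arithmetic behind the every-fifth-frame trick is simple enough to sketch (frame numbers and the per-frame time below are made-up examples):

```python
def preview_frames(first, last, step):
    """The frame list you would actually render when stepping."""
    return list(range(first, last + 1, step))

first, last = 1001, 1050
full = preview_frames(first, last, 1)
fifths = preview_frames(first, last, 5)

print(len(full), len(fifths))  # 50 10

# At a hypothetical 1 minute per frame, the preview render takes:
minutes_per_frame = 1.0
print(len(fifths) * minutes_per_frame)  # 10.0 (instead of 50.0)
```

Playing those ten frames back at 20% speed keeps the shot's overall timing readable, even though the motion steps.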


Precomp sections of your script that you aren't changing or that have been signed off. Even so-called 'still frames' can actually use quite a bit of processing per frame, so if it really is supposed to be just a 'held' frame, or a matte painting that has a lot of additional work done in Nuke, it's best to pre-render it as a still frame. You can also precomp on the fly while rendering your whole script. If you set render orders on Write nodes, you can go down through your tree creating Write nodes, with Read nodes straight after them reading in the Write nodes' output. If you haven't rendered these Write nodes yet, you will have to manually fill out each Read node with the path from its Write node, and remember to set the frame range! Nuke will default to 1 during this process. You will also get an error from the Read nodes saying the file doesn't exist, and it doesn't, yet! So, working your way down, set the Write node render orders so your final main comp has the highest value: if you had two precomps in your script, the first would be 1, the second 2, and your main comp Write node 3. Nuke now has a 'read file' checkbox on Write nodes, saving you from creating both Read and Write nodes for precomps. You will have to write the files out first before you enable it, otherwise it will error.
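The ordering rule above can be sketched in a few lines of plain Python (the node names and orders here are hypothetical, and this is not the Nuke API, just the rule it follows):

```python
# Hypothetical Write nodes paired with the render orders you might assign.
writes = [("main_comp", 3), ("precomp_B", 2), ("precomp_A", 1)]

# Write nodes execute lowest render order first, so the precomps exist
# on disk by the time the main comp's Read nodes need them.
execution = [name for name, order in sorted(writes, key=lambda w: w[1])]
print(execution)  # ['precomp_A', 'precomp_B', 'main_comp']
```

The main comp always gets the highest number, because it depends on everything rendered before it.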

Nuke now also has a Precomp node that saves a selected portion of your script into a new script and adds a Write node to it. You can also manage versioning of this new script and its output for bringing back into your comp. If you have rendered an EXR sequence from this precomp script, Nuke is smart enough to realise when the script has been updated but the file sequence being read is out of date (Nuke picks this up from the hash information in the EXR). Although you can create these precomp scripts for your own use, I prefer to manage my precomps in the same script with my own read/write breaks, and keep the Precomp script function for collaborative work (with lighting TDs or other compers, for example). Refer to the excellent Nuke user manual for more information.


Render locally in the background. Most modern workstations have tonnes of RAM and multiple processors; you will probably find that you can get away with rendering in the background via the command line and still work pretty comfortably in terms of interactive responsiveness, especially if your frame ranges are short, or if the farm is clogged or slow to pick up (if you have one!).

Use vector blurs (Motionblur2D and Motionblur3D) instead of multisampled blurs. The Transform node now has a Motionblur2D setup under the hood, with standard user parameters in the properties tab.

Additional tips regarding Nuke slowdowns and renders that fail:
Slow Nuke:
1.    When Nuke seems to freeze or is slow, always check the terminal for any information/errors/warnings.
2.    Check input resolution: spatial resolution (format & bbox) and colour resolution, i.e. are you using 32-bit EXRs instead of 16-bit? Avoid TIFFs; those memory hogs are for print and have nothing to do in a compositing pipeline (have fun explaining that to your matte painter).
3.    Channel output: how many channels are being written to disk, and are all output channels actually needed? Use a Remove node to control this when the Write node is set to "all".
4.    Check the size of 3D scenes (if any): geometry building is single threaded, and you won't see a scanline until the geo is generated for a given frame.

Failed Renders:
1.    RGBA output into Cineon format (the DPX and Cineon file formats don't officially support alpha channels; you can run into big trouble by doing this)
2.    Wrong Nuke or plugin version
3.    Conflicting render orders in Write nodes (Read is being used before respective Write was executed)
4.    Missing alpha channel(s) in precomp output (i.e. when using multiple Writes with render orders).
5.    Output directory does not exist
6.    Trying to read images (e.g. CG renders) that have not finished rendering, giving errors like "zlib decompression error"
7.    Trying to render in proxy mode when you haven't set up a file path for the proxy option in the write node.
8.    Time nodes (frame holds, time offsets etc) usually work great, but sometimes they give unpredictable results, jumping to frames that you don't want. Be wary; I'm unsure if this affects all builds of Nuke or is OS dependent, but I have had trouble with Nuke 6.0 on Vista 64-bit (yikes! yeah I know, Vista.. not my choice..)


0 # The Soloman 2010-07-29 18:55
Cool! great stuff to read
+2 # Marcin Kummer 2010-07-30 08:00
Thanks for these great tips!
0 # Yakovlev Vladimir 2010-08-05 15:28
0 # Valerio Oss 2010-08-11 11:41
just an update about the precomp chapter: now the WRITE node INCLUDES a READ node: just use the 'read file' checkbox at the end of the Write parameters tab and that's it, no need for the READ node anymore; the node writes the data to disk and reads it back immediately after!
Two nodes in one!
0 # Scott Chambers 2010-08-11 15:57
Hey Valerio, thanks for the tip! Will update the article soon...
0 # pavan kumar 2010-09-02 06:11
thz a lot
0 # rasika jayawardene 2010-09-12 20:58
i second the tip on channels.. im relatively new to nuke and found the hard way that certain heavy channels should be removed at the beginning on the pipe so that it doesn't create problems at the end. specially when there are a lot of blurs and defocuses and when forgotten to change the channels to RGB instead of all... i find this is very relevant when working with multilayered exr's for cg...use the layers to make the composite, and then remove the extra channels for the pipe so that it doesn't create problems down end!!!
0 # Chidi Ozieh 2011-01-07 00:16
This is very helpful. I am currently work on quite a large comp in Nuke and this has literally saved my project. Thanks!
0 # Tim BOWMAN 2011-02-03 15:54
I'd love to know more about why I should be avoiding TIFFs. Do they hog memory when they're in RAM or when they're on the filesystem?
0 # Scott Chambers 2011-02-03 16:03
As far as I know, TIFF images have to be read and stored in memory in one pass, so you don't get the speed advantages from the adaptive scanline viewer renderer or crops and bounding boxes.
0 # Tim BOWMAN 2011-02-03 16:48
Ahh, good to know. Thanks!
+1 # Chidi Ozieh 2011-02-04 16:49
Hi Scott
was just wondering if you know how to add motion blur to an image. ( png sequence on a 2d card in a scene ). Much appreciated. If this is posted in the wrong place apologies.
0 # Scott Chambers 2011-02-07 02:03
Hey Chidi, this isn't really the forum for questions and answers but happy to help - stay tuned for forums in Nukepedia in the future...

To add motion blur to an image coming out of the scanline renderer you can use multisampling in the scanline renderer to be greater than 1 (gets pretty slow as you go up the samples for higher quality) or use the vector blur straight after the scanline renderer. There is quite a bit of info in the user manual regarding both methods. Good luck!

0 # steve morel 2011-03-02 07:14
Thans a lot for those tips !
I'm pretty new in NUKE, and this community looks really awesome !! :D
Keep up the great work ! : )
0 # Harsh T 2011-03-24 18:02
Thanks for great tips!!
I wonder that if i avoid Tifs then what about targa?
0 # Cédric Maugis 2011-05-19 04:24
A lot of thanks Scott for those tips. Very helpful.

Harsh : As far as i know, targa images have 8 bits limitations. And i'm pretty sure that you can't export additionnal channels
0 # pitha tripetchsomkhun 2011-07-09 02:42
a great article worthwhile for reading :-)
0 # Ildus Gabidullin 2011-11-19 23:57
Thanks great article
0 # Luke di Rago 2011-12-08 09:35
// Avoid tifs, those memory hogs are for print and have nothing to do in a compositing pipeline (have fun explaining that to your matte painter).// Amazing quote! and also great article, of course :D
0 # Chad Ridgeway 2012-02-28 09:04
Nice work! Do color correctors concatenate?
0 # Scott Chambers 2012-02-28 13:04
Yep! Nuke promotes everything to 32bit floating point so essentially all colour corrections concatenate, you don't have any clipping of values like you would in 16 or 8 bit. So in practice there is no need to worry about stacking up colour correction operators like you would in Shake in 16 or 8 bit data streams.
0 # juan uman 2012-05-09 08:33
Nice tips man.! THk
-1 # XIAObao CHAN 2012-09-09 01:36
Thanks for your articles.
I use Nuke for animation compositing. (All images are rendered in Maya, 3dsmax and other 3d softwares)
Here are some questions confuse me for a long time.I need help and please let me understand the reasons and knowledges.

Does the "Avoid tifs" mean that if I work with tif format images,every layer is tif format,I will suffer a lower speed and longer processing time?And why my nuke always crashes and automatically turn off?

So besides exr ,which format(s) of images can also be more suitable when I do composition in Nuke?(Without affecting images quality and speed up)

Do you have some advise for rendering passes and composition on animation-making?
0 # XIAObao CHAN 2012-09-09 01:39
And sometimes I import many images into nuke and when I complete working, some of images are not used.
Is there a difference between disabling a Read node and delete it?
0 # Darvinius Berar 2012-11-06 06:23
Great tips! I am still learning nuke and i know its optimized for exr files. But i done recently some DCP work for a movie and had to study a bit jpeg 2000. Has nuke support for jpeg 2000 files because i can not load any type of j2k in it. Hdd space is not a problem for me but i find j2k a very good alternative to jpg files, has 16 bit support to,multiple resolution representation and the image does not have such artifacts like normal jpg
Thank you again for this tips.
0 # Alexander Reid 2013-05-03 20:04
Nice work! :-)
0 # randy vellacott 2016-05-16 01:24
It would be nice if The Foundry made Autoplace so that it could rearrange in B stream order instead of or as an option to A stream as it is now. Has anybody ever addressed this issue? I know I haven't received any feedback from The Foundry about it.
0 # eden an 2017-03-20 15:29
Hello, Scott!

I'm eden from South Korea. It was a great tips for compositors.
When I was a compositor, I read this article and it was very helpful.

Would you mind if I translate this article and share it with Korean compositors?
Of course, I'll leave a source :)

