Expression Node 101

Written by Matt Estela and Pedro Andrade.

Introduction

The expression node in Nuke is a bit of a mystery to most people, but it's incredibly powerful. Often it's used for patching data passes with simple if/else statements, but here we'll go from first principles, work up to some long-forgotten high school maths to do some silly tricks, and finally put it all together to understand how a P_matte gizmo works.
 
There's now a bonus chapter at the end from Pedro Andrade regarding projections without using a C44 node; check it out!

First steps

Create a default constant node, and append an expression node. In its default state it does nothing at all. You can see it has fields for 4 channels at the bottom (rgba), and space for some basic variables at the top; we'll get to those later.

 
01 expression node
 

So, simple expressions to start with. Type '1' into the first channel expression slot. The checkboxes above it say which channels to operate on; by default it's just the red channel. Hit enter, and you'll see the entire image go red.

 
02 value of 1
 
All we're doing here is setting every pixel to have its red channel set to 1. Simple enough.

A gradient

This time, type in 'x' and hit enter. Depending on your viewer LUT you'll either still see red, or a superexposed red. Hover your cursor in the viewer and look at the colour values. You can see that we're mapping the x-coordinate of each pixel directly into the red channel.

 
03 value of x
 
To see this gradient better, we could remap the values so that they stay between 0 and 1. To do that, we need to divide the x coord by the image width. Type that in:
 
x / width
 
Aha! Nice horizontal gradient. What if we want a vertical gradient?
 
y / height
 
04 vertical gradient
 
To invert the gradient, subtract it from 1:
 
1 - (y / height)
 
05 invert vertical gradient

Lines

Ok, all makes sense so far. Now, what if we wanted repeating lines? Well, cast your mind back to high school maths, and what the graph for sin() looks like. If we give it a number that increases forever, it'll return a wave that oscillates between -1 and 1. Let's see what that looks like:

sin(x)
 
Lines! If you hover over the viewer, you'll see the values do indeed go between -1 and 1 as predicted. To normalise that, let's add 1, which gets it between 0 and 2:
 
sin(x) + 1
 
and then divide the whole thing by 2, giving us values between 0 and 1:
 
( sin(x) + 1 ) / 2
 
07 sin 0 and 1
 
Mmm, liney. To make wider lines, we want x to increase at a slower rate, so we divide it by some value:
 
( sin(x / 4 ) + 1 ) / 2
 
08 sin wide
 
To change the ratio of the black vs red lines, we could do more expression stuff, or use Nuke to our advantage. Append a grade node, and mess around with the black point. That'll grow and shrink the black areas.
 
09 sin grade
 

Radial Gradient

Right, new challenge. What if we wanted to measure the distance of each pixel from the origin? Back to high school maths again: we can use Pythagoras here. We have 2 sides of a right-angled triangle, and we want to measure the 3rd side. We could do the old 'square root of A-squared plus B-squared' thing (make sure you're viewing the expression node):

sqrt(x * x + y * y )
 
Hover over the image, you'll see the red value is 0 in the lower left, and smoothly increasing numbers as you move away from (0,0). It's a radial gradient!
 
10 radial grad
 
To invert it, subtract it from a number; this number will be the width of the gradient:
 
300 - sqrt(x * x + y * y )
 
11 invert radial grad
 
and to normalise it to between 0 and 1, divide the entire expression by the same number:
 
(300 - sqrt( x * x + y * y ) ) / 300
 
12 normalised invert radial grad

Radial Rings

We can do similar tricks to before; take that result, and feed it to sin():

sin(sqrt(x * x + y * y ))
 
13 concentric rings
 
Concentric rings! To scale, divide the inside term to make the distance increase more slowly:

sin(sqrt(x * x + y * y ) / 4)

14 fat concentric rings

View the grade node again, and adjust the ratio of black to red.
 

Better user controls

What if you want to move the center point of these rings? Basically, you want to measure the distance not from (0,0), but from another point. You can subtract that point from the x and y values. For example, let's move the center point to (300,50):

sin(sqrt((x-300) * (x-300) + (y-50) * (y-50) ) / 4)
 
15 offset concentric rings
 
 
Ugh, that works, but it's hard to update. It'd be much nicer to have a visual tool to let us drag this around, and Nuke lets us do exactly that. In the properties pane, right click anywhere except over a text field and choose 'Manage User Knobs...'.
 
16 manage knobs
 
Click Add, select '2D Position Knob...', name it 'center', hit OK, then Done.
 
17a add center knob
 
You'll see you have a new 'User' tab, with an x and y value. In the viewer you'll see a 'center' control in the lower left; you can drag it anywhere on the image and see the values update.
 
18 center viewer
 
To use that in your expression, just use center.x and center.y:

sin(sqrt((x-center.x) * (x-center.x) + (y-center.y) * (y-center.y) ) / 4)
 
Oh, and while I'm sure you're chuffed for remembering Pythagoras' theorem, there's a shorter way in Nuke to calculate the hypotenuse:

sin(hypot(x-center.x, y-center.y ) / 4)
 
Let's replace that scale multiplier with a slider. Right-click on the expression node again, 'Manage User Knobs...', and this time add a floating point slider called 'size'; set its minimum to 0.1 and maximum to 20. Change the expression to this:

sin(hypot(x-center.x, y-center.y ) / size)
 
Change the size slider, move the center point, see the rings follow. Neat!
 
19 center and scale rings
 
Ok, so that's concentric rings. What about radial rays?

Radial Rays

Back to high school maths again! Remember the relationship between sin/cos/tan and a right-angled triangle? Let's focus on tan. In the diagram below, tan is the ratio of the opposite side over the adjacent side.
 
20 tan

We can treat opposite and adjacent as x and y, and use the inverse tangent (arctangent; Nuke's 2-argument version is atan2) to get back the angle. Plugging those numbers in:
 
atan2(x - center.x, y - center.y)
 
You'll get an odd looking gradient with a ramp of colour on one side, and black on the other. If you hover the cursor over the image and look at the values, you'll see that the black area is actually negative values. Trace a circle around the center point with your cursor, and note that the values go from -3.14 at the bottom, to 0 at the top, to 3.14 at the bottom again. Basically, it's tracing out the angle around the center point.
 
21 tan default 
 
 
We could remap the range so that it goes from 0 to 1:

(atan2(x - center.x, y - center.y) + 3.14 ) / 6.28

or do the sin trick from before to get radial lines:
 
sin( atan2(x - center.x, y - center.y) )
 
22 radial line
 
Well, one giant radial line. With the concentric lines, the problem was the input was increasing too quickly, so we divided it to slow it down, giving us wider lines. Here, the input is increasing too slowly, so we'll multiply it by our size attribute to get more lines:

sin( atan2(x - center.x, y - center.y) * size)

23 radial lines

Radial lines! Let's add an offset so we can rotate them if we want. Add a floating point slider called 'offset', give it a range of 0 to 50, and add it to the atan2 result, before it's multiplied by size:

sin( ( atan2(x - center.x, y - center.y) + offset) * size)
 
Slide the slider, see the rays spin.

P_mattes

P_mattes, or position mattes, are one of the more useful tricks in Nuke comping, so it's nice to know how they work under the hood. First, make a simple 3d scene in Nuke with some spheres, cylinders, a card etc, and a nicely positioned camera. Here's my amazing effort; you can download a copy here: http://www.nukepedia.com/miscellaneous/3d_scene
 
24 pmatte render
 
So, let's get a P-world pass out of this. Select the ScanlineRender node, go to the shader tab, enable 'output vectors', and for 'surface point' create a new rgba channel called 'p'. Append a shuffle node, shuffle p.rgb into rgb and rgba.alpha into alpha, so it's easy to work with. You should get a multicoloured splodge like this, with an alpha channel:
 
25 shuffled P and alpha
 
Append an Expression node, view it. If you hover your cursor over the image and look at the values you can see that this pass encodes the 3d X position into the red channel, Y into green, and Z into blue.
 
First thing we can do is measure each point's distance from the origin. This uses Pythagoras like before; to extend it into 3d we just add the extra blue (z) term. The hypot() function only works in 2d, so it's back to old school methods. Type this into the 4th expression field (so it affects the alpha channel), and view the alpha channel:
 
sqrt(r*r+g*g+b*b)
 
26 p ramp
 
There, a ramp in 3d space! Let's invert it to make it easier to see:

1- sqrt(r*r+g*g+b*b)

26 p ramp invert

We're getting white where the alpha is 0, which isn't nice. A lazy trick is to multiply the whole result by the incoming alpha (hence the shuffle dance from earlier):
 
( 1 - sqrt(r*r+g*g+b*b) ) * a
 
26 p ramp clean
 
Ooo, looking very p_mattey now!
 
So we have a spherical ramp; how do we move it around? Similar to the 2d examples earlier, we'll create a user control called 'center', and subtract it from the P values to get the offset. We can set it in 2 ways:
  1. A colour picker on the pbw image
  2. A 3d locator in 3d space.

Let's try the colour picker one first.

Right click on the expression node, 'Manage User Knobs...', and add an RGB colour knob called 'center'. We then subtract center.r, center.g, center.b from the P rgb values:

(1- sqrt( (r-center.r)*(r-center.r)+(g-center.g)*(g-center.g)+(b-center.b)*(b-center.b))) * a
 

Make sure you're still viewing the alpha channel, go to the User tab, click the first box next to 'center' to enable the colour picker, then Ctrl-drag in the viewer. Tada, a home grown P_matte!
 
27 pmatte move with colour picker
 
The expression is getting a little messy; it'd be nice to tidy it up before we try more complex things. The 4 fields at the top are variable placeholders: you can put expressions there, name them, and refer to them in your output expression. Let's take the core of this expression and name it 'dist':
 
sqrt( (r-center.r)*(r-center.r)+(g-center.g)*(g-center.g)+(b-center.b)*(b-center.b))
 
and make our pmatte expression use the dist variable instead:

(1-dist ) * a

28 dist expressions

Much nicer.

P_matte rings

Time for silly tricks! Remember the trick we did earlier to make concentric rings? We can do exactly the same thing here and make concentric 3d rings:
 
sin(dist) * a
 
Add a multiplier to control the spacing (you remember how to add user knobs, right?):

sin(dist * ringScale) * a

29 pmatte rings
 

Rays are done with a similar trick to before, but need a bit of thought. First, atan2 takes 2 values, but we're in 3d here. We need to pick which 2 channels from our P pass will define the rays. I want them to appear on the X,Z plane, which means I'll feed atan2 the red (x) and blue (z) channels. Because we don't reference the green channel at all, the pattern is effectively projected through everything from the top down. Add similar controls to before to drive the position offset and number of rays, and you get this:

sin( atan2(r-center.r, b-center.b) * rays ) * a

30 p matte rays

Using red and green will make it project front-to-back, and green and blue will make it project left-to-right:

31 p matte rays alt

P noise

The Nuke expression node has lots of built-in functions, including several noise calls. Because these noise functions can take a 3d input, if you feed them the values from a P pass you'll get 3d noise:

noise(r,g,b)

pnoise
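You can recentre and rescale the noise with user knobs like before (here assuming the rgb 'center' knob from earlier and a new float slider called 'scale'), and the function list has richer variants like fBm() and turbulence(); check the argument lists against the expressions reference in your Nuke version:

noise( (r-center.r) * scale, (g-center.g) * scale, (b-center.b) * scale )
fBm( r, g, b, 4, 2, 0.5 )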

Voxelly P

What if you wanted a pixelly, blocky P pass? What we currently have are smoothly transitioning values; to make them stepped we need to reduce that smoothness. One way to do that is to truncate any numbers after the decimal point, so 1.25 becomes 1, 4.6 becomes 4 etc. We'll need to do this to each channel separately, so for the first time we'll use the other slots in the expression node, letting us treat red/green/blue in isolation:

trunc(r)
trunc(g)
trunc(b)

Note in the colour picker below, my cursor was over the top of the cylinder, and it's returning a value of (1,2,-1).

blocky

 

To adjust the size of the blocks, add a float slider called 'scale'. We'll multiply each channel by the scale, truncate after the decimal point, then divide by the scale, so we're back where we started but with less numerical precision:

trunc(r*scale)/scale
trunc(g*scale)/scale
trunc(b*scale)/scale

Why would you do this? Well, if you append another expression node, and make the second one use the noise example from earlier, you'll get blocky noise:

blocky pnoise

In fact, feed this into any of the previous examples and you'll get an adjustable blocky version of that effect.

Dealing with translation, rotation, scale with music videos

That rays example from earlier leads to a question: what if we don't want the rays perpendicular to an xyz axis, but off at some random angle? And what if we want to do a non-linear scale on those rays?

We've been able to handle translation offsets by subtracting the offset we need from each component in the expression. In theory we could expand on the expression to do an offset scale in each axis, and eventually an offset rotation.

But wait, there's an easier way!

To deal with translation offset, steal this trick from Jonathan Glazer: Don't make a complex expression, just move your set and camera together: http://www.youtube.com/watch?v=4JkIs37a2JE

To deal with rotation offset, steal this trick from Lionel Richie, rotate the set and camera together: http://www.youtube.com/watch?v=OdQDXs75Ulo#t=1m40s

To deal with scale offset, steal this trick from Michel Gondry, and scale the set and camera (well, the image anyway). https://www.youtube.com/watch?v=ANLBu-U8KzE 

Going back to a pmatte at the origin with no offset:

(1 - sqrt( r*r + g*g + b*b ) ) * a

 32 p matte basic

Let's simulate sliding the camera and set together. Insert a (Math) Add node above the expression, and alter its values while viewing the expression. You'll see the P_matte start to move around:

33 p matte pre shifted

Insert a (Math) Multiply node, and play with the slider to set an overall scale:

34 p matte linear scale

or hit the '4' button and play with the individual values to get a non-linear scale:

35 p matte non linear scale

Enter the (C44) Matrix

Note that I've avoided dealing with rotation twice now, once with expressions, once with nodes. The reason is that it's easier to deal with rotation (in fact translation, scale AND rotation) using matrices.

Add an axis to your scene, look at its properties, and open up the world matrix section:

axis properties

Type in some translation values, and you'll see that the world matrix puts those values into the last column.
Type in some scale values, and the world matrix puts those into the first 3 values of the diagonal from top-left to bottom-right.
Type in some rotation values, and the world matrix puts a combination of values into the upper 3x3 cells.
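For example, an axis translated to (tx, ty, tz) with a uniform scale of s and no rotation shows a world matrix like this:

s 0 0 tx
0 s 0 ty
0 0 s tz
0 0 0 1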

The world matrix is a transformation matrix. It's a standard way to pack translate/rotate/scale values, and makes the task of manipulating 3d space quick and straightforward. What we need is a transformation matrix node we can apply to our P pass, so we can do the full translate/rotate/scale change in one hit.

Ivan Busquets has provided exactly that! Chances are your workplace has already installed the C44 matrix node; if not, get it here: http://www.nukepedia.com/plugins/colour/c44matrix

As he describes it:

'The main goal of C44Matrix is to make it easier for users to perform transformations on pixels containing position data, such as a world position pass. From arbitrary transformations, to converting between different coordinate systems (world space, camera space, NDC, etc), C44Matrix should make things a little easier by not having to resort to complex expressions and multiple nodes to apply a 4x4 matrix to pixels.'

Sounds like what we need! Clear any nodes between your shuffle and expression, put a C44Matrix there instead, put a '1' into every cell on the diagonal, click 'transpose', and view the rgb:

c44 matrix

Setting the diagonal cells to 1 is the same as setting scale to 1, and transpose makes it behave the same as the axis world matrix. If you type numbers into the last column, you'll see the P pass translate around. Put numbers into the upper 3x3 cells and you'll see it rotate and scale. While interesting, this isn't user friendly. We'll use a handy feature of the C44Matrix that lets you drive its values from a camera.

Change the matrix input to 'from camera input', turn invert on, transpose off, create a camera, and attach it to the C44Matrix. Now if you translate/rotate/scale the camera, the P colours will be adjusted to match, meaning our P_matte gradient will also match.

c44 matrix with cam

P-world to P-object

You might have spotted that the above trick can be used to make a P-object pass. A P-object pass is useful in that you can p_matte a rendered object, and the p_matte sticks to it as the object moves around. Usually you'd render this out of your 3d package, but if you only have a P-world pass and not much time, you just need the transform of the moving object. You would parent a null to the object in Maya, export that locator to Nuke, parent a camera beneath that locator, and use the camera as the input to your C44Matrix.

Doing the above trick, without a C44 matrix

It's possible but messy.

Nuke has a 3x3 ColorMatrix node, which you can use to do the scale and rotation offset, plus an Add node like before to do the translation offset. There's no easy way to drive the values from an input like the C44Matrix node has, so you need to expression-link each cell to the world matrix of your axis or camera, click the 'invert' button on the ColorMatrix to make it apply the rotation and scale in the right direction, and put a minus (-) in front of the values in the Add node to make it match your input. A C44 is easier. :)
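To sketch what that linking looks like (assuming an axis named Axis1, and that the world_matrix knob indexes its cells row by row, which is worth double-checking in your version), the 9 ColorMatrix cells would be expression-linked like this:

Axis1.world_matrix.0  Axis1.world_matrix.1  Axis1.world_matrix.2
Axis1.world_matrix.4  Axis1.world_matrix.5  Axis1.world_matrix.6
Axis1.world_matrix.8  Axis1.world_matrix.9  Axis1.world_matrix.10

and the Add node's values to -Axis1.world_matrix.3, -Axis1.world_matrix.7 and -Axis1.world_matrix.11 (the translation column).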

A box P matte

Ok, let's get back to expressions! The P_matte gizmo can function as a sphere or a box; let's work out how to emulate the box feature.

Reset your P pass so it's at the center, and uniform scale it up by 4; it'll make it easier to see what's going on.

Let's work out what we need in one axis first, say the red (x) channel.

So, we have values increasing positively to the right, and negatively to the left. To make them both positive, we take the absolute value, or abs:

abs(r)

box abs

do our usual invert trick:

1 - abs(r)

box invert

The invert lets the numbers go negative beyond the area we're interested in, so we'll clamp the result to between 0 and 1:

clamp( 1 - abs(r) )

box clamped

We can do the same thing in the green and blue channels, and multiply the 3 results together. Areas with a value of 0 will multiply to 0, areas with 1 will multiply to 1, and other areas will blend together. We get an intersection of the r, g, b regions, meaning we get a box:

clamp(1 - abs(r)) * clamp(1 - abs(g)) * clamp(1 - abs(b))

box bad

A box! Well, sort of. Yes, it defines a box, but it has a noticeable cross in the middle which is pretty ugly.

Let's go back to the single axis and see what needs to be done. Looking at it, it's the linear falloff that's causing the artifact. Ideally we'd smooth that off into a rounded curve.

The smoothstep function does this. You tell it what defines the start and end point, and it'll try and draw a smooth S curve between those values when given a linear input:

so

clamp( 1 - abs(r) )

becomes

smoothstep(0,1, clamp(1-abs(r)) )
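For the curious, smoothstep is usually the standard Hermite blend; assuming Nuke follows the common definition, smoothstep(a, b, x) is equivalent to:

t = clamp( (x - a) / (b - a) )
t * t * (3 - 2 * t)

where clamp with one argument keeps t between 0 and 1.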

box smoothstep

Much nicer! Let's apply this to all 3 channels, and multiply them together:

smoothstep(0,1, clamp(1-abs(r)) ) * smoothstep(0,1, clamp(1-abs(g)) ) * smoothstep(0,1, clamp(1-abs(b)) )

box smoothstep default

Smooth! Too smooth? Maybe we should push the smoothing out to the edges a little, and get more of the boxy shape back. To do this, rather than smoothing between 0 and 1, smooth between 0 and 0.2, pushing the falloff to the edge of the box:

smoothstep(0,0.2, clamp(1-abs(r)) ) * smoothstep(0,0.2, clamp(1-abs(g)) ) * smoothstep(0,0.2, clamp(1-abs(b)) )

box smoothstep tighter

Or better, replace the start and end values with floating point sliders, so we have easy control over the softness of the box (I've made 2 sliders named 'start' and 'end'):

smoothstep(start,end, clamp(1-abs(r)) ) * smoothstep(start,end, clamp(1-abs(g)) ) * smoothstep(start,end, clamp(1-abs(b)) )

box smoothstep parameters

Moving/rotating/scaling the camera input to the C44matrix shows we can move this wherever we want.

box smoothstep rotated

Next steps, final thoughts

The full list of functions you can call is in the Nuke user guide, 'Adding Mathematical Functions to Expressions', page 528. Lots of things to try out and explore.

Bonus round: P project

Someone asked, and Ivan was kind enough to explain...

The C44Matrix node has a 'project' mode which, as the name implies, will transform your P pass into the projected space of the camera. Another way to think about it: it takes the input P pass and camera, turns the camera into a video projector, and shines a UV map onto your scene. You can then feed the result into an STMap node. There are a few small extra things we need to do to make this work; we'll cover them along the way.

Here's our original clean P pass; I've added a cube to make checking the alignment easier.

p project 01

I've created a camera which I'll use for my projection, set its horizontal and vertical aperture to 24 so it's a square format, set the focal length to 50, and moved it so it's framing exactly onto the front of the cube.

p project 02

I then add a C44Matrix node, set matrix input to 'from camera input', matrix type 'transform', connect the camera, invert enabled. The P pass is adjusted so (0,0,0) is at the center of the camera:

p project 03

We'll append a second C44Matrix node, matrix input 'from camera input', matrix type 'projection', invert OFF, w_divide ON. Connect the projection camera. We'll also put a shuffle node before it, setting the alpha to white.

Why all this stuff? Previously when using the C44 node we've not used the last row of the matrix; a 3x4 is all we need to define translate/rotate/scale. Now that we're doing projections, we need a way to store the projection information (the relationship of the focal length to the aperture to the window scale). This information is carried via that last row of the transformation matrix, which generates the w component. When w_divide is ON, it will do the perspective divide. With it off, it behaves like an orthographic camera.
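In other words, after the matrix multiply each pixel carries an extra w value, and with w_divide ON the node does roughly this per pixel:

x' = x / w
y' = y / w
z' = z / w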

Also, the C44 assumes the incoming P pass has stored the w component values in its alpha, so to make sure it's correct we force the alpha to white with a shuffle node.

The result looks like a warped camera position pass, because that's basically what it is:

p project 04

Now we're basically looking at a projected UV pass, so we're only concerned with the red and green channels. It goes from (-1,-1) in the lower left of the projection to (1,1) in the top right. If we want to use this with an STMap, it needs to be between (0,0) and (1,1). How to remap those values? With an expression node, of course!

(r+1 )/ 2

(g+1 )/ 2

0

p project 05

That's more like a UV map I recognise!

All we need to do now is get an image to project, append a Reformat with 'black outside' enabled to clip off the border, and feed that and our uv image to an STMap:

p project 06

And that's it! Now you can move the camera, change its focal length, and see the projection update. Here's the finished node network:

p project 07

But is it finished? One thing that bugs me here is that the projection appears on the back-faces too. Can we limit it to just the stuff that faces the camera?

Limit the projection to stuff that faces the camera

The visual analogy here is that we'll stick a light where the camera is, generate a simple white lighting pass, and multiply that against the projection we've just made.

We'll need a normals pass, so make sure your ScanlineRender is outputting the surface normal, in this case going to the 'n' channel.

scanline normals

The simplest way to describe a lighting pass is to compare the normal at a point to the direction of a light. If they face in exactly opposite directions, the surface is facing directly at the light, and should get full brightness. If they face the same direction, the surface is on the far side, and should be black. Values in between should get, well, values in between.

There's a mathematical way to describe the relationship of 2 directions (or vectors) like this, called the dot product. For us, that means we take the rgb of the surface vector (the normal), convert the direction of the light into an rgb value too, multiply the matching r, g, b components together, and add up the results.

We already have the normals expressed as colour in the n pass; we need to get the camera direction (which we're treating as a light) as a colour too. Ie, we need to take part of the camera's world matrix and convert it into an rgb value.

Luckily, this is easier than it seems. We need the direction relative to the axis the camera looks down when its rotation is (0,0,0); Nuke cameras point down the -z axis. That means we don't have to do any work at all: we just grab the 3rd column of the transformation matrix (that column expresses the z axis direction after rotation and scale, just as we need it). (If we scale the camera there might be issues, but let's ignore that for now.)

matrix rotation

To make this easy to visualise (and easy to make an expression for later), we'll feed these values directly into a constant. Make a constant, and Ctrl-drag the top/middle/bottom matrix rotation values to the red/green/blue of the constant:

link colour

Now if you rotate your camera, you'll see the constant colour change. Right, let's do the dot product, which here boils down to a merge in multiply mode, then a desaturate to get it to pure grayscale (I've shuffled the n to rgb cos I'm lazy):

dotproduct result
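If you'd rather skip the constant/merge/desaturate chain, a single Expression node can do the dot product directly. A sketch, assuming your normals are shuffled into rgb, your camera is named Camera1, and that the world_matrix knob indexes row by row so 2, 6 and 10 are the third column (worth verifying):

clamp( r * Camera1.world_matrix.2 + g * Camera1.world_matrix.6 + b * Camera1.world_matrix.10 )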

Now we can multiply this against our STMap result, and get a cleaner P-projection. Note that it can't do shadows; that's a step too far for me and this tutorial. :)

pproject final final

Projections without the C44 plugin

by Pedro Andrade

Although the available C44 matrix does a great job converting between different coordinate spaces, it's written as a plugin, which means it's not possible to look at the code to understand what's going on behind the curtain, and hopefully give the artist / TD ideas to come up with similar tools using the same concepts.

I wanted to break in! I knew this had to be possible using Nuke's native nodes, so I reverse-engineered the tool to figure out what was going on.

First of all, this has to do with matrices, a mathematical concept I had some background in (I'm originally a mechanical engineer). But not only had I forgotten all of that, there were also some types of matrices that were completely new to me and paramount to understand, such as the projection matrix.

One of my main initial doubts was: how do you create a 4x4 Color Matrix when there's no such thing in Nuke? There are 4x4 convolution matrices, but those have nothing to do with colour, and I knew colour manipulation was what I was looking for, as position and normal passes are nothing but colour representations (R, G, B) of vector spaces, and those were what needed to be modified somehow. Also, whatever matrix I was looking for had to deal with camera components like focal length and aperture, as well as position in space.

So I went down the hard path of research, and after many hours I found a couple of articles that explained this stuff in detail, with a very good explanation of each of the components. With that, I was a step closer.

First of all, there's a hierarchy of vector coordinate spaces that we need to follow until we get to the projection matrix, and even after that, to put it to some useful purpose, there are further coordinate spaces we need to dig into.

The hierarchy is the following:

World Space > Camera / Object Space > Projection Space (and beyond that, NDC and Raster Space)
The first 3 coordinate systems were covered already here by Matt, which means that, following the chart, Camera Space is our starting point for digging into the projection matrix. It's important to remember that Camera Space uses the same concepts as Object Space, which is commonly known as a PRef pass; the only difference is that instead of using an axis / locator, we use the camera as if it were an axis (it's the position in space that matters). Going back to the chart, to get to Camera Space we primarily need World Space.


So, the projection matrix is indeed a 4x4 matrix, and it's written like this (you can find how each component is derived by looking at the docs):
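The commonly used form (OpenGL-style, which is the convention I'll assume below) looks like this, where ar is the aspect ratio, fov the field of view, and near/far the clipping planes:

$$P = \begin{bmatrix} \frac{1}{ar \cdot \tan(fov/2)} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan(fov/2)} & 0 & 0 \\ 0 & 0 & -\frac{far+near}{far-near} & -\frac{2 \cdot far \cdot near}{far-near} \\ 0 & 0 & -1 & 0 \end{bmatrix}$$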

This can be re-written to make the aperture component more clear:
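Substituting the FOV formula we'll see in a moment, the top-left terms become focal-over-aperture ratios:

$$P = \begin{bmatrix} \frac{2 \cdot focal}{haperture} & 0 & 0 & 0 \\ 0 & \frac{2 \cdot focal}{vaperture} & 0 & 0 \\ 0 & 0 & -\frac{far+near}{far-near} & -\frac{2 \cdot far \cdot near}{far-near} \\ 0 & 0 & -1 & 0 \end{bmatrix}$$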

So, taking the last matrix as our main one, let's first split it into a 3x3 matrix (we'll ignore its last column and last row for now). Looking at it, and ignoring the ar component from the previous matrix, we have concepts we can relate to, and that relate to the camera's components, like FOVs and far/near clipping planes. That's good! So let's figure out the FOV.

The FOV of a camera is an angle, and it can be calculated like so:

 

fov expression
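With the focal length and aperture in the same units (mm for Nuke cameras), that's:

fov = 2 * atan( aperture / (2 * focal) )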

This can be split into its horizontal and vertical components:

fov_expression_invert.png
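Which gives us:

hfov = 2 * atan( haperture / (2 * focal) )
vfov = 2 * atan( vaperture / (2 * focal) )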

As it's not possible to do a 4x4 Color Matrix in Nuke, I split it into 2:

  • one 3x3 Color Matrix
  • one 3x1 Color Matrix


Let's focus on the first type for now. With some maths manipulation I came up with the following:
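In FOV terms, the upper-left 3x3 of the projection matrix reduces to:

$$\begin{bmatrix} \frac{1}{\tan(hfov/2)} & 0 & 0 \\ 0 & \frac{1}{\tan(vfov/2)} & 0 \\ 0 & 0 & -\frac{far+near}{far-near} \end{bmatrix}$$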

Which is the same as this, after swapping in all the already-known variables:
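Ie, written with the camera knobs we actually have:

$$\begin{bmatrix} \frac{2 \cdot focal}{haperture} & 0 & 0 \\ 0 & \frac{2 \cdot focal}{vaperture} & 0 \\ 0 & 0 & -\frac{far+near}{far-near} \end{bmatrix}$$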

With this we have our first 3x3 Color Matrix solved!

Now, let’s solve the 3x1 matrix:

matrix5.png
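That's the remaining non-zero part of the projection matrix's last column, the depth term:

$$\begin{bmatrix} 0 \\ 0 \\ -\frac{2 \cdot far \cdot near}{far-near} \end{bmatrix}$$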

Now that we have the 2 matrices, how can we join them so they behave as a single matrix? Simple: put the values of the 3x1 into an Add operation after the 3x3 Color Matrix. Here's how that looks in Nuke:

screenshot_matricies.png

This means that in reality we're not using a 4x4 matrix! We're ignoring the last row of the projection matrix, so we end up using a 3x4 matrix instead:
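Ie, just the top three rows of the full projection matrix:

$$\begin{bmatrix} \frac{2 \cdot focal}{haperture} & 0 & 0 & 0 \\ 0 & \frac{2 \cdot focal}{vaperture} & 0 & 0 \\ 0 & 0 & -\frac{far+near}{far-near} & -\frac{2 \cdot far \cdot near}{far-near} \end{bmatrix}$$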

 

Now, the so-called 'w' component (more about this ahead) lies in the blue channel with the correct values right after the Color Matrix, not after the Add operation. This means every channel (R, G, B) needs to be divided by those values. Therefore we need to copy that blue channel so it's still available after the Add operation, then use it to divide the RGB values. To do the divide, we can use a simple expression node, like so:

blue_w_component.png
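A minimal sketch of that expression node, assuming you've used a Copy node to stash the pre-Add blue channel into the alpha:

r / a
g / a
b / a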

After this, as Matt pointed out before, we need to use this tweaked colour information for further tweaking: first normalising those values, ignoring the blue channel. We do this because we want to transform this into an STMap:

stmap_screenshot.png

After this, we can use an STMap node to drive an image to be remapped. So in a way this looks like a cheat: we're not really projecting, we're remapping something with an STMap driven by our camera values. The result is exactly the same as a projection (and that's how a 'real' projection works in CG anyway), but for us compositors it feels a bit different, as we're used to more direct ways of doing projections that mimic the real world.

We must not lose track of the reason we're doing all this work (remember, you could achieve something similar by projecting onto 3d geometry with a ScanlineRender node).

The big advantage here is the ability to do projections with just our PWorld pass, ie no geometry is needed. This is much lighter on the CPU, meaning you as the compositor get more freedom and control.

Think about this example: you have 50 creatures running through a forest, and you want to project leaf shadows onto them all without having that render from 3D. Traditionally you could do it by loading the geo for those 50 creatures and projecting onto them, but you can imagine how much that would slow Nuke down. With this approach you can do it, and play it back at the same speed as your footage!

After all those steps, the last thing to do is isolate the area being projected, as Matt covered earlier.

So, important things to remember in order to achieve this:

  • Hierarchy of Vector Spaces (World Space > Cam / Object Space > Projection Space)
  • Normalizing / Transforming Projection Matrix into STMaps
  • Isolate the area that is being projected

All these concepts can be compiled into a tool / gizmo that does all these calculations automatically based on the camera input (it serves as a projector). In this link you can see the tool I wrote that does exactly that: https://www.youtube.com/watch?v=IaJCSpM76V4

WHAT IS THE SO-CALLED 'w' COMPONENT AND WHY DO WE NEED TO DIVIDE BY IT?

To understand this part, we need to delve a bit deeper into the projection matrix concept and its intricacies.

So, by now we know that a projection matrix is a 4x4 matrix, and as we labelled the 3x3 matrix inputs x, y, z, we have to add a fourth component to define a 4x4 matrix, hence the w component (4x4 matrix components: x, y, z, w).

Let’s go back a step and recap how we got here. 

To get to Projection Space, we first need to transform World Space into Object Space (also known as PRef if we're really dealing with an object in space). An Object Space can be the same as Camera Space if we use our camera's 3D position and rotation as the inputs:

 wcomp01

This World Space vector is our initial PWorld. To go from World Space to Object/Camera Space, we multiplied every vector in World Space by the Object/Camera position matrix, meaning we did it for every single pixel in one operation, hence the ColorMatrix node (yes, every single pixel! Each pixel is an [x, y, z] vector). And although the ColorMatrix node does the correct type of multiplication, it's worth pointing out that matrix multiplication is not a standard element-by-element multiplication; search for 'matrix multiplication' if this is new to you.
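Written out, each transformed pixel is the dot product of the matrix rows with the input vector:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \begin{bmatrix} m_{00} & m_{01} & m_{02} \\ m_{10} & m_{11} & m_{12} \\ m_{20} & m_{21} & m_{22} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = \begin{bmatrix} m_{00} x_w + m_{01} y_w + m_{02} z_w \\ m_{10} x_w + m_{11} y_w + m_{12} z_w \\ m_{20} x_w + m_{21} y_w + m_{22} z_w \end{bmatrix}$$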

The result of this multiplication is another vector, now in our Object/Camera vector space:

wcomp02

Graphically, this is again just colours, but each pixel of that overall graphical representation is one of these Object/Camera vectors.

Next, we multiplied each and every one of these new vectors by our projection matrix, constructed from our camera's components.

At that time, we ignored its last row, so instead of considering it as a true 4x4 matrix we ended up with a 3x4 matrix. We did this not only because it was irrelevant for our purposes at the time, but also because we were not adding a fourth component to the vector being multiplied by this matrix. This decision did not affect the functionality of the system we were after, but mathematically speaking it lacks precision. For the type of functionality we wanted at the time, though, the lack of this component is more than OK.

For the sake of continuing with this explanation, let's consider for now the inclusion of a fourth, neutral row (search for 'identity matrix' for more on this), so the matrix becomes this:
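Something like this, with the bottom row left neutral:

$$P = \begin{bmatrix} p_{00} & 0 & 0 & 0 \\ 0 & p_{11} & 0 & 0 \\ 0 & 0 & p_{22} & p_{23} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$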

 

Although this last row does not much influence the result of this matrix (for the PositionProjector tool example), we do need to consider it now that we want to transform a 3D point into a projected 2D point. We need something to flatten a 3D object into a 2D version of it that lives on our screen, meaning a 2-dimensional reality: x and y. That is when this w component needs to be considered!

Therefore, to make it possible to multiply these 2 matrices (Object/Camera vector by projection matrix), we also need to add a fourth component to every Object/Camera vector, and we set its value to 1:

wcomp04

 So, this operation goes like this:

wpos pm x ppm pv
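Written out, the projected vector (including its new w value) is:

$$\begin{bmatrix} x_p \\ y_p \\ z_p \\ w_p \end{bmatrix} = P \cdot \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$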

Again, even though this is written as a common multiplication, matrix multiplication is not done the same 'standard' way. For more on this, search for 'matrix multiplication'.

This 'projected vector' does not yet have any real correspondence with the screen space we're after. Remember that we're after a flattened 2D image of a 3D object; this matrix is not yet what we're looking for, but we're very close!

Since we're looking for the 2-dimensional object that is a projection converted to Screen Space, this is where the division by the so-called w component comes in! Like so:

wcomp06
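Ie, divide the projected x and y by the projected w:

$$x_{screen} = \frac{x_p}{w_p} \qquad y_{screen} = \frac{y_p}{w_p}$$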

We finally have what we need! This gives us the x and y coordinates on our screen that make up a flattened 2D representation of our 3D object.

Now, back to Nuke! 

Because we're using colour to transform between different spaces, we're always limited to 3 components: R, G, B. Therefore we will continue to 'ignore' the true w component in any transformation, and use only these 3 dimensions. The w 'becomes' z, which is our blue channel in the graphical representation. Remember: x = red, y = green, z = blue.

So, after we got to our projection matrix colour representation (ColorMatrix1), we divided it by its z (blue) component:

 wcomp07

To know more about this operation, search for ‘Perspective Divide’. 

It's worth mentioning again that, mathematically speaking, to be completely accurate we would need to consider the true w component. For what the PositionProjector tool is intended, though, this is irrelevant, as the tool is used for creative purposes; it gives you more than enough precision for its intention, and you wouldn't notice much difference in that context.

There are, however, situations in which we do need that precision when applying the same concept. In the next chapter I'll give you an example.

SAME CONCEPT TO ACHIEVE ANOTHER COOL AND VERY USEFUL THING

As mentioned in the beginning, it's paramount that we understand these concepts well, not only to build a tool like the PositionProjector, but also to be able to troubleshoot it. Equally important, if not more so, is to understand all of this well enough to come up with new and/or better ideas that use the same concepts, applied differently and/or in conjunction with other bits and pieces.

As we saw above, we now know how to transform a point in 3D space, with 3 coordinates (x, y, z), into a flattened 2D image (x, y). These are arguably the fundamentals of all computer graphics: translating a 3D object into an equivalent 2D representation of that same object.

So, why not use this knowledge to transform a 3D system, like a projection, stabilisation or match-move, into an equivalent 2D system like a 2D tracker? The advantage of doing that is the lightness of the 2D solution in comparison with the 3D one! That sounds cool, especially when we have many projections in our comp script.

Well, that's actually what the Reconcile3D node does! To get the same result as a projection, we would need a CornerPin node (translation, rotation, scale and shear), with each of its 4 points calculated one by one with a Reconcile3D node.

However, Reconcile3D does this operation point by point, and it has to calculate it 'live', one frame at a time.

That's an option, but not a very good one, as it's a very slow process for our intention. It's not worth the time or effort if we need to do it like this for every instance!

Luckily for us, Nuke has a python maths library we can take advantage of to overcome the speed issue and, at the same time, be super precise. To be this precise, there are other coordinate spaces we need to take into consideration, such as Normalized Device Coordinate Space (NDC) and Raster Coordinate Space. Fortunately, the python maths library and the nukescripts module take care of these too!

So, using the same concepts as explained above, with the inclusion of the python maths module, some coding and some imagination, we can write a tool that achieves exactly that! We can instantly transform any 3D position into its equivalent 2D representation. This has a significant impact on how quickly we can extract 2D tracks from ANY 3D point in our scene. And we can do it at lightning speed! Have a look: https://youtu.be/SQXhiaaxBnw
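As a sketch of the core idea (not the actual tool; it leans on the snap3d helpers that ship with Nuke's nukescripts module, so double-check the function name against the nukescripts/snap3d.py in your version):

import nuke
import nukescripts.snap3d

# Project one 3d point through a camera to get 2d pixel coordinates.
# cameraProjectionMatrix() builds the combined world -> screen matrix,
# taking care of the NDC and raster space conversions along the way.
camera = nuke.toNode('Camera1')              # assumed node name
world_point = nuke.math.Vector4(0, 1, 0, 1)  # a 3d point, with w set to 1

matrix = nukescripts.snap3d.cameraProjectionMatrix(camera)
projected = matrix * world_point

# the perspective divide, exactly as described above
x_2d = projected.x / projected.w
y_2d = projected.y / projected.w
print(x_2d, y_2d)

Sample this every frame, and you've turned a 3d point into a 2d track.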

 

 

 

 

