Expression Node 101
Introduction
First steps
Create a default constant node, and append an expression node. In its default state it does nothing at all. You can see it has fields for 4 channels at the bottom (rgba), and space for some basic variables at the top; we'll get to those later.
So, simple expressions to start with. Type '1' into the first channel expression slot. The checkboxes above it say which channel to operate on; by default it's just the red channel. Hit enter, and you'll see the entire image go red.
A gradient
This time, type in 'x' and hit enter. Depending on your viewer LUT you'll either still see red, or an overexposed red. Hover your cursor in the viewer and look at the colour values. You can see that we're mapping the x-coordinate of each pixel directly into the red channel.
x / width

Dividing by the width remaps the gradient into the 0-1 range. The same idea works vertically:

y / height

1 - (y / height)

The last one uses the invert trick (one minus the value) to flip the gradient.
Lines
sin(x)

sin oscillates between -1 and 1 as x increases, giving vertical stripes. Adding 1 shifts the range to 0-2, and dividing by 2 normalizes it to 0-1:

sin(x) + 1

( sin(x) + 1 ) / 2

Dividing x before it goes into sin stretches the stripes out:

( sin(x / 4 ) + 1 ) / 2
Radial Gradient
sqrt(x * x + y * y )

Pythagoras: this is the distance of each pixel from the bottom-left corner (the origin). Subtracting it from 300 inverts it within a 300 pixel radius, and dividing by 300 normalizes the result to 0-1:

300 - sqrt(x * x + y * y )

(300 - sqrt( x * x + y * y ) ) / 300
Radial Rings
sin(sqrt(x * x + y * y ))

Running the distance through sin turns the gradient into concentric rings; dividing the distance first spreads them out:

sin(sqrt(x * x + y * y ) / 4)
Better user controls
sin(sqrt((x-300) * (x-300) + (y-50) * (y-50) ) / 4)

Subtracting fixed amounts from x and y moves the center of the rings, here to (300,50). Hardcoded numbers aren't user friendly though, so add a 2d position knob named 'center' to the node and reference it instead:

sin(sqrt((x-center.x) * (x-center.x) + (y-center.y) * (y-center.y) ) / 4)

The hypot function does the square/sum/square-root dance for us, tidying things up, and a float slider named 'size' replaces the hardcoded ring spacing:

sin(hypot(x-center.x, y-center.y ) / 4)

sin(hypot(x-center.x, y-center.y ) / size)
Radial Rays
We can treat opposite and adjacent as x and y, and use inverse tangent (often called arctangent) to get the angle. Plugging those numbers in:
atan2(x - center.x, y - center.y)

The raw angle runs from -3.14 to 3.14 (-pi to pi), so adding pi and dividing by 2*pi remaps it into a 0-1 sweep:

(atan2(x - center.x, y - center.y) + 3.14 ) / 6.28

Feeding the angle into sin instead gives alternating rays:

sin( atan2(x - center.x, y - center.y) )

Multiplying the angle by a 'size' slider controls the number of rays, and an 'offset' slider rotates them:

sin( atan2(x - center.x, y - center.y) * size)

sin( ( atan2(x - center.x, y - center.y) + offset) * size)
P_mattes
sqrt(r*r+g*g+b*b)

This is the same distance trick as before, but measured in 3d using the red/green/blue of the P pass, i.e. the distance of each point from the scene origin. Invert it, then multiply by the alpha to clean up the background:

1- sqrt(r*r+g*g+b*b)

( 1 - sqrt(r*r+g*g+b*b) ) * a
To move the matte around, we need a center position in 3d space. Two ways to get one:
- A colour picker on the P-world image
- A 3d locator in 3d space.
Let's try the colour picker one first.
(1- sqrt( (r-center.r)*(r-center.r)+(g-center.g)*(g-center.g)+(b-center.b)*(b-center.b))) * a

Here 'center' is an rgb colour knob, picked from the P-world image itself. That expression is getting unwieldy, so move the distance calculation into one of the variable slots at the top of the node and name it 'dist':

sqrt( (r-center.r)*(r-center.r)+(g-center.g)*(g-center.g)+(b-center.b)*(b-center.b))

which leaves a much more readable channel expression:

(1-dist ) * a
P_matte rings
sin(dist) * a

Same rings trick as in 2d; a float slider named 'ringScale' controls their spacing:

sin(dist * ringScale) * a
Rays are done with a similar trick to before, but need a bit of thought. First, atan2 takes 2 values, but we're in 3d here. We need to pick which 2 channels from our P pass we'll use to define the rays. I want them to appear on the X,Z plane, which means I'll feed atan2 the red (x) and blue (z) channels. Because we don't specify the green channel at all, it'll basically project this pattern through everything from the top down. Add similar controls as before to drive the position offset and number of rays, and you get this:
sin( atan2(r-center.r, b-center.b) * rays ) * a
Using red and green will make it project front-to-back, and using green and blue will make it project left-to-right:
P noise
The nuke expression node has lots of built-in functions, including several noise calls. Because these noise functions can take a 3d input, if you feed them the values from a P pass, you'll get 3d noise:
noise(r,g,b)
Voxelly P
What if you wanted a pixelly or blocky P pass? What we currently have are smoothly transitioning values; to make them stepped we need to reduce that smoothness. One way to do that is to truncate any numbers after the decimal point, so 1.25 becomes 1, 4.6 becomes 4, etc. We'll need to do this to each channel separately, so for the first time we'll use the other slots in the expression node so we can treat the red/green/blue in isolation:
trunc(r)
trunc(g)
trunc(b)
Note in the colour picker below, my cursor was over the top of the cylinder, and it's returning a value of (1,2,-1).
To adjust the size of the blocks, add a float slider called 'scale'. What we'll do is multiply each channel by the scale, truncate after the decimal point, then divide by the scale so we're back where we started, but now with less numerical precision.
trunc(r*scale)/scale
trunc(g*scale)/scale
trunc(b*scale)/scale
Why would you do this? Well, if you append another expression node, and make the second one use the noise example from earlier, you'll get blocky noise:
In fact, feed this into any of the previous examples, and you'll get an adjustable blocky version of that effect.
Dealing with translation, rotation, scale with music videos
That rays example from earlier leads to a question: what if we don't want the rays perpendicular to an xyz axis, but off at some random angle? And what if we want to do a non-linear scale on those rays?
We've been able to handle translation offsets by subtracting the offset we need from each component in the expression. In theory we could expand on the expression to do an offset scale in each axis, and eventually an offset rotation.
But wait, there's an easier way!
To deal with translation offset, steal this trick from Jonathan Glazer: Don't make a complex expression, just move your set and camera together: http://www.youtube.com/watch?v=4JkIs37a2JE
To deal with rotation offset, steal this trick from Lionel Richie, rotate the set and camera together: http://www.youtube.com/watch?v=OdQDXs75Ulo#t=1m40s
To deal with scale offset, steal this trick from Michel Gondry, and scale the set and camera (well, the image anyway). https://www.youtube.com/watch?v=ANLBu-U8KzE
Going back to a pmatte at the origin with no offset:
(1 - sqrt( r*r + g*g + b*b ) ) * a
Let's simulate sliding the camera and set together. Insert a (Maths) Add node above the expression, and alter its values while viewing the expression. You'll see the pmatte starts to move around:
Insert a (Maths) Multiply node, and play with the slider to set an overall scale:
or hit the '4' button and play with values to get a non-linear scale:
Enter the (C44) Matrix
Note that I've avoided how to deal with rotation twice now, once with expressions, once with nodes. The reason is that it's easier to deal with rotations (in fact translation, scale AND rotation) using matrices.
Add an axis to your scene, look at its properties, and open up the world matrix section:
Type in some translation values, you'll see that the world matrix puts those values into the last column.
Type in some scale values, the world matrix puts those into the first 3 values of the diagonal from top-left to bottom-right.
Type in some rotation values, the world matrix puts a combination of values into the first 3x3 cells.
The world matrix is a transformation matrix. It's a standard way to pack translate/rotate/scale values, and makes the task of manipulating 3d space quick and straightforward. What we need is a transformation matrix node we can apply to our P pass, so we can do the full translate/rotate/scale change in one hit.
Ivan Busquets has provided exactly that! Chances are your workplace has already installed the C44 matrix node, if not, get it here: http://www.nukepedia.com/plugins/colour/c44matrix
As he describes it:
'The main goal of C44Matrix is to make it easier for users to perform transformations on pixels containing position data, such as a world position pass. From arbitrary transformations, to converting between different coordinate systems (world space, camera space, NDC, etc), C44Matrix should make things a little easier by not having to resort to complex expressions and multiple nodes to apply a 4x4 matrix to pixels.'
Sounds like what we need! Clear any nodes between your shuffle and expression, put a C44Matrix there instead, put a '1' into every cell on the diagonal, click 'transpose', and view the rgb:
Setting the diagonal cells to 1 is the same as setting scale to 1, and transpose means it'll behave the same as the axis world matrix. If you type numbers into the last column, you'll see the P pass translate around. Put numbers into the first 3x3 cells, you'll see it rotate and scale. While interesting, this isn't user friendly. We'll use a handy feature of the C44 matrix that lets you drive its values from a camera.
Change the matrix input to 'from camera input', turn invert on, transpose off, create a camera, and attach it to the C44Matrix. Now if you translate/rotate/scale the camera, the P colours will be adjusted to match, meaning our p_matte gradient will also match.
P-world to P-object
You might have spotted that the above trick can be used to make a P-object pass. A P-object pass is useful in that you can p_matte on a rendered object, and the p_matte sticks to it as the object moves around. Usually you'd render this out of your 3d package, but if you only have a p-world pass and not much time, you just need a transform of the moving object. You would parent a null to the object in maya, export that locator to nuke, parent a camera beneath that locator, and use the camera as the input to your C44 matrix.
Doing the above trick, without a C44 matrix
It's possible but messy.
Nuke has a 3x3 ColorMatrix node, which you can use to do the scale and rotation offset, plus an Add node like before to do the translation offset. There's no easy way to drive the values from an input like the C44Matrix node, so you need to expression-link each cell to the world matrix of your axis or camera, click the 'invert' button on the ColorMatrix to make it apply the rotation and scale in the right direction, and put a minus (-) in front of the values in the Add node to make it match your input. A C44 is easier. :)
A box P matte
Ok, let's get back to expressions! The P_matte gizmo can function as a sphere or box; let's work out how to emulate the box feature.
Reset your P pass so it's at the center, and uniform scale it up by 4; it'll make it easier to see what's going on.
Let's work out what we need in one axis first, say the red (x) channel.
So, we have values increasing positively to the right, and negatively to the left. To make them both positive, we can take the absolute value, or abs:
abs(r)
do our usual invert trick:
1 - abs(r)
The invert lets the numbers go negative beyond the area we're interested in, so we'll clamp the result to be between 0 and 1:
clamp( 1 - abs(r) )
We can do the same thing in the green and blue channels, and multiply those 3 results together. Areas with a value of 0 will multiply to 0, areas with 1 will multiply to 1, and other areas will merge together. We'll get an intersection of the r,g,b regions, meaning we get a box:
clamp(1 - abs(r)) * clamp(1 - abs(g)) * clamp(1 - abs(b))
A box! Well, sort of. Yes, it's defined a box, but it has a noticeable cross in the middle of it which is pretty ugly.
Let's go back to the single axis, and see what needs to be done. Looking at it, it's the linear falloff that's causing the artifact. Ideally we'd smooth that off into a rounded curve.
The smoothstep function does this. You tell it what defines the start and end point, and it'll try and draw a smooth S curve between those values when given a linear input:
So

clamp( 1 - abs(r) )

becomes

smoothstep(0,1, clamp(1-abs(r)) )

Much nicer! Let's apply this to all 3 channels, and multiply them together:
smoothstep(0,1, clamp(1-abs(r)) ) * smoothstep(0,1, clamp(1-abs(g)) ) * smoothstep(0,1, clamp(1-abs(b)) )
Smooth! Too smooth? Maybe we should push the smooth out to the edges a little, get more of the boxy shape back. To do this, rather than smooth between 0 and 1, smooth between 0 and 0.2, pushing it to the edge of the box:
smoothstep(0,0.2, clamp(1-abs(r)) ) * smoothstep(0,0.2, clamp(1-abs(g)) ) * smoothstep(0,0.2, clamp(1-abs(b)) )
Or better, replace the start and end values with floating point sliders, so we have easy control over the softness of the box (I've made 2 sliders named 'start' and 'end'):
smoothstep(start,end, clamp(1-abs(r)) ) * smoothstep(start,end, clamp(1-abs(g)) ) * smoothstep(start,end, clamp(1-abs(b)) )
Moving/rotating/scaling the camera input to the C44matrix shows we can move this wherever we want.
Next steps, final thoughts
The full list of functions you can call is in the Nuke user guide, 'Adding Mathematical Functions to Expressions', page 528. Lots of things to try out and explore.
Bonus round: P project
Someone asked, Ivan was kind enough to explain...
The C44Matrix node has a 'project' mode, which, as the name implies, will transform your P pass into the projected space of the camera. Another way to think about it: it takes the input P pass and camera, turns the camera into a video projector, and shines a UV map onto your scene. You can then feed this result into an STMap node. There are some small extra things we need to do to make this work, which I'll cover along the way.
Here's our original clean P-pass, I've added a cube to make checking the alignment easier.
I've created a camera which I'll use for my projection, set its horizontal and vertical aperture to 24 so it's a square format, focal length 50, and moved it so it's framing exactly onto the front of the cube.
I then add a C44Matrix node, set matrix input to 'from camera input', matrix type 'transform', connect the camera, invert enabled. The P pass is adjusted so (0,0,0) is at the center of the camera:
We'll append a second C44 matrix node, matrix input 'from camera input', matrix type 'projection', invert OFF, w_divide ON. Connect the projection camera. We'll also put a shuffle node before it, setting the alpha to white.
Why all this stuff? Previously when using the C44 node we've not used the last row of the matrix; a 3x4 is all we need to define translate/rotate/scale. Now that we're doing projections, we need a way to store the projection information (the relationship of the focal length to aperture to window scale). This information is stored in that last row of the transformation matrix, called the w component. When w_divide is ON, it will do the perspective transform. With it off, it behaves like an orthographic camera.
Also, the C44 assumes the incoming P pass has stored the w component values in its alpha, so to make sure it's correct we'll force it to white with a shuffle node.
The result looks like a warped camera position pass, because that's basically what it is:
Now we're basically looking at a projected UV pass, so we're only concerned with the red and green channels. It goes from (-1,-1) in the lower left of the projection to (1,1) in the top right. If we use this with an STMap, it needs to be between (0,0) and (1,1). How to remap those values? With an expression node, of course!
(r+1 )/ 2
(g+1 )/ 2
0
That's more like a UV map I recognise!
All we need to do now is get an image to project, append a reformat with 'black outside' enabled to clip off the border, and feed that and our uv image to an STMap:
And that's it! Now you can move the camera, change its focal length, and see the projection update. Here's the finished node network:
But is it finished? One thing that bugs me here is that the projection appears on the back-faces too. Can we limit it to just stuff that faces the camera?
Limit the projection to stuff that faces the camera
The visual analogy here is that we'll stick a light where the camera is, generate a simple white lighting pass, and multiply that against the projection we've just made.
We'll need a normals pass, so make sure your scanline render is outputting surface normal, in this case going to the 'n' channel.
The simplest way to describe a lighting pass is to compare the normal at a point to the direction of a light. If they face exactly opposite, then the surface is facing directly at the light, and should get full brightness. If they face the same direction, then the surface is on the opposite side, and should be black. Values in between should get, well, values in between.
There's a mathematical way to describe the relationship of 2 directions (or vectors) like this, called the dot product. For us, that means we take the rgb of the surface vector (the normal), convert the rotation of the light into an rgb value, multiply the matching r, g, b components together, and sum the results.
We already have the normals expressed as color in the n pass, we need to get the camera rotation (which we're treating as a light) as a colour too. Ie, we need to take the upper-left 3x3 of the world matrix of the camera, and convert it into an rgb value.
Luckily, this is easier than it seems. We'd need to express the rotation relative to the axis the camera looks down when its rotation is (0,0,0). For nuke cameras, that means the camera is pointing down the -z axis. That means we actually don't have to do any work at all; we just need the 3rd column of the world matrix (that column expresses scale and rotation relative to the z axis, just as we need it). If we scale the camera there might be issues, but let's ignore that for now.
To make this easy to visualise (and easy to make an expression for later), we'll feed these values directly into a constant. Make a constant, and ctrl-drag the top/middle/bottom matrix rotation values to the red/green/blue of the constant:
Now if you rotate your camera, you'll see the constant colour change. Right, let's do the dot product, which here boils down to a merge in multiply mode, then a desaturate to collapse the channels into pure grayscale (I've shuffled the n to rgb cos I'm lazy):
Now we can multiply this against our STMap, and get a cleaner P-projection. Note that it can't do shadows; that's a step too far for me and this tutorial. :)
Projections without the C44 plugin
by Pedro Andrade
Although the available C44Matrix does a great job converting between different coordinate spaces, it's a compiled plugin, which means it's not possible to look at the code to understand what's behind the curtain and hopefully give the artist / TD ideas to eventually come up with similar tools that use the same concepts.
I wanted to break in! I knew this had to be possible using Nuke's native nodes, so I reverse-engineered the tool to figure out what was going on.
First of all, this has to do with matrices, a mathematical concept I had some background in (I'm originally a mechanical engineer). But not only had I forgotten all of that, there were also some types of matrices that were completely new to me and paramount to understand, such as the projection matrix.
One of my main initial doubts was: how do you create a 4x4 Color Matrix when there's no such thing in Nuke?! There are 4x4 convolution matrices, but those have nothing to do with colour, and I knew colour manipulation was what I was looking for, as position or normal passes are nothing but colour representations (R, G, B) of vector spaces, and those were what needed to be modified somehow. Also, whatever matrix I was looking for had to deal with camera components like focal length and aperture, as well as position in space.
So I went down the hard path of research, and after many hours I found a couple of articles that explained this stuff in detail (see the references at the end). After reading them all, I had what I was looking for, with a very good explanation of each of its components.
First of all, there's a hierarchy of coordinate spaces that we need to follow until we get to the projection matrix, and even after that, to use it for something useful, there are further coordinate spaces that we need to dig out.
The hierarchy is the following: Object Space > World Space > Camera Space > Projection Space.
The first 3 coordinate systems were already covered here by Matt, which means that, following the chart, Camera Space is our starting point to dig into the projection matrix. It's important to remember that Camera Space uses the same concepts as Object Space, which is commonly known as a PRef pass. The only difference is that instead of using an axis / locator, we use the camera as if it were an axis (it's the position in space that matters). Going back to the chart, to get to Camera Space we primarily need the World Space.
So, the projection matrix is indeed a 4x4 matrix, and it's written like this (you can find how each component is derived by looking at the docs):
This can be re-written to make the aperture component more clear:
So, taking the last matrix as our main one, let's first split it into a 3x3 matrix (we'll ignore its last column and last row for now). By looking at it, and ignoring the ar component from the previous matrix, we have concepts we can relate to camera components, like FOV and the far/near clipping planes. That's good! So let's figure out the FOV.
The FOV of a camera is an angle and it can be calculated like so:
This can be split into its horizontal and vertical components:
As it’s not possible to do a 4x4 Color Matrix in Nuke, I split it into 2:
- one 3x3 Color Matrix
- one 3x1 Color Matrix
Let’s focus for now on the first type. With some math manipulation I came up with the following:
Which is the same as this, by swapping all the already known variables:
With this we have our first 3x3 Color Matrix solved!
Now, let’s solve the 3x1 matrix:
Now that we have the 2 different matrices, how can we join them so they behave as a single matrix? Simply, put the values of the 3x1 into an Add operation after the 3x3 matrix. Here's how that looks in Nuke:
This means that in reality, we’re not using a 4x4 matrix! We're ignoring the last row of the projection matrix, so we end up using a 3x4 matrix instead:
Now, the so-called 'w' component (more about this ahead) lies in the blue channel with the correct values right after the Color Matrix, not after the Add operation. This means every channel (R, G, B) needs to be divided by those values. Therefore we need to copy that blue channel so it's still available after the Add operation, then use it to divide the RGB values. To divide them, we can use a simple expression node, like so:
After this, as Matt pointed out before, we need to use this tweaked colour information for further tweaking: first normalizing those values, ignoring the blue channel. We do this because we want to turn the result into an STMap:
After this, we can use an STMap node to drive an image to be remapped. So in a way this looks like a cheat, as we're not really projecting; instead, we are remapping something with an STMap driven by our camera values. The result is exactly the same as a projection (and that's how a 'real' projection works in CG anyway), but for us compositors it feels a bit different, as we have more direct ways to do projections that mimic the real world.
We must not lose track of the reason we're doing all this work (remember, you could achieve something similar by projecting onto 3d geometry and using a scanline render node).
The big advantage here is the ability to do projections with just our PWorld pass, i.e. no geometry is needed. This is much lighter on the CPU, meaning you as the compositor get more freedom and control.
Think about this example: you have 50 creatures running in a forest and you want to project leaves' shadows onto them all without having that render from 3D. Traditionally you'd load the geo for those 50 creatures and project onto them, but you can imagine that would slow Nuke down substantially. With this approach you can do it and play it back at the same speed as your footage!
After all those steps, the last thing to do is to isolate the area that is being projected, as Matt already covered earlier.
So, important things to remember in order to achieve this:
- Hierarchy of Vector Spaces (World Space > Cam / Object Space > Projection Space)
- Normalizing / Transforming Projection Matrix into STMaps
- Isolate area that is being projected
All these concepts can be compiled into a tool / gizmo that does all these calculations automatically based on the camera input (it serves as a projector). In this link you can see the tool that I wrote that does exactly that: https://www.youtube.com/watch?v=IaJCSpM76V4
WHAT IS THE SO-CALLED ‘w’ COMPONENT AND WHY DO WE NEED TO DIVIDE BY IT?
To understand this part, we need to delve a bit deeper on the projection matrix concept and its intricacies.
So, by now we know that a projection matrix is a 4x4 matrix, and as we labelled the 3x3 matrix inputs x, y, z, we'll have to add a fourth component to define a 4x4 matrix, hence the w component (4x4 matrix components: x, y, z, w).
Let’s go back a step and recap how we got here.
To achieve Projection Space, we first need to transform World Space into Object Space (also known as PRef if we're really dealing with an object in space). Object Space can be the same as Camera Space if we use our camera's 3D position and rotation as the inputs:
This World Space vector is our initial PWorld. To go from World Space to Object/Camera Space, we multiplied every vector in the World Space by the Object/Camera position matrix, meaning we did it for every single pixel in one operation, hence the ColorMatrix node (yes, every single pixel! Each pixel is an [x,y,z] vector). The ColorMatrix node does the correct type of multiplication, but it's worth pointing out that matrix multiplication is not a standard componentwise multiplication - search for 'matrix multiplication'.
The result of this multiplication is another vector that is now in our Object/Camera Vector Space:
Graphically, this is again just colors, but each pixel of that overall graphical representation is one of these Object/Camera vectors.
Next, we multiplied each and every one of these new vectors by our projection matrix, constructed from our camera's components.
At that time we ignored its last row, so instead of considering it as a true 4x4 matrix we ended up with a 3x4 matrix. We did this not only because it was irrelevant for our purposes at the time, but also because we were not adding a fourth component to the vector being multiplied by this matrix. This decision did not affect the functionality of the system we were after, but mathematically speaking it lacks precision. As just mentioned though, for the type of functionality we were after at the time, the lack of this component is more than OK.
For the sake of continuing with this explanation, let's consider for now the inclusion of a fourth component 'w' as neutral (search for 'identity matrix' for more on this), so it becomes this:
Although this last row doesn't much influence the result of this matrix (for the PositionProjector tool example), we need to consider it now that we want to transform a 3D point into a projected 2D point. We need to somehow flatten a 3D object into a 2D version of it that will live on our screen, meaning in a 2D reality: x and y. That is when this w component needs to be considered!
Therefore, to make it possible to multiply these 2 matrices (Object/Camera vector by projection matrix), we also need to add a fourth component to every Object/Camera vector, and we consider its value to be 1:
So, this operation goes like this:
Again, even though this is written as a common multiplication, matrix multiplication is not done in the same 'standard' way. For more on this, search for 'matrix multiplication'.
This 'projected vector' does not yet have any real relationship with the screen space we're after. Remember that we want a flattened 2D image of a 3D object; this matrix is not yet what we're looking for, but we're very close to achieving it!
Since we're looking for the 2-dimensional object that is a projection converted to Screen Space, this is where the division by the so-called w component comes in! Like so:
We finally have what we need! This gives us the x and y coordinates on our screen that make up a flattened 2D representation of our 3D object.
Now, back to Nuke!
Because we're using colour as the method to transform between different spaces, we're always limited to 3 components: R, G, B. Therefore we continue to 'ignore' the true w component in any transformation and use only these 3 dimensions. The w 'becomes' z, which is the blue channel in our graphical representation. Remember: x = Red, y = Green, z = Blue.
So, after we got to our projection matrix colour representation (ColorMatrix1), we then divided it by its z (blue) component:
To know more about this operation, search for ‘Perspective Divide’.
It's worth mentioning again that, mathematically speaking, to be completely accurate we would need to consider the true w component. However, for what the PositionProjector tool is intended, this becomes irrelevant, as the usage of this tool is for creative purposes. The tool will give you more than enough precision for its intention; you wouldn't notice much difference in that context.
There are, however, situations in which we do need to be that precise when applying the same concept. In the next chapter I'll give you an example of that.
SAME CONCEPT TO ACHIEVE ANOTHER COOL AND VERY USEFUL THING
As mentioned in the beginning, it’s paramount that we understand these concepts well, not only to build a tool like the PositionProjector, but also to be able to troubleshoot it. Equally important, if not more, is to really understand all of this to be able to come up with possibly new and/or better ideas that can use the same concepts, but possibly applied differently and/or in conjunction with other bits and pieces.
As we saw above, we now know how to transform a point in 3D space, with 3 coordinates (x, y, z) into a flattened 2D image (x, y). These are arguably the fundamentals of all computer graphics: translate a 3D object to a 2D equivalent representation of that same object.
So, why not use this knowledge to transform a 3D system, like a projection, stabilization or match-move, into an equivalent 2D system like a 2D tracker? The advantage of doing that is how lightweight the 2D solution is compared with the 3D one! That sounds cool, especially when we have many projections in our comp script.
Well, that's actually what the Reconcile3D node does! To get the same result as a projection, we would need to use a CornerPin node (translation, rotation, scale and shear), whose 4 points would each have to be calculated with a Reconcile3D node.
The Reconcile3D does this operation point by point, though, and it has to calculate it 'live' one frame at a time. That's an option, but not a very good one, as it's a very slow process for our intention. It's not worth the time or effort if we need to do it like this for every instance!
Luckily for us, Nuke has a Python math module we can take advantage of to overcome the speed issue and, at the same time, be super precise. To be this precise, there are other coordinate spaces we need to take into consideration, such as Normalized Device Coordinate space (NDC) and Raster Coordinate space. Fortunately, the Python math module and the nukescripts module will take care of this too!
So, using the same concepts as explained above, with the inclusion of the Python math module, some coding and some imagination, we can write a tool that achieves exactly that! We can instantly transform any 3D position into its equivalent 2D representation. This has a significant impact on how quickly we can extract 2D tracks out of ANY 3D point in our scene. And we can do it at lightning speed! Have a look: https://youtu.be/SQXhiaaxBnw
References
- http://www.codinglabs.net/article_world_view_projection_matrix.aspx
- http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html
- http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
- https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
- http://www.nukepedia.com/written-tutorials/using-the-nukemath-python-module-to-do-vector-and-matrix-operations/all