Motion Vector Driven Occlusion

November 14th, 2006 by Guillaume Laforge

Stefano Jannuzzo’s article on Adaptive Occlusion gave me an idea: to control occlusion sampling based on motion samples.

Occlusion techniques can be really fast nowadays for static objects. You can bake it or use Final Gathering occlusion and store it in a Final Gathering Map for example. But with moving objects, those techniques can’t be used.

For objects with a lot of geometric detail (from a displacement map, for example), computing self-occlusion can be very slow. A solution is to lower the occlusion sampling parameter. While the object is moving you can motion blur it, so you won't notice the low-sampled occlusion, but as soon as it stops, a lot of noise due to low sampling will appear.

The idea behind the following technique is to use a “Min Level Occlusion Samples” and a “Max Level Occlusion Samples” with a threshold to switch between min and max samples (like the sampling parameters in the render options). The threshold is associated with the “intensity of the motion”: under a given intensity, only “Min Level Occlusion” will be used, and vice versa. This way, the faster the object moves, the faster the render should be, and when the object stops moving, no noise should appear.

The Motion Vector Driven Occlusion Render Tree

The key node will be the Motion Vector shader, so turn on motion blur in your render options and set “Shutter close” and “Shutter open” to zero. This way render times will not change, but Mental Ray will still return motion vectors to our shaders.

The idea is to use the motion vector length value to define the motion intensity.

The white parts are moving faster and don’t need high occlusion sampling. You control the intensity of the effect with the “Max_Vector_Length” node.
From here, we could use a Mix2Color node to mix low and high sampled ambient occlusion. It would produce a smooth transition but the two occlusion shaders would have to be calculated and render times would be worse than with a simple occlusion node!

We must use a Color_Math_Logic node so that only one occlusion shader is computed per sample, depending on whether a threshold color is greater than the “motion vector color”.

In this example an RGB threshold of 0.189 will output the fast-moving parts (the feet, forearms and head) in black. Those parts will use very low occlusion sampling, as they will be heavily motion blurred. The threshold behaves the opposite way to the one found in the render options: a low threshold will return more moving samples in black, and render times will be faster.
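Conceptually, the render tree boils down to one branch per shading sample. Here is a minimal sketch of that branch in Python, purely for illustration (the constants and function names are mine, not XSI or mental ray identifiers):

```python
# Illustrative sketch of the Color_Math_Logic branch: per shading sample,
# pick low or high occlusion sampling from the motion vector length.
# MAX_VECTOR_LENGTH, MIN_SAMPLES and MAX_SAMPLES are made-up example values.
import math

MAX_VECTOR_LENGTH = 2.0   # normalisation factor ("Max_Vector_Length" node)
THRESHOLD = 0.189         # the RGB threshold used in the article
MIN_SAMPLES = 4           # "Min Level Occlusion Samples"
MAX_SAMPLES = 64          # "Max Level Occlusion Samples"

def occlusion_samples(motion_vector):
    """Return the occlusion sample count for one shading sample."""
    # Normalised motion intensity, clamped to [0, 1].
    length = math.sqrt(sum(c * c for c in motion_vector))
    intensity = min(length / MAX_VECTOR_LENGTH, 1.0)
    # Fast-moving samples (above the threshold) get the cheap occlusion,
    # so only one of the two occlusion shaders is evaluated per sample.
    return MIN_SAMPLES if intensity > THRESHOLD else MAX_SAMPLES
```

A static sample (zero motion) falls below the threshold and gets the expensive, noise-free occlusion; a fast-moving one gets the cheap version that motion blur will hide anyway.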

We add the low and high occlusion shaders and that's it!

We could also use only one occlusion shader and drive its sampling parameter with our render tree. Unfortunately, in XSI's Ambient Occlusion shader, the “number of samples” parameter is not texturable (though it might be possible with a modified SPDL?). You can try with “mib_amb_occlusion”, as it supports a shader to control its Samples parameter.

You can download the preset with standard XSI nodes.

The Proof is in the Pudding

I tortured Jaiqua with a strong displacement map to test render time improvement. Motion blur is done in compositing with ReelSmart MotionBlur.

Here is an animation test.

This shader should be interesting when you need occlusion for fight animations, rigid-body explosions, fast camera motion, etc…
For a romantic scene, I’m not sure it will be really useful ;-)

I hope you find this article interesting.

Cheers.

6 Responses to “Motion Vector Driven Occlusion”

  1. Brilliant!
    May I just object that the threshold test should be performed somehow in raster space, since for instance a huge XYZ motion may result in a tiny raster motion. Too bad the standard vector transformation node does not include a “To Raster” option. Probably you could use the lm shader. Or, subtract from the motion vector its projected component along the ray (view) direction, before testing its length.

  2. Thanks Stefano.

    My first test was made with the lm2DMV shader (and also mib_amb_occlusion). I rebuilt it from scratch for the “public preset” with only built-in shaders, to be sure any reader could try it without installing other shaders. But you are right, it needs to be “rasterised”. I will try to post an update with the vectors projected onto the camera plane as soon as I find the time! It looks like a good render tree exercise :-).

  3. I also agree on the factory-nodes-only rule.

  4. Hi,

    Here is an updated “Motion Vector Driven Occlusion” preset :
    http://www.vol2nuit.fr/guillaume//articles/AO_MV/2dMVDO.Preset

    And the updated part of the render tree :
    http://www.vol2nuit.fr/guillaume//articles/AO_MV/2DvectorLength.gif

    The motion vector origin is now the camera. It is projected onto the camera plane to evaluate its length in 2D space.
    This way, the new preset will return a correct vector length even if the object moves very fast along the camera’s Z axis, for example.
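    The math behind this correction is just removing the motion vector's component along the view direction before measuring its length. A minimal sketch (illustrative Python, not an XSI shader; the function names are mine):

```python
# Illustrative sketch of the "2D" motion length: subtract the motion
# vector's component along the view (ray) direction, so motion straight
# toward or away from the camera no longer inflates the intensity.
import math

def vec_dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def projected_motion_length(motion, view_dir):
    """Length of the motion vector projected onto the camera plane."""
    # Normalise the view direction.
    n = math.sqrt(vec_dot(view_dir, view_dir))
    v = tuple(c / n for c in view_dir)
    # Remove the component parallel to the view ray.
    along = vec_dot(motion, v)
    planar = tuple(m - along * vc for m, vc in zip(motion, v))
    return math.sqrt(vec_dot(planar, planar))
```

    With this, an object rushing straight at the camera (motion parallel to the view ray) yields a near-zero length and keeps its high-quality occlusion, which is exactly the raster-space behaviour Stefano suggested.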

    Cheers

    Guillaume Laforge

  5. George R says:

    Where do I get mib_amb_occlusion? Google isn’t helping me out.