Fast Radiosity Using a Diffuse-Convolved Environment Map

March 21st, 2009 by Harry Bardak

What you need to reproduce these examples:

HDRShop plugin Diffuse_SH.


I worked for two years at Framestore-CFC in London, which means I used Maya and PRMan as my main applications. During those years I learned a lot of things, but I also heard a lot of false assumptions about rendering in general. The typical comment was that PRMan was more suitable for rendering complex characters and that mental ray was very slow for that kind of job. In fact, the people arguing this do not really know the status of modern renderers such as mental ray, V-Ray or any of the others; they assume everything has been frozen for the last five years, or they just repeat what the veterans keep repeating.

But I am not trying to start a new flame war between renderers. Actually, I don't care which one is better: today, in 2008, you can produce beautiful pictures with any renderer on the market (and yes, that includes Maya's software renderer, well known to be a piece of crap ;) ). What matters most is the people behind the scenes. But I need to put everything in its proper context to elaborate on, and maybe justify, what I will describe later.

Actually, mental ray is very fast if you do things correctly. If you don't do things correctly (massive spatial and/or temporal oversampling, for instance), it will be very slow, just like any other renderer. But I always get this "PRMan displaces faster" stuff. Of course it does… until you actually trace a ray.
You have to know that PRMan lives in a mindset where raytracing is so slow that you avoid it at all costs. The thing is that the minute PRMan shoots a ray, it too has to do all the things a raytracer does by nature. So the minute you shoot a single ray, PRMan has to do what mental ray always does… and the comparison suddenly isn't so much in favour of PRMan any more.

So people using PRMan developed a completely different approach to the same problem: lighting and rendering a scene while avoiding raytracing at all costs. That is what is interesting. What happens if I use these approaches with mental ray inside XSI? Maybe I can have a hybrid solution that keeps the best of both worlds to render my characters.


Diffuse Convolution of the Environment Map

What's that? To keep it simple, you can assume that a diffuse-convolved map is an envmap with a smart blur applied: the blur gives you the result of sampling the original envmap with an infinite number of Final Gathering rays on a perfect Lambertian surface. For more information you can refer to Paul Debevec's research (he is the father of HDR images).
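For intuition, here is a brute-force numpy sketch (my own illustration, not what the HDRShop plugin does internally, since that one works with spherical harmonics) of what the convolution computes: for every output direction, the cosine-weighted average of the radiance map over the hemisphere around that direction.

```python
import numpy as np

def latlong_dirs(h, w):
    """Unit direction for the centre of every texel of a lat-long map (Y-up)."""
    theta = (np.arange(h) + 0.5) / h * np.pi            # polar angle from +Y
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi        # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p),
                     np.cos(t),
                     np.sin(t) * np.sin(p)], axis=-1)

def diffuse_convolve(env, out_h, out_w):
    """Cosine-weighted hemisphere average of a lat-long radiance map.

    env: (H, W, 3) array. Returns an (out_h, out_w, 3) irradiance map,
    normalised so that a constant map stays constant."""
    h, w, _ = env.shape
    src = latlong_dirs(h, w).reshape(-1, 3)
    # Solid angle of each source texel: sin(theta) * dtheta * dphi.
    sin_t = np.sin((np.arange(h) + 0.5) / h * np.pi)
    dw = np.repeat(sin_t, w) * (np.pi / h) * (2.0 * np.pi / w)
    rad = env.reshape(-1, 3)
    out = np.zeros((out_h * out_w, 3))
    for i, n in enumerate(latlong_dirs(out_h, out_w).reshape(-1, 3)):
        cos = np.maximum(src @ n, 0.0)       # clamp to the upper hemisphere
        out[i] = (rad * (cos * dw)[:, None]).sum(axis=0) / np.pi
    return out.reshape(out_h, out_w, 3)
```

Even at tiny resolutions this brute-force version is slow, which is exactly why pre-computing the convolution once in HDRShop pays off.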

Having your envmap pre-blurred is a great advantage because you don't need to sample it several times to get the correct illumination: a single sample is enough. In fact you must cast only one ray; otherwise you will average the map twice and the illumination will no longer be correct.

The ray you cast must also point in the same direction as the surface normal. XSI's Ambient Occlusion shader, set up correctly, does the job perfectly: use one sample, a very small spread so the ray does not deviate from the normal (0.01), and set the mode to environment sampling. Obviously you could code a shader that does the correct environment lookup directly; I am using XSI's AO shader because it is available out of the box.
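At render time that single ray amounts to nothing more than one texture fetch along the normal. A minimal sketch, assuming a Y-up longitude/latitude map and nearest-neighbour filtering (the real environment shader does this inside the renderer):

```python
import numpy as np

def latlong_lookup(env, n):
    """One-sample irradiance: fetch a pre-convolved lat-long map in the
    direction of the unit surface normal n (Y-up convention assumed)."""
    h, w, _ = env.shape
    theta = np.arccos(np.clip(n[1], -1.0, 1.0))     # polar angle from +Y
    phi = np.arctan2(n[2], n[0]) % (2.0 * np.pi)    # azimuth in [0, 2*pi)
    row = min(int(theta / np.pi * h), h - 1)
    col = min(int(phi / (2.0 * np.pi) * w), w - 1)
    return env[row, col]
```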


Render balls: a simple test.

Brute-force env sampling vs FG vs diffuse convolution



The way I set up the scene is very simple. I wanted to get the contribution of the convolved envmap and only that, so there is no illumination model, just the different approaches.
The HDR used for this test is the one called beach.hdr, which you can find on Debevec's website. The map you download is an angular map, so you have to convert it to a spherical coordinate system. The easiest way is to use HDRShop to convert the angular map to a longitude/latitude map. Once the map was ready, I put it in my scene as an environment map.
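For the curious, the conversion HDRShop performs can be sketched as a resampler: for every lat-long texel, build the 3D direction, then map that direction into the angular map. This sketch assumes Debevec's angular-map convention (image centre looks down +Z, the rim is the direction behind the probe) and uses nearest-neighbour fetches where HDRShop filters properly:

```python
import numpy as np

def angular_to_latlong(probe, out_h, out_w):
    """Resample a square angular map into an (out_h, out_w) lat-long map."""
    size = probe.shape[0]
    out = np.zeros((out_h, out_w, probe.shape[2]))
    for row in range(out_h):
        theta = (row + 0.5) / out_h * np.pi
        for col in range(out_w):
            phi = (col + 0.5) / out_w * 2.0 * np.pi
            # Direction for this lat-long texel.
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            # Angular map: radius is proportional to the angle from +Z.
            denom = max(np.hypot(d[0], d[1]), 1e-9)
            r = np.arccos(np.clip(d[2], -1.0, 1.0)) / (np.pi * denom)
            u, v = d[0] * r, d[1] * r                # both in [-1, 1]
            px = min(int((u * 0.5 + 0.5) * size), size - 1)
            py = min(int((v * 0.5 + 0.5) * size), size - 1)
            out[row, col] = probe[py, px]
    return out
```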
The first sphere is the result of an AO shader set to environment sampling mode with a spread of 0.8. I had to crank the sampling up to 1024 to get a smooth solution. AO environment sampling is a brute-force approach: there is no importance sampling, which means you have to cast a lot of rays to get a smooth result, especially with a high-dynamic-range image. And obviously that is very slow; it is the slowest render.
With the second sphere I used FG to sample the environment. Same settings as before, except that I used a shader that returns irradiance and activated FG. FG is a lot faster than the previous method because it does not sample every point with 1024 rays; instead, FG samples a certain number of points and interpolates the result between them. For this comparison I pushed the number of rays to 4096 to make sure I had enough accuracy for a fair comparison with the quality of the next approach. Even with 4096 rays, it was still faster to render with FG.

With the last sphere I made a diffuse convolution in HDRShop (using the SH_diffuse plugin because it is a lot faster than HDRShop's built-in function), applied the resulting image as an environment map and used XSI's AO shader with the sampling set to 1 and the spread to 0.01.
The render is ultra fast and matches the FG solution. It is practically realtime, because the convolution was already calculated once in HDRShop.


Render balls too simple? Let's try something that's got balls.

We got an exact match with FG in a fraction of the time. We can conclude that the diffuse convolution is good enough to simulate FG. But in our test we were using a simple sphere. This is the ideal case, and we needed to check that the lookup was correct before moving on to something more serious.

I like the Buddha. I like him because he is a one-million-polygon model, comparable to any displaced model pushed out of ZBrush or Mudbox.
I simply applied what I did with the spheres.
The first render uses the diffuse-convolved envmap. It took approximately 50 seconds to render, a large part of which was spent preprocessing the scene. With a realtime shader I could get the same result (minus anti-aliasing) in realtime. Obviously no occlusion is calculated, because I asked only for an environment lookup around the surface normal.
The second image is what you get with FG, and the third one is a difference done in Shake to highlight the differences between the two renders.

As you can see, only the occlusion and the coloured FG bounce are missing. If you apply a simple occlusion pass on top, you end up with an image that is fairly close to the FG solution, but at a fraction of the time the full process involves.
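Compositing the occlusion on top is nothing more than a per-pixel multiply of the two passes in Shake or any other compositor. With made-up values for a single pixel:

```python
import numpy as np

# Hypothetical pass values for one pixel: RGB irradiance from the
# diffuse-convolved env lookup, and a scalar ambient occlusion pass.
irradiance = np.array([0.8, 0.6, 0.4])
occlusion = 0.5

# "Occlusion on top" is just a multiply.
final = irradiance * occlusion
```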


OK, but is it worth using in production?

Well, I would say yes and no. It depends on the number of shots you have to do and the time you have to complete them. This technique involves a bit of setup at the shading level, while using FG is straightforward. So if you are working on a movie that needs a 2K (or larger) render, then yes: the memory footprint is very low and it is damn fast to render, especially with very heavy objects such as displaced geometry or hair.
At the moment you need to set this up in the shader for every object. Ideally you would set it as a global ambience. Unfortunately you can't plug anything into the global ambience parameter. So the best option is to use a light that casts only ambient light. And for that you need to code it, so ask your favourite shader writer to do it.


Publications:

Ramamoorthi, R. and Hanrahan, P., "An Efficient Representation for Irradiance Environment Maps", SIGGRAPH 2001.


13 Responses to “Fast Radiosity Using a Diffuse-Convolved Environment Map”

  1. Ahmidou says:

    Happy to see the blog still alive, and thanks for the article Harry!

  2. rlenz says:

    thank you!


  3. Jason Dexter says:

    A few years ago I did a similar pass (I dubbed it a Probe Pass) for an indie film. It was based upon the same principle, using Muh's Dirtmap shader set to Sample Environment, but the subject moved through different lighting scenarios in each shot. We also did not have any appropriate (or at least adequate) HDR data for the shots, so I used what little we had, plus LUT'd film scans, to create a very simple 3D environment with approximately the correct LAB values in the correct spatial areas. I then used an animated UV'd sphere (the Probe) to move through the scene and lightmapped to file a highly blurred reflection (glossy samples) of this simple environment (to float). This resulted in an animated, close-to-convolved environment map that was applied to a separate pass using this technique. It worked like a charm, though the compositors were a bit baffled by this obscure pass at first.

  4. Fabio Leporelli says:

    I've used a very similar technique for a project a few years ago; it was super fast and the final result was pretty good.
    Unfortunately the setup I used didn't work with fur… but for everything else it's just perfect.
    And it's absolutely flicker-free :)

    Thanks a lot for the Article Harry

  5. Stefano says:

    There is an extra trick you can add to also save the occlusion ray. Instead of the occlusion shader, use a constant material with reflectivity set to Environment Only. Since the environment sample is taken along the direction specular to the incoming (eye) ray, you must then bend the normal so that it points halfway between the incoming ray and the original normal. This way, the reflected vector will match the direction of the original normal.
    To do so, in the rendertree take the eye vector, negate it, add it to the normal, normalize the result and plug the normalized vector into the Bump Map slot.
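Stefano's construction can be checked with a couple of lines of vector math (a sketch; eye here is the unit direction travelling from the camera towards the surface): the mirror reflection of the eye ray about the bent normal comes out along the original normal.

```python
import numpy as np

def bent_normal(eye, n):
    """Negate the eye direction, add the normal, normalise: a normal
    bent halfway between the original normal and the viewer."""
    b = n - eye
    return b / np.linalg.norm(b)

def reflect(i, n):
    """Mirror reflection of incident direction i about unit normal n."""
    return i - 2.0 * np.dot(i, n) * n
```

With this bent normal plugged into the bump slot, an Environment Only reflection samples the map along the true normal, which is exactly the lookup the technique needs.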

  6. M says:

    Very interesting article – Thanks for that.

    For some reason, I always get some black areas in my renders. If the object has a lot of detail, with concave areas or pieces sticking out, some areas remain completely black, probably because the single ray hits the object itself and not the environment. If I increase the spread from 0.01 to 0.5 it solves the problem, but then I also have to increase the samples, which defeats the whole point.

    I noticed in your buddha example, you don’t seem to have this problem. Am I missing something?

    Stefano, would using your method solve the problem? However I’m not sure which nodes in the rendertree I should be using.

    Any help appreciated!


  7. Harry Bardak says:


    Yep, to make it work with fur you need a fur shader that computes the normals correctly. By that I mean the fur should be treated like a tube, not just a ribbon. Usually the fur normal is bent towards the light source. So if you have a light that reads a diffuse-convolved map, it should work.


    I set a very small max distance on the AO shader to emulate what Stefano described and hit the environment as quickly as possible, but there is always a chance of hitting some geometry. I chose to use an AO shader because it's a lot easier, though maybe bad for educational purposes.

  8. M says:

    Hi Harry-

    Thanks for your response. I redid the test with exactly the same model and env map as you: the Buddha model and the beach.hdr image. I have the same settings, 1 sample and a 0.01 spread, and the .hdr has been diffuse-convolved with the SH_plugin.

    However, I'm still getting black areas in the render, most notably at the feet under the robe, under the chin, and in some creases of the robe.

    Your render is all lit up properly with no black areas whatsoever?! I am wondering what is different in the setup.



  9. Harry Bardak says:

    And what about the max distance? That one should be very low as well (0.001).

  10. M says:

    Ah! That was it – I had left it at 0. I changed it to 0.001 and it all works now ;-)

    Thanks so much!!


  11. Michael Murphy says:

    “Unfortunately you can't plug anything into the global ambience parameter. So the best option is to use a light that casts only ambient light. And for that you need to code it, so ask your favourite shader writer to do it.”

    Thanks for the article. I had checked this out on your site recently. There are several techniques I've read about recently (including yours) where an ambient-only light would be very useful.
    See Zap's mental ray blog entry about ambient occlusion, where he notes that you can do this in 3ds Max.

    Does anyone know any coders that would be willing and able to tackle this?


  12. francisco says:

    A simple little question: how do you apply all your textures and material tree to this? With a mix? Thanks in advance

  13. Sarah says:

    I am unable to get the sharpness and clearness in my work that you have achieved in your textures. Can anyone tell me what I am doing wrong? Or is it something wrong with my graphics card?