Sixbirds PixelParticles 1.0

March 3rd, 2010 by Helge Mathee

Hey folks,

time to release another plugin under the flag of Sixbirds Barcelona!

As already shown in a teaser video on vimeo.com, we have been working on a plugin that uses particles as pixels, unleashing the power of ICE on textures.

Constant passes without constant materials

February 19th, 2010 by Stefano Jannuzzo

Recently we had to face an interesting problem: extracting a constant pass out of an arbitrarily complex render tree. In short, we received scenes already set up for rendering, with a given number of passes and channels in place. However, we needed an extra constant pass that had not been planned in advance.

The render trees have all kinds of materials, with bump, transparency and reflection in place.

The most obvious way to solve the problem is brute force: write a script that traverses all the materials and substitutes each one with a constant material, using the diffuse colour of the original as the constant colour and inheriting the subtrees driving the transparency and reflection mixing. This would have required a couple of days of scripting and, like any brute-force approach, is not really elegant.
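
For concreteness, here is a minimal sketch of that brute-force traversal in Softimage's Python scripting (untested). make_constant() is a hypothetical helper standing in for the actual render-tree rewiring, which would also have to re-plug the transparency and reflection subtrees:

    def extract_constant_pass(xsi):
        # Walk every material library, read each surface shader's
        # diffuse colour and hand the material to a constant rebuild.
        scene = xsi.ActiveProject.ActiveScene
        for lib in scene.MaterialLibraries:
            for mat in lib.Items:
                for shader in mat.Shaders:
                    diffuse = shader.Parameters("diffuse")
                    if diffuse is None:
                        continue    # no diffuse port: leave this one alone
                    rgb = (diffuse.Parameters("red").Value,
                           diffuse.Parameters("green").Value,
                           diffuse.Parameters("blue").Value)
                    make_constant(mat, rgb)    # hypothetical helper

    extract_constant_pass(Application)    # Application: XSI's scripting root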

Sixbirds Rigging Solvers (UPDATE V1.3)

January 19th, 2010 by Helge Mathee

Hey folks!

Edit: I upgraded the addon to version 1.3; see the 13th solver for the new description of the Nulls 2 Nurbssurface solver. Additionally, I fixed two bugs and removed some nasty log messages from the Null 2 Curve solver. Some solvers are NOT compatible with version 1.3, so I will keep the link to the previous versions around.

So, finally, after a good first round here at Sixbirds in Spain, we decided to release some of the tools we are using in production to the community. At this point we are sharing our rigging solvers, a collection of custom operators for “solving” certain equations: IK, Bezier projection, curve lookup, etc. The collection includes 12 different solvers. Please have a look at this video, which gives you a quick run-through of the technology…

PPG Based Particle Animation Work Flow With ICE

August 7th, 2009 by Hans Payer

About a year ago already, XSI 7.0 was released to great expectations and enthusiasm. Most of us have played with ICE by now, and if you’re lucky you even had the chance to squeeze an operator into a real production. Everyone still marvels when new videos are posted online showing the latest tricks or achievements made with ICE. ICE is an amazing design tool and definitely opens many doors for all Softimage users. But many hit a wall: in order to create even the simplest simulation, they have to learn the meaning of vectors, scalars and arrays, and how to use them. If you do have the technical chops, great, new things to learn; but for the more artistically skilled users, the hill is steep.

Everyone complained (including me) that the old particle system was too weak, too limited… but it was fast for getting some things done. To randomize a value, for example, you simply had to open a property page and set a variance parameter. Partly because ICE is so open, to vary a parameter you have to search for the desired compound or node, drag and drop it, connect ports, and then set values in the new compound. Read me right: ICE is very powerful, but if you have to modify dozens of parameters hundreds of times in a typical work day, this work flow becomes redundant and inefficient. There must be a way to be quicker.
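
For comparison, the legacy variance behaviour boils down to a per-particle one-liner (plain Python; the uniform distribution is my assumption about the old semantics):

    import random

    def with_variance(base, variance):
        # legacy-style parameter: value +/- variance, sampled per particle
        return base + random.uniform(-variance, variance)

    speeds = [with_variance(2.0, 0.5) for _ in range(5)]    # e.g. emission speeds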

Softimage proposed a work flow that can be described as follows: a technical director connects basic nodes, designs compounds and exposes ports. An artist, who is not necessarily knowledgeable about ICE, then sets the values of the exposed parameters. Some problems arise with this approach. How can a TD predict every alteration the artist will need? In a particle simulation of falling rain, for example, a TD may design his compounds to support wind and gravity for a simple shot. Then another shot may require water splashes from the droplets hitting the ground. Another still requires the same rain but with the added effect of droplets coagulating on a smooth surface. And so on: you can easily imagine multiple variations of the same effect. So how are TDs supposed to tailor their compounds to fit all of the artists’ needs? One obvious solution is to build a system of compounds rather than a single top-level one. Yet the artists would need to be taught how this system works and therefore learn how to use ICE. Yes, maybe… but unlikely. I know many technical XSI users who were clueless about how to create a simple simulation; artists, even less so. How can technical directors empower artists without limiting them to a small set of parameters? There should be a window that delivers the power of ICE to artists without being too painful. Artists should be able to create simple particle animations easily. You do not need to be a mechanic to drive a car.

With these observations in mind came the search for a simplified ICE work flow that any Softimage user would understand and could use to produce simulations in no time. The intent is not to replace the current work flow but to complement it and accelerate the multiple iterations needed to design particle simulations. Subsequent refinements and complex connections or relationships can, and should, be achieved the traditional way, by connecting nodes and compounds in the ICE tree. But once a new solution is found, it should be easy to re-integrate it into the simpler work flow. It should be seen as added value that cuts the time needed to generate perhaps 75% of particle animation scenarios.

You do not need to look very far for solutions. Simply looking at the different ways shaders can be modified, it is easy to wonder why ICE property pages were not designed the same way. Isn’t it faster, in many situations, to create connections using the plug icon in a shader’s property page rather than opening it in the render tree, dragging and dropping a node, then making the connection? It should be the same in ICE property pages. No?

Also, all parameters affecting the simulation should reside in the same property page. With a typical ICE tree easily containing 20 compounds or more, finding a parameter often requires a search through many compounds, and too many double-clicks are needed to access the compounds’ parameters. Why not put them all in one location? When a scalar is randomized, for example, the parameters associated with the randomization should replace the original scalar in the top property page. You say: isn’t the top compound intended for that? Yes, but it doesn’t expose ports automatically. Remember, the goal here is to create particle simulations in a production environment, not to design ICE trees in an R&D context.
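
To make the idea concrete, here is a rough, untested sketch using Softimage's custom-property API; all parameter names are hypothetical, and the real prototype certainly did more. When a scalar is flagged as randomized, the page is rebuilt with the randomization controls in place of the plain value:

    from win32com.client import constants as c

    def build_sim_ppg(xsi, randomize_speed=False):
        # one property page holding all the simulation parameters
        prop = xsi.ActiveSceneRoot.AddProperty("CustomProperty", False,
                                               "ParticleSimControls")
        layout = prop.PPGLayout
        if not randomize_speed:
            prop.AddParameter3("Speed", c.siDouble, 2.0, 0.0, 10.0)
            layout.AddItem("Speed")
        else:
            # the randomized scalar is replaced by base + variance controls
            prop.AddParameter3("SpeedBase", c.siDouble, 2.0, 0.0, 10.0)
            prop.AddParameter3("SpeedVariance", c.siDouble, 0.5, 0.0, 5.0)
            layout.AddGroup("Speed (randomized)")
            layout.AddItem("SpeedBase")
            layout.AddItem("SpeedVariance")
            layout.EndGroup()
        return prop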

In the following video, I demonstrate a working prototype. It shows the above ideas in action, with the exception that iterations are made through parameter contextual menus instead of the plug connection icon menu: the Softimage development kit does not give access to this type of UI widget for ICE compound property pages. Also, for clarity’s sake, it would have been beneficial to use tabs in the property page to separate emission, particle type, forces, triggers and events; this was not implemented. The prototype was written about a year and a half ago with a beta version of XSI 7.0 and unfortunately has not been updated to work with the current version of Softimage, but it would be relatively simple to rewrite. I must also credit my colleagues at the time: Jeff Wilson, who helped with the original concept, and Javier von der Pahlen, who co-wrote the prototype. It is also important to say that I was an employee of Softimage when the prototype was written. As a long-time user of Softimage’s products, I was trying to push solutions that would make the work of technical directors easier. Unfortunately, this concept was not accepted. But it remains a great example of how technical directors can go beyond compounds and integrate ICE into production pipelines more efficiently. It clearly illustrates the ability to build a simple particle animation rapidly without having to touch the ICE tree: all iterations are made through the property page.

I simply wanted to share these ideas with the Softimage community and start a discussion. As, hopefully, more and more people use ICE, I’m sure work flows will evolve towards more efficient ways of managing ICE trees; maybe that will lead to similar solutions, whether they come from Autodesk or from the community. I think it’s a really cool concept and it would save many users time and headaches. What do you think? Is this type of work flow worth exploring?

PPG Based Particle Animation Work Flow With ICE from Hans Payer on Vimeo.

Fast Radiosity Using a Diffuse-Convolved Environment Map

March 21st, 2009 by Harry Bardak

What you need to reproduce these examples:

HDRShop
The Diffuse_SH plugin for HDRShop

I worked for two years at Framestore-CFC in London, which meant using Maya and PRMan as my main applications. During those years I learned a lot, but I also heard a lot of false assumptions about rendering in general. The typical comment was that PRMan was more suitable for rendering complex characters and that mental ray was very slow at this kind of job. In fact, the people arguing this don’t really know the status of modern renderers such as mental ray, V-Ray or others; they assume everything has been frozen for the past five years, or they just repeat what the veterans keep repeating.

But I am not trying to start a new flame war between renderers. Actually, I don’t care which is better: today, in 2008, you can produce beautiful pictures with any renderer on the market (and yes, that includes Maya’s software renderer, well known to be a piece of crap ;) ). What matters most is the people behind the scenes. But I need to put everything in its context to elaborate on, and maybe justify, what I will describe later.

Actually, mental ray is very fast if you do things correctly. If you don’t (massive spatial and/or temporal oversampling, for example), it, like any other renderer, will be very slow. But I always get this "PRMan displaces faster" stuff. Of course it does… until you actually trace a ray.
You have to know that PRMan lives in a mindset where raytracing is so slow that you avoid it at all costs. The thing is, if you try to shoot a ray in PRMan, it too has to do all the things that a raytracer does by nature. So the minute you actually shoot a single ray, PRMan has to do what mental ray always does… and the comparison suddenly isn’t so much in PRMan’s favour any more.

So PRMan users developed a completely different approach to the same problem: lighting and rendering a scene while avoiding raytracing at all costs. That is what is interesting. What happens if I use these approaches with mental ray within XSI? Maybe I can have a hybrid solution that keeps the best of both worlds to render my character(s)?

Diffuse Convolution of an Environment Map

What is that? To keep it simple, you can assume that a diffuse-convolved map is an environment map to which a smart blur has been applied, giving you the result you would get by sampling the original map with an infinite number of Final Gathering rays on a perfect Lambertian surface. For more information, refer to the research of Paul Debevec (the father of HDR images).
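
In formula form (my paraphrase of the irradiance-map literature cited at the end of this post), the convolved map stores, for each direction n, the cosine-weighted integral of the original map over the hemisphere around n:

    E(\mathbf{n}) = \int_{\Omega(\mathbf{n})}
        L(\boldsymbol{\omega}) \, (\mathbf{n} \cdot \boldsymbol{\omega}) \,
        d\boldsymbol{\omega}

A perfect Lambertian surface with albedo rho then reflects (rho / pi) * E(n), which is exactly what a single lookup into the convolved map gives you.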

Having your envmap pre-blurred is a great advantage because you don’t need to sample it several times to get the correct illumination: a single sample suffices to get the illumination on your surface. In fact you must cast only one ray; otherwise you would average the map twice and the illumination would no longer be correct.
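
To make the “pre-blurred” idea concrete, here is a minimal NumPy sketch of the precomputation HDRShop does for us, plus the single lookup the renderer then performs per shaded point. It is brute force and only practical for tiny maps (the SH-based plugin is vastly faster), and the orientation and normalisation conventions are my assumptions, not HDRShop’s actual code:

    import numpy as np

    def latlong_dirs(h, w):
        # unit direction at the centre of every texel of an h x w
        # latitude/longitude map (y up, theta measured from the top row)
        theta = (np.arange(h) + 0.5) / h * np.pi
        phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi
        t, p = np.meshgrid(theta, phi, indexing="ij")
        dirs = np.stack([np.sin(t) * np.cos(p),
                         np.cos(t),
                         np.sin(t) * np.sin(p)], axis=-1)
        return dirs, np.sin(t)

    def diffuse_convolve(env):
        # env: (h, w, 3) linear HDR map; O(n^2) in texels, tiny maps only
        h, w, _ = env.shape
        dirs, sin_t = latlong_dirs(h, w)
        d_omega = sin_t * (np.pi / h) * (2.0 * np.pi / w)   # texel solid angle
        out = np.empty_like(env)
        for i in range(h):
            for j in range(w):                              # one normal per texel
                cos = np.clip(dirs.reshape(-1, 3) @ dirs[i, j], 0.0, None)
                weight = (cos.reshape(h, w) * d_omega)[..., None]
                out[i, j] = (env * weight).sum(axis=(0, 1)) / np.pi
        return out

    def lookup(conv, n):
        # the ONE sample the renderer takes, along the surface normal n
        h, w, _ = conv.shape
        theta = np.arccos(np.clip(n[1], -1.0, 1.0))
        phi = np.arctan2(n[2], n[0]) % (2.0 * np.pi)
        return conv[min(int(theta / np.pi * h), h - 1),
                    min(int(phi / (2.0 * np.pi) * w), w - 1)]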

The ray you cast must also go in the same direction as your surface normal. XSI’s Ambient Occlusion shader, set up correctly, does the job perfectly: one sample, a very small spread (0.01) so the ray does not deviate from the normal, and the mode set to environment sampling. Obviously you could code a shader that does the correct environment lookup; I am using XSI’s AO shader because it is available out of the box.

Render Balls: a simple test.

Env sampling brute vs FG vs Diffuse Conv

The way I set up the scene is very simple. I tried to get the contribution of the convolved envmap, and only that, so there is no illumination model involved, just the different approaches.
The HDR used for this test is the one called beach.hdr, which you can find on Debevec’s website. The map you download is an angular map, so you have to convert it to a spherical coordinate system; the easiest way is to open it in HDRShop and convert it from an angular map to a longitude/latitude map. Once the map was ready, I put it in my scene as an environment map.
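
The same conversion can be sketched in NumPy (nearest-neighbour sampling; the exact orientation conventions of the Debevec angular format are my assumption):

    import numpy as np

    def angular_to_latlong(ang, h, w):
        # ang: (s, s, 3) angular map where the distance from the image
        # centre is proportional to the angle from the forward (-z) axis
        s = ang.shape[0]
        theta = (np.arange(h) + 0.5) / h * np.pi
        phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi
        t, p = np.meshgrid(theta, phi, indexing="ij")
        d = np.stack([np.sin(t) * np.sin(p),        # direction per texel
                      np.cos(t),                    # (y up, -z forward)
                      -np.sin(t) * np.cos(p)], axis=-1)
        alpha = np.arccos(np.clip(-d[..., 2], -1.0, 1.0))
        r = alpha / np.pi                           # radius in the angular map
        denom = np.maximum(np.hypot(d[..., 0], d[..., 1]), 1e-8)
        u = d[..., 0] / denom * r                   # [-1, 1] image coords
        v = d[..., 1] / denom * r
        col = np.clip(((u + 1.0) * 0.5 * s).astype(int), 0, s - 1)
        row = np.clip(((1.0 - v) * 0.5 * s).astype(int), 0, s - 1)
        return ang[row, col]                        # nearest-neighbour fetch
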
The first sphere is the result of an AO shader set to environment sampling mode with a spread of 0.8. I had to crank the sampling up to 1024 to get a smooth solution. AO environment sampling is a brute-force approach: there is no importance sampling, so you have to cast a lot of rays to get a smooth result, especially with a high-dynamic-range image. And obviously that is very slow; it is the slowest of the renders.
For the second sphere I used FG to sample the environment. Same settings as before, except that I used a shader that returns irradiance and activated FG. FG is a lot faster than the previous method because it does not sample every point with 1024 rays; instead it samples a certain number of points and interpolates the result between the calculated points. For this comparison I pushed the number of rays to 4096 to make sure I had enough accuracy for a fair comparison with the quality of the next approach. Even at 4096 rays, FG was still faster to render.

For the last sphere I made a diffuse convolution in HDRShop (using the Diffuse_SH plugin, because it is a lot faster than HDRShop’s built-in function), applied the resulting image as an environment map, and used XSI’s AO shader with the sampling set to 1 and the spread to 0.01.
The render is ultra fast and matches the FG solution; it is effectively realtime, because the convolution has already been calculated once in HDRShop.

Render Balls too simple? Let’s try something that’s got balls.

We get an exact match with FG in a fraction of the time, so we can conclude that the diffuse convolution is good enough to simulate FG. But in this test we were using a simple sphere. That is an ideal case; we just needed to check that the lookup was correct before moving on to something more serious.

I like the buddha. I like it because it is a one-million-polygon model that can stand in for any displaced model pushed out of ZBrush or Mudbox.
I simply applied what I did with the spheres.
The first render uses the diffuse-convolved envmap. It took approximately 50 seconds to render, a large part of it spent preprocessing the scene; with a realtime shader I could get the same result (minus anti-aliasing) in realtime. Obviously no occlusion is calculated, but that is because I asked for nothing more than an environment lookup around the surface normal.
The second image is what you get using FG, and the third is a difference done in Shake to highlight the differences between the two renders.

As you can see, only the occlusion and the coloured FG bounce light are missing. If you apply a simple occlusion pass on top, you end up with an image that is fairly close to the FG solution at a fraction of the render time.
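
That final combine is a straight multiply; in NumPy terms (the pass names are mine):

    import numpy as np

    def cheap_gi(ao_pass, irradiance_pass):
        # ao_pass: (h, w) occlusion in [0, 1]
        # irradiance_pass: (h, w, 3) single-sample convolved-env lookup
        return ao_pass[..., None] * irradiance_pass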

OK, but is it worth using in production?

Well, I will say yes and no. It depends on the number of shots you have to do and the time you have to complete them. This technique involves a bit of setup at the shading level, while using FG is straightforward. So if you are working on a movie that needs a 2K (or larger) render, then yes: the memory footprint is very low, and it is damn fast to render, especially with very heavy objects like displaced geometry or hair.
At the moment you need to set this up in the shader for every object. Ideally you would set it as a global ambience, but unfortunately you cannot plug anything into the global ambience parameter. The next best thing is to use a light that casts only ambient light. That shader needs to be written, so ask your favourite shader writer to do it.

Publications:

Ravi Ramamoorthi and Pat Hanrahan, “An Efficient Representation for Irradiance Environment Maps”, SIGGRAPH 2001.