Scripting Structures

October 5th, 2006 by Homam Bahnassi

Some readers probably already know about this subject, either from attending the live presentation that was held for some companies, or from the presentation slides.

However, this article actually differs from the presentation. I rewrote my original narration, rearranged it, and added more samples and details, along with some special cases for implementing the system in pipelines (which some users in the community asked for).
So this article's text will not exactly follow the slides for the same subject.

Introducing the system

During our daily scripting work, whether as general Technical Directors or as specialized developers, we usually write scripts for two different scopes of a project.

The first is generic, reusable tools and plug-ins that can be chained with other tools to achieve complex effects or processing pipelines. Examples of this kind of scripting include almost all scripts found in netviews (XSI and In|Framez).
The second scope is object-specific or scene-specific scripting, which is supposed to solve a situation present only in one particular scene.
In theory, this can be handled and managed much the same way as the first scope, but in practice it tends to get messy and hard to manage, especially if the developer didn't use a clear naming system for the script files, or didn't heavily comment the scripts and add tight error-handling code to detect improper usage.

Of course, things get even worse as these scripts grow in number and depend on a bigger number of objects and scenes; at that point even the best-designed pipelines would fail to manage those files. In such cases, organizing and grouping the scripts with their relevant scenes can solve nearly 90% of the problem.

For this reason, we cannot rely only on the traditional workflow when scripting in huge production pipelines. Instead, we need a way to store and execute scripts on objects or scenes, much the same way scripted operators work on specific objects or scenes.


The Workflow

The workflow is separated into two main parts based on the suggested pipeline: Setup and Utilization.
The Setup part is when the TD or the technical artist attaches the required script to its corresponding object, model or scene and writes the actual script code.

This is implemented easily: a TD only needs to add an annotation property to the object and rename it to any meaningful name that describes the script's function, with only one requirement: the name must include the keyword 'SCRIPT' (for example, 'ResetPose_SCRIPT').
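The naming check itself is trivial. Below is a minimal sketch in plain JScript-style JavaScript; the helper name isScriptProperty is illustrative only and not part of the XSI API:

```javascript
// Returns true if a property name follows the convention, i.e. it
// contains the keyword "SCRIPT" (matched case-insensitively here,
// which also accepts names like "Reset_script").
// isScriptProperty is an illustrative helper name, not an XSI call.
function isScriptProperty(propName)
{
    return propName.toLowerCase().indexOf("script") != -1;
}
```

For example, isScriptProperty("ResetPose_SCRIPT") returns true, while a plain annotation named "Notes" is ignored.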

The only thing left now is to write the actual script in the attached annotation property.


An example application of this workflow in a pipeline would be to have modelers pass their models and scenes to technical directors and riggers, who will do the setup part and pass the rigged and scripted scenes or models to other divisions (e.g. animators).

The second part of the workflow is the Utilization part. The end user (or in the example above, the animator) will execute the attached scripts as required either manually or automatically.

Manual execution happens on the artist's demand. He can simply invoke any attached scripts by selecting objects and running a global invocation script that executes all scripts attached to the selected objects.

Below is how the global invocation script might look. It executes JScript scripts attached to objects, and it also shows how one of the annotation flags can be used to mute a script:

// Global Script Sample - Homam Bahnassi - In|Framez 2006
// This script is a sample used to invoke JScript code found in any annotation
// property with the "script" keyword in its name attached to the selected objects.
var oGSSSelection = Application.Selection;
for (var iGSSSel = 0; iGSSSel < oGSSSelection.Count; iGSSSel++)
{
  var oGSSProps = oGSSSelection(iGSSSel).Properties;
  for (var iGSSProp = 0; iGSSProp < oGSSProps.Count; iGSSProp++)
  {
    // Searching for annotation properties with the "script" keyword in their name...
    if (oGSSProps(iGSSProp).Name.toLowerCase().indexOf("script") != -1)
    {
      // Flag1 is used to mute the script; check if it's muted...
      var bGSSMuted = GetValue(oGSSProps(iGSSProp) + ".flag1");
      if (bGSSMuted == false)
      {
        var sGSSScript = GetValue(oGSSProps(iGSSProp) + ".text");
        // Passing the current object's name to the attached script...
        var sGSSObjScript = sGSSScript.replace("sGSSObjName", oGSSSelection(iGSSSel));
        LogMessage("Running Attached Script on: " + oGSSSelection(iGSSSel));
        // Executing the attached script...
        eval(sGSSObjScript);
        LogMessage("Attached Script Completed --------------------------");
      }
    }
  }
}
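The object-name substitution at the heart of the invoker can be demonstrated outside XSI with plain string operations. The attached script text below is purely illustrative:

```javascript
// An attached script stored as text; "sGSSObjName" is the placeholder
// the invoker replaces with the real object name before executing it.
var sAttachedScript = 'LogMessage("Resetting: " + "sGSSObjName");';

// What the invoker does just before eval():
var sObjName = "LeftLeg";
var sBoundScript = sAttachedScript.replace("sGSSObjName", sObjName);
```

Note that String.replace with a string pattern only substitutes the first occurrence; an attached script that mentions the placeholder several times would need a global (regular-expression) replace instead.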

On the other hand, automatic execution depends on custom logic to execute scripts at specific times throughout the production phase.
In XSI, we can use events to control the execution of the attached scripts as an alternative method to the global script in the manual execution mode.

In real production pipelines, the execution phase might actually use both modes (automatic and manual). To mix the two modes in our sample, we can simply use another of the annotation flags to indicate whether a script is available for auto-execution. This way we can differentiate auto-executing scripts from manual ones.
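The combined flag logic can be simulated outside XSI with plain objects. In this sketch, flag1 mutes a script as in the sample above, and flag2 is an assumed choice for the auto-execution flag (the article only says "one of the annotation flags"):

```javascript
// Simulation of the flag logic. Each property is modeled as a plain
// record; flag1 mutes the script, flag2 (an assumed convention) marks
// it for automatic execution by the event system.
function selectScriptsToRun(properties, autoMode)
{
    var toRun = [];
    for (var i = 0; i < properties.length; i++)
    {
        var p = properties[i];
        if (p.name.toLowerCase().indexOf("script") == -1) continue; // not a script property
        if (p.flag1) continue;                                      // muted
        if (autoMode && !p.flag2) continue;                         // manual-only script
        toRun.push(p.text);
    }
    return toRun;
}

var props = [
    { name: "Walk_SCRIPT", flag1: false, flag2: true,  text: "initWalk" },
    { name: "Fix_SCRIPT",  flag1: false, flag2: false, text: "fixScene" },
    { name: "Old_SCRIPT",  flag1: true,  flag2: true,  text: "legacy"   },
    { name: "Annotation",  flag1: false, flag2: false, text: "note"     }
];
```

With this data, automatic execution runs only "initWalk", while manual execution runs both "initWalk" and "fixScene"; the muted script and the plain annotation are skipped in both modes.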


Implementation Notes

Development of the core system and pipeline implementation is actually not part of the workflow, since it is done only once and then updated as the pipeline evolves. This includes developing the global code executor and the custom events system.
In XSI, this can be developed and installed as a workgroup addon for easy sharing with all levels of the pipeline (artists, Technical Directors, etc.).

For a faster and more stable implementation of this system in real-world pipelines, it is preferable to implement it as a compiled custom property.

The implementation part also covers developing additional tools and wrappers to handle all the cases that come up in production.
For example, we might need a tool for transferring or merging attached scripts between objects, very similar to what GATOR does with other properties.
Alternatively, we could develop a wrapper that extends the current GATOR tool so it can handle the new script properties.
Such a tool could also be extended to transfer scripts to objects generated by geometry generators such as Merge or Boolean operations, so that the scripts attached to the original objects are carried over to the newly generated objects.
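The transfer step can be sketched XSI-independently. Objects are modeled here as plain records holding a list of attached script names; in a real pipeline this would walk actual annotation properties, and the function name is illustrative:

```javascript
// Sketch of carrying attached scripts over when a generator (e.g. Merge
// or a Boolean operation) produces a new object from several inputs.
function transferAttachedScripts(inputObjects, generatedObject)
{
    for (var i = 0; i < inputObjects.length; i++)
    {
        var scripts = inputObjects[i].scripts;
        for (var j = 0; j < scripts.length; j++)
        {
            // Avoid duplicating a script that two inputs share.
            if (generatedObject.scripts.indexOf(scripts[j]) == -1)
                generatedObject.scripts.push(scripts[j]);
        }
    }
    return generatedObject;
}
```

Merging two inputs that both carry "A_SCRIPT" yields a single copy on the generated object, which mirrors how GATOR avoids duplicating transferred properties.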

Another point worth mentioning in the pipeline implementation is making the scripts saved in scenes accessible through the scene table of contents (TOC) file. This lets us leverage the additional flexibility that the TOC offers in our pipelines for this new system as well.

A Final Word

Don’t forget that implementing this system might affect other parts in the pipeline, which would need to be taken care of as well. For example, with this system we can now add a new methodology for building, grouping and organizing scripting libraries in external models that can be stored, imported or even referenced in different scenes the same way we do with external material or animation libraries.
Of course this can get more complex by nesting scripts and models to fit very complex pipelines.

For additional application examples please refer to the presentation slides at:


4 Responses to “Scripting Structures”

  1. Hi Homam, it sounds like a powerful concept. Could you give some examples of what sorts of things you use these stored scripts to do in production?

  2. Andy, I'm so sorry for my very late reply…

    Actually, I've two scenarios for using this concept in our production.

    The first one is where we store scripts at the scene level. In such cases we store scripts for specific scenes that require some sort of re-initializing or bug fixing.
    One actual sample of this was in my last project, where we were animating objects with muted camera projection. The mute option was buggy (each time we opened a scene, some projections got broken) and we couldn't freeze the camera projection because we needed the construction history. This led me to develop a script that is saved with those buggy scenes and fixes them when required.

    The other case for utilizing this concept was at the model level.
    Examples of this case are common in our game production. A lot of times we face characters animated with custom rigs that are not supported by the current exporters. Those rigs require custom scripts for plotting and optimizing before exporting them to the game engine. So we develop the plotting and optimizing script and attach it to the root model of the rig.
    This way, each script is saved with its associated rig and can be accessed very easily every time.

    Now, those are some of the cases that come to my mind at the moment… I'll continue sharing other cases whenever I face them.

  3. Hi again,

    Back with more interesting examples. Recently I worked on implementing a motion planning system for XSI based on my research. The system is called ICE-Planner (Intelligent Construction Equipment Planner). As the name shows, it utilizes different levels of AI to do path planning for robots and construction equipment.

    More details are explained in this link:

    In this system, the concept of this article was used heavily, not only to run scripting code (e.g. JScript, VBScript), but also extended to compile and run C++ and C# code attached to 3D objects in the scene.

    The main idea of using such a concept was to implement autonomous agents in the scene. This means that any 3D object in XSI can have its own logic attached to it as C#/C++ code and behave based on it while solving the path-planning problem.

    One specific example of utilizing the concept of this article was when implementing the Engineering Agent in ICE-Planner. This agent plays an important role in the system, ensuring that the path-planning algorithm generates safe and feasible results from the engineering point of view (e.g. static and dynamic equilibrium). The engineering agent code is generated dynamically based on the construction equipment configuration, loading charts, etc. It is attached as C++ code to each equipment model in the scene. This agent is then executed while solving the path-planning problem, where it is called to evaluate the engineering validity of the path-planner's decisions.

    As I mentioned in the beginning, there are several examples available about utilizing the concept of this article, but it is not feasible to go through all of them in a blog comment :) . Refer to the system link and if anyone has any questions please don’t hesitate to post them here or mail them to me.


  4. [...] of you who are long time readers of Softimage Blog (or XSIBlog way back when) might remember a 2006 article by Homam Bahnassi. In the article Homam describes a method to run code stored in annotations in the [...]