PARC Python Animation, Rendering and Compositing
 
 

Common Code

All of the examples in this section share a common routine for setting up the basic scene. This function creates the rendering engine, sets the camera format, calls a routine to generate geometry, and then renders and saves the final image. All of this code can be found in the example files included in the distribution. The example code below was used to generate the images at the top of each web page.

To begin, we import all of the required modules for PARC by importing everything in the scene module. Although using import * is generally poor practice, it simplifies the syntax for scene generation significantly, which is important for animation programming.

from scene import *

def run():            
    view = ZBufferView()
    view.set_format("1.85")
    view.set_samples(2)
    view.set_filter("gaussian", 2)
    view.set_dof(4, 0.1, 50, 5)
    view.add_light(DirectLight(C=(1,1,0.7)))
    view.add_light(DirectLight(C=(0.7,0.7,1),  L=(0,0,-1)))

    # Generate geometry, creating materials along the way
    generate_geometry()

    # Render the scene and save the final image
    view.render()
    render = view.get_image()
    render.write("render.ppm")

The generate_geometry() function is replaced by one of the procedural geometry generation functions described below.

The ZBufferView() is the main rendering pipeline. To create an image, we add geometry, materials and lights, and then start the engine, which produces a single image. The various camera settings control the size and quality of the final image.

Depth of field is set using the view.set_dof() function. The parameters are focus distance, scale factor, focal length, and aperture respectively. The scale factor converts from the world space units into meters.
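For reference, here is the set_dof() call from run() above with each parameter labeled. The units given for the focal length and aperture are my assumptions based on typical camera models, not taken from the PARC reference:

view.set_dof(4,    # focus distance, in world space units
             0.1,  # scale factor from world space units to meters
             50,   # focal length (assumed millimeters)
             5)    # aperture (assumed f-stop)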

 

Lights and Materials

Light source shaders are used to illuminate the objects in the scene. They are required to have an illuminate() function which returns an (r,g,b) tuple. This color is used by materials during shading.

In addition, lights usually set the lighting direction vector, L, to indicate the direction of the incoming light. If L is not set, the light source is assumed to be ambient. The light shader sets the L vector using the shader object that is passed to every shader type. Shader objects are used to communicate arbitrary data between shaders. By convention, a number of common variables, such as the L vector set by light shaders, are either implicitly defined by the system or set by other shaders.

Here is an example of an ambient and direct light source:

class AmbientLight:
    def __init__(self, C=(1,1,1)):
        self.C = C

    def illuminate(self, shader):
        return self.C


class DirectLight:
    def __init__(self, C=(1,1,1), L=(0,-1,0)):
        self.L = L
        self.C = C

    def illuminate(self, shader):
        shader.L = self.L
        return self.C

A material shader is derived from the base Material class and must provide a shade function which returns an (r,g,b,a) tuple. The shade function is used to compute the color at a given surface position. Material shaders are attached to each geometry object when the object is added to the rendering pipeline. Here is an example of a simple diffuse material:

class DiffuseMaterial(Material):
    def __init__(self, r, g, b):
        Material.__init__(self)
        self.Kd = (r, g, b)
        
    def shade(self, shader):
        # Accumulate the diffuse contribution from each light
        c = 0
        for light in shader.light_list:
            a = shader.N.dot(-light.L)
            if a > 0:
                c = c + a
        # Crude ambient floor for nearly unlit surfaces
        if c < 0.1: c = 0.3
        # Localize Kd so the attribute is evaluated only once (see below)
        kd = shader.Kd
        return (c * kd.r, c * kd.g, c * kd.b, 1)

Note that the Kd value is localized in a variable within the material shader. If it were not, the attribute would be evaluated three times, once each for the red, green and blue components. This workaround will become unnecessary when the Matrix class is extended to handle component-wise multiplication of vectors.
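For comparison, the non-localized version of the return statement would evaluate the Kd attribute once per channel. Since Kd may be bound to a texture map (see below), this would mean three map lookups instead of one:

        return (c * shader.Kd.r, c * shader.Kd.g, c * shader.Kd.b, 1)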

Through shader objects, materials can also be used hierarchically to perform texture mapping. You can build a tree of materials with each node performing a specific function. Because shader objects can pass arbitrary data between shaders, the possibilities are wide open. Here's an example of a simple texture mapping material:

class MapMaterial(Material):
    def __init__(self, map_name):
        Material.__init__(self)
        self.image = image.Image()
        d = self.image.read(map_name)
        if d < 0:
            print 'Cannot read texture map ', map_name, d
        else:
            print 'Read texture map ', map_name, d
        
    def shade(self, shader):
        c = self.image.get_color(shader.u, shader.v)
        return (c[0], c[1], c[2], 1)

To hook up a map material to an attribute of another material object, we just assign the map to the named parameter like this:

material = DiffuseMaterial(0.5, 0.5, 0.5)
map = MapMaterial("texture.ppm")
material.Kd = map

The map can be assigned to any named parameter of the material; however, it is up to the material to use the named value. This is why the naming conventions are important. In this case, it is assumed that any material that implements a diffuse lighting model will use a value named Kd. Nothing bad happens if the material does not use the value. In fact, the map will never even be computed, since all attributes are evaluated as needed.
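As an illustration of such a material tree, here is a minimal sketch of an intermediate node that scales the output of another material. The ScaleMaterial class and its input attribute are hypothetical, invented for this example:

class ScaleMaterial(Material):
    def __init__(self, scale):
        Material.__init__(self)
        self.scale = scale
        self.input = (1, 1, 1)  # may be replaced by another material

    def shade(self, shader):
        # Evaluating shader.input invokes any attached material on
        # demand, exactly as shader.Kd does in DiffuseMaterial
        c = shader.input
        s = self.scale
        return (s * c.r, s * c.g, s * c.b, 1)

material = DiffuseMaterial(0.5, 0.5, 0.5)
scale = ScaleMaterial(0.8)
scale.input = MapMaterial("texture.ppm")
material.Kd = scale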

 

Geometry Generation

You can create new geometry objects by deriving a new Geometry class and implementing a generate() function. Here is an example of a simple cube geometry shader:

from scene import Vertex, Geometry

class CubeGeometry(Geometry):
    def generate(self):
        v0 = Vertex((-0.5, -0.5, -0.5), (0,1))
        v1 = Vertex(( 0.5, -0.5, -0.5), (0,0))
        v2 = Vertex(( 0.5,  0.5, -0.5), (1,0))
        v3 = Vertex((-0.5,  0.5, -0.5), (1,1))

        v4 = Vertex((-0.5, -0.5,  0.5), (0,0))
        v5 = Vertex(( 0.5, -0.5,  0.5), (0,1))
        v6 = Vertex(( 0.5,  0.5,  0.5), (1,1))
        v7 = Vertex((-0.5,  0.5,  0.5), (1,0))

        self.quad(v0, v1, v2, v3)
        self.quad(v5, v4, v7, v6)
        self.quad(v3, v0, v4, v7)
        self.quad(v1, v2, v6, v5)
        self.quad(v2, v3, v7, v6)
        self.quad(v0, v1, v5, v4)
        
        return 0

The Vertex constructor takes two required tuples. The first argument is an (x,y,z) tuple that contains the position of the vertex in local space. The second is a (u,v) tuple which specifies texture coordinates. An optional third argument is another (x,y,z) tuple that specifies the position of the vertex at the end of the frame. This is used to generate motion blurred deforming geometry and is discussed below.
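For example, a vertex at the origin with centered texture coordinates that ends the frame at (0,1,0) would be created like this (the values are purely illustrative):

v = Vertex((0, 0, 0), (0.5, 0.5), (0, 1, 0))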

While this geometry will render correctly, there is a problem: the texture space of this cube is poorly defined. In the rest of the examples we will use a modified version of this cube geometry shader which contains separate vertices for each face of the cube. By creating separate vertices, we can define a unique texture space for each face:

class CubeGeometry(Geometry):
    def generate(self):
        # -z face
        v0 = Vertex((-0.5, -0.5, -0.5), (0,1))
        v1 = Vertex(( 0.5, -0.5, -0.5), (0,0))
        v2 = Vertex(( 0.5,  0.5, -0.5), (1,0))
        v3 = Vertex((-0.5,  0.5, -0.5), (1,1))
        self.quad(v1, v0, v3, v2)

        # +z face
        v0 = Vertex((-0.5, -0.5,  0.5), (0,0))
        v1 = Vertex(( 0.5, -0.5,  0.5), (0,1))
        v2 = Vertex(( 0.5,  0.5,  0.5), (1,1))
        v3 = Vertex((-0.5,  0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)

        # -x face
        v0 = Vertex((-0.5,  0.5, -0.5), (0,0))
        v1 = Vertex((-0.5, -0.5, -0.5), (0,1))
        v2 = Vertex((-0.5, -0.5,  0.5), (1,1))
        v3 = Vertex((-0.5,  0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)

        # +x face
        v0 = Vertex(( 0.5, -0.5, -0.5), (0,0))
        v1 = Vertex(( 0.5,  0.5, -0.5), (0,1))
        v2 = Vertex(( 0.5,  0.5,  0.5), (1,1))
        v3 = Vertex(( 0.5, -0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)

        # +y face
        v0 = Vertex(( 0.5,  0.5, -0.5), (0,0))
        v1 = Vertex((-0.5,  0.5, -0.5), (0,1))
        v2 = Vertex((-0.5,  0.5,  0.5), (1,1))
        v3 = Vertex(( 0.5,  0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)

        # -y face
        v0 = Vertex((-0.5, -0.5, -0.5), (0,0))
        v1 = Vertex(( 0.5, -0.5, -0.5), (0,1))
        v2 = Vertex(( 0.5, -0.5,  0.5), (1,0))
        v3 = Vertex((-0.5, -0.5,  0.5), (1,1))
        self.quad(v0, v1, v2, v3)

        return 0
 

Procedural Geometry

One of the nice features of using Python as a scene description language is that we can easily generate all sorts of procedural geometry. New geometry objects are used primarily to read in different file formats and generate basic primitives. Procedural geometry generation is much more powerful and can easily be used to generate scenes of amazing complexity. Here is a simple function to generate a grid of randomly oriented and scaled cubes:


def grid(view, n):
    map = MapMaterial("brush.ppm")
    material = DiffuseMaterial(0.3, 0.5, 0.2)
    material.Kd = map
    for x in range(-n, n):
        for y in range(-n,n):
            for z in range(0,n):
                cube = CubeGeometry()
                s = uniform(0.3, 1)
                cube.l2w = matrix.Matrix()
                cube.l2w.set_scale(s, s, s)
                rx = uniform(0, 360)
                ry = uniform(0, 360)
                rz = uniform(0, 360)
                cube.l2w.rotate(rx, ry, rz)
                cube.l2w.translate(x*3, y*3, z*3)
                view.add_geometry(cube, material, 3)

When we call this function with n=15, the scene contains 15,376 cubes rendered with a shading rate of 3 pixels, and pixel samples set to 2 (4 samples per pixel). At D1 resolution (720x486) it renders in about 7 minutes on my old 400MHz Celeron laptop. Without the depth of field, the same scene renders in less than 2 minutes. The memory footprint is reasonable, peaking at about 30MB for this scene (including Python!).

Note how the grid() function uses a single material for all of the objects. We could just as easily create new materials and maps for each object. However, the current map library does not have a way of sharing texture maps, so this would cause the texture map to be reloaded for each object.
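Until map sharing is supported, a simple cache on the Python side avoids the repeated loads. This sketch is my own workaround, not part of the PARC library:

# Cache MapMaterial objects by filename so each texture is read from
# disk only once, no matter how many materials reference it
map_cache = {}

def get_map(name):
    if name not in map_cache:
        map_cache[name] = MapMaterial(name)
    return map_cache[name]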

I'm not particularly happy with the geometry creation mechanism. The basic requirements are that the rendering engine wants to call geometry objects during rasterization (not before) and that a bounding box is required. Currently, all geometry objects have a bounding box of (-1,-1,-1)x(1,1,1).

 

Motion Blur

Motion blurred transformations are enabled by setting the l2w_motion matrix for each geometry object. By default, it is set to the l2w matrix. Motion blurred deforming geometry can be easily enabled by setting the motion position for each vertex. Here's an example of how to create a motion blurred, deforming cube:

class DeformingCubeGeometry(Geometry):
    def generate(self):
        # The optional third tuple moves this corner to (-1,-1,-1)
        # by the end of the frame
        v0 = Vertex((-0.5, -0.5, -0.5), (0,1), (-1,-1,-1))
        v1 = Vertex(( 0.5, -0.5, -0.5), (0,0))
        v2 = Vertex(( 0.5,  0.5, -0.5), (1,0))
        v3 = Vertex((-0.5,  0.5, -0.5), (1,1))
        self.quad(v1, v0, v3, v2)
        
        v0 = Vertex((-0.5, -0.5,  0.5), (0,0))
        v1 = Vertex(( 0.5, -0.5,  0.5), (0,1))
        v2 = Vertex(( 0.5,  0.5,  0.5), (1,1))
        v3 = Vertex((-0.5,  0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)
        
        v0 = Vertex((-0.5,  0.5, -0.5), (0,0))
        v1 = Vertex((-0.5, -0.5, -0.5), (0,1), (-1,-1,-1))
        v2 = Vertex((-0.5, -0.5,  0.5), (1,1))
        v3 = Vertex((-0.5,  0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)
        
        v0 = Vertex(( 0.5, -0.5, -0.5), (0,0))
        v1 = Vertex(( 0.5,  0.5, -0.5), (0,1))
        v2 = Vertex(( 0.5,  0.5,  0.5), (1,1))
        v3 = Vertex(( 0.5, -0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)
        
        v0 = Vertex(( 0.5,  0.5, -0.5), (0,0))
        v1 = Vertex((-0.5,  0.5, -0.5), (0,1))
        v2 = Vertex((-0.5,  0.5,  0.5), (1,1))
        v3 = Vertex(( 0.5,  0.5,  0.5), (1,0))
        self.quad(v0, v1, v2, v3)
        
        # The (-0.5,-0.5,-0.5) corner gets the same motion position here
        v0 = Vertex((-0.5, -0.5, -0.5), (0,0), (-1,-1,-1))
        v1 = Vertex(( 0.5, -0.5, -0.5), (0,1))
        v2 = Vertex(( 0.5, -0.5,  0.5), (1,0))
        v3 = Vertex((-0.5, -0.5,  0.5), (1,1))
        self.quad(v0, v1, v2, v3)

        return 0

cube = DeformingCubeGeometry()
cube.l2w = matrix.Matrix()
cube.l2w.set_translate(0, 1, -1)
cube.l2w.lookat(3,4,5)
cube.l2w_motion = matrix.Matrix()
cube.l2w_motion.set_translate(0, 1, 1)
cube.l2w_motion.lookat(3,4,5)
cube.l2w_motion.rotate(0, 15, 0)
material = DiffuseMaterial(0.5, 0.2, 0.3)
map = MapMaterial("texture.ppm")
material.Kd = map
view.add_geometry(cube, material, 3)
 

Displacement

If a material is attached to a geometry object's D attribute, the geometry will be displaced during rendering. While you can attach any material to this attribute, only materials designed for displacement make sense. Displacement materials are assumed to return an (x,y,z,0) tuple which represents the new position for the surface point. Since materials normally return an (r,g,b,a) tuple, you will get strange results if you attach a material that was not designed for displacement mapping. You can, of course, attach other map materials to the displacement material.

Here is an example of how to create a simple displaced plane. This includes the geometry, displacement material and scene description:


class PlaneGeometry(Geometry):
    def generate(self):
        v0 = Vertex((-1, 0, -1), (0,1))
        v1 = Vertex(( 1, 0, -1), (0,0))
        v2 = Vertex(( 1, 0,  1), (1,0))
        v3 = Vertex((-1, 0,  1), (1,1))
        self.quad(v1, v0, v3, v2)
        return 0

class DisplacementMaterial(Material):
    def __init__(self, scale=1):
        Material.__init__(self)
        self.scale = scale

    def shade(self, shader):
        # Push the surface point along the normal, scaled by the red
        # channel of the attached displacement map
        p = shader.P + self.scale * shader.D.r * shader.N
        return (p.x, p.y, p.z, 0)
    
def displaced_plane(view):
    plane = PlaneGeometry()
    plane.l2w = matrix.Matrix()
    plane.l2w.set_scale(3, 3, 3)
    plane.l2w.rotate(-30, 0, 0)
    material = DiffuseMaterial(0.7, 0.6, 0.3)
    displacement = DisplacementMaterial(0.5)
    map = MapMaterial("noise.ppm")
    displacement.D = map
    view.add_geometry(plane, material, 3, displacement)
 

Image Processing

We can improve our output images with a bit of simple image processing. For example, let's add a gradient background and some text at the bottom of the images:

    # Composite the render over a gradient background
    # and add some drop-shadowed text
    foreground = ip.image(view.get_image())
    background = ip.gradient((1.0, 0.9, 1.0, 1,1),
                             (0.9, 1.0, 1.0, 1,1),
                             (0.8, 0.6, 0.7, 1,1),
                             (0.8, 0.6, 0.7, 1,1))
    comp = ip.matte(foreground, background).get_image()
    ft = ip.Font("/usr/share/fonts/default/TrueType/helr____.ttf")
    ft.set_size(32)
    text = "Cubes (c) Daniel Wexler, www.flarg.com"
    ft.draw_string(comp, 20, 10, text, 0, 0, 0)        # drop shadow in black
    ft.draw_string(comp, 18, 12, text, 0.3, 0.4, 0.8)  # text in color
    comp.write("comp.ppm")

Despite the small number of currently supported operations, the image processing engine is quite powerful. Operations are nodes in a DAG (directed acyclic graph) which is computed on demand. Currently, only point-to-point operations are supported, but these operations are optimized so that no cache images are required. All point-to-point operations are concatenated and performed in series on a per-pixel basis as needed.
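The per-pixel concatenation can be pictured as a recursive pull over the DAG. The following Python fragment is only a toy illustration of the idea (the real engine is implemented in C):

def eval_pixel(op, x, y):
    # Pull each input pixel on demand, then apply this node's
    # per-pixel function; no intermediate cache images are required
    values = [eval_pixel(src, x, y) for src in op.inputs]
    return op.compute(x, y, values)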

Writing new operations is quite simple. For example, here is the entire code for the matte operation:

#include "ip_op.h"

static int ip_matte_compute(IP_COMPUTE_INFO *info);

static PyObject *
ip_matte_new(PyObject *self, PyObject *args)
{
    IP_OP *over, *under;
    
    if (!PyArg_ParseTuple(args, "OO", &over, &under)) 
        onError("you must pass an over and under operation");

    return (PyObject *)IP_OP_matte(over, under);
}

IP_OP *
IP_OP_matte(IP_OP *over, IP_OP *under)
{
    IP_OP *op;

    op = ip_op_create("matte", ip_matte_compute, NULL, NULL);
    ip_op_add_input(op, "over", over);
    ip_op_add_input(op, "under", under);
    
    return op;
}

static int
ip_matte_compute(IP_COMPUTE_INFO *info)
{
    double af, ab;

    af = info->input[0][3];
    ab = (1 - af) * info->input[1][3];

    info->output[0] = af*info->input[0][0] + ab*info->input[1][0];
    info->output[1] = af*info->input[0][1] + ab*info->input[1][1];
    info->output[2] = af*info->input[0][2] + ab*info->input[1][2];

    /* multiply transparency */
    info->output[3] = (1 - af) * (1 - ab);

    /* choose closer z? */

    return 0;
}

Currently, the image processing module supports the following functions:

read()      Reads a file from disk
image()     Creates an operation from an image already in memory
fill()      Creates a procedural image with a solid color
gradient()  Creates a procedural image interpolating corner colors
matte()     Performs a standard over operation

Area operations will require caching, of course. The plan is to start by implementing separable operations such as FFT blurs and convolutions, along with a few simple transformations including zoom and rotation. The tiled nature of the underlying image class will support tiled image file formats in the future. More sophisticated tile-based caching will be used in area operations if necessary.

The image processing library uses the FreeType library to render TrueType fonts. Before compiling, make sure freetype2 is installed in /usr/local so everything works correctly. You can also disable font support by changing a define in the ip_font.c file.

Operations must define a compute() function, and they can optionally define a viewport() function. By default, an operation is assumed to be point-to-point, which does not require a viewport. The compute function is passed the following structure:

typedef struct ip_compute_info {

    /* Do _NOT_ modify any of these values */
    IP_OP *op;
    void *user_data;
    int x, y;           /* pixel being computed */
    int viewport[4];    /* total output viewport */
    double input[IP_MAX_INPUT][5]; /* input pixel values */

    /* _ONLY_ modify the output pixels */
    double *output;     /* guaranteed to exist as a double[5] */
} IP_COMPUTE_INFO;