PARC Python Animation, Rendering and Compositing
 
 

 
Matrices and Vectors
Scene
Geometry
Materials and Lights
Images
Image Operations
Image Viewer
 

Matrices and Vectors

The matrix module provides a 4x4 homogeneous matrix type, a 3-element vector type, and the standard operations on them. Matrices and vectors behave as both number and sequence types. As number types, the basic mathematical operators, such as "+", "-", and "%", are overloaded with type-specific operations.

Operation           Vector Method                        Matrix Method
+                   (a,b,c)+(d,e,f) = (a+d,b+e,c+f)      Not implemented
-                   (a,b,c)-(d,e,f) = (a-d,b-e,c-f)      Not implemented
*                   (a,b,c)*(d,e,f) = (a*d,b*e,c*f)      Matrix multiplication
/                   Not implemented                      Not implemented
% or Vector.cross   Cross product                        Not implemented
^ or Vector.dot     Dot product                          Not implemented
~                   Not implemented                      Matrix inversion
Vector.length()     Returns the length of the vector     Not implemented

Unary minus also works on vectors: "-v" negates each component of the vector. A vector can also be used in a boolean context: "if v:" evaluates true if any component of the vector is nonzero, so "if not v:" tests whether a vector is all zero.
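
For example, the vector operators can be used like this (the values shown in the comments follow directly from the definitions above):

from parc.matrix import Vector

a = Vector(1, 0, 0)
b = Vector(0, 1, 0)

c = a % b            # cross product, equivalent to Vector.cross; here (0, 0, 1)
d = a ^ b            # dot product, equivalent to Vector.dot; here 0
n = -a               # unary minus negates each component: (-1, 0, 0)

print(a.length())    # prints 1.0
if n:                # true because at least one component is nonzero
    print("n is not the zero vector")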

Matrices can transform a vector through one of three methods. You must call a method to transform a vector by a matrix; there is no operator syntax for this operation. In the following table, "M" is the matrix and "v" is a vector:

Matrix.transform(v) Transforms v by M without dividing by the homogeneous coordinate.
Matrix.project(v) Transforms v by M, including the homogeneous division.
Matrix.transform_3x3(v) Transforms v by M using only the upper 3x3 of the matrix. This is useful for transforming normal vectors, which should be transformed by this function using the inverse transpose of the matrix used to transform points.
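
A brief sketch of these methods in use. The Matrix import path, the no-argument constructor, and degrees as the rotation angle unit are assumptions not stated in this reference:

from parc.matrix import Matrix, Vector   # Matrix import path assumed

proj = Matrix()                  # no-argument constructor assumed
proj.set_identity()
proj.lens(45.0, 0.1, 1000.0)     # perspective matrix: fov, hither, yon

p = Vector(0.0, 0.0, 5.0)
clip = proj.transform(p)         # transformed, but not divided by the homogeneous coordinate
ndc = proj.project(p)            # transformed and divided by the homogeneous coordinate

# For a pure rotation the inverse transpose equals the matrix itself,
# so a normal can be transformed with the same matrix's upper 3x3.
rot = Matrix()
rot.set_rotate(0.0, 90.0, 0.0)   # angle units assumed to be degrees
normal = rot.transform_3x3(Vector(0.0, 1.0, 0.0))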

The following methods construct new matrices and modify existing ones. The methods that begin with "set_" initialize the matrix with the named transformation. The remaining methods apply the requested transformation as a post-multiplied matrix operation.

Matrix.set_identity() Sets the matrix to the identity transformation.
Matrix.set_translate(x,y,z) Sets the bottom row (the homogeneous coordinates) to the given vector.
Matrix.set_scale(x,y,z) Sets the diagonal matrix values to the given vector.
Matrix.set_rotate(x,y,z) Constructs a rotation matrix applied in the order Z -> Y -> X.
Matrix.translate(x,y,z) Post-multiplies the current matrix by the specified translation matrix.
Matrix.scale(x,y,z) Post-multiplies the current matrix by the specified scale matrix.
Matrix.rotate(x,y,z) Post-multiplies the current matrix by the specified Z -> Y -> X rotation matrix.
Matrix.lookat(x,y,z) Points the current matrix's Z axis at the specified point in space.
Matrix.lens(fov,hither,yon) Constructs a perspective matrix with the specified field of view and near and far clipping planes.
Matrix.copy() Returns a copy of the current matrix.
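
For instance, an object-to-world matrix and a camera matrix might be assembled as in the following sketch (again, the no-argument Matrix constructor and degree angle units are assumptions):

from parc.matrix import Matrix   # import path assumed

# Object-to-world transform built by post-multiplying onto the identity.
l2w = Matrix()                   # no-argument constructor assumed
l2w.set_identity()
l2w.scale(2.0, 2.0, 2.0)
l2w.rotate(0.0, 45.0, 0.0)       # Z -> Y -> X rotation; degrees assumed
l2w.translate(0.0, 1.0, -5.0)
snapshot = l2w.copy()            # copy() returns an independent matrix

# Camera matrix: position the camera, aim its Z axis at the origin,
# then append a perspective projection.
cam = Matrix()
cam.set_identity()
cam.translate(0.0, 2.0, 10.0)
cam.lookat(0.0, 0.0, 0.0)
cam.lens(45.0, 0.1, 1000.0)      # field of view, hither (near), yon (far)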

Most of the vector operations accept a 3-tuple as a quick stand-in for a vector. More precisely, a 3-tuple of ints or floats is coerced into a vector when used in an operation with other vectors. Similarly, a float is coerced into a vector with each component set to that value. For example:

from parc.matrix import Vector

# Use full vector types:
a = Vector(0, 1, 0)
b = Vector(1, 0, 0)
c = a + b

# This does the same thing as above:
a = Vector(0, 1, 0)
c = a + (1, 0, 0)

# You can also do the standard scalar*vector operation:
a = Vector(2, 3, 4)
b = 0.5 * a    # b is set to (1, 1.5, 2)
 

Scene

The Scene class manages rendering and stores the information describing the camera, geometry, and light objects that make up the scene description. To render a scene, you instantiate a Scene object, describe the camera model, and then instantiate various Light, Material, and Geometry objects and add them to the scene. Once the entire scene has been described, you can render it. Currently, rendering is implemented using a REYES-style algorithm.

Once constructed, a scene is rendered with the scene.render() method. An RGBAZ image is created and is accessible through the scene.get_image() method. This image is available before rendering begins, so it can be passed to the Image Viewer to monitor rendering progress. The image is not saved to disk by default; the caller must save it manually.

Before rendering, you must add light, material and geometry instances to the scene. Lights are added globally, and each geometry object has a single associated material. Use the following methods to construct the scene:

scene.add_light(Light)

Add the Light instance to the scene and use it when shading all materials. Currently, there is no way to mask off certain lights for certain materials; all lights are used for all materials.


scene.add_geometry(Geometry, Material, shading_rate [,Displacement])

Add a geometry instance to the scene and shade it with the given material instance when rendering is requested. The shading_rate argument determines the pixel area of the average micropolygon generated for this geometry object at render time. The Displacement material parameter is optional. When set, the displacement material is evaluated as the geometry object is tessellated, and the resulting vector value, normally interpreted as an [r,g,b] color, is instead treated as the [x,y,z] position of the micropolygon vertex.

Vertices in the micropolygon grid are shaded with the specified material only if at least some part of the micropolygon is determined to be visible during rasterization. Note that some micropolygons may be shaded that do not end up visible in the final frame, but this is relatively uncommon.

Shaders are subclasses of the Geometry and Material base classes, and you must pass valid instances of these subclasses. Aside from the __init__() function, which is called when the object is instantiated, the geometry and material functions are not called until rendering occurs, and only if the geometry object is not culled by frustum and tile culling or the hierarchical z-buffer.

Texture mapping is discussed below.
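
Putting add_light() and add_geometry() together, a minimal scene might be assembled as in the following sketch. The Scene import path is an assumption, and PointLight, PlasticMaterial, and SphereGeometry stand in for user-written Light, Material, and Geometry subclasses:

from parc.scene import Scene     # import path assumed

scene = Scene()
scene.add_light(PointLight((0.0, 5.0, 5.0)))                  # hypothetical Light subclass
scene.add_geometry(SphereGeometry(), PlasticMaterial(), 1.0)  # hypothetical subclasses, 1-pixel shading rate

image = scene.get_image()        # available before rendering, e.g. for the Image Viewer
scene.render()
image.write("sphere.ppm")        # saving the result is the caller's responsibility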

Each scene contains a single camera model that can be adjusted using the following methods:

scene.set_format("format name")

Set the 2D viewport and window values to match the named film format. The viewport defines the pixel limits, inclusive of its boundaries. The window is the range in normalized device coordinates that maps to the viewport. All formats define square pixels. The following formats are currently supported:

Name        Viewport             Window
"d1"        [0,0]-[719,485]      (-1,-0.75)-(1,0.75)
"ntsc"      [0,0]-[511,485]      (-1,-0.75)-(1,0.75)
"1.85"      [0,0]-[2047,1106]    (-1,-0.541575)-(1,0.541575)
"academy"   [0,0]-[2047,1240]    (-1,-0.728665)-(1,0.728665)

scene.scale_viewport(scale)

Scale down the viewport and window for preview renders. A setting of 2 produces an image half as tall and half as wide as the full-resolution render; a value of 3 gives 1/3 the height and width.


scene.set_samples(count)

Set the number of subpixel visibility samples in each dimension. A value of 2 results in 2x2 = 4 samples per pixel, and in general a value of N gives NxN samples. A value of 2 gives reasonable results for many images; higher values are required for smooth images when depth of field and motion blur are used. The value must be an integer, and while there is no hard limit, anything much greater than 10 will take a very long time to compute. The default is one sample per pixel.


scene.set_filter("name", width)

Set the reconstruction filter used to build the final image from the subpixel samples. The name can be one of "gaussian", "triangle", or "box", and the width is an integer number of pixels. The default is a 2-pixel-wide gaussian filter.


scene.set_dof(focus_distance, scale_factor, focal_length, aperture)

Set the camera parameters used to compute depth of field. The default camera does not compute any depth of field. The scale factor converts the world-space units of the scene into meters, and the focus distance is then given in world-space units. The focal length and aperture values match those of a standard film camera.
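
Taken together, a typical camera setup might look like the sketch below; the Scene import path and the physical units expected by set_dof are assumptions:

from parc.scene import Scene     # import path assumed

scene = Scene()
scene.set_format("academy")          # 2048x1241 viewport with square pixels
scene.scale_viewport(4)              # quarter-resolution preview render
scene.set_samples(2)                 # 2x2 = 4 visibility samples per pixel
scene.set_filter("gaussian", 2)      # 2-pixel-wide gaussian reconstruction filter
scene.set_dof(5.0, 1.0, 50.0, 2.8)   # focus at 5 world units; 50mm lens at f/2.8 (units assumed)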

 

Geometry

Geometric primitives are generated procedurally in the generate() method of a Geometry subclass. Each primitive is made up of Vertex instances. For example, a quadrilateral is generated using four vertices, while a Bezier patch is specified using 16 vertices as control points.

Geometry.l2w & Geometry.l2w_motion

The l2w matrix specifies the transformation from the local object space, used to define vertices and bounding boxes, to world space. An optional additional instance variable, l2w_motion, defines the local-to-world transformation at the end of the frame for motion blur. By default, the l2w_motion matrix is identical to the l2w matrix, and the l2w matrix defaults to the identity matrix.


Geometry.bbox_min & Geometry.bbox_max

Each geometry instance has a bounding box defined in the local space of the object, determined by the instance variables bbox_min and bbox_max. By default the bounding box is a cube from (-1,-1,-1) to (1,1,1).

It is important to set the bounding box in the __init__() function, since it is used to cull objects by tile or hierarchical z-buffer before the generate() method is called. The bounding box must also completely contain the object, so if you don't know the object's actual extent, set the bounding box conservatively. In the worst case, the bounding box can be set to infinity.


Vertex((x,y,z), (u,v) [,(vx,vy,vz)])

All polygon vertices and surface control points are specified using Vertex instances. Each vertex has a position and a uv-tuple, and optionally a position at frame close used for motion blur. All positional values are specified in the object's local space. The uv-tuple is used for texture mapping.

The surface normal is computed using the cross product of the partial derivatives with respect to the U and V parametric directions. It is therefore mandatory to define a non-degenerate parametric space over the surface of the object.

Vertices are not associated with a specific geometry object; they are global to the scene and can be used in multiple geometry objects.


  Geometry.quad(v0,v1,v2,v3)

Generates a quadrilateral using the four specified vertices. Quads, like all primitives, are tessellated into a set of grids at rasterization time using the geometry object's shading rate. Each grid vertex is automatically displaced if a displacement material is assigned to the geometry object.

Quads are only generated and tessellated if the object is not culled using the bounding box tests. Once tessellated, the grids for a given quad will remain in memory until they are used or determined to be unnecessary.
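
A minimal sketch of a Geometry subclass that emits a single quad. The import paths, the generate() signature, and the need to call the base __init__() are assumptions:

from parc.scene import Geometry, Vertex   # import paths assumed

class UnitSquare(Geometry):
    def __init__(self):
        Geometry.__init__(self)               # base-class init assumed
        # Conservative local-space bounding box; it must completely
        # contain the object so culling never discards visible geometry.
        self.bbox_min = (-1.0, -1.0, 0.0)
        self.bbox_max = ( 1.0,  1.0, 0.0)

    def generate(self):
        # Four corners with a non-degenerate uv parameterization, which
        # the renderer needs to compute surface normals.
        v0 = Vertex((-1.0, -1.0, 0.0), (0.0, 0.0))
        v1 = Vertex(( 1.0, -1.0, 0.0), (1.0, 0.0))
        v2 = Vertex(( 1.0,  1.0, 0.0), (1.0, 1.0))
        v3 = Vertex((-1.0,  1.0, 0.0), (0.0, 1.0))
        self.quad(v0, v1, v2, v3)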


 
 

Materials and Lights

Material and light shaders are created by subclassing the respective base classes. Materials are assigned to geometry objects when the geometry is added to the scene. Lights are globally added to the scene independent of other objects. Materials are used to shade geometry, to displace grid vertices, and as texture maps.

Lights and materials are passed a special object, called a shader, in their respective computation methods. The shader is used to access data about the currently shaded point. For example, you can get the current position x,y,z of the shaded point through the shader.P variable. Some of the values accessible through the shader variable are defined by the rendering system while others are defined and computed by other material shaders.

Material shaders can be hierarchically combined to produce complex effects, including texture mapping. Shaders are combined by assigning one material to a local instance variable of another material. Once set, that parameter is evaluated on demand through the passed-in shader object. For example:

map = MapMaterial("image.ppm")
material = DiffuseMaterial()
material.Kd = map

The return value of the map material is automatically assigned to the local "Kd" value of the diffuse material. Any local variable can be mapped to another material, but, of course, only variables used within the shader have any meaning. It is completely up to the shader writer to make sure that the mapped values are actually useful. To make this task a bit easier, a number of conventions are used so that systems of shaders can be developed and used together. The following conventional parameters are used within material shaders:

Name Definition
P Position in current space of the point being shaded
Kd Coefficient of diffuse reflection
Ks Coefficient of specular reflection

Materials have two mandatory functions:

__init__(self)

Called when the object is instantiated, this function is used to set default values. There is no return value.


shade(self)

Called during shading, this method returns an RGBA 4-tuple. Additional values can also be set as instance variables of the passed-in shader object. For example, a shader that manipulates texture coordinates can set the shader.uv tuple.

When a material is used as a displacement map, the return value is assumed to be an (X,Y,Z,unused) 4-tuple. The position value is used as the displaced value for "P", the position of the vertex after displacement.
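
A sketch of a simple material subclass. The Material import path is an assumption, as is the idea that the shader object arrives as an argument to shade() (the listing above shows only self) and that shader.P behaves like a Vector:

from parc.scene import Material    # import path assumed

class FadeMaterial(Material):
    # Fades its base color toward black with distance from the origin.

    def __init__(self):
        # Set default parameter values; __init__ returns nothing.
        self.color = (1.0, 0.5, 0.2)

    def shade(self, shader):           # shader argument assumed
        # shader.P is the position of the point currently being shaded.
        falloff = 1.0 / (1.0 + shader.P.length())
        r, g, b = self.color
        return (r * falloff, g * falloff, b * falloff, 1.0)   # RGBA 4-tuple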


 
 

Images

The image module provides a set of basic input and output options for 2D images. The image data is allocated using an on-demand, tile-based scheme. All data is stored in floating point. Each image can have (R,G,B), alpha, and depth information.

Image()

Allocate a new image object. No pixel memory is allocated and the viewport is not set until a file is read in or the viewport is manually set.


Image.read(filename)

Open an image file. Depending on the file format, data is read in during this call or in an on-demand fashion as each tile is requested. The file type is determined from the filename extension and verified using any magic numbers stored in the file. Currently, only PPM files are supported.


  Image.write(filename)

Write the entire image to disk. The filetype is determined by the filename extension. Currently, only PPM files are supported.


Image.viewport(x0,y0,x1,y1)

Set the image viewport to the rectangle defined by the minimum and maximum integer values. The viewport includes the minimum and maximum pixels, i.e. a viewport of [0,0,0,0] is a valid, 1-pixel viewport. This function destroys any existing image data.


  Image.set_color((x,y),(r,g,b))

Set the pixel at (x,y) to the color (r,g,b). Allocates tile memory if necessary.


  Image.get_color(x,y)

Returns an (R,G,B) color value for the floating point pixel location using bi-linear interpolation of the four closest pixels.
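
A short sketch of the Image API above; the import path for the image module is an assumption:

from parc.image import Image     # import path assumed

img = Image()
img.viewport(0, 0, 255, 255)     # 256x256 pixels; the bounds are inclusive

# Fill with a horizontal gray ramp; tile memory is allocated on demand.
for y in range(256):
    for x in range(256):
        v = x / 255.0
        img.set_color((x, y), (v, v, v))

print(img.get_color(10.5, 20.5)) # bilinear lookup at a floating-point location
img.write("ramp.ppm")            # file type chosen by extension; currently PPM only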


 
 

Image Operations

The image operation module provides a set of deferred image processing nodes and a set of font operations for drawing text. The deferred operations all work using a tile-based pull scheme which computes the operation graph on demand.

Here's an example of some simple image processing on a passed-in image called render; ip refers to the image operation module:

def comp(render, name):
    ip.match_viewport(render)
    foreground = ip.image(render)
    background = ip.gradient((0.5, 0.5, 0.9, 1, 1), (0.5, 0.5, 0.9, 1, 1),
                             (0.2, 0.2, 0.6, 1, 1), (0.2, 0.2, 0.6, 1, 1))
    comp = ip.matte(foreground, background).get_image()
    # Unix
    ft = ip.Font("/usr/share/fonts/default/Type1/n019003l.pfb")
    # Windows
    # ft = ip.Font("C:/WINNT/Fonts/arial.ttf")
    ft.set_size(16)
    ft.draw_string(comp, 19, 9, name, 0, 0, 0)
    return comp

The first call sets the viewport of the image graph to match that of the render image. Next, two input nodes are created: the first reads data from the render image, and the second creates an image using a procedural gradient function. The two input nodes are composited together with a matte() operation, and finally some black text is rendered on the image. The computation of the gradient and matte functions doesn't happen until the get_image() method is invoked.

Font rendering uses the FreeType2 library, which recognizes most common font file formats.

Font(name)

Creates a new Font object required for rendering text. You must specify a valid font file in the required name argument. Note that fonts live in different places under different operating systems.


  Font.draw_string(image, x, y, text, r, g, b)

Render the string contained in text at the location (x,y) with the color (r,g,b) into the specified image.


Font.set_size(points)

Set the font size to the specified integer point size.


 
 

Image Viewer