CMSC 435/634: Introduction to Computer Graphics

Assignment 2
Ray Tracing
Due September 26, 2011

The Assignment

For this assignment, you must write a program that will render spheres using ray tracing. The input is in a simple text format called NFF, containing information about the view and objects in the scene. You will read an NFF scene on stdin and ray trace the single image described there. Rays that hit an object should use Lambertian diffuse shading, while rays that do not hit any object should use the background color of the scene. Do not try to include any of the more advanced ray tracing features you may read about (shadows, reflection, refraction, fancier shading models, etc.); we will get to those later. Your output should be an image file in PPM format.


For the base assignment, you should be able to trace balls-2.nff, which is checked into your assn2 directory (do a "cvs update -d" to get it). Additional NFF format scenes can be generated using a set of programs called the 'Standard Procedural Databases'. A copy of these programs may be found in ~olano/public/spd3.14/. While NFF format is relatively simple, it does contain many features we will not be using in this assignment. You should be able to read any NFF format file, but ignore anything you do not implement. You may refer to, use, or modify the NFF reading code in spd3.14/readnff.c. For the basic assignment, you should at least handle the "v" viewing specification, "b" background color, "l" light specification, "f" object material specification (just the r g b color part and the kd diffuse coefficient; ignore the rest), and "s" sphere specification.

Since these programs produce their output on stdout and your ray tracer takes its input on stdin, you can pipe them together:

~olano/public/spd3.14/balls | ./trace

Since that scene has 7381 spheres and can be quite slow to ray trace, the -s option can be used to generate a simpler model. For example, balls-2.nff, with 91 spheres, was generated with

~olano/public/spd3.14/balls -s 2 > balls-2.nff


We are using PPM because it is an exceedingly simple format to write. See the man page for "ppm" for more details.

PPM files can be viewed directly or converted to other image formats for viewing. On the GL systems, you can use "display" to view these files or "convert" to convert them into most other image formats.

To create a PPM file, first you should store your image in an array of bytes in y/x/color index order:

unsigned char pixels[HEIGHT][WIDTH][3];

When filling in this array, remember that it is in y/x order, not the more familiar x/y order. The final index is the color component, with r=0, g=1 and b=2. Color values range from 0 to 255. For example, this would store a floating point color value of .5 into the green component at x,y:

pixels[y][x][1]= .5*255;

You'll need something a little more complex than this, since combining the contribution from several lights can give a floating point color value greater than 1, which should still be clamped to a maximum of 255. Once you've filled in the pixels array, actually writing the PPM file is quite simple. It is just a text header consisting of the characters "P6" (identifying a color PPM file), the image width and height, and the maximum color value. The header is followed by the raw binary pixel data. Here's the complete C code necessary to write a PPM file:

FILE *f = fopen("trace.ppm","wb");
fprintf(f, "P6\n%d %d\n%d\n", WIDTH, HEIGHT, 255);
fwrite(pixels, 1, HEIGHT*WIDTH*3, f);
fclose(f);
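The clamping mentioned above might be handled by a small helper along these lines (the function name is illustrative):

```cpp
#include <algorithm>

// Convert one floating-point color component (possibly > 1 after summing
// light contributions) to a byte: clamp to [0, 1], scale to 255, round.
unsigned char to_byte(float c) {
    return (unsigned char)(std::min(std::max(c, 0.0f), 1.0f) * 255.0f + 0.5f);
}
```

Then filling a pixel becomes, e.g., pixels[y][x][1] = to_byte(green);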

634 only

Implement the other NFF primitive types ('p' = polygons, 'pp' = polygon with per-vertex normals, and 'c' = cones).

Other people's code

Ray tracing is a popular rendering technique, and the internet contains lots of resources for ray tracers in general and things like ray-object intersection in particular. Other than the provided SPD code and the PPM snippet above, YOU MAY NOT USE ANY OUTSIDE CODE. All code that you use must be your own. You are not required to use the provided code, but if you choose not to, you must still write your own.


This is a big assignment. Start NOW, or you will probably not finish. No, really, I promise you will not be able to do it in the last two days.

You can use C or C++, but I recommend C++: a vector class with addition, scalar multiplication, and dot product operators will make many operations more compact and more like the vector math equivalent. C++ includes STL data structures that can be useful for this assignment. Also, several other concepts of ray tracing map well into C++.
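For example, a minimal vector class along those lines might look like this (the names here are illustrative; design your own as you see fit):

```cpp
#include <cmath>

// Minimal 3D vector class sketch with the operators that make ray
// tracing math read like the equations: addition, subtraction, scalar
// multiplication, dot product, length, and normalization.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3 &v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vec3 operator-(const Vec3 &v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vec3 &v) const { return x * v.x + y * v.y + z * v.z; }
    double length() const { return std::sqrt(dot(*this)); }
    Vec3 normalized() const { double l = length(); return {x / l, y / l, z / l}; }
};
```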

As before, I recommend laying out a plan of attack before coding. Data structures are less of an issue for the basic assignment, but if you plan to attempt any additional primitives, a common and effective strategy is to create a generic object class, derive sphere, polygon, cone, etc. classes from it, and give each a specialized virtual method that computes the intersection of a ray with that primitive type. To assist in your planning, here is an outline of the steps your ray tracing program will need to do:

  1. Read file format
  2. Calculate image plane and pixel locations in world space
  3. Calculate ray from eye point through each pixel and into the scene
  4. Calculate ray-object intersections, choose smallest/closest
  5. Compute the shading for that intersection
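For spheres, step 4 reduces to solving a quadratic. A possible sketch, using plain arrays in place of whatever vector class you write:

```cpp
#include <cmath>

// Ray-sphere intersection sketch: returns the smallest positive t along
// the ray origin + t*dir (dir assumed unit length), or -1 on a miss.
// o = ray origin, d = ray direction, c = sphere center, r = radius.
double intersect_sphere(const double o[3], const double d[3],
                        const double c[3], double r) {
    double oc[3] = {o[0] - c[0], o[1] - c[1], o[2] - c[2]};
    // With d normalized, solve t^2 + 2(oc.d)t + (oc.oc - r^2) = 0.
    double b = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
    double cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    double disc = b*b - cc;
    if (disc < 0) return -1;           // ray misses the sphere
    double s = std::sqrt(disc);
    double t = -b - s;                 // nearer root first
    if (t < 0) t = -b + s;             // origin inside: take the far root
    return (t > 0) ? t : -1;           // both roots behind the eye: miss
}
```

Per step 4, you would call something like this for every sphere and keep the hit with the smallest positive t.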

Note that you can write your own NFF files by hand, which can be very handy for debugging. We recommend that you start with a test scene looking from (0,0,0) at (0,0,1) containing a single sphere centered on the Z axis. With a single sphere, it's easier to tell if your loading is working, and with a simple view it is easier to tell if your ray positions are correct. Start trying to find intersections with a 1x1 pixel image, which should give you a ray straight down the Z axis hitting the sphere. Move the sphere around, making sure you get the right answer when you miss it, when you are inside of it, or when it is behind you. You can scale up to 2x2 or 3x3 images to make sure your ray position code is correct. Once you have the basics working, move up to a larger window so you can visually tell if your sphere is rendering as a sphere, then you can try switching to the SPD scenes.
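Following the NFF description that ships with the SPD, a hand-written test scene like the one suggested above might look something like this (a 1x1 pixel view from the origin down +Z, one light, and a red unit sphere at z=5; the "f" fields after the color are kd, ks, shine, transmittance, and index of refraction):

```
v
from 0 0 0
at 0 0 1
up 0 1 0
angle 45
hither 0.1
resolution 1 1
b 0 0 0
l 0 10 0
f 1 0 0 1 0 0 0 0
s 0 0 5 1
```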

It is also worthwhile getting image output working early. Printing debugging values works OK for one pixel/one sphere scenes, but for larger images and scenes, outputting values other than colors at each pixel can be a valuable debugging tool. For example, displaying one color if the discriminant is negative and another if it is positive, or mapping the ray intersection distance to grey values from 0 to 255, or even displaying the sphere number as an integer color value in one channel to narrow down problem cases.

What to turn in

Turn in this assignment electronically by checking your source code into your assn2 CVS directory by 11:59 PM on the day of the deadline. Do your development in the assn2 directory so we can find it, and tag your submitted version with the 'submit' tag. As always, double check that you have submitted everything we need to build and run your submission. Be sure to include a Makefile that will build your project when we run 'make', and a readme.txt file telling us about your assignment. Do not forget to tell us what help (if any) you received from books, web sites, or people other than the instructor and TA.