Glome tutorial


Notes[edit]

This tutorial is written against a very old version of Glome. Some things have changed, including a transition to a type-class system for defining solids. Most of this content is still relevant, though.

Installing Glome[edit]

First, you need to install the software. The GlomeTrace library has few dependencies, but GlomeView requires SDL. (Earlier versions required OpenGL.)

You should be able to install GlomeTrace with "cabal install GlomeTrace". (It may be a good idea to run "cabal update" first, if you haven't done that recently, to make sure you get the latest package.)

You should be able to "cabal install GlomeView" as well, but to use Glome for anything other than viewing the default test scene, you may want to download the source tar file manually.

To build GlomeView manually, you'll need to run the commands:

runhaskell Setup.lhs configure --prefix=$HOME --user
runhaskell Setup.lhs build
runhaskell Setup.lhs install

Glome doesn't really need to be installed in order to run. If you'd prefer, you can invoke it directly from the build directory as "./dist/build/Glome/Glome". (To use more than one core, you can pass in "+RTS -N4", for instance, if you want to run on 4 cores.)

If everything works, a window should open and you should (after a pause) see a test scene with a variety of geometric shapes. If it doesn't work, then let me know or file a bug on the GitHub page.

Command line options, key commands[edit]

(Note: None of these options are currently implemented in GlomeView. We may re-enable this at some point.)

These are pretty sparse at the moment. You can specify an input scene file in NFF format with the "-n [filename]" option, and there is one such scene included with Glome, a standard SPD level-3 sphereflake.

NFF isn't very expressive (and it was never intended to be), so I won't say much about it here. Glome supports most of the basic features of NFF except for refraction. My approach to polygon tessellation is also questionable: the SPD "gears" scene, for instance, doesn't render correctly.

You may have to adjust the lighting to get satisfactory results (i.e. by adjusting the value of "intensity" in the "shade" function in the Trace.hs file and recompiling). NFF doesn't define a specific intensity, and I'm not sure what sort of falloff (if any) Eric Haines used when he rendered the reference images.

Once an image is rendered, typing "q" with the rendering window in focus will close Glome. Typing "s" will print a dump of the internal representation of the scene. (Not very useful at this stage, perhaps, but it's a useful debugging tool that might come in handy later.)

Describing Scenes in Haskell[edit]

Ideally, using Glome would be a matter of firing up Blender or some other modeler and editing 3-d geometry in a graphical, interactive way and then exporting the scene to Glome, which would do the final render.

Unfortunately, Glome isn't able to import files from any standard 3-d format except NFF (which isn't typically used for anything but benchmark scenes).

So, with only limited import functionality, how do we model complex scenes?

One option we have left is to describe our scene directly in Haskell, and then compile the description and link it with the Glome binary. This is the approach we will be following for the remainder of this tutorial.

This isn't quite as difficult as it sounds. POV-Ray, for instance, has a very user-friendly scene description language (SDL), and many artists type their scenes in directly as text.

Glome was, in fact, quite heavily influenced by POV-Ray, so anyone familiar with POV's SDL and Haskell should be able to write scenes for Glome without much trouble.

Unlike POV-Ray, in which the SDL is separate from the implementation language (C, or more recently, C++), in Glome there is no distinction. In that sense, Glome is more of a rendering library than a standalone rendering application.

The default scene, which is loaded if the user does not specify an input file on the command line, is defined in TestScene.hs. To define a new scene, you must edit this file and then recompile the source.

TestScene.hs: Camera, Lights, and the minimal scene[edit]

TestScene.hs imports a number of modules at the beginning, defines a handful of objects, and then defines a single function.

 scn :: IO Scene
 scn = return (Scene geom lits cust_cam (t_matte (Color 0.8 0.5 0.4)) c_sky)

"scn" is called from Glome.hs to specify a scene if there wasn't one passed in as a command line argument.

"scn" uses the IO monad, in case we want to load a file from disk. We won't be doing that in any of our examples, so you can safely ignore the "IO" part. It returns an object of type "Scene".

The Scene type and some related aliases are defined in Solid.hs, and a simple scene can be described like this:

type Tag   = String
type T     = Texture Tag M
type M     = Material Tag
type SI    = SolidItem Tag M
type Scene = (SI, [Light], Camera, Shader Tag M [Light] [(Color, Vec)])

lights = [ light (Vec (-100) 70 (140)) (cscale (Color 1 0.8 0.8) 7000)
         , light (Vec (-3) 5 8) (cscale (Color 1.5 2 2) 10)
         ]

cam = camera (vec (-2) (4.3) (15)) (vec 0 2 0) (vec 0 1 0) 45

geometry = sphere (vec 0 0 0) 1

scn :: IO Scene
scn = return (geometry, lights, cam, materialShader)

GlomeView executes the function "scn", which returns the scene to be rendered. (It runs in the IO monad so that you can open files or whatnot if that's necessary to gather the data you need to render the scene. Most of the time we don't need that functionality.)

In order to construct a valid scene we need to put something into all the required fields of the scene type.

The first field takes a SolidItem of some kind. (SolidItem is an existential wrapper around any concrete type that is an instance of the Solid typeclass defined in Solid.hs.) A SolidItem could be any of the basic primitive types that Glome natively supports, or a user-defined type. These might be triangles, spheres, cones, etc...

You might wonder why we only need one primitive to define a scene. Certainly, we'd want to have a scene that contains more than a single triangle or box or sphere!

The solution is that Glome includes several primitive types that let us create more complex Solids out of simple ones. For instance, a Group is a list of Solids, and Glome treats that list as if it was a single Solid. This will be discussed in more detail later on.

A light is defined by a location and a color (in RGB). Light intensity can be controlled by scaling up the color's rgb values (they don't need to lie between 0 and 1).

Location is specified by a vector, which can be constructed by calling "vec" with x, y, and z arguments.


(See Vec.hs from GlomeVec and Clr.hs for the definitions and useful functions for dealing with vectors and colors, respectively.)

One note on how Haskell treats numbers: unlike in C, a decimal point isn't necessary to distinguish Float literals from Ints. However, it is usually necessary to enclose negative numbers in parentheses.
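
For example, using the "vec" constructor mentioned above:

 -- Only the negative x component needs parentheses; 4.3 and 15 are both fine as Flt literals.
 p = vec (-2) 4.3 15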


A camera describes the location and orientation of the viewer. There is a function for creating a camera defined in Solid.hs:

camera :: Vec -> Vec -> Vec -> Flt -> Camera
camera pos at up angle =
 let fwd   = vnorm $ vsub at pos
     right = vnorm $ vcross up fwd
     up_   = vnorm $ vcross fwd right
     cam_scale = tan ((pi/180)*(angle/2))
 in
  Camera pos fwd
         (vscale up_ cam_scale) 
         (vscale right cam_scale)

Its arguments are: a point defining its position, another point defining where it's looking, an "up" vector, and an angle. At this point, we need to decide which direction is up. I usually pick the "Y" axis as pointing up. So, to set up a camera at position <20,3,0> looking at the origin <0,0,0>, with a 45 degree field of view (measured from the top of the image to the bottom, not right to left or diagonal), we might write:

 camera (vec 20 3 0) (vec 0 0 0) (vec 0 1 0) 45

The shader field defines how surfaces are lit. As of GlomeTrace 0.3, it is now possible to define your own shader, but in most cases the standard shader (materialShader defined in Shader.hs) is probably what you want.

The shader also defines what happens when a ray misses all objects in the scene. The standard shader just returns black in that case with the alpha channel set to zero (full transparency). GlomeView currently ignores the alpha channel, but at some point we may add support for png output.


Spheres, Triangles, Etc..[edit]

Now with all that out of the way, we can describe some geometry. As we mentioned earlier, every primitive is a member of the typeclass "Solid". There are quite a few instances: Triangle, TriangleNorm, Disc, Cylinder, Cone, Plane, Box, Group, Intersection, Bound, Difference, Bih, Instance, Tex, and Void.

The simplest primitive is the sphere and that's where we'll start. (Its ubiquity in ray tracing might lead some to believe that it's the only primitive ray tracers are any good at rendering.)

The standard Sphere constructor takes a center, a radius, and the reciprocal of the radius: the reciprocal is stored for efficiency, to avoid a division (which is typically much slower than multiplication). We don't really want to specify the inverse radius every time, so there's a simpler constructor we'll use instead (note the lower case vs upper case):

sphere :: Vec -> Flt -> SolidItem t m
sphere c r =
 SolidItem (Sphere c r (1.0/r))

We can, for instance, construct a Sphere at the origin with radius 3 like this:

sphere (vec 0 0 0) 3

Triangles are described by the coordinates of their vertices. There is a second kind of triangle, "TriangleNorm" that deserves a little bit of explanation. This second kind of triangle allows the user to specify a normal vector at each vertex. The normal vector is a unit vector perpendicular to the surface. Usually, this is computed automatically. However, sometimes the resulting image looks too angular, and by interpolating the normal vectors, Glome can render a flat triangle that appears curved even if it really isn't.

This is, perhaps, an inelegant trick, but it works well. Sometimes, we'll be able to define surfaces exactly without approximation, but many models are only available as triangle meshes. Also, triangles may be useful to approximate shapes that Glome doesn't support natively (like toruses).
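
As a sketch (assuming the Triangle constructor simply takes its three vertices and can be wrapped in SolidItem like the other primitives; check Triangle.hs for the exact fields):

 -- A flat triangle from its three corner points (assumed constructor layout).
 tri = SolidItem (Triangle (vec 0 0 0) (vec 2 0 0) (vec 0 2 0))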

Discs are another primitive with a particularly simple, fast ray-intersection test. They are defined by a center point, a normal vector, and a radius. The surface of the disc is oriented perpendicular to the normal. The radius is actually specified as the radius squared, so you need to be aware of that if you're going to use them.
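
For example (the constructor's fields, as we'll see in the intersection code later on, are the center point, the normal, and the squared radius):

 -- A disc of radius 2 (passed as 2*2 = 4), centered at the origin and facing up the Y axis.
 d = SolidItem (Disc (vec 0 0 0) (vec 0 1 0) 4)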

Planes are even simpler than discs: they're defined by a normal vector and a perpendicular offset from the origin. Essentially, a plane is a half-space; everything on one side is inside, and everything on the other side (the direction pointed at by the normal) is on the outside.

A (usually) more convenient way to specify a Plane is with a point on the surface and a normal; a function that does just that is provided:

plane :: Vec -> Vec -> SolidItem t m
plane orig norm_ = SolidItem $ Plane norm d
 where norm = vnorm norm_
       d = vdot orig norm

If we want an infinite ground plane stretching to the horizon, we simply add a plane whose normal points along the Y axis (assuming that's our current definition of "up").

plane (Vec 0 0 0) (Vec 0 1 0)

Glome supports Z-axis aligned cones and cylinders. The cones are perhaps more accurately described as tapered cylinders; they need not come to a point.

As they stand, these primitives aren't much use unless we really do want a Z-aligned cone or cylinder. Fortunately, Glome provides more convenient constructors:

cylinder_z :: Flt -> Flt -> Flt -> SolidItem t m
cylinder_z r h1 h2 = SolidItem (Cylinder r h1 h2)

cone_z :: Flt -> Flt -> Flt -> Flt -> SolidItem t m
cone_z r h1 h2 height = SolidItem (Cone r h1 h2 height)

-- | Construct a general cylinder from p1 to p2 with radius r.
cylinder :: Vec -> Vec -> Flt -> SolidItem t m
cylinder p1 p2 r =
 let axis = vsub p2 p1
     len  = vlen axis
     ax1  = vscale axis (1/len)
     (ax2,ax3) = orth ax1 
 in transform (cylinder_z r 0 len)
              [ (xyz_to_uvw ax2 ax3 ax1),
                (translate p1) ]
                        
-- | Construct a cone from p1 to p2.  R1 and r2 are the radii at each
-- end.  A cone need not come to a point at either end.
cone :: Vec -> Flt -> Vec -> Flt -> SolidItem t m
cone p1 r1 p2 r2 =
 if r1 < r2 
 then cone p2 r2 p1 r1
 else if r1-r2 < delta
      then cylinder p1 p2 r2
      else
        let axis = vsub p2 p1
            len  = vlen axis
            ax1  = vscale axis (1/len)
            (ax2,ax3) = orth ax1 
            height = (r1*len)/(r1-r2) -- distance to end point
        in
         transform (cone_z r1 0 len height)
                   [ (xyz_to_uvw ax2 ax3 ax1),
                     (translate p1) ]

cone_z and cylinder_z don't do anything the regular constructors don't do, but "cylinder" and "cone" are much more interesting. "cylinder" takes a start point, an end point, and a radius, and creates a cylinder whose axis stretches from one point to the other. The "cone" constructor is similar, but it takes a radius for each end. Note that if you call "cone" with (nearly) identical radii at both ends, it automatically simplifies to a cylinder. We'll see how to use cones effectively in the next section.
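
For example:

 -- A horizontal bar of radius 0.2 running from (-2,1,0) to (2,1,0).
 bar = cylinder (vec (-2) 1 0) (vec 2 1 0) 0.2

 -- A spike that tapers from radius 0.5 at its base to a point at the tip.
 spike = cone (vec 0 0 0) 0.5 (vec 0 3 0) 0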

These constructors make use of transformations to orient them properly in space, which is something we'll get to later on.

Boxes are axis-aligned, and can be created with a constructor that takes two corner points:

box :: Vec -> Vec -> SolidItem t m
box (Vec x1 y1 z1) (Vec x2 y2 z2) =
 SolidItem (Box (Bbox (Vec (fmin x1 x2) (fmin y1 y2) (fmin z1 z2))
                      (Vec (fmax x1 x2) (fmax y1 y2) (fmax z1 z2))))
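
For example:

 -- A unit cube centered at the origin; the two corner points may be given in any order.
 cube = box (vec (-0.5) (-0.5) (-0.5)) (vec 0.5 0.5 0.5)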

Groups[edit]

Sometimes it's convenient to treat a whole group of objects as if they were a single object, and for that purpose we have Group. One reason we might want to do this has already been mentioned: the scene needs to be described in terms of a single object. There are several others. We might want to apply a texture, or move or rotate a whole group of objects at once, instead of treating them individually. Using groups is quite simple. For instance:

 group [ (sphere (Vec (-1) 0 0) 2), 
         (sphere (Vec 1 0 0) 2),
         (sphere (Vec 0 (-1) 0) 2),
         (sphere (Vec 0 1 0) 2) ]

This gives us four spheres we can treat as a single object.

Group is just an alias for [SolidItem]. It has a special constructor, "group", that does a little bit of extra optimization for us:

group :: [SolidItem t m] -> SolidItem t m
group [] = SolidItem Void
group (sld:[]) = sld
group slds = SolidItem (flatten_group slds)

If you try to create an empty group, Glome will replace it with a primitive of type Void, which is a degenerate object that has no appearance.

If you try to create a group that contains only one object, Glome throws the group away and just uses that object directly.

If you try to create a group that contains other groups, Glome will consolidate those into one big group. (This is what "flatten_group" does.)

In general, it's better to use "group" than "Group". Even better than "group", though, is "bih", which behaves similarly to group but performs much better if your group contains more than a couple items. I'll explain why later.

Haskell has some convenient syntax for creating lists. For instance, if you want to create a lot of spheres, you can use:

let lattice = 
 let n = 20
 in [sphere (vec x y z) 0.1 | x <- [(-n)..n],
                              y <- [(-n)..n], 
                              z <- [(-n)..n]]

This gives you a list of spheres arranged in a 41x41x41 3-D grid. You can then create a group out of it:

group lattice

But it will render very slowly. If you use "bih" instead of "group", it will render much faster. (We'll get into why later on.)
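
For example, the same list of spheres sorted into an acceleration structure:

 lattice_geom = bih lattice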

We can use cones and spheres together in a useful way by stringing a series of cones together with a sphere to hide each joint.

spiral = [ ((Vec ((sin (rot n))*n) 
                 ((cos (rot n))*n) 
                 (n-3)), (n/15)) | n <- [0, 0.05..6]]
                               
coil = bih (zipWith (\ (p1,r1) (p2,r2) -> (Solid.group [(cone p1 r1 p2 r2), 
                                                        (sphere p1 r1)] )) 
                    spiral 
                    (tail spiral))

Here, "spiral" is a list of (location, radius) tuples, and we use zipWith over consecutive pairs of those tuples to create a cone for each segment, with a sphere at each joint to hide the seam. We save the result into the variable "coil".

CSG[edit]

The basic primitives give us something to work with, and we might make reasonably complex scenes from them, but many simple objects are still difficult or impossible to create, unless we approximate them with triangles.

Constructive Solid Geometry (CSG) gives us the ability to combine objects in more interesting ways than we can with "group".

With a Difference object, we can subtract one object from another. For instance, to make a pipe, we could start with a cylinder, and then subtract a cylinder with a smaller radius. Or, to make a house, we might start with a box, and then subtract a smaller box to make the interior space, and then subtract more boxes to make space for windows and doors.

Creating a difference is simple: it's just "Difference a b", where b is the solid subtracted from a.
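
For example, here is a sketch of the pipe described above (assuming Difference can be wrapped in SolidItem like the other primitives; the inner cylinder is extended slightly past both ends so we don't create infinitely thin end caps):

 -- A hollow pipe along the Y axis: a wide cylinder minus a narrower, slightly longer one.
 pipe = SolidItem (Difference (cylinder (vec 0 0 0) (vec 0 4 0) 1.0)
                              (cylinder (vec 0 (-0.1) 0) (vec 0 4.1 0) 0.8))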

In order for CSG to be meaningful, it needs to be performed with objects that have a well-defined inside and outside (like planes, spheres, cones, and cylinders, but not triangles or discs). It won't break anything if you use objects that don't have volume, but you might not get the results you want.

CSG can be performed on composite objects, like group or bih, and they can be nested. (Note, however, that not all combinations have been tested, so things might not work the way you expect. If this happens, send me an email about it.)

One caveat on difference is that you shouldn't ever create objects with infinitely thin walls. They might render correctly, or they might not, depending on floating-point rounding.

Intersection is like Difference, but it takes a list of objects, and the resulting object is the overlap of all those objects. Planes are very useful in intersections (they're more properly described as half-spaces; everything on one side is outside, and everything on the other side is inside); we could easily define a box as the intersection of six axis-aligned planes, with their normals all facing outward.
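
For instance, here is a sketch of such a box, using the "plane" constructor shown earlier (each plane gets a point on the corresponding face and an outward-facing normal):

 -- A 1x1x1 cube centered at the origin, built as the intersection of six half-spaces.
 unit_cube = Intersection
  [ plane (vec 0.5 0 0)    (vec 1 0 0)
  , plane (vec (-0.5) 0 0) (vec (-1) 0 0)
  , plane (vec 0 0.5 0)    (vec 0 1 0)
  , plane (vec 0 (-0.5) 0) (vec 0 (-1) 0)
  , plane (vec 0 0 0.5)    (vec 0 0 1)
  , plane (vec 0 0 (-0.5)) (vec 0 0 (-1)) ]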

We can define a dodecahedron succinctly:

dodecahedron pos r =
 let gr = (1+(sqrt 5))/2 -- golden ratio, 1.618033988749895
     n11 = [(-r),r]
     ngrgr = [(-gr)*r,gr*r]
     points = [Vec 0 y z | y <- n11, z <- ngrgr] ++
              [Vec x 0 z | z <- n11, x <- ngrgr] ++
              [Vec x y 0 | x <- n11, y <- ngrgr]
     pln x = (Plane (vnorm x) (r+(vdot (vnorm x) pos)))
 in
  Intersection ((sphere pos (1.26*r)):(map pln points))

This is a function that takes a position and a radius, and generates a dodecahedron circumscribed about the sphere with that center and radius.

The plane normal vectors are taken from the coordinates of the vertices of an icosahedron. (Similarly, we can create an icosahedron by using the coordinates of the vertices of a dodecahedron.)

I put a sphere in the list of objects as an optimization. Any ray that misses the first sphere can be assumed to miss the whole object. This also allows Glome to compute an efficient bounding box for the intersection: all the planes are infinite objects, so they have infinite bounding boxes.

Transformations[edit]

Transformations are a convenient way of moving, rotating, and stretching objects.

To move an object a by some vector v, we can say:

 transform a [translate v]

This effectively creates an instance of a in a new location. This is an efficient way of creating many copies of a complex object without using much extra space. (Each transform uses 24 floating point numbers to store a matrix and its inverse that describe the transformation.)

"transform" takes a list, so we can combine several transformations, and they behave as you might expect:

 transform a [translate (Vec 0 3 0),
              rotate (Vec 0 1 0) (deg 90),
              scale (Vec 2 2 2)]

This moves object "a" up three units, then rotates 90 degrees about the Y axis, and then stretches the object equally on all three axes by a factor of 2.

Stretching and rotations happen about the origin, so you might need to translate an object to the origin, rotate it, then translate it back in order for the translations to do what you want. (This is consistent with how most graphics APIs work -- if you're familiar with, say, OpenGL or POV-Ray, this should be pretty familiar.)
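
For example, to rotate a hypothetical object "b" whose center sits at <0,3,0> about its own center (assuming, as in the example above, that the transformations are applied in the order listed):

 b_rotated = transform b [ translate (Vec 0 (-3) 0)     -- move the center to the origin
                         , rotate (Vec 0 1 0) (deg 45)  -- rotate about the Y axis
                         , translate (Vec 0 3 0) ]      -- move it back where it was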

Interestingly, performing multiple transformations at a time doesn't produce any more overhead than just performing one transformation; internally, any combination of transformations can be represented as a single matrix.

(Arcane trivia: internally, "transform" reverses the list of transformations before applying them.)

One caveat is that you should use transformations sparingly. For instance, rather than creating a unit sphere and then scaling it and moving it into position, it's better to use the sphere's constructor the way it was intended. A transformed sphere consumes more memory than just a sphere by itself, and rendering will be a little slower.

Cones and cylinders, on the other hand, are already stored as transformations (except for the rare case when you really do want a Z-axis aligned cone or cylinder), so transforming them won't take up any extra space.

Bounding Objects[edit]

The preceding sections have (hopefully) shown everything you need to know to model the kinds of shapes you want to render. However, there are many equivalent ways to represent the same scene, and some of them render much faster than others. To understand why, you will need to know a little bit about how a ray tracer works.

A ray tracer sends rays out from the camera into the scene, one ray per pixel. (Or more than one ray, if the ray tracer supports anti-aliasing, which GlomeView currently does not.) Each ray has to be tested for intersection with each object in the scene. (Conceptually, scenes in Glome are composed of only a single object, but in practice intersecting with that object usually means doing a lot of ray-intersections with sub-objects.) For scenes with many objects, this can be very slow. Suppose we have a thousand spheres in our scene; at the default resolution of 720x480, we have 345,600 rays. This means that for each image, we have to do 345.6 million ray-sphere intersection tests! Hardware may be getting faster, but we need to do better than that if we want anything near reasonable rendering speed.

Fortunately, we don't really need to do that many ray-intersection tests. What we can do instead is place bounding objects around the complicated bits of our scene. If a ray doesn't hit the bounding object, then we know it won't hit anything inside the bounding object, and we don't have to do any of those intersection tests.

Glome has a "Bound" primitive just for this purpose: if you have some complicated solid "a" and a simple bounding solid (such as a sphere or a box) "b", you can declare the bounded object as "Bound b a" (the simple object comes first).

You could do a similar thing with Intersection, but Bound is a bit more efficient.
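
For example, a sketch that bounds a hypothetical complicated object "blob" with a sphere (wrapped in SolidItem, following the pattern of the other constructors):

 -- Any ray that misses the sphere skips every intersection test inside "blob".
 bounded_blob = SolidItem (Bound (sphere (vec 0 0 0) 3) blob)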

Bounding objects are great for improving performance, but they're unwieldy. Finding the smallest bounding object that will enclose another object is a non-trivial task, and it's easy to make mistakes, leading to incorrect renderings. Also, it is not always obvious whether the overhead introduced by testing against the bounding object is outweighed by the reduced number of intersections against the bounded object.

The Bounding Interval Hierarchy[edit]

Fortunately, there's an easier way, and we've mentioned it before: use "bih".

The "bih" constructor takes a list of primitives and sorts them into a hierarchy of bounding planes called a Bounding Interval Hierarchy.

(More information on this method can be found in: Carsten Wächter and Alexander Keller, "Instant Ray Tracing: The Bounding Interval Hierarchy".)

BIH is one of many acceleration structures used in ray tracing. Other choices are: regular grids, BSP trees, octrees, bounding volume hierarchies (BVH), and kd-trees. Currently, Glome only supports BIH (though there is an earlier version written in OCaml that supports kd-trees as well).

In general, BIH is well-behaved but there are a few cases to avoid when possible. For instance: try not to use very long skinny things, especially if they're overlapping a lot of other long skinny things. If you want to render a thousand toothpicks spilled on the floor, then you might want to consider representing each toothpick as a series of short cylinders instead of one long cylinder.

Another problem with the bih constructor is that it doesn't know how to interpret complex hierarchies. For instance, if you pass a list containing a single transformation of a group of objects, then bih will treat it as a list of a single object.

There is a useful helper function to flatten out complex hierarchies of bound objects, transformations, and groups. The function is "flatten_transform":

flatten_transform :: Solid -> [Solid]
flatten_transform (Group slds) =
 flatten_group $ concat (map flatten_transform slds)

flatten_transform (Instance s xfm) =
 case s of 
  Group slds -> flatten_transform $ group (map (\x -> transform x [xfm]) slds)
  Bound sa sb -> flatten_transform (transform sb [xfm])
  Instance sa xfm2 -> flatten_transform (transform s [xfm])
  _ -> [transform s [xfm]]

flatten_transform (Bound sa sb) = flatten_transform sb
flatten_transform s = [s]  -- assumed catch-all: any other solid is returned unchanged

flatten_transform throws away manually created bounding objects it finds, and pushes all transformations out to the leaves of the tree. In many cases, this will mean that the scene will consume more memory; however, it may also render much faster.
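
For example, a sketch (where "objs" is a hypothetical list of solids, and assuming flatten_transform's output can be handed straight to "bih"):

 -- Push the rotation down to the individual objects, then let bih sort them.
 geom = bih (flatten_transform
             (transform (group objs) [rotate (Vec 0 1 0) (deg 30)]))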

Textures, Lighting[edit]

In Glome, textures are not associated with individual geometric primitives. Instead, Glome uses a container object called "Tex":

Tex (SolidItem t m) (Texture tag mat)

The SolidItem is the thing that we want to apply the texture to, and the Texture is the texture itself.

A texture, in turn, is a function that takes a ray and its corresponding ray intersection and returns a material. The shader's job is to take the material and ray intersection data and figure out what color to render it.

It's possible to define your own material types, but most of the time you'll want to use the one provided by Shader.hs:

data Material t =
  Surface Color Flt Flt Flt Flt Flt Bool | -- color, alpha, ambient, diffuse, specular, shine, dielectric
  Reflect Flt | -- amount
  Refract Flt Flt | -- amount, ior
  Warp (SolidItem t (Material t))
       (SolidItem t (Material t))
       [Light]
       (Ray -> Rayint t (Material t) -> Ray) | -- frame, scene, ctx, xfm
  AdditiveLayers [Material t] |
  Blend (Material t) (Material t) Flt

A "Surface" defines basic surface properties like ambient, diffuse, and specular illumination. "Reflect" is used for reflective surfaces (like mirrors), and Refract is used for refraction (not currently implemented). AdditiveLayers is used to layer multiple Materials and add the colors together, while Blend allows you to create a weighted mixture of two layers.

Often, we might want to texture a single object by itself:

m_matte :: Color -> M
m_matte c = (Surface c 1 0.03 1 0 0 False)

matte :: Color -> T
matte c = 
 (\_ _ -> m_matte c)

Tex (sphere (Vec 0 0 0) 1) (matte (Color 1 0 0))

This produces a red ball. Or we could texture a whole group of objects at once by wrapping a Tex around a whole group (or bih) of objects.

A textured object is just a regular object, so what happens if we apply another Texture?

Tex (Tex (sphere (Vec 0 0 0) 1) (matte (Color 1 0 0))) (matte (Color 0 1 0))

You might think this will produce a green ball, but in fact it produces a red one. The rule here is that the innermost texture has highest priority. Applying a texture to a large group of objects applies that texture only to the objects that don't already have a texture. An exception is if the inner texture returns an alpha value less than one (i.e. it's at least partially transparent), in which case the outer texture is applied underneath the inner texture.

Textures can have very complicated behavior, since they can act conditionally on all the data the ray tracer returns.

type Texture tag mat = Ray -> Rayint tag mat -> mat

data Rayint tag mat = RayHit {
 ridepth' :: !Flt,
 ripos    :: !Vec,
 rinorm   :: !Vec,
 riray    :: !Ray,
 riuvw    :: !Vec,
 ritex    :: [Texture tag mat],
 ritag    :: [tag]
} | RayMiss deriving Show

The most useful field here is "ripos" which is the XYZ coordinates of the location of the ray intersection. Most textures won't need to access the other fields.

Note that a ray can miss its target object, in which case the value of the Rayint is "RayMiss". In general, a texture shouldn't need to worry about that case, since Glome wouldn't be evaluating the texture if the ray missed the object.
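
For example, here is a sketch of a position-dependent texture (using the "T" alias and the "m_matte" helper from above): a checker pattern that picks one of two matte materials based on which integer lattice cell the hit point falls in.

 -- A procedural checker texture: inspect the hit position and alternate
 -- between two matte materials.
 checker :: Color -> Color -> T
 checker c1 c2 _ ri =
  case ri of
   RayHit { ripos = Vec x y z } ->
    if even (floor x + floor y + floor z)
    then m_matte c1
    else m_matte c2
   RayMiss -> m_matte c1  -- shouldn't happen; Glome only shades actual hits

 -- Applied just like "matte" above, here to an infinite ground plane:
 checked_floor = Tex (plane (Vec 0 0 0) (Vec 0 1 0))
                     (checker (Color 1 1 1) (Color 0.2 0.2 0.2))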

When creating textures, it may be necessary to add some random variation to make it more interesting to look at. For this, "perlin" is a Perlin noise function, defined in "SolidTexture.hs". Perlin noise is a well-known and efficient algorithm for generating a three dimensional splotchy pattern that is a very useful building block for defining more complex textures.

One caveat about using Tex is that the Bih constructor treats a Tex (and all the objects it contains) as a monolithic object. So, if a Tex contains many objects (or even just a few), you might want to use "bih" instead of "group" on the children, even if you're already using "bih" at the root of the scene.

Along with Tex, Glome provides a Tag type which behaves similarly to Tex except that the tag type is left up to the application. Tags aren't directly useful for rendering, but they can be very handy for other computational geometry tasks if you need to have some way to know exactly what a ray hit.

A guide to the Glome source code[edit]

Editing scenes directly in Haskell makes it possible to use the pre-existing ray tracing infrastructure to help us create our scene. For instance, the "trace" function can be used to help us place one object on another; a plant growing on a non-flat surface, for instance. Also, we may want our scene to include geometric primitives of a type that Glome does not yet support, and so we might want to add our own.

This section of the tutorial is for those who are interested in understanding not just how to use Glome, but how it works and how it can be extended.

File Reference[edit]

First, an overview of the source code files. Glome is currently split among many source files; GlomeTrace and GlomeVec together amount to about 3500 lines of code. Some of the interesting files are:

Vec.hs
This is the vector library. It contains all the things you might want to do with vectors: add, subtract, normalize, take dot products and cross products, reverse a vector, etc... It also includes the transformation matrix code, and routines for transforming vectors, points, and normals. A data type "Flt" is defined for floating point numbers. Switching from Float to Double or vice versa is a matter of changing the definition of Flt in Vec.hs and CFlt in Clr.hs.
Clr.hs
Color library. Colors are records of three floats in RGB format. There's nothing surprising or particularly clever here.
Solid.hs
This is where most of the interesting definitions are: definitions of the Solid typeclass and SolidItem existential type, definitions of the Rayint type, and a few of the base primitives (like Group, Void, and Instance).
Trace.hs
This contains the "trace" function, which converts a ray and a scene into a color to be drawn to the screen.
Shader.hs
This provides an illumination model. (You can implement your own, but most of the time you'll want to stick with materialShader.)
Bih.hs
The BIH acceleration structure.
Sphere.hs, Triangle.hs, Box.hs, Plane.hs, Cone.hs, etc..
Basic geometric primitives.
Glome.hs
The main loop of the program, part of the GlomeView package. All SDL-related code resides in this module. Also included is a get_color function that accepts screen coordinates and a scene, and computes the ray for that screen coordinate, traces the ray, and returns the color.
TestScene.hs
This is what gets rendered if no input file is specified. This is meant to be edited by users. Provided in GlomeView package.
SolidTexture.hs
Perlin noise and other related texture functions. Part of GlomeVec package.
Spd.hs
NFF file parser for SPD scenes.

Tracing Rays[edit]

Glome's Solid typeclass defines a ray-intersection function "rayint" that takes a primitive and a ray (among other arguments) and returns an appropriate Rayint.

For instance, let's look at the "disc" case, as it is short and simple:

instance Solid (Disc t m) t m where
 rayint = rayint_disc
 shadow = shadow_disc
 inside (Disc _ _ _) _ = False
 bound = bound_disc

rayint_disc :: Disc tag mat -> Ray -> Flt -> [Texture tag mat] -> [tag] -> Rayint tag mat
rayint_disc (Disc point norm radius_sqr) r@(Ray orig dir) d t tags =
 let dist = plane_int_dist r point norm 
 in if dist < 0 || dist > d 
    then RayMiss
    else let pos = vscaleadd orig dir dist
             offset = vsub pos point
         in 
          if (vdot offset offset) > radius_sqr
          then RayMiss
          else RayHit dist pos norm r vzero t tags

"rayint" takes five arguments: the Solid to be intersected, the Ray to intersect with it, a maximum trace depth, a Texture stack, and a tag stack. "rayint" is expected to return a RayHit value if a ray intersection exists that's closer than the maximum depth. If there is more than one hit, rayint should return the closest one, but never an intersection that is behind the Ray's origin.

The function rayint_disc extracts the Ray's origin and direction from "r", and then uses a function called "plane_int_dist", defined in Vec.hs, which returns the distance along ray r to the plane defined by a point on the plane and its normal.

Then Glome checks if the distance is less than zero or more than the maximum allowed, and if so returns RayMiss.

Otherwise, Glome computes the hit location from the Ray and distance to the plane. "vscaleadd" is another function from Vec.hs that takes one vector, and then adds a second vector after scaling the second vector by some scalar. By taking the Ray's (normalized) direction vector scaled by the distance to the plane and adding it to the Ray's origin, we get the hit location. (This technique is used in many of the ray-intersection tests.)

Once we know the location on the disc's plane where our ray hit, we need to know if it is within the radius of the disc. For that, we compute an offset vector from the center of the disc ("point") to the hit location ("pos"). Then we want to check if this offset vector is less than the radius of the disc. Or, in other words:

sqrt (offset.x^2 + offset.y^2 + offset.z^2) < r

We can square both sides to get rid of the square root, and then observe that squaring the components of a vector is the same as taking the dot product of that vector with itself:

vdot offset offset < r^2

Also, Glome doesn't store the radius with a disc but rather its radius squared (to avoid a multiply, since we don't often need to know the disc's actual radius), and that explains the last if statement.

The RayHit constructor needs a little explanation, though. What are all those fields for?

First, there's the distance to the nearest hit and the position. (You might notice some redundancy here, since the calling function could infer the distance to the nearest hit from the ray and the hit position, or the hit position from the distance and the ray. We return both to save the trouble of recomputing values we've already determined.) "norm" is the vector perpendicular to the surface of the disc, used in lighting calculations. For discs, this is easy: the normal is stored as part of the disc's definition. For other objects (like Spheres or Cones), we might have to compute a normal.

The texture passed in as an argument to "rayint" is simply returned in the returned Rayint record. All of the ray intersection cases behave this way except Tex, which pushes a new texture onto the stack.

As for the other basic primitives like Triangle, Sphere, Cylinder, Cone, Plane, and Box, Glome uses fairly typical intersection tests that can be found in graphics textbooks (such as Physically Based Rendering and Graphics Gems volume one).

A "Void" object is a special case: its ray intersector simply returns "RayMiss" regardless of input. The existence of "Void" is somewhat redundant, since it is equivalent to "Group []".

The composite primitives (Group, Difference, Intersection, Tex, Bih, Instance, and Bound) are more interesting, as their ray-intersection tests are defined recursively. This recursion is what allows us to treat a complex object made up of many sub-objects the same as we would treat a simple base primitive like a Sphere, and in fact Glome makes no distinction whatsoever between base primitives and composite primitives.

We'll look at Group as our composite ray-intersection test example:

instance Solid [SolidItem t m] t m where
 rayint xs r d t tags  = foldl' nearest RayMiss (map (\s -> rayint s r d t tags) xs)
 ...

Here, we traverse the list calling "rayint" for each primitive and returning the nearest hit. ("nearest" is defined in Solid.hs, and returns the nearest of two ray intersections, or RayMiss if they both miss.)

One important thing to keep in mind about Group is that intersecting with a large group is very inefficient. That's why we have Bih. However, even Bih uses Groups as leaf nodes, so Groups are still important in some cases.

There is a second ray intersection function called "shadow" that only takes the primitive, a ray, and a maximum distance, and returns True if the ray hits the object and False otherwise.

"shadow" is used for shadow-ray occlusion tests. In order to test whether a particular point is lit by a particular light, a shadow ray is traced from the ray intersection point to the light. If there is something in the way, that light is in shadow and it does not contribute to the illumination at that point.

For most scenes with more than one light, more shadow rays are traced than regular rays. Therefore, we want the shadow ray intersection tests to be as fast as possible.

instance Solid [SolidItem t m] t m where
  shadow xs r d = foldl' (||) False (map (\s -> shadow s r d) xs)

This is faster in some cases than the full ray intersection, because we can stop as soon as one of the results comes back as True.

Primitives are not required to implement a shadow test. Glome defines a reasonable default case:

shadow s !r !d =
  case (rayint s r d undefined []) of
   RayHit _ _ _ _ _ _ _ -> True
   RayMiss -> False

For base primitives, the performance penalty of using the full ray intersection test instead of a shadow test may be insignificant. However, composite primitives should always define a shadow test. Consider, for instance, if Group did not implement a shadow test: all its children would be tested with "rayint" rather than "shadow", and if any of those objects have sub-objects, they will be tested with "rayint" as well! A single primitive type high in the tree that doesn't support "shadow" will force its entire subtree to be evaluated with "rayint".

There is one case where "rayint" actually calls "shadow", rather than the other way around: Bound uses a shadow test to determine if the ray hits the bounding object or not.

Advanced Topics, and things that don't quite work yet[edit]
