== What is GPipe? ==

[http://hackage.haskell.org/package/GPipe GPipe] is a library for programming the GPU (graphics processing unit). It is an alternative to using OpenGl, and has the advantage that it is purely functional, statically typed and operates on immutable data, as opposed to OpenGl's inherently imperative style. GPipe uses the same conceptual model as OpenGl, and it is recommended that you have at least a basic understanding of how OpenGl works to be able to use GPipe.

In GPipe, you'll primarily work with these four types of data on the GPU:

* <hask>PrimitiveStream</hask>s
* <hask>FragmentStream</hask>s
* <hask>FrameBuffer</hask>s
* <hask>Texture</hask>s

Let's walk through a simple example as I explain how you work with these types.

This page is formatted as a literate Haskell page; simply save it as "<tt>box.lhs</tt>" and then type

<pre>
ghc --make -O box.lhs
box
</pre>

at the prompt to see a spinning box. You'll also need an image named "<tt>myPicture.jpg</tt>" in the same directory (I used a picture of some wooden planks).

<haskell>

> module Main where

> import Graphics.GPipe
> import Graphics.GPipe.Texture.Load
> import qualified Data.Vec as Vec
> import Data.Vec.Nat
> import Data.Vec.LinAlg.Transform3D
> import Data.Monoid
> import Data.IORef
> import Graphics.UI.GLUT
>     (Window,
>      mainLoop,
>      postRedisplay,
>      idleCallback,
>      getArgsAndInitialize,
>      ($=))

</haskell>

Besides [http://hackage.haskell.org/package/GPipe GPipe], this example also uses the [http://hackage.haskell.org/package/VecTransform VecTransform package] for the transformation matrices, and the [http://hackage.haskell.org/package/GPipeTextureLoad GPipeTextureLoad package] for loading textures from disc. [http://hackage.haskell.org/package/GLUT GLUT] is used in GPipe for window management and the main loop.

== Creating a window ==

We start by defining the <hask>main</hask> function.

<haskell>

> main :: IO ()
> main = do
>     getArgsAndInitialize
>     tex <- loadTexture RGB8 "myPicture.jpg"
>     angleRef <- newIORef 0.0
>     newWindow "Spinning box" (100:.100:.()) (800:.600:.()) (renderFrame tex angleRef) initWindow
>     mainLoop

> renderFrame :: Texture2D RGBFormat -> IORef Float -> IO (FrameBuffer RGBFormat () ())
> renderFrame tex angleRef = do
>     angle <- readIORef angleRef
>     writeIORef angleRef ((angle + 0.01) `mod'` (2*pi))
>     return $ cubeFrameBuffer tex angle

> initWindow :: Window -> IO ()
> initWindow win = idleCallback $= Just (postRedisplay (Just win))

</haskell>

First we set up GLUT, and load a texture from disc via the [http://hackage.haskell.org/package/GPipeTextureLoad GPipeTextureLoad package] function <hask>loadTexture</hask>. In this example we're going to animate a spinning box, and for that we put an angle in an <hask>IORef</hask> so that we can update it between frames. We then create a window with [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-FrameBuffer.html#v:newWindow <hask>newWindow</hask>]. When the window is created, <hask>initWindow</hask> registers this window as being continuously redisplayed in the idle loop. At each frame, the <hask>IO</hask> action <hask>renderFrame tex angleRef</hask> is run. In this function the angle is incremented by 0.01 (wrapping around each lap), and a <hask>FrameBuffer</hask> is created and returned to be displayed in the window. But before I explain <hask>FrameBuffer</hask>s, let's jump to the start of the graphics pipeline instead.
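The wrap-around of the angle uses <hask>mod'</hask>, floating-point modulo, to fold the value back into the range [0, 2&pi;) instead of letting it grow without bound. Here is a stand-alone sketch of that update (plain Haskell without the literate <tt>></tt> marks, so it is not compiled into <tt>box.lhs</tt>; <hask>mod'</hask> is assumed to come from <hask>Data.Fixed</hask>):

<haskell>
import Data.Fixed (mod')

-- Advance the angle one frame, wrapping at a full turn so the
-- value stays small. This mirrors the writeIORef update above.
stepAngle :: Float -> Float
stepAngle angle = (angle + 0.01) `mod'` (2 * pi)

main :: IO ()
main = do
    print (stepAngle 0.0)       -- a hair above zero
    print (stepAngle (2 * pi))  -- a full turn wraps back near zero
</haskell>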

== PrimitiveStreams ==

The graphics pipeline starts with creating primitives such as triangles on the GPU. Let's create a box with six sides, each made up of two triangles.

<haskell>

> cube :: PrimitiveStream Triangle (Vec3 (Vertex Float), Vec3 (Vertex Float), Vec2 (Vertex Float))
> cube = mconcat [sidePosX, sideNegX, sidePosY, sideNegY, sidePosZ, sideNegZ]

> sidePosX = toGPUStream TriangleStrip $ zip3 [1:.0:.0:.(), 1:.1:.0:.(), 1:.0:.1:.(), 1:.1:.1:.()] (repeat (1:.0:.0:.())) uvCoords
> sideNegX = toGPUStream TriangleStrip $ zip3 [0:.0:.1:.(), 0:.1:.1:.(), 0:.0:.0:.(), 0:.1:.0:.()] (repeat ((-1):.0:.0:.())) uvCoords
> sidePosY = toGPUStream TriangleStrip $ zip3 [0:.1:.1:.(), 1:.1:.1:.(), 0:.1:.0:.(), 1:.1:.0:.()] (repeat (0:.1:.0:.())) uvCoords
> sideNegY = toGPUStream TriangleStrip $ zip3 [0:.0:.0:.(), 1:.0:.0:.(), 0:.0:.1:.(), 1:.0:.1:.()] (repeat (0:.(-1):.0:.())) uvCoords
> sidePosZ = toGPUStream TriangleStrip $ zip3 [1:.0:.1:.(), 1:.1:.1:.(), 0:.0:.1:.(), 0:.1:.1:.()] (repeat (0:.0:.1:.())) uvCoords
> sideNegZ = toGPUStream TriangleStrip $ zip3 [0:.0:.0:.(), 0:.1:.0:.(), 1:.0:.0:.(), 1:.1:.0:.()] (repeat (0:.0:.(-1):.())) uvCoords

> uvCoords = [0:.0:.(), 0:.1:.(), 1:.0:.(), 1:.1:.()]

</haskell>

Every side of the box is created from a normal list of four elements, where each element is a tuple with three vectors: a position, a normal and a uv-coordinate. These lists of vertices are then turned into [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-Stream-Primitive.html#t:PrimitiveStream <hask>PrimitiveStream</hask>]s on the GPU by [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-Stream-Primitive.html#v:toGPUStream <hask>toGPUStream</hask>], which in our case creates triangle strips from the vertices, i.e. 2 triangles from 4 vertices. Refer to the OpenGl specification for how triangle strips and the other topologies work.
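To make the strip topology concrete, here is a pure stand-alone sketch (an illustration in plain Haskell, not GPipe's internals, and again without the literate <tt>></tt> marks) of how a strip's vertex list expands into triangles. Note how 4 vertices yield 2 triangles, and how every other triangle swaps two vertices so all triangles keep the same winding order:

<haskell>
-- Hypothetical helper, not part of GPipe: expand a triangle strip
-- into individual triangles. Each new vertex forms a triangle with
-- the previous two; odd-numbered triangles swap their first two
-- vertices to preserve a consistent winding.
stripToTriangles :: [a] -> [(a, a, a)]
stripToTriangles = go False
  where
    go swap (x:y:z:rest)
      | swap      = (y, x, z) : go (not swap) (y : z : rest)
      | otherwise = (x, y, z) : go (not swap) (y : z : rest)
    go _ _ = []

main :: IO ()
main = print (stripToTriangles "abcd")
-- four strip vertices expand to two triangles
</haskell>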

All six sides are then concatenated together into a cube. We can see that the type of the cube is a <hask>PrimitiveStream</hask> of [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-Stream-Primitive.html#t:Triangle <hask>Triangle</hask>]s where each vertex is a tuple of three vectors, just like the lists we started with. One big difference is that those vectors are now made up of <hask>Vertex Float</hask>s instead of <hask>Float</hask>s, since they are now on the GPU.

The cube is defined in model space, i.e. where positions and normals are relative to the cube. We now want to rotate that cube using a variable angle and project the whole thing with a perspective projection, as it is seen through a camera 2 units down the z-axis.

<haskell>

> transformedCube :: Float -> PrimitiveStream Triangle (Vec4 (Vertex Float), (Vec3 (Vertex Float), Vec2 (Vertex Float)))
> transformedCube angle = fmap (transform angle) cube

> transform angle (pos, norm, uv) = (transformedPos, (transformedNorm, uv))
>     where
>         modelMat = rotationVec (normalize (1:.0.5:.0.3:.())) angle `multmm` translation (-0.5)
>         viewMat = translation (-(0:.0:.2:.()))
>         projMat = perspective 1 100 (pi/3) (4/3)
>         viewProjMat = projMat `multmm` viewMat
>         transformedPos = toGPU (viewProjMat `multmm` modelMat) `multmv` homPoint pos
>         transformedNorm = toGPU (Vec.map (Vec.take n3) $ Vec.take n3 modelMat) `multmv` norm

</haskell>

The [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-Stream.html#v:toGPU <hask>toGPU</hask>] function transforms normal values like <hask>Float</hask>s into GPU values like <hask>Vertex Float</hask>, so they can be used with the vertices of the <hask>PrimitiveStream</hask>.
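To see what the matrix plumbing above computes, here is a stand-alone, list-based sketch (ordinary Haskell rather than Data.Vec; the primed names mimic <hask>homPoint</hask>, <hask>multmv</hask> and <hask>translation</hask> but are simplified stand-ins, not the real implementations): a 3D point is extended with w&nbsp;=&nbsp;1 and multiplied by a row-major 4&times;4 matrix.

<haskell>
type Mat4 = [[Float]]  -- a 4x4 matrix as a list of rows

-- Extend a 3D point to homogeneous coordinates (w = 1).
homPoint' :: [Float] -> [Float]
homPoint' v = v ++ [1]

-- Matrix * column vector: dot each row with the vector.
multmv' :: Mat4 -> [Float] -> [Float]
multmv' m v = map (sum . zipWith (*) v) m

-- A translation matrix like the one `translation` builds.
translation' :: [Float] -> Mat4
translation' [x, y, z] =
  [ [1, 0, 0, x]
  , [0, 1, 0, y]
  , [0, 0, 1, z]
  , [0, 0, 0, 1] ]

main :: IO ()
main = print (multmv' (translation' [0, 0, -2]) (homPoint' [0, 0, 0]))
-- the origin ends up 2 units down the negative z-axis
</haskell>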

== FragmentStreams ==

To render the primitives on the screen, we must first turn them into pixel fragments. This is called rasterization, and in our example it is done by the function [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-Stream-Fragment.html#v:rasterizeFront <hask>rasterizeFront</hask>], which transforms <hask>PrimitiveStream</hask>s into [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-Stream-Fragment.html#t:FragmentStream <hask>FragmentStream</hask>]s.

<haskell>

> rasterizedCube :: Float -> FragmentStream (Vec3 (Fragment Float), Vec2 (Fragment Float))
> rasterizedCube angle = rasterizeFront $ transformedCube angle

</haskell>

In the rasterization process, values of type <hask>Vertex Float</hask> are turned into values of type <hask>Fragment Float</hask>.
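The rasterizer also interpolates every vertex attribute, here the normal and the uv-coordinate, across each triangle's surface, so every fragment gets a weighted mix of the three vertex values. A conceptual stand-alone sketch of that mixing (not GPipe's internals, which run on the GPU):

<haskell>
-- Conceptual sketch: a fragment's attribute is a barycentric mix of
-- the three vertex attributes. The weights depend on where the
-- fragment lies inside the triangle and sum to 1.
interpolate :: (Float, Float, Float) -> (Float, Float, Float) -> Float
interpolate (wa, wb, wc) (a, b, c) = wa * a + wb * b + wc * c

main :: IO ()
main = print (interpolate (0.5, 0.25, 0.25) (0, 4, 8))
-- a fragment weighted toward the first vertex
</haskell>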

For each fragment, we now want to give it a color from the texture we initially loaded, as well as light it with a directional light coming from the camera.

<haskell>

> litCube :: Texture2D RGBFormat -> Float -> FragmentStream (Color RGBFormat (Fragment Float))
> litCube tex angle = fmap (enlight tex) $ rasterizedCube angle

> enlight tex (norm, uv) = RGB (c * Vec.vec (norm `dot` toGPU (0:.0:.1:.())))
>     where RGB c = sample (Sampler Linear Wrap) tex uv

</haskell>

The function [http://hackage.haskell.org/packages/archive/GPipe/1.1.3/doc/html/Graphics-GPipe-Texture.html#v:sample <hask>sample</hask>] is used for sampling the texture we have loaded, using the fragment's interpolated uv-coordinates and a sampler state.
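The <hask>Wrap</hask> coordinate mode means uv-coordinates outside [0,1) repeat the texture: conceptually the sampler keeps only the fractional part of each coordinate. A stand-alone sketch of that behaviour (an illustration, not GPipe's actual GPU-side implementation):

<haskell>
-- Conceptual sketch of the Wrap mode: coordinates repeat with
-- period 1, so u = 1.25 samples the same texel column as u = 0.25.
wrapCoord :: Float -> Float
wrapCoord u = u - fromIntegral (floor u :: Int)

main :: IO ()
main = mapM_ (print . wrapCoord) [0.25, 1.25, -0.25]
-- all three coordinates land inside [0,1)
</haskell>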

Once we have a <hask>FragmentStream</hask> of <hask>Color</hask>s, we can paint those fragments onto a <hask>FrameBuffer</hask>.

== FrameBuffers ==

A [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-FrameBuffer.html#t:FrameBuffer <hask>FrameBuffer</hask>] is a 2D image into which fragments from <hask>FragmentStream</hask>s are painted. A <hask>FrameBuffer</hask> may contain any combination of a color buffer, a depth buffer and a stencil buffer. Besides being shown in windows, <hask>FrameBuffer</hask>s may also be saved to memory or converted to textures, thus enabling multi-pass rendering. A <hask>FrameBuffer</hask> has no defined size; it takes the size of the window when shown, or is given a size when saved to memory or converted to a texture.

And so finally, we paint the fragments we have created onto a black <hask>FrameBuffer</hask>. For this we use [http://hackage.haskell.org/packages/archive/GPipe/latest/doc/html/Graphics-GPipe-FrameBuffer.html#v:paintColor <hask>paintColor</hask>] without any blending or color masking.
+  
<haskell>

> cubeFrameBuffer :: Texture2D RGBFormat -> Float -> FrameBuffer RGBFormat () ()
> cubeFrameBuffer tex angle = paintSolid (litCube tex angle) emptyFrameBuffer

> paintSolid = paintColor NoBlending (RGB $ Vec.vec True)
> emptyFrameBuffer = newFrameBufferColor (RGB 0)

</haskell>

This <hask>FrameBuffer</hask> is the one we return from the <hask>renderFrame</hask> action we defined at the top.

== Screenshot ==

[[Image:box.jpg]]

== Questions and feedback ==

If you have any questions or suggestions, feel free to [mailto:tobias_bexelius@hotmail.com mail] me. I'm also interested in seeing some use cases from the community, as complex or trivial as they may be.
''Revision as of 22:25, 20 December 2009''