Rasterisation! What and How

Rendering Geometry

In 3D computer graphics, converting vector geometry into a raster image (a grid of pixels) is called rasterisation. End of post.

In all seriousness, rasterisation is arguably the most widely used method of rendering a 3D image in computer graphics. It competes with ray tracing to do the same job. Unlike ray tracing, the history of how rasterisation was first introduced has been lost to time, but the problem it solves is the same. Generally known as the visibility problem (e.g. Abrash 1997), the task is to work out which parts of the scene are visible to the camera and render them.

Rasterisation approaches this by projecting the geometry of the model, broken down into triangles, from 3D to 2D. Much like how a photograph flattens a 3D scene into 2D, each triangle is projected from 3D space onto a 2D image plane (called perspective projection), and the GPU (Graphics Processing Unit) then checks each pixel the projection covers. Each triangle is made up of vertices, and each vertex holds data such as its colour, what texture it needs, where it is in world space and its normal (Caulfield 2018). Shaders are then required to correctly texture the surface and generate the final colour of the object, as the vertex data alone does not determine these details.
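As a minimal sketch of the perspective projection step described above (assuming a simple pinhole camera at the origin looking down the negative z axis, with a hypothetical focal length `f`), the perspective divide looks like this:

```python
def project(vertex, f=1.0):
    """Project a 3D camera-space point (x, y, z) onto a 2D image plane
    by dividing x and y by depth -- the 'perspective divide'."""
    x, y, z = vertex
    # Negate z because the camera looks down the negative z axis here.
    return (f * x / -z, f * y / -z)

# A point twice as far from the camera projects half as large:
near = project((1.0, 1.0, -2.0))  # (0.5, 0.5)
far = project((1.0, 1.0, -4.0))   # (0.25, 0.25)
```

Real pipelines do this with a 4x4 projection matrix and homogeneous coordinates, but the divide-by-depth idea is the same.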

Broken down, rasterisation is a simple algorithm. First, the triangles being projected into the scene of pixels are iterated over in a loop. A second, nested loop then checks whether the current pixel contains a part of the projected triangle.

An example of this in pseudocode:


for (each triangle)
{
    Create the projected 2D vertices for the triangle

    for (each pixel)
    {
        if (pixel contained in 2D triangle (the vertices being passed in, the x, the y))
        {
            Image(x, y) = triangle[i].colour
        }
    }
}
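The pseudocode above can be turned into a small runnable sketch. The inside test here uses edge functions (signed areas), which is one common choice for the point-in-triangle check; the function names and the colour-per-triangle representation are illustrative, not a real API:

```python
def edge(a, b, p):
    """Signed area of the edge a->b against point p (a 2D cross product).
    Its sign says which side of the edge p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterise(triangles, width, height):
    """triangles: list of (v0, v1, v2, colour), vertices already projected to 2D."""
    image = [[None] * width for _ in range(height)]
    for v0, v1, v2, colour in triangles:
        for y in range(height):            # loop over every pixel...
            for x in range(width):
                p = (x + 0.5, y + 0.5)     # sample at the pixel centre
                w0 = edge(v0, v1, p)
                w1 = edge(v1, v2, p)
                w2 = edge(v2, v0, p)
                # Inside if p is on the same side of all three edges
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                   (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    image[y][x] = colour
    return image

img = rasterise([((0, 0), (4, 0), (0, 4), "red")], 4, 4)
# img[0][0] is "red" (inside the triangle); img[3][3] stays None (outside).
```

Checking both signs means the test works regardless of the triangle's winding order.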

There is much more that can be done to make rasterisation faster, such as computing an Axis-Aligned Bounding Box (AABB) around each projected triangle and iterating only over the pixels inside that box, rather than every single pixel on the screen, or not rendering an object at all when it sits entirely behind another object.
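The bounding-box optimisation can be sketched as follows. The helper name and the clamping to screen bounds are illustrative assumptions; the idea is just to shrink the pixel loop:

```python
def triangle_bbox(v0, v1, v2, width, height):
    """Axis-aligned bounding box of a 2D triangle, clamped to the screen.
    Returns (x_min, y_min, x_max, y_max) as a half-open pixel range."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    x_min = max(int(min(xs)), 0)
    y_min = max(int(min(ys)), 0)
    x_max = min(int(max(xs)) + 1, width)   # clamp to screen edges
    y_max = min(int(max(ys)) + 1, height)
    return x_min, y_min, x_max, y_max

# The inner loops of the rasteriser then become:
#   for y in range(y_min, y_max):
#       for x in range(x_min, x_max):
#           ... same inside test as before ...
```

For a small triangle on a large screen this skips the vast majority of pixel tests.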

References:

Abrash, M 1997, Michael Abrash’s Graphics Programming Black Book, The Coriolis Group Inc.
Caulfield, B 2018, What’s the difference between ray tracing and rasterization?, Nvidia, viewed 22 April 2020, https://blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-tracing-rasterization/