Rasterisation
from Wikipedia
Raster graphic image

In computer graphics, rasterisation (British English) or rasterization (American English) is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image (a series of pixels, dots or lines, which, when displayed together, create the image which was represented via shapes).[1][2] The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.

Etymology

The term "rasterisation" comes from German Raster 'grid, pattern, schema' and Latin rāstrum 'scraper, rake'.[3][4]

2D images

Line primitives

Bresenham's line algorithm is an example of an algorithm used to rasterize lines.
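
As an illustration, the following is a minimal sketch of the Bresenham approach, using integer error accumulation and covering all octants. It is a sketch under stated assumptions, not a canonical implementation; set_pixel is a hypothetical stand-in for whatever framebuffer write the surrounding renderer provides.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a real framebuffer write. */
static void set_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Integer-only Bresenham line rasterization, valid for all octants. */
void draw_line(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                 /* error term combines both axes */

    for (;;) {
        set_pixel(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step along x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step along y */
    }
}
```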

Circle primitives

Algorithms such as the midpoint circle algorithm are used to render circles onto a pixelated canvas.
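
A hedged sketch of the midpoint circle approach is shown below; it walks one octant and mirrors each plotted point into the other seven. As above, set_pixel is a hypothetical framebuffer write, and an integer radius is assumed.

```c
#include <stdio.h>

/* Hypothetical stand-in for a real framebuffer write. */
static void set_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Midpoint circle algorithm: walk one octant, mirror into the other seven. */
void draw_circle(int cx, int cy, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                     /* initial decision parameter */

    while (x <= y) {
        /* plot the eight symmetric points of (x, y) */
        set_pixel(cx + x, cy + y); set_pixel(cx - x, cy + y);
        set_pixel(cx + x, cy - y); set_pixel(cx - x, cy - y);
        set_pixel(cx + y, cy + x); set_pixel(cx - y, cy + x);
        set_pixel(cx + y, cy - x); set_pixel(cx - y, cy - x);

        x++;
        if (p < 0) {
            p += 2 * x + 1;            /* midpoint inside the circle: keep y */
        } else {
            y--;
            p += 2 * (x - y) + 1;      /* midpoint outside: step y down */
        }
    }
}
```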

3D images

Rasterization is one of the typical techniques of rendering 3D models. Compared with other rendering techniques such as ray tracing, rasterization is extremely fast and is therefore used in most real-time 3D engines. However, rasterization is simply the process of computing the mapping from scene geometry to pixels and does not prescribe a particular way to compute the color of those pixels. The specific color of each pixel is assigned by a pixel shader (which in modern GPUs is completely programmable). Shading may take into account physical effects such as light position, approximations of those effects, or purely artistic intent.

The process of rasterizing 3D models onto a 2D plane for display on a computer screen ("screen space") is often carried out by fixed function (non-programmable) hardware within the graphics pipeline. This is because there is no motivation for modifying the techniques for rasterization used at render time [5] and a special-purpose system allows for high efficiency.

Triangle rasterization

Rasterizing triangles using the top-left rule

Polygons are a common representation of digital 3D models. Before rasterization, individual polygons are typically broken down into triangles; therefore, a typical problem to solve in 3D rasterization is rasterization of a triangle. Properties that are usually required from triangle rasterization algorithms are that rasterizing two adjacent triangles (i.e. those that share an edge)

  1. leaves no holes (non-rasterized pixels) between the triangles, so that the rasterized area is completely filled (just like the surface of the adjacent triangles), and
  2. rasterizes no pixel more than once, i.e. the rasterized triangles do not overlap. This guarantees that the result does not depend on the order in which the triangles are rasterized; overdrawing pixels would also waste computing power on pixels that are later overwritten.

This leads to establishing rasterization rules to guarantee the above conditions. One set of such rules is called a top-left rule, which states that a pixel is rasterized if and only if

  1. its center lies completely inside the triangle, or
  2. its center lies exactly on a triangle edge (or on multiple edges, in the case of a corner) and that edge is (or, for a corner, all of those edges are) either a top or a left edge.

A top edge is an edge that is exactly horizontal and lies above other edges, and a left edge is a non-horizontal edge that is on the left side of the triangle.

This rule is implemented e.g. by Direct3D[6] and many OpenGL implementations (even though the specification doesn't define it and only requires a consistent rule[7]).
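
To make the rule concrete, here is a sketch of an edge-function triangle rasterizer with a top-left tie-break. It is not the Direct3D or OpenGL implementation; the clockwise screen-space winding (with y growing downward), sampling at integer pixel coordinates, and the set_pixel routine are assumptions of this sketch.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a real framebuffer write. */
static void set_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Edge function: twice the signed area of triangle (a, b, p).
 * With y growing downward and vertices wound clockwise on screen,
 * it is positive for points on the interior side of edge a->b. */
static long edge_fn(int ax, int ay, int bx, int by, int px, int py)
{
    return (long)(bx - ax) * (py - ay) - (long)(by - ay) * (px - ax);
}

/* Top-left classification of directed edge a->b under the winding above:
 * a top edge is horizontal and runs in +x, a left edge runs toward smaller y. */
static bool is_top_left(int ax, int ay, int bx, int by)
{
    return (ay == by && bx > ax) || (by < ay);
}

/* Fill a clockwise-wound triangle, sampling at integer pixel coordinates
 * (a simplification; production rasterizers sample at pixel centers). */
void fill_triangle(int x0, int y0, int x1, int y1, int x2, int y2)
{
    int minx = x0 < x1 ? (x0 < x2 ? x0 : x2) : (x1 < x2 ? x1 : x2);
    int maxx = x0 > x1 ? (x0 > x2 ? x0 : x2) : (x1 > x2 ? x1 : x2);
    int miny = y0 < y1 ? (y0 < y2 ? y0 : y2) : (y1 < y2 ? y1 : y2);
    int maxy = y0 > y1 ? (y0 > y2 ? y0 : y2) : (y1 > y2 ? y1 : y2);

    for (int y = miny; y <= maxy; y++) {
        for (int x = minx; x <= maxx; x++) {
            long w0 = edge_fn(x0, y0, x1, y1, x, y);
            long w1 = edge_fn(x1, y1, x2, y2, x, y);
            long w2 = edge_fn(x2, y2, x0, y0, x, y);
            /* A pixel exactly on an edge (w == 0) counts only for top/left edges. */
            bool in0 = w0 > 0 || (w0 == 0 && is_top_left(x0, y0, x1, y1));
            bool in1 = w1 > 0 || (w1 == 0 && is_top_left(x1, y1, x2, y2));
            bool in2 = w2 > 0 || (w2 == 0 && is_top_left(x2, y2, x0, y0));
            if (in0 && in1 && in2)
                set_pixel(x, y);
        }
    }
}
```

Because a shared edge is traversed in opposite directions by the two triangles that border it, it is classified as a top or left edge for at most one of them, so each pixel on the shared edge is assigned to a single triangle.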

Quality

Pixel precision (left) vs sub-pixel precision (middle) vs anti-aliasing (right)

The quality of rasterization can be improved by antialiasing, which creates "smooth" edges. Sub-pixel precision is a method which takes into account positions on a finer scale than the pixel grid and can produce different results even if the endpoints of a primitive fall into the same pixel coordinates, producing smoother animation of moving primitives. Simple or older hardware, such as the PlayStation 1, lacked sub-pixel precision in 3D rasterization.[8]

from Grokipedia
Rasterisation is a core technique in computer graphics for converting geometric primitives, such as lines, polygons, or triangles, from a vector representation into a discrete grid of pixels on a raster display, determining which pixels to illuminate and with what color to form a 2D image. This process addresses the visibility problem by projecting 3D scene elements onto an image plane and resolving overlaps using depth tests, enabling efficient rendering of complex scenes from a given viewpoint. Unlike ray tracing, which simulates light paths for global effects, rasterisation focuses on local computations per primitive, making it suitable for real-time applications such as video games.

The rasterisation pipeline, implemented primarily in graphics processing units (GPUs), consists of sequential stages that transform input geometry into final pixel colors. It begins with vertex processing, where 3D coordinates and attributes (e.g., normals, texture coordinates) are transformed via model-view-projection matrices to screen space. Primitives are then assembled, clipped to the view volume, and rasterised through scan conversion algorithms, such as edge walking or barycentric interpolation, to identify covered pixels and interpolate per-fragment values such as depth and other vertex attributes. Subsequent fragment processing applies tests (e.g., depth buffering via a Z-buffer) and shaders for lighting, texturing, and blending, before merging results into the framebuffer for display.

Rasterisation emerged in the 1960s and 1970s as part of early computer graphics research, with key algorithms like Bresenham's line algorithm (1965) and scan-line polygon filling developed to support frame buffer-based displays. By the 1980s, dedicated hardware in systems like Silicon Graphics (SGI) workstations and the Pixel-Planes architecture standardized the pipeline, enabling interactive 3D rendering. Today, it remains the dominant method for interactive graphics due to its parallelism and speed, though hybrid approaches with ray tracing are increasingly used for enhanced realism in modern rendering engines.
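
As a small illustration of the depth-buffered merge stage mentioned above, the sketch below keeps a fragment only if it is nearer than the depth already stored for its pixel. The buffer dimensions, the "smaller depth is nearer" convention, and the write_fragment name are assumptions of this sketch rather than part of any particular API.

```c
#include <float.h>
#include <stdbool.h>
#include <stdint.h>

#define WIDTH  640          /* assumed framebuffer size for this sketch */
#define HEIGHT 480

static float    depth_buf[WIDTH * HEIGHT];
static uint32_t color_buf[WIDTH * HEIGHT];

/* Reset the depth buffer before rendering a frame. */
void clear_depth(void)
{
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        depth_buf[i] = FLT_MAX;      /* "infinitely far" */
}

/* Merge stage for one fragment: keep it only if it is nearer than
 * whatever has already been written to that pixel. */
bool write_fragment(int x, int y, float depth, uint32_t color)
{
    int i = y * WIDTH + x;
    if (depth >= depth_buf[i])
        return false;                /* occluded: discard the fragment */
    depth_buf[i] = depth;
    color_buf[i] = color;
    return true;
}
```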

Fundamentals

Definition and Principles

Rasterisation is the algorithmically driven process of converting geometric primitives, such as lines, polygons, or triangles, into a discrete set of pixels on a raster display by determining which pixels are covered by the primitive and assigning appropriate colors or intensities to them. This technique is fundamental to computer graphics, enabling the efficient rendering of 2D and 3D scenes by approximating continuous geometric shapes on a grid.

Key principles of rasterisation center on sampling continuous geometry at discrete points, typically the centers of pixels, to decide coverage and avoid sampling artifacts where possible. Common approaches include scanline rasterisation, which processes horizontal lines (scanlines) across the primitive sequentially to fill spans of pixels, and edge walking or edge function methods, which evaluate linear equations along primitive edges to test inclusion in parallel. Unlike vector graphics, where shapes are stored and rendered using mathematical descriptions of paths and curves so that they can be scaled without quality loss, rasterisation generates fixed-resolution pixel arrays that represent the final image directly.

The mathematical foundation of rasterisation relies on techniques like barycentric coordinates to assess pixel coverage within primitives, particularly triangles, by expressing a point's position as a convex combination of the vertices. For a point $\mathbf{p}$ inside a triangle with vertices $\mathbf{v}_0$, $\mathbf{v}_1$, $\mathbf{v}_2$, the barycentric coordinates $(\alpha, \beta, \gamma)$ are computed as the normalized areas of the sub-triangles formed opposite each vertex:

$$\alpha = \frac{A(\mathbf{p}, \mathbf{v}_1, \mathbf{v}_2)}{A(\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2)}, \quad \beta = \frac{A(\mathbf{v}_0, \mathbf{p}, \mathbf{v}_2)}{A(\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2)}, \quad \gamma = \frac{A(\mathbf{v}_0, \mathbf{v}_1, \mathbf{p})}{A(\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2)},$$

where $A$ denotes the signed area; the point lies inside if $\alpha \geq 0$, $\beta \geq 0$, $\gamma \geq 0$, and $\alpha + \beta + \gamma = 1$. Pixel intensity at the pixel center $\mathbf{p}$ is then determined by interpolating attributes from the vertices, such as $I(\mathbf{p}) = \alpha I(\mathbf{v}_0) + \beta I(\mathbf{v}_1) + \gamma I(\mathbf{v}_2)$, reflecting the geometry's intersection properties.

In the modern graphics pipeline on GPUs, rasterisation follows vertex shading, where vertices are transformed into screen space, and precedes fragment shading, generating fragments with interpolated attributes for subsequent per-pixel operations like lighting and texturing. This positioning ensures efficient parallel processing of primitives into screen-covered fragments, forming the bridge between geometric and image-space computations.
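
The signed-area formulation above translates almost directly into code. The sketch below computes barycentric coordinates for a pixel center, tests coverage, and interpolates a single scalar attribute; the function names and the use of single-precision floats are illustrative assumptions of this sketch.

```c
#include <stdbool.h>

/* Twice the signed area of triangle (a, b, c). */
static float signed_area2(float ax, float ay, float bx, float by,
                          float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* Barycentric coordinates of point p with respect to triangle (v0, v1, v2);
 * returns false for a degenerate (zero-area) triangle. */
bool barycentric(float px, float py,
                 float x0, float y0, float x1, float y1, float x2, float y2,
                 float *alpha, float *beta, float *gamma)
{
    float area = signed_area2(x0, y0, x1, y1, x2, y2);
    if (area == 0.0f)
        return false;
    *alpha = signed_area2(px, py, x1, y1, x2, y2) / area;
    *beta  = signed_area2(x0, y0, px, py, x2, y2) / area;
    *gamma = signed_area2(x0, y0, x1, y1, px, py) / area;
    return true;
}

/* Coverage test and attribute interpolation for one pixel center. */
bool shade_if_covered(float px, float py,
                      float x0, float y0, float i0,
                      float x1, float y1, float i1,
                      float x2, float y2, float i2,
                      float *intensity)
{
    float a, b, c;
    if (!barycentric(px, py, x0, y0, x1, y1, x2, y2, &a, &b, &c))
        return false;
    if (a < 0.0f || b < 0.0f || c < 0.0f)
        return false;                       /* pixel center outside the triangle */
    *intensity = a * i0 + b * i1 + c * i2;  /* I(p) = alpha*I0 + beta*I1 + gamma*I2 */
    return true;
}
```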

Historical Development

The term "rasterisation" originates from the German word Raster, meaning "screen" or "grid," derived from the Latin rastrum, signifying a "rake" used for scraping or lines. This etymology reflects the grid-like structure of pixel-based displays, with the concept first appearing in contexts around 1934 to describe scanning patterns in cathode-ray tubes. In , the term gained prominence in the as systems shifted toward grid-based rendering to represent images on discrete pixels, contrasting with continuous vector approaches. Early developments in rasterisation trace back to the , building on foundational work such as Ivan Sutherland's system (1963), which ran on the computer and introduced interactive drawing, though primarily using vector displays. The transition to raster techniques accelerated in the late 1960s with A. Michael Noll's invention of a scanned raster display at , enabling the first computer-generated raster images influenced by television scanning technology. By the 1970s, raster displays began replacing costly vector CRTs due to their affordability and capacity for filled colors and textures; a pivotal advancement was J.E. Bresenham's 1965 algorithm for efficiently rasterizing straight lines on digital plotters, which optimized selection using integer arithmetic to approximate ideal lines on grid-based outputs. The 1980s marked significant milestones with the emergence of specialized hardware for rasterisation, including the Pixel Machine (introduced in 1987), a MIMD system designed for high-speed image processing and , serving as an early precursor to modern GPUs. In the 1990s, standardization efforts solidified rasterisation pipelines through , released in 1992 by Incorporated as an open, cross-platform that formalized stages like primitive assembly and fragment processing, enabling consistent implementation across diverse hardware. The integrated programmability into these pipelines, with NVIDIA's 3 GPU (2001) introducing vertex and shaders, allowing developers to customize during rasterisation for more realistic effects without fixed-function limitations. In the modern era, rasterisation has evolved within fully programmable GPU architectures, exemplified by NVIDIA's platform launched in , which unified graphics and general-purpose computing on GPUs while preserving rasterisation as the backbone for real-time rendering. Contemporary systems handle immense workloads, processing billions of pixels per second to support high-resolution displays (e.g., 8K at 60 frames per second) and complex scenes with overdraw, , and effects, far surpassing early grid-based limitations. This progression underscores rasterisation's role in enabling immersive graphics in gaming, , and visualization.

2D Rasterisation

Line Primitives

In 2D rasterisation, line primitives represent straight line segments defined by two endpoints, $(x_1, y_1)$ and $(x_2, y_2)$, where the coordinates are typically specified as integer or floating-point values relative to the discrete screen grid. These are fundamental for rendering wireframe models, outlines, and other non-filled graphics elements, with rasterisation determining the set of pixels closest to the ideal line to approximate its appearance on a pixelated display.

One of the earliest and most efficient methods for rasterising line primitives is Bresenham's line algorithm, introduced in 1965 for controlling digital plotters. This algorithm employs step-by-step integer arithmetic to select pixels that minimize the perpendicular distance error from the true line, avoiding floating-point operations for speed on early hardware. Assuming the line has a slope less than 1 (i.e., $\Delta x > \Delta y > 0$), it initializes a decision variable $d = 2\Delta y - \Delta x$, where $\Delta x = |x_2 - x_1|$ and $\Delta y = |y_2 - y_1|$. At each step along the major axis (the x-direction), the algorithm tests $d$: if $d \geq 0$, it increments the y-coordinate and updates $d \leftarrow d + 2(\Delta y - \Delta x)$; otherwise, it keeps y constant and updates $d \leftarrow d + 2\Delta y$. This ensures exact coverage without gaps or overlaps, making it ideal for low-resource environments.

An alternative approach is the Digital Differential Analyzer (DDA) algorithm, an incremental floating-point method that simulates the analog differential analyzer hardware from early computing. It computes the line's slope $m = \Delta y / \Delta x$ and advances by fixed increments along the axes, determining the number of steps as $\max(|\Delta x|, |\Delta y|)$. For a line with $m > 1$ (stepping in the y-direction), the updates are $x_{i+1} = x_i + 1/m$ and $y_{i+1} = y_i + 1$, with pixels plotted at the rounded coordinates after each increment; the roles of the axes are swapped for $m < 1$. While simpler to implement than Bresenham's algorithm, DDA can accumulate rounding errors over long lines due to repeated floating-point additions.

For improved visual quality, Wu's anti-aliased line algorithm extends Bresenham's framework to achieve sub-pixel accuracy by assigning intensity gradients to pixels based on their fractional distance to the line. Developed in 1991, it processes the line in passes, computing the exact y-position $f(x)$ at each x and setting intensities for the two straddling pixels: the lower pixel receives intensity $1 - \{f(x)\}$ and the upper $\{f(x)\}$, where $\{\cdot\}$ denotes the fractional part, effectively modeling the line as a filtered signal. This reduces jagged edges (aliasing) without significantly increasing computational cost, using only integer arithmetic for efficiency.

These algorithms handle edge cases such as horizontal, vertical, and diagonal lines through optimizations across the eight octants of the coordinate plane, reducing redundant computations by reflecting or swapping axes as needed; for instance, vertical lines ($\Delta x = 0$) simply increment y while keeping x fixed, and the octant symmetries ensure that the major axis is always stepped forward.
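
A brief sketch of the DDA variant described above, stepping one unit along the major axis and accumulating fixed floating-point increments on the other. Once more, set_pixel is a hypothetical framebuffer write, and rounding is delegated to the standard library.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical stand-in for a real framebuffer write. */
static void set_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* DDA line rasterization: one unit step along the major axis,
 * fractional increment along the minor axis. */
void dda_line(float x0, float y0, float x1, float y1)
{
    float dx = x1 - x0, dy = y1 - y0;
    int steps = (int)fmaxf(fabsf(dx), fabsf(dy));
    if (steps == 0) {                      /* degenerate segment: single point */
        set_pixel((int)lroundf(x0), (int)lroundf(y0));
        return;
    }
    float xinc = dx / (float)steps;        /* at most 1 in magnitude */
    float yinc = dy / (float)steps;
    float x = x0, y = y0;
    for (int i = 0; i <= steps; i++) {
        set_pixel((int)lroundf(x), (int)lroundf(y));
        x += xinc;                         /* repeated additions accumulate rounding error */
        y += yinc;
    }
}
```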

Filled and Curved Primitives

Filled polygons in 2D rasterisation are generated using the scanline algorithm, which processes the image row by row to identify and fill horizontal spans between polygon edges. This method begins by constructing an edge table (ET) that lists all polygon edges sorted by their starting y-coordinate, followed by an active edge table (AET) that maintains the edges intersecting the current scanline, sorted by x-intercept. As the scanline advances downward, edges are added to or removed from the AET, and spans are filled by pairing intersection points and drawing pixels between them. To handle complex polygons with self-intersections, filling rules such as the even-odd (parity) rule, which alternates fill state at each edge crossing, or the nonzero winding rule, which counts net edge windings around a point, are applied to determine interior regions.

Circle primitives are rasterised using the midpoint circle algorithm, an incremental method analogous to Bresenham's line algorithm that exploits octant symmetry to generate pixels efficiently without floating-point operations. Starting from the top of the circle at (0, r), the algorithm plots the initial point and evaluates a decision parameter at the midpoint between the candidate pixels (x+1, y) and (x+1, y-1) to select the pixel closest to the true circle, updating the parameter iteratively. The initial decision parameter is $p_0 = 1 - r$, where $r$ is the (integer) radius. At each step, x is incremented by 1; if $p_k < 0$, y remains unchanged and $p_{k+1} = p_k + 2x_{k+1} + 1$; otherwise, y is decremented by 1 and $p_{k+1} = p_k + 2(x_{k+1} - y_{k+1}) + 1$, where $x_{k+1}$ and $y_{k+1}$ are the updated coordinates. This keeps all arithmetic in integers, making it suitable for early raster hardware.

Curve rasterisation often involves approximating smooth curves such as Bézier segments through recursive subdivision using de Casteljau's algorithm, which evaluates points via repeated linear interpolation between control points. For quadratic or cubic Bézier curves, the process starts with the full curve and subdivides it at parameter $t = 0.5$ to produce two sub-curves with new control points, continuing until sub-segments are sufficiently straight based on flatness thresholds, such as a maximum deviation from a straight line. Flat segments are then rasterised as lines or filled polygons, balancing accuracy and performance in vector-to-raster conversion.

For irregular shapes without explicit boundaries, flood fill variants provide an alternative to edge-based methods. Seed fill begins at an interior pixel (the seed) and propagates to connected neighbors of the same color, typically using a queue for breadth-first traversal and supporting 4-connected (orthogonal) or 8-connected (including diagonals) neighborhoods to fill enclosed regions. Boundary fill, conversely, starts from a seed and fills inward while detecting a boundary color to halt expansion; both variants use explicit queues or stacks rather than deep recursion to avoid call-stack overflow in large areas. These techniques are particularly useful for interactive editing in paint programs, where connectivity ensures complete region coverage without predefined edges.
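
The following sketch implements a 4-connected seed fill. The text above describes a queue-based breadth-first traversal; this version uses an explicit stack instead, which fills the same region while still avoiding recursion depth problems. The canvas layout and dimensions are assumptions of the sketch.

```c
#include <stdlib.h>

#define WIDTH  640          /* assumed canvas size for this sketch */
#define HEIGHT 480

static int canvas[WIDTH * HEIGHT];   /* one color index per pixel */

/* 4-connected seed fill using an explicit stack instead of recursion,
 * so large regions cannot overflow the call stack. */
void flood_fill(int seed_x, int seed_y, int fill_color)
{
    if (seed_x < 0 || seed_x >= WIDTH || seed_y < 0 || seed_y >= HEIGHT)
        return;
    int target = canvas[seed_y * WIDTH + seed_x];
    if (target == fill_color)
        return;                               /* nothing to do */

    /* Generous worst-case bound: each filled pixel pushes at most four neighbours. */
    int (*stack)[2] = malloc(sizeof(int[2]) * (4 * WIDTH * HEIGHT + 1));
    if (!stack)
        return;
    int top = 0;
    stack[top][0] = seed_x;
    stack[top][1] = seed_y;
    top++;

    while (top > 0) {
        top--;
        int x = stack[top][0], y = stack[top][1];
        if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
            continue;
        if (canvas[y * WIDTH + x] != target)
            continue;                         /* boundary or already filled */
        canvas[y * WIDTH + x] = fill_color;

        /* push the four orthogonal neighbours */
        stack[top][0] = x + 1; stack[top][1] = y;     top++;
        stack[top][0] = x - 1; stack[top][1] = y;     top++;
        stack[top][0] = x;     stack[top][1] = y + 1; top++;
        stack[top][0] = x;     stack[top][1] = y - 1; top++;
    }
    free(stack);
}
```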

3D Rasterisation

Triangle Setup and Traversal

Back-face culling is typically applied earlier in the pipeline, after vertex transformation to view space but before projection, to eliminate triangles facing away from the viewer and reduce unnecessary processing. The triangle normal $\mathbf{N}$ is computed as the cross product of two edge vectors in 3D view space, such as $\overrightarrow{V_1V_2} \times \overrightarrow{V_1V_3}$.
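
A minimal sketch of such a view-space back-face test follows, assuming the camera sits at the origin and front faces are wound counter-clockwise as seen from the camera; the Vec3 type and helper functions are local to the sketch, not part of any particular API.

```c
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)  { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return (Vec3){ a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
}

/* Back-face test in view space, assuming the camera is at the origin and
 * front faces are wound counter-clockwise as seen by the camera.
 * Returns true if the triangle faces away from the viewer and can be skipped. */
bool is_back_face(Vec3 v1, Vec3 v2, Vec3 v3)
{
    Vec3 normal = cross(sub(v2, v1), sub(v3, v1));  /* N = (V2 - V1) x (V3 - V1) */
    Vec3 view   = v1;                               /* direction from camera to triangle */
    return dot(normal, view) >= 0.0f;               /* normal not pointing back at camera */
}
```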