Raster graphics

In computer graphics and digital photography, a raster graphic, raster image, or simply raster is a digital image made up of a rectangular grid of tiny colored (usually square) pixels. Unlike vector graphics, which use mathematical formulas to describe shapes and lines, raster images store the exact color of each pixel, making them ideal for photographs and images with complex colors and details. Raster images are characterized by their dimensions (width and height in pixels) and color depth (the number of bits per pixel).[1] They can be displayed on computer displays, printed on paper, or viewed on other media, and are stored in various image file formats.
The printing and prepress industries know raster graphics as contones (from "continuous tones"). In contrast, line art is usually implemented as vector graphics in digital systems.[2]

Many raster manipulations map directly onto the formalisms of linear algebra, in which matrices are the central objects of study.
Raster or gridded data may be the result of a gridding procedure.
Etymology
The word "raster" has its origins in the Latin rastrum (a rake), which is derived from radere (to scrape). In computer graphics, the term comes from the raster scan of cathode-ray tube (CRT) video monitors, which draw the image line by line by magnetically or electrostatically steering a focused electron beam.[3] By association, it can also refer to a rectangular grid of pixels. The word rastrum is now used to refer to a device for drawing musical staff lines.
Data model
The fundamental strategy underlying the raster data model is the tessellation of a plane into a two-dimensional array of squares, each called a cell or pixel (from "picture element"). In digital photography, the plane is the visual field as projected onto the image sensor; in computer art, the plane is a virtual canvas; in geographic information systems, the plane is a projection of the Earth's surface. The size of each square pixel, known as the resolution or support, is constant across the grid.
A single numeric value is then stored for each pixel. For most images, this value is a visible color, but other measurements are possible, even numeric codes for qualitative categories. Each raster grid has a specified pixel format, the data type for each number. Common pixel formats are binary, grayscale, palettized, and full-color, where color depth[4] determines the fidelity of the colors represented, and color space determines the range of color coverage (which is often less than the full range of human color vision). Most modern color raster formats represent color using 24 bits (over 16 million distinct colors), with 8 bits (values 0–255) for each color channel (red, green, and blue). The digital sensors used for remote sensing and astronomy are often able to detect and store wavelengths beyond the visible spectrum; the large CCD bitmapped sensor at the Vera C. Rubin Observatory captures 3.2 gigapixels in a single image (6.4 GB raw), over six color channels which exceed the spectral range of human color vision.
Uses
Image storage
Most computer images are stored in raster graphics formats or compressed variations, including GIF, JPEG, and PNG, which are popular on the World Wide Web.[4][5] A raster data structure is based on a (usually rectangular, square-based) tessellation of the 2D plane into cells, each containing a single value. To store the data in a file, the two-dimensional array must be serialized. The most common way to do this is a row-major format, in which the cells along the first (usually top) row are listed left to right, followed immediately by those of the second row, and so on.
In one example, the cells of a tessellation A are overlaid on a point pattern B, resulting in an array C of quadrant counts representing the number of points in each cell. For purposes of visualization, a lookup table is used to color each of the cells in an image D. Here are the numbers as a serial row-major array:
1 3 0 0 1 12 8 0 1 4 3 3 0 2 0 2 1 7 4 1 5 4 2 2 0 3 1 2 2 2 2 3 0 5 1 9 3 3 3 4 5 0 8 0 2 4 3 2 8 4 3 2 2 7 2 3 2 10 1 5 2 1 3 7
To reconstruct the two-dimensional grid, the file must include a header section at the beginning that contains at least the number of columns, and the pixel datatype (especially the number of bits or bytes per value) so the reader knows where each value ends to start reading the next one. Headers may also include the number of rows, georeferencing parameters for geographic data, or other metadata tags, such as those specified in the Exif standard.
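As an illustration of this serialization, the following Python sketch reads a hypothetical raw raster file consisting of a minimal header (two 32-bit little-endian integers for width and height) followed by one unsigned byte per pixel in row-major order. The header layout, file structure, and function name are assumptions for demonstration only, not an existing format.

```python
import struct

def read_raster(path):
    """Read a hypothetical raw raster: a header of two little-endian
    32-bit integers (width, height) followed by width*height bytes of
    8-bit pixel values stored in row-major order."""
    with open(path, "rb") as f:
        width, height = struct.unpack("<II", f.read(8))  # minimal header
        flat = f.read(width * height)                    # row-major pixel stream
    # Rebuild the 2D grid: row r occupies indices r*width .. (r+1)*width - 1.
    return [list(flat[r * width:(r + 1) * width]) for r in range(height)]
```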
Compression
High-resolution raster grids contain a large number of pixels, and thus consume a large amount of memory. This has led to multiple approaches to compressing the data volume into smaller files. The most common strategy is to look for patterns or trends in the pixel values, then store a parameterized form of the pattern instead of the original data. Common raster compression algorithms include run-length encoding (RLE), JPEG, LZ (the basis for PNG and ZIP), Lempel–Ziv–Welch (LZW) (the basis for GIF), and others.
For example, run-length encoding looks for repeated values in the array and replaces them with the value and the number of times it appears. Thus, the raster above would be represented as:
| values | 1 | 3 | 0 | 1 | 12 | 8 | 0 | 1 | 4 | 3 | ... |
|---|---|---|---|---|---|---|---|---|---|---|---|
| lengths | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | ... |
This technique is very efficient when there are large areas of identical values, such as a line drawing, but in a photograph where pixels are usually slightly different from their neighbors, the RLE file would be up to twice the size of the original.
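A minimal Python sketch of run-length encoding as described above, applied to the first cells of the example array; the function names are illustrative rather than part of any standard library.

```python
def run_length_encode(values):
    """Collapse consecutive repeats into (value, length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def run_length_decode(runs):
    """Expand (value, length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row_major = [1, 3, 0, 0, 1, 12, 8, 0, 1, 4, 3, 3]   # first cells of the example raster
assert run_length_decode(run_length_encode(row_major)) == row_major
print(run_length_encode(row_major)[:5])  # [(1, 1), (3, 1), (0, 2), (1, 1), (12, 1)]
```

Because this lossless encoding is reversible, the decoder recovers the original sequence exactly, matching the values and lengths shown in the table above.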
Some compression algorithms, such as RLE and LZW, are lossless, where the original pixel values can be perfectly regenerated from the compressed data. Other algorithms, such as JPEG, are lossy, because the parameterized patterns are only an approximation of the original pixel values, so the latter can only be estimated from the compressed data.
Raster–vector conversion
Vector images (line work) can be rasterized (converted into pixels), and raster images can be vectorized (converted into vector graphics), by software. In both cases, some information is lost, although certain vectorization operations can recreate salient information, as in the case of optical character recognition.
Displays
Early mechanical televisions developed in the 1920s employed rasterization principles. Electronic televisions based on cathode-ray tube (CRT) displays are raster scanned, with horizontal rasters painted left to right and the raster lines painted top to bottom.
Modern flat-panel displays such as LED monitors still use a raster approach. Each on-screen pixel directly corresponds to a small number of bits in memory.[6] The screen is refreshed simply by scanning through pixels and coloring them according to each set of bits. The refresh procedure, being speed critical, is often implemented by dedicated circuitry, often as a part of a graphics processing unit.
Using this approach, the computer contains an area of memory that holds all the data that are to be displayed. The central processor writes data into this region of memory and the video controller collects them from there. The bits of data stored in this block of memory are related to the eventual pattern of pixels that will be used to construct an image on the display.[7]
An early scanned display with raster computer graphics was invented in the late 1960s by A. Michael Noll at Bell Labs,[8] but its patent application filed February 5, 1970, was abandoned at the Supreme Court in 1977 over the issue of the patentability of computer software.[9]
Printing
During the 1970s and 1980s, pen plotters, using vector graphics, were common for creating precise drawings, especially on large format paper. However, since then almost all printers create the printed image as a raster grid, including both laser and inkjet printers. When the source information is vector, rendering specifications and software such as PostScript are used to create the raster image.
Three-dimensional rasters
Three-dimensional voxel raster graphics are employed in video games and are also used in medical imaging, such as MRI scans.[10]
Geographic information systems
Geographic phenomena are commonly represented in a raster format in GIS. The raster grid is georeferenced, so that each pixel (commonly called a cell in GIS because the "picture" part of "pixel" is not relevant) represents a square region of geographic space.[11] The value of each cell then represents some measurable (qualitative or quantitative) property of that region, typically conceptualized as a field. Examples of fields commonly represented in rasters include: temperature, population density, soil moisture, land cover, surface elevation, etc. Two sampling models are used to derive cell values from the field: in a lattice, the value is measured at the center point of each cell; in a grid, the value is a summary (usually a mean or mode) of the value over the entire cell.
Resolution
Raster graphics are resolution dependent, meaning they cannot scale up to an arbitrary resolution without loss of apparent quality. This property contrasts with the capabilities of vector graphics, which easily scale up to the quality of the device rendering them. Raster graphics deal more practically than vector graphics with photographs and photo-realistic images, while vector graphics often serve better for typesetting or for graphic design. Modern computer monitors typically display about 72 to 130 pixels per inch (PPI), and some modern consumer printers can resolve 2400 dots per inch (DPI) or more; determining the most appropriate image resolution for a given printer resolution can pose difficulties, since printed output may have a greater level of detail than a viewer can discern on a monitor. Typically, a resolution of 150 to 300 PPI works well for 4-color process (CMYK) printing.
However, for printing technologies that perform color mixing through dithering (halftone) rather than through overprinting (virtually all home/office inkjet and laser printers), printer DPI and image PPI have very different meanings, which can be misleading. Because the printer builds a single image pixel out of several printer dots to increase color depth through the dithering process, the printer's DPI setting must be set far higher than the desired PPI to ensure sufficient color depth without sacrificing image resolution. Thus, for instance, printing an image at 250 PPI may actually require a printer setting of 1200 DPI.[12]
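The arithmetic relating print size, target PPI, and required pixel dimensions can be sketched as follows; the 300 PPI target and the 6x4 inch print are illustrative values, not recommendations from the cited sources.

```python
def pixels_needed(print_width_in, print_height_in, target_ppi=300):
    """Pixel dimensions required to print at a chosen image PPI
    (independent of the much higher DPI a dithering printer uses)."""
    return (round(print_width_in * target_ppi),
            round(print_height_in * target_ppi))

# A 6x4 inch photo at 300 PPI needs 1800 x 1200 pixels.
print(pixels_needed(6, 4))  # (1800, 1200)
```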
Raster-based image editors
Raster-based image editors, such as PaintShop Pro, Corel Painter, Adobe Photoshop, Paint.NET, Microsoft Paint, Krita, and GIMP, revolve around editing pixels, unlike vector-based image editors, such as Xfig, CorelDRAW, Adobe Illustrator, or Inkscape, which revolve around editing lines and shapes (vectors). When an image is rendered in a raster-based image editor, the image is composed of millions of pixels. At its core, a raster image editor works by manipulating each individual pixel.[5] Most[13] pixel-based image editors work using the RGB color model, but some also allow the use of other color models, such as the CMYK color model.[14]
References
- ^ "Introduction to Computer Graphics, Section 1.1 -- Painting and Drawing". math.hws.edu. Retrieved 2024-08-25.
- ^ "Patent US6469805 – Post raster-image processing controls for digital color image printing". Google.nl. Archived from the original on 5 December 2014. Retrieved 30 November 2014.
- ^ Bach, Michael; Meigen, Thomas; Strasburger, Hans (1997). "Raster-scan cathode-ray tubes for vision research – limits of resolution in space, time and intensity, and some solutions". Spatial Vision. 10 (4): 403–14. doi:10.1163/156856897X00311. PMID 9176948.
- ^ a b "Types of Bitmaps". Microsoft Docs. Microsoft. 29 March 2017. Archived from the original on 2 January 2019. Retrieved 1 January 2019.
The number of bits devoted to an individual pixel determines the number of colors that can be assigned to that pixel. For example, if each pixel is represented by 4 bits, then a given pixel can be assigned one of 16 different colors (2^4 = 16).
- ^ a b "Raster vs Vector". Gomez Graphics Vector Conversions. Archived from the original on 5 January 2019. Retrieved 1 January 2019.
Raster images are created with pixel-based programs or captured with a camera or scanner. They are more common in general such as jpg, gif, png, and are widely used on the web.
- ^ "bitmap display". FOLDOC. 2002-05-15. Archived from the original on 16 June 2018. Retrieved 30 November 2014.
- ^ Murray, Stephen. "Graphic Devices". Computer Sciences, edited by Roger R. Flynn, vol. 2: Software and Hardware, Macmillan Reference USA, 2002, pp. 81–83. Gale eBooks. Accessed 3 Aug. 2020.
- ^ Noll, A. Michael (March 1971). "Scanned-Display Computer Graphics". Communications of the ACM. 14 (3): 143–150. doi:10.1145/362566.362567. S2CID 2210619.
- ^ "Patents". Noll.uscannenberg.org. Archived from the original on 22 February 2014. Retrieved 30 November 2014.
- ^ "CHAPTER-1". Cis.rit.edu. Archived from the original on 16 December 2014. Retrieved 30 November 2014.
- ^ Bolstad, Paul (2008). GIS Fundamentals: A First Text on Geographic Information Systems (3rd ed.). Eider Press. p. 42.
- ^ Fulton, Wayne (April 10, 2010). "Color Printer Resolution". A few scanning tips. Archived from the original on August 5, 2011. Retrieved August 21, 2011.
- ^ Tooker, Logan (2022-02-02). "Photoshop vs. CorelDRAW: Which Is Better for Graphic Editors?". MUO. Retrieved 2024-07-13.
- ^ "Print Basics: RGB Versus CMYK". HP Tech Takes. HP. 12 June 2018. Archived from the original on 2 January 2019. Retrieved 1 January 2019.
If people are going to see it on a computer monitor, choose RGB. If you're printing it, use CMYK. (Tip: In Adobe® Photoshop®, you can choose between RGB and CMYK color channels by going to the Image menu and selecting Mode.)
Fundamentals
Etymology and Definition
The term "raster" in the context of graphics originates from the German word Raster, meaning "screen" or "frame," which itself derives from the Latin rāstrum, denoting a "rake." This etymology evokes the systematic, line-by-line scanning pattern of early cathode-ray tube (CRT) displays, akin to the sweeping motion of a rake across a field.[4][5] In computer graphics, the concept emerged in the 1960s, drawing from television scanning technology where images are built by sweeping an electron beam across a phosphor-coated screen to form horizontal lines of illuminated points. Engineers at Bell Laboratories, including A. Michael Noll, developed early raster-based systems in the mid-1960s, adapting CRT scanning for digital image display and manipulation, with the term "raster graphics" appearing in technical literature by 1971.[6][7] Raster graphics refers to a dot matrix data structure that represents images as a rectangular grid of discrete picture elements, known as pixels, where each pixel encodes specific values for color and intensity to form the overall visual content.[2][1] This pixel-based approach fundamentally contrasts with vector graphics, which define images through mathematical descriptions of paths, shapes, and fills rather than a fixed grid of dots.[8]Historical Development
The development of raster graphics traces its roots to the mid-20th century, influenced by advancements in cathode-ray tube (CRT) technology and military applications. In the 1950s, military radar displays began employing raster scanning techniques to visualize data in real-time, adapting television-style scan lines to present echo returns as pixel-like grids on screens, which laid foundational principles for grid-based image representation in computing.[9] Early innovations included Douglas Engelbart's Picture System in 1963, which used raster scanning for on-screen image manipulation, and the first frame buffer developed by Randy Mott at Evans & Sutherland in 1968 for storing pixel data.[10][3] The Whirlwind computer, developed at MIT starting in 1944 and operational by 1951, further advanced this by integrating CRT displays for real-time graphics simulation, enabling the first high-speed digital computer to handle interactive visual outputs for applications like flight training, though its displays were primarily vector-based precursors to full raster systems.[11] By the late 1960s, true raster computer graphics emerged with A. Michael Noll's scanned display at Bell Labs, patented in 1970, which used frame buffers to store and refresh pixel data systematically.[12]

The 1970s marked the adoption of raster graphics in consumer and entertainment contexts, driven by affordable CRT technology. Atari's Pong, released in 1972, utilized raster scan displays derived from television hardware to render simple geometric shapes and motion, popularizing real-time pixel-based visuals in video games and demonstrating raster's suitability for dynamic content.[13] Concurrently, Xerox PARC's Alto computer, introduced in 1973, pioneered bitmap displays—a core raster technique—featuring a 606 × 808 pixel monochrome screen that supported bitmapped graphics for the first graphical user interface (GUI), influencing future personal computing designs.[14]

In the 1980s, raster graphics achieved standardization and widespread accessibility through personal computers. The Graphics Interchange Format (GIF), developed by CompuServe and released in 1987, provided a compressed bitmap standard supporting up to 256 colors, facilitating the exchange of raster images over early networks and becoming a cornerstone for web and bulletin board graphics.[15] Apple's Macintosh, launched in 1984, integrated raster bitmap displays into its GUI, using a 512 × 342 pixel monochrome screen to enable intuitive icon-based interactions and raster editing via software like MacPaint, which accelerated the shift from command-line to visual computing paradigms.[16]

The 1990s and 2000s saw raster graphics proliferate with digital media and the internet, bolstered by compression standards and editing tools. The Joint Photographic Experts Group (JPEG) standard, finalized in 1992, introduced lossy compression for photographic raster images, enabling efficient storage and transmission of high-fidelity color data, which revolutionized digital photography and web imaging.[17] Adobe Photoshop's release in February 1990 further popularized raster editing by offering layered pixel manipulation on Macintosh systems, evolving into an industry-standard tool that democratized professional image processing for photographers and designers.[18]

From the 2010s onward, raster graphics integrated deeply with mobile devices and AI technologies.
High-resolution standards like 4K and 8K became common in smartphones and tablets by the mid-2020s, while AI algorithms advanced raster processing through super-resolution upscaling and generative image creation, maintaining raster's dominance in foundational digital imaging as of November 2025.[19][20]

Data Model
Pixel Grid and Sampling
Raster graphics represent images as a rectangular array of pixels organized in rows and columns, forming a structure known as a bitmap. Each pixel corresponds to a discrete sampling point on a two-dimensional grid, capturing intensity or color values from an underlying continuous scene. This grid-based model discretizes spatial information, enabling efficient storage and manipulation in digital systems.[21]

The process of creating a raster image involves sampling, where continuous analog signals or real-world visuals are converted to digital form through spatial discretization into the pixel grid. This sampling must adhere to principles from signal processing to preserve fidelity; according to the Nyquist-Shannon sampling theorem, the sampling frequency must be at least twice the highest frequency in the signal to avoid aliasing artifacts, such as jagged edges or moiré patterns in the resulting image.[22] In practice, for raster graphics, this implies selecting a pixel resolution that adequately captures scene details without introducing reconstruction errors during display or processing.[23]

Pixels within the grid are addressed using integer coordinates (x, y), where x ranges from 0 to W − 1 and y from 0 to H − 1, with W denoting the grid width and H the height in pixels. This formulation defines the image's spatial extent and data volume directly.[24]

In rendering pipelines, rasterization algorithms fill the pixel grid by determining which pixels intersect geometric primitives like points, lines, or polygons. These algorithms project vector-based descriptions onto the discrete grid, resolving visibility and coverage to produce the final bitmap; for instance, scanline methods process the image row by row to efficiently compute pixel values from primitives.[25] This step is fundamental in real-time graphics, transforming continuous geometry into the sampled raster format.[26]

Raster images are stored as binary or indexed data structures, optimizing for the grid's uniformity. Monochrome rasters, representing binary images with on/off states, use 1 bit per pixel for compact storage, such as in early bitmap formats where the entire grid fits into a bit-packed array. Grayscale rasters extend this by assigning intensity levels to each pixel, typically using 8 bits (256 shades) per pixel to encode variations from black to white, allowing for smoother tonal representation without color.[27] Formats like TIFF support both modes, storing the grid data in uncompressed or packed rows for direct access.[28]
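The rasterization step described above can be sketched with a simple coverage test that marks every pixel whose center falls inside a triangle. This is a deliberately simplified illustration under assumptions (pixel-center sampling, no anti-aliasing, hypothetical function names), not a production rasterizer.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area of the parallelogram (a->b, a->p); its sign indicates
    which side of the edge a->b the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(width, height, tri, value=255):
    """Fill every pixel of a width x height grid whose center lies inside
    the triangle tri (three (x, y) vertices, either winding order)."""
    grid = [[0] * width for _ in range(height)]
    (x0, y0), (x1, y1), (x2, y2) = tri
    for y in range(height):
        for x in range(width):
            cx, cy = x + 0.5, y + 0.5            # sample at the pixel center
            e0 = edge(x0, y0, x1, y1, cx, cy)
            e1 = edge(x1, y1, x2, y2, cx, cy)
            e2 = edge(x2, y2, x0, y0, cx, cy)
            # The center is inside when all three edge tests share a sign.
            if (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0):
                grid[y][x] = value               # pixel covered by the primitive
    return grid

bitmap = rasterize_triangle(8, 8, [(1, 1), (7, 2), (3, 7)])
```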
Color Representation and Depth

In raster graphics, color representation refers to the methods used to encode the color and intensity of each pixel within the image data structure. The color depth, measured in bits per pixel (bpp), determines the number of distinct colors or shades that can be represented, calculated as 2 raised to the power of the bit depth. For instance, a 1 bpp format supports monochrome images with two colors, typically black and white, suitable for simple binary representations.[29][30] An 8 bpp grayscale image allows 256 levels of gray, providing a continuous range from black to white for applications requiring tonal variation without color.[29][31] True color representation commonly uses 24 bpp, allocating 8 bits per channel in the RGB model to achieve approximately 16.7 million colors.[29][32]

The RGB color model is an additive color space widely used in raster graphics for display devices, where colors are formed by combining varying intensities of red, green, and blue light. Each channel typically ranges from 0 (minimum intensity) to 255 (maximum intensity) in 8-bit implementations. To normalize each channel value to a fractional intensity between 0 and 1, divide the 8-bit value by 255; for example, the normalized red intensity is r = R / 255, where R is the red channel value.[33][34] This model starts from black (0,0,0) and adds light components to produce the desired hue.[35]

For printing applications, the CMYK color model serves as a subtractive alternative, using cyan, magenta, yellow, and black inks to absorb specific wavelengths from a white substrate, thereby creating colors through subtraction of light. Each channel is typically 8 bits, resulting in 32 bpp for full representation. Indexed color palettes offer efficiency in raster graphics by mapping each pixel to an index in a predefined table of up to 256 colors (8 bpp), reducing storage needs for images with limited color variety while maintaining visual fidelity through color quantization.[36][37]

Transparency in raster images is handled via an alpha channel, which specifies the opacity of each pixel on a scale from 0 (fully transparent) to 255 (fully opaque) in 8-bit implementations. The RGBA model extends RGB by adding this channel, commonly at 32 bpp (8 bits per R, G, B, and A), enabling compositing where semi-transparent pixels blend with underlying layers.[38][39]

High dynamic range (HDR) raster images surpass standard 8-bit limitations by employing floating-point representations, often 16 or 32 bits per channel, to capture a wider range of luminance values from deep shadows to bright highlights without clipping. Formats like OpenEXR use 16-bit half-float per channel for efficient HDR storage in rendering pipelines. This extended depth preserves detail in scenes with high contrast ratios, essential for professional imaging and computer graphics.[40][41]
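A small Python sketch of 32 bpp RGBA packing and per-channel normalization as described above; the ARGB byte order and helper names are illustrative choices, since real formats differ in channel ordering.

```python
def pack_rgba(r, g, b, a=255):
    """Pack four 8-bit channels into one 32-bit pixel (ARGB order assumed)."""
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_rgba(pixel):
    """Recover the 8-bit channels and their normalized [0, 1] intensities."""
    a = (pixel >> 24) & 0xFF
    r = (pixel >> 16) & 0xFF
    g = (pixel >> 8) & 0xFF
    b = pixel & 0xFF
    return (r, g, b, a), (r / 255, g / 255, b / 255, a / 255)

orange = pack_rgba(255, 165, 0)        # opaque orange at 32 bpp
print(hex(orange))                      # 0xffffa500
print(unpack_rgba(orange)[1][:3])       # (1.0, 0.647..., 0.0)
```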
Image Properties

Resolution and Aspect Ratio
In raster graphics, resolution refers to the density of pixels within a given physical or display area, which directly influences image sharpness and detail. For digital displays and screens, resolution is typically measured in pixels per inch (PPI), representing the number of pixels packed into one inch of the image or screen surface.[42] In contrast, for printing, resolution is expressed as dots per inch (DPI), which quantifies the number of ink dots a printer can place per inch to reproduce the image, often requiring higher values for comparable quality due to the physical nature of ink deposition.[43] The effective resolution of a raster image can be calculated as the pixel count divided by the physical size, where PPI (or DPI) = total pixels along a dimension / length in inches, ensuring the image maintains clarity when mapped to a specific output medium.[42] The sampling rate in raster graphics determines how finely the continuous scene is discretized into pixels, and insufficient rates can lead to aliasing—artifacts such as jagged edges or moiré patterns where high-frequency details are misrepresented.[44] To mitigate aliasing, anti-aliasing techniques are employed, including supersampling, which involves rendering the image at a higher resolution than the final output and then downsampling it to average pixel values, thereby smoothing edges and reducing visual distortions.[45] Aspect ratio defines the proportional relationship between the width and height of a raster image, commonly expressed as a ratio such as 16:9, which is standard for high-definition (HD) video and widescreen displays.[46] Mismatching an image's aspect ratio during resizing can cause distortion, stretching or compressing the content unevenly, which alters visual fidelity and may introduce unintended deformations in shapes or perspectives.[47] Due to the fixed nature of pixel grids in raster graphics, scalability is limited; enlarging an image beyond its native resolution results in pixelation, where individual pixels become visibly blocky and details blur.[48] For web display, images are sized by pixel dimensions, with PPI metadata typically set to 72 or 96 but ignored by browsers; to support high-DPI screens, higher pixel counts (e.g., 2x for Retina displays) or responsive techniques like HTML srcset are recommended. Print standards recommend 300 DPI to achieve smooth, high-quality output on paper.[48][49] In modern digital photography and imaging, resolution for camera sensors is often quantified in megapixels (MP), calculated as the total pixel count divided by one million:This metric provides a concise measure of a sensor's capacity to capture detail, influencing applications from consumer snapshots to professional imaging.[50]
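These resolution measures can be computed directly from pixel dimensions, as in the following illustrative sketch; the 3840x2160 example and the 12.8-inch print width are assumed values chosen so the arithmetic is easy to follow.

```python
from math import gcd

def image_stats(width_px, height_px, print_width_in=None):
    """Megapixels, reduced aspect ratio, and (optionally) the effective PPI
    when the image is mapped onto a print of the given width in inches."""
    megapixels = width_px * height_px / 1_000_000
    d = gcd(width_px, height_px)
    aspect = (width_px // d, height_px // d)
    ppi = width_px / print_width_in if print_width_in else None
    return megapixels, aspect, ppi

print(image_stats(3840, 2160, print_width_in=12.8))  # (8.2944, (16, 9), 300.0)
```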
File Formats and Storage
Raster graphics are stored in various file formats designed to encapsulate pixel data, headers, and optional metadata, enabling efficient storage, transmission, and rendering across devices. These formats differ in compression methods, support for features like transparency or multiple pages, and suitability for specific applications such as web display or professional printing. Common formats include BMP, JPEG, PNG, TIFF, GIF, WebP, and AVIF, each balancing file size, quality preservation, and functionality.[51][52]

The Bitmap (BMP) format, developed by Microsoft, is an uncompressed raster image format featuring a simple structure with a file header followed directly by pixel data. This header includes details like image dimensions, color depth, and compression flags (typically none), making BMP straightforward for basic storage but resulting in large file sizes due to the lack of compression. BMP supports various bit depths from 1 to 32 bits per pixel and is commonly used in Windows environments for icons and simple graphics.[53]

JPEG, standardized by the International Organization for Standardization (ISO) as ISO/IEC 10918, employs lossy compression based on the Discrete Cosine Transform (DCT) algorithm, optimized for photographic images with continuous tones. The format divides the image into 8x8 pixel blocks, applies DCT to reduce redundancy, and quantizes coefficients to achieve compression ratios often exceeding 10:1 while maintaining perceptual quality. JPEG files support Exchangeable Image File Format (EXIF) metadata, which embeds camera settings, timestamps, and GPS data, enhancing usability in digital photography workflows.[54][55]

Portable Network Graphics (PNG), defined in the W3C Recommendation and ISO/IEC 15948, provides lossless compression using the DEFLATE algorithm (a combination of LZ77 and Huffman coding), preserving all original pixel data without artifacts. PNG structures data into chunks for headers, image information, palette, and compressed pixel streams, supporting alpha channels for transparency and interlacing for progressive loading. This makes it ideal for web graphics, logos, and diagrams where exact reproduction and partial transparency are required.[56]

Tagged Image File Format (TIFF), originally developed by Aldus Corporation and now maintained under Adobe's stewardship with Revision 6.0 as the baseline, offers high flexibility through a tag-based structure that accommodates multiple pages, resolutions, and compression options within a single file. TIFF uses Image File Directories (IFDs) to store metadata and supports lossless compression like Lempel-Ziv-Welch (LZW), making it suitable for professional archiving, scanning, and printing where quality and extensibility are paramount.[57][58]

The Graphics Interchange Format (GIF), developed by CompuServe in 1987, uses lossless LZW compression and supports up to 256 indexed colors per frame, making it suitable for simple graphics, icons, and animations. GIF allows multiple frames for basic animations and transparency via a single color index, though it is limited for photographic images due to its color palette constraints.[59]

WebP, developed by Google and standardized by the W3C as of 2025, supports both lossy and lossless compression with better efficiency than JPEG and PNG, including animation and transparency support.
It uses predictive coding and VP8/VP9-derived algorithms, achieving smaller file sizes for web use while maintaining quality.[60] AVIF (AV1 Image File Format), based on the AV1 video codec and standardized by the Alliance for Open Media, provides superior compression for both still images and sequences as of 2025. It supports high dynamic range (HDR), wide color gamut, and transparency, making it ideal for modern web and high-quality applications with file sizes significantly smaller than JPEG or PNG equivalents.[60]

Storage requirements for raster images depend on dimensions, bit depth, and compression. For uncompressed formats, the approximate file size in bytes is calculated as (width × height × bpp) / 8, where bpp denotes bits per pixel; compression adjusts this downward, with lossy methods like JPEG yielding smaller files at the cost of some data fidelity. For example, a 1920x1080 image at 24 bpp uncompressed requires about 6.22 MB, but JPEG compression can reduce it to under 1 MB depending on quality settings.[61]

Metadata standards enhance raster file interoperability, particularly for color management. International Color Consortium (ICC) profiles, embedded as chunks or tags in formats like JPEG and PNG, define color spaces and transformations to ensure consistent rendering across devices, preventing issues like color shifts in workflows from editing to output. EXIF in JPEG further supports this by including device-specific color information alongside other descriptive data.[62][55]
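A minimal sketch of the uncompressed-size calculation above, reproducing the 1920x1080 at 24 bpp example; the function name is illustrative.

```python
def uncompressed_bytes(width, height, bits_per_pixel):
    """Approximate raw pixel-data size: width * height * bpp / 8."""
    return width * height * bits_per_pixel // 8

size = uncompressed_bytes(1920, 1080, 24)
print(size, "bytes =", round(size / 1_000_000, 2), "MB")  # 6220800 bytes = 6.22 MB
```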
Applications

Display and Rendering
Raster graphics are displayed on screens through a process known as raster scanning, where the image is refreshed line by line from top to bottom to maintain visual continuity and prevent flicker. In cathode ray tube (CRT) displays, an electron beam sweeps horizontally across the phosphor-coated screen, illuminating pixels sequentially at refresh rates typically ranging from 60 to 85 Hz, with some models supporting up to 200 Hz at reduced resolutions. Liquid crystal display (LCD) and organic light-emitting diode (OLED) panels, while not using an electron beam, employ a similar progressive raster scan by sequentially updating rows of pixels via matrix addressing, achieving refresh rates of 60 to 144 Hz or higher to ensure smooth motion rendering.[63] This line-by-line refresh is essential for raster images, as it aligns with the pixel grid structure, allowing the display hardware to map color values directly to each pixel in real time.[64]

Modern rendering of raster graphics heavily relies on graphics processing units (GPUs) within the graphics pipeline, where rasterization converts vector primitives into a pixel-based fragment representation before applying textures. Texture mapping integrates raster images onto 3D surfaces by sampling pixel colors from the texture and interpolating them during the fragment shading stage, as defined in APIs like OpenGL and DirectX, which use programmable shaders to compute final pixel values efficiently. For instance, in OpenGL's fixed-function pipeline or DirectX's rasterizer stage, bilinear filtering samples neighboring texels to produce smooth transitions, enabling high-performance rendering of complex scenes with raster textures.[65] This GPU-accelerated process handles the transformation from scene geometry to screen pixels, ensuring raster graphics are rendered at the display's native resolution with minimal latency.

When raster images do not match the display's resolution, scaling and interpolation techniques adjust the pixel grid to fit, preserving visual quality through mathematical resampling. Bilinear interpolation computes new pixel values by averaging the four nearest neighbors in a 2x2 grid, providing efficient smoothing for moderate resizing, while bicubic interpolation uses a 4x4 neighborhood for sharper results by considering cubic weighting functions, ideal for upscaling or downscaling high-detail images.[66] These methods are commonly implemented in display drivers and software libraries to adapt raster content dynamically, avoiding artifacts like aliasing during real-time rendering.[67]

In web and mobile environments, responsive rendering of raster graphics uses HTML and CSS to deliver optimized images across varying screen sizes and densities. The srcset attribute in the <img> tag specifies multiple raster image sources with different resolutions, allowing browsers to select the most appropriate version based on the device's viewport and pixel ratio, while the sizes attribute guides expected display sizes for efficient loading.[68] This approach, combined with CSS properties like object-fit for scaling, ensures raster images render crisply on diverse devices without excessive bandwidth use or quality loss.[69]
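Bilinear interpolation as described above can be sketched as follows for a single-channel image; this simplified version clamps coordinates at the image borders and is meant as an illustration, not an optimized resampler.

```python
def bilinear_sample(img, x, y):
    """Sample a grayscale image (list of rows) at a fractional position
    by blending the four nearest pixels."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def resize_bilinear(img, new_w, new_h):
    """Rescale by mapping each destination pixel back into the source grid."""
    h, w = len(img), len(img[0])
    return [[bilinear_sample(img, x * (w - 1) / max(new_w - 1, 1),
                             y * (h - 1) / max(new_h - 1, 1))
             for x in range(new_w)] for y in range(new_h)]
```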
High-resolution displays demand correspondingly detailed raster graphics to fully utilize their pixel density, with 4K Ultra HD (UHD) defined as 3840×2160 pixels, providing approximately four times the detail of Full HD for immersive viewing.[70] On devices like iPhones, Retina scaling renders user interfaces and images at double or higher the logical resolution—such as 2x or 3x pixel density—before downsampling to the physical screen, achieving sub-pixel sharpness equivalent to 300+ pixels per inch without apparent jagged edges.[71] Similarly, 8K displays at 7680×4320 pixels extend this capability, requiring raster sources with millions of pixels to avoid interpolation artifacts and maximize clarity in professional and consumer applications.[72]
Printing and Physical Output
Raster images, typically defined in terms of pixels per inch (PPI), must be adapted to the dots per inch (DPI) capabilities of printing devices for physical output. Inkjet printers commonly achieve resolutions between 300 and 720 DPI, while laser printers range from 600 to 2400 DPI, allowing for finer detail than many screen displays but requiring adjustments to match the raster's PPI.[73]

To reproduce continuous tones and colors on printers limited to discrete ink dots, halftoning techniques are employed, particularly error diffusion algorithms that simulate grayscale or color gradations. The Floyd-Steinberg algorithm, introduced in 1976, exemplifies this by quantizing pixel values and distributing the resulting error to adjacent unprocessed pixels according to a weighted matrix, reducing visible artifacts and enhancing perceived quality in printed images.[74] A key component in this process is the Raster Image Processor (RIP), specialized software that interprets raster data, applies halftoning, and generates device-specific instructions for the printer, ensuring accurate rendering of resolution, screening, and color separation.

Since most raster images are created in RGB color space for digital viewing, color management during printing involves converting to CMYK, the subtractive model used by printers, often requiring gamut mapping to compress out-of-gamut RGB colors into the narrower CMYK range while preserving visual intent through perceptual or relative colorimetric rendering.[75]

For large-format applications like billboards, high-resolution raster images are scaled and tiled across panels, typically requiring an effective 100 to 150 DPI at viewing distances of tens of meters to maintain clarity without excessive file sizes; for instance, a 100-meter-wide billboard might use rasters optimized to 150 DPI overall.[76]
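A compact sketch of Floyd-Steinberg error diffusion on a grayscale raster, using the classic 7/16, 3/16, 5/16, 1/16 weights mentioned above; representing the image as nested Python lists of 0-255 values is an illustrative simplification.

```python
def floyd_steinberg(gray, levels=2):
    """Quantize a grayscale image (rows of 0-255 values) to `levels` tones,
    diffusing each pixel's quantization error to unprocessed neighbors."""
    h, w = len(gray), len(gray[0])
    img = [list(row) for row in gray]            # work on a copy
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = round(old / step) * step       # nearest allowed tone
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```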
Compression Techniques

Raster graphics compression techniques aim to reduce file sizes while preserving essential image information, enabling efficient storage and transmission. These methods exploit redundancies in pixel data, such as spatial correlations or perceptual similarities, and are broadly categorized into lossless and lossy approaches. Lossless compression ensures exact reconstruction of the original image, making it suitable for applications requiring fidelity, like medical imaging or archival. In contrast, lossy compression discards less perceptible details to achieve higher ratios, prioritizing visual quality over pixel-perfect accuracy.[77][78]

Lossless techniques include run-length encoding (RLE), which replaces sequences of identical pixels with a single value and count, proving effective for simple raster images with large uniform areas, such as icons or line art. Formats like PNG employ DEFLATE, combining LZ77 dictionary-based prediction with Huffman coding for entropy reduction, achieving reversible compression across various image types. Similarly, GIF uses LZW compression, a variant of dictionary coding, to handle limited-color palettes without data loss. Huffman coding, assigning shorter variable-length codes to frequent pixel values or symbols, further enhances efficiency in formats like TIFF, where it supports both standalone and combined use with other methods.[79][77][58]

Lossy compression, exemplified by JPEG, transforms pixel blocks into frequency domains for selective discard of high-frequency components imperceptible to the human eye. JPEG divides images into 8×8 pixel blocks and applies the two-dimensional discrete cosine transform (DCT) to obtain a coefficient matrix, concentrating energy in low-frequency terms. The DCT of an N×N block (N = 8) is

F(u, v) = (2/N) C(u) C(v) Σ(x=0 to N−1) Σ(y=0 to N−1) f(x, y) cos[(2x + 1)uπ / (2N)] cos[(2y + 1)vπ / (2N)]

where C(k) = 1/√2 if k = 0, and 1 otherwise; f(x, y) are input pixel values; and F(u, v) are the coefficients. Subsequent quantization and Huffman entropy coding yield compact representations, with typical compression ratios of 10:1 to 20:1 for photographic content.[17][78]

Modern formats build on these foundations for web optimization. WebP, introduced by Google in 2010 and based on VP8 intra-frame coding, supports both lossless and lossy modes, often reducing file sizes by up to 30% compared to JPEG at equivalent quality. AVIF, specified in 2019 by the Alliance for Open Media using AV1 codec elements within HEIF containers, achieves even greater efficiency, with 20-50% smaller files than JPEG or WebP while supporting HDR and transparency.[80][81]

Lossy methods like JPEG can introduce artifacts, including blocking (visible 8×8 grid edges from quantization) and moiré patterns (interference from periodic sampling). These are mitigated by adjusting the quality factor, a scale from 1 (maximum compression, severe artifacts) to 100 (minimal compression, near-lossless), which modulates quantization table scaling to balance size and fidelity.[17][78]
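The 8×8 DCT above can be written directly from its definition, as in this deliberately unoptimized sketch; real JPEG encoders use fast factored transforms, and the function name here is illustrative.

```python
from math import cos, pi, sqrt

def dct_8x8(block):
    """2-D DCT-II of an 8x8 block of pixel values, transcribed from the
    textbook definition (not a fast implementation)."""
    N = 8
    def C(k):
        return 1 / sqrt(2) if k == 0 else 1.0
    coeffs = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[y][x]
                          * cos((2 * x + 1) * u * pi / (2 * N))
                          * cos((2 * y + 1) * v * pi / (2 * N)))
            coeffs[v][u] = (2 / N) * C(u) * C(v) * s
    return coeffs

flat = [[128] * 8 for _ in range(8)]      # a uniform block...
print(round(dct_8x8(flat)[0][0]))          # ...has all its energy in the DC term: 1024
```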
Three-Dimensional and Volumetric Rasters

Three-dimensional raster graphics extend the two-dimensional pixel grid into volumetric space, where data is represented using voxels, or volume elements, analogous to pixels but defined across three axes (x, y, z).[82] A voxel grid consists of a regular array of these elements, with the total number of voxels given by the product of the width (W), height (H), and depth (D) dimensions, forming a discrete sampling of a 3D volume.[83] Each voxel typically stores scalar or vector properties, such as density or color, enabling the representation of complex spatial data.[82]

Voxels find prominent applications in medical imaging, where computed tomography (CT) and magnetic resonance imaging (MRI) scans produce volumetric datasets composed of stacked 2D slices reconstructed into 3D voxel grids for anatomical analysis and visualization.[84] In 3D printing, voxel-based models allow for the direct fabrication of physical objects by defining material properties at each grid point, facilitating the creation of intricate, multimaterial prototypes from digital volumetric data.[85]

Rendering volumetric raster data often involves techniques like ray marching, which traces rays through the voxel grid to accumulate color and opacity values along the path, suitable for direct volume rendering of semi-transparent structures.[86] For surface extraction, the marching cubes algorithm processes the voxel grid by evaluating scalar fields at grid vertices to generate polygonal meshes approximating isosurfaces, a method introduced in seminal work for efficient 3D reconstruction.[87]

Common file formats for storing volumetric raster data include DICOM, the standard for medical imaging that encapsulates voxel intensity values alongside metadata in a series of interconnected files representing 3D volumes.[84] Derived 3D models from voxels, such as polygonal meshes with applied raster textures, are frequently exported in OBJ format to support rendering and further processing in graphics pipelines. Volumetric rasters impose significant storage demands due to their cubic scaling; for instance, a 512³ voxel grid at 16 bits per voxel requires approximately 256 MB of uncompressed memory, highlighting the need for efficient compression and hierarchical data structures in practical implementations.[88]

Geographic and Scientific Data
In geographic information systems (GIS), raster graphics form the basis for grid-based layers that represent continuous spatial phenomena, such as elevation models and satellite imagery. A digital elevation model (DEM) is a raster dataset depicting the bare-earth topographic surface, where each cell in the grid stores an elevation value, excluding vegetation and structures.[89] For instance, satellite imagery from the Landsat program utilizes raster pixels with a 30-meter spatial resolution for multispectral bands, enabling analysis of land cover and environmental changes over large areas.[90]

Raster data in GIS often employs specialized formats like GeoTIFF to incorporate georeferencing, which embeds spatial metadata such as coordinate systems and projections directly into the file. GeoTIFF extends the TIFF standard to support georeferenced raster imagery, including projection parameters like the Universal Transverse Mercator (UTM) system, which divides the Earth into zones for accurate metric mapping.[91] This allows rasters to align precisely with geographic coordinates, facilitating integration in mapping applications.

Analysis of raster data in GIS commonly involves overlay operations, where multiple grid layers are combined using tools like the raster calculator to perform mathematical computations cell by cell. For terrain analysis, slope derivation from a DEM exemplifies this: the slope for each cell is calculated as the maximum rate of change in elevation, using the formula slope = arctan( √( (dz/dx)² + (dz/dy)² ) ), where dz/dx and dz/dy represent the change in elevation along the x and y directions, respectively; this yields the slope, expressed in degrees, as the first derivative of the DEM surface.[92][93]

Beyond GIS, raster structures appear in scientific visualization, such as computational fluid dynamics (CFD) simulations, which discretize space into 2D or 3D rectangular grids analogous to raster pixels for solving fluid flow equations. In these grids, variables like velocity and pressure are stored at discrete points (i,j,k indices), enabling numerical approximations similar to pixel-based sampling.[94] In astronomy, the Flexible Image Transport System (FITS) format stores images as 2D raster arrays of pixel intensities, supporting multispectral data from telescopes with embedded metadata for calibration.[95]

Handling projections is crucial for raster data representing the curved Earth surface, as transformations introduce distortions; for example, the Mercator projection preserves angles for navigation but exaggerates areas near the poles, affecting accurate area measurements in polar regions.[96] Georeferenced rasters mitigate this by specifying projection details, ensuring proper re-projection during analysis to minimize spatial inaccuracies.[91]
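The slope derivation can be sketched with central differences on a small DEM; the 3×3 elevation grid, the 30 m cell size, and the function name are illustrative assumptions rather than GIS-software behavior.

```python
from math import atan, degrees, sqrt

def slope_degrees(dem, x, y, cell_size):
    """Slope at interior cell (x, y) of a DEM (rows of elevations),
    using central differences for dz/dx and dz/dy."""
    dz_dx = (dem[y][x + 1] - dem[y][x - 1]) / (2 * cell_size)
    dz_dy = (dem[y + 1][x] - dem[y - 1][x]) / (2 * cell_size)
    return degrees(atan(sqrt(dz_dx ** 2 + dz_dy ** 2)))

dem = [[100, 100, 100],
       [110, 110, 110],
       [120, 120, 120]]                          # elevations in metres
print(round(slope_degrees(dem, 1, 1, 30), 1))    # ~18.4 degrees: 10 m rise per 30 m cell
```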
Editing and Manipulation

Raster Image Editors
Raster image editors are software applications designed for creating, editing, and manipulating raster graphics by directly altering individual pixels or groups of pixels within an image. These tools enable precise control over pixel-level details, making them essential for tasks such as photo retouching, digital painting, and compositing. Unlike vector-based editors, raster editors operate on fixed-resolution grids, where changes affect the underlying bitmap data.

Raster image editors can be broadly categorized into professional pixel editors, which offer advanced manipulation capabilities for complex workflows, and simpler paint programs, which focus on basic drawing and filling operations. Pixel editors, such as Adobe Photoshop released in 1990, provide sophisticated features for professional photographers and designers, including non-destructive adjustments and multi-layered compositions.[18] In contrast, paint programs like Microsoft Paint, introduced with Windows 1.0 in 1985, emphasize intuitive tools for casual sketching and color application without extensive pixel-level precision.[97]

Core tools in raster image editors include the brush tool for painting pixels with customizable shapes and opacity, the clone stamp for duplicating pixels from one area to another to repair imperfections, and layers that allow stacking editable elements for non-destructive editing. Selection masks enable isolating specific regions for targeted modifications, preserving the original image data outside the selected area.[98]

A typical workflow in a raster image editor begins with importing a raster file, followed by applying transformations or filters, such as the Gaussian blur, which smooths images through kernel convolution to reduce noise while maintaining edge details. Edits are then saved in raster formats, potentially applying compression to optimize file size.[99]

Open-source alternatives have expanded accessibility to raster editing, with GIMP first publicly released in version 0.54 in January 1996 as a free Photoshop-like tool supporting plugins and advanced filtering. Krita, originating from a 1998 KDE project, specializes in digital painting with brush engines simulating traditional media.[100][101]

Modern raster editors increasingly integrate artificial intelligence for enhanced creativity. Adobe Photoshop, for example, incorporated Firefly in 2023 for generative raster fills and has continued to expand these capabilities; as of 2025, features like Generative Upscale and Harmonize use updated Firefly models to intelligently expand images or match color styles with text prompts.[102][103] Open-source editors have also adopted AI through plugins, such as GIMP's Dream Prompter (released September 2025), which integrates Google Gemini for AI image generation and editing directly within the interface, and Krita's AI Diffusion plugin, enabling Stable Diffusion-based inpainting and outpainting for digital artists.[104][105]
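The Gaussian blur mentioned in the workflow above can be sketched as a separable kernel convolution; the radius and sigma defaults are arbitrary illustrative choices, and the edge handling (clamping) is one of several common options.

```python
from math import exp

def gaussian_kernel(radius, sigma):
    """Normalized 1-D Gaussian weights; applied twice (rows then columns)
    because the Gaussian blur is separable."""
    weights = [exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_rows(img, kernel):
    """Convolve each row with the kernel, clamping at the image edges."""
    radius = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(kernel[k + radius] * img[y][min(max(x + k, 0), w - 1)]
                            for k in range(-radius, radius + 1))
    return out

def gaussian_blur(img, radius=2, sigma=1.0):
    kernel = gaussian_kernel(radius, sigma)
    blurred = blur_rows(img, kernel)
    transposed = [list(col) for col in zip(*blurred)]      # reuse the row pass for columns
    return [list(col) for col in zip(*blur_rows(transposed, kernel))]
```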
Raster-Vector Interoperability

Raster-to-vector conversion, often referred to as tracing or vectorization, involves algorithms that analyze pixel data in a raster image to generate scalable vector paths, such as Bezier curves or polygons, approximating the original shapes.[106] This process typically begins with edge detection to identify boundaries, followed by path optimization to create smooth outlines. For instance, the Potrace algorithm preprocesses the bitmap into a binary image, detects contours using a polygon-based approach, and fits Bezier curves to produce compact vector representations suitable for formats like SVG.[106] Similarly, edge detection techniques like the Canny filter, which applies Gaussian smoothing, gradient computation, non-maximum suppression, and hysteresis thresholding, can preprocess raster images to extract edges before tracing, enhancing accuracy for line art.[107]

Popular tools facilitate this conversion in professional workflows. Adobe Illustrator's Image Trace feature automates the process by offering presets for black-and-white, color, or high-fidelity photo modes, where users adjust parameters like paths, corners, and noise to balance detail and smoothness before expanding traces into editable vectors. Raster editors like GIMP support SVG export through path tools, allowing users to manually or semi-automatically trace selections and save as vector files, though full automation often requires plugins or external tracers. Recent advancements incorporate artificial intelligence to improve conversion accuracy, especially for complex or photographic images. AI-based tools like Vectorizer.AI and Recraft use machine learning models to automatically detect and trace shapes, handling gradients and textures better than traditional methods by predicting vector paths from pixel patterns, resulting in higher fidelity outputs with fewer manual adjustments.[108][109]

The reverse process, vector-to-raster conversion or rasterization, renders vector paths onto a pixel grid by filling enclosed areas and drawing lines, determining pixel colors based on geometric intersections.[110] Anti-aliasing techniques, such as supersampling or multisample anti-aliasing (MSAA), mitigate jagged edges (aliasing) by averaging multiple sub-pixel samples along path boundaries, producing smoother results especially at lower resolutions.[110]

Despite these methods, interoperability faces limitations, particularly in fidelity. Tracing complex raster images like photographs often results in loss of detail, as vectors struggle to represent continuous tones or subtle gradients without excessive path complexity, leading to approximations that deviate from the original.[111] Fidelity metrics, such as vector error (measuring the Hausdorff distance between original raster edges and vector approximations), quantify this discrepancy, with higher errors in noisy or textured inputs.[112]

Common use cases include converting scanned logos to vectors for scalable branding, where tracing preserves sharp edges without pixelation upon resizing.[113] Another application is generating scalable web graphics from bitmap icons, enabling crisp rendering across devices via SVG exports after raster-to-vector processing.[111]

Advantages and Limitations
Strengths in Photorealism
Raster graphics excel in representing photorealistic images due to their pixel-based structure, which naturally accommodates complex visual elements such as smooth gradients, intricate textures, and subtle noise patterns inherent in real-world scenes. Each pixel can independently store color and intensity values, enabling the capture of continuous-tone variations that mimic the nuances of natural light and shadow without the need for mathematical approximations. This pixel-level granularity allows raster formats to depict fine details like skin textures or foliage in landscapes with high fidelity, making them particularly suited for applications requiring visual realism.[114][1]

Unlike vector graphics, which rely on scalable mathematical paths best suited for geometric shapes and uniform fills, raster graphics handle irregular and non-geometric data more effectively, such as varying skin tones, organic forms, or atmospheric effects in landscapes. Vector approaches struggle to represent the subtle, pixel-by-pixel variations required for photorealism without excessive complexity or loss of detail, whereas rasters directly encode these irregularities through dense pixel arrays. This inherent capability makes raster the preferred format for content where precision in color blending and tonal subtlety is paramount, avoiding the artifacts that can arise from vector approximations of complex, non-uniform elements.[114][115]

In digital photography, raster graphics form the foundation of images captured by DSLR cameras, typically at resolutions of 20-50 megapixels, allowing for detailed post-processing while preserving photorealistic quality. Similarly, medical imaging modalities like MRI and CT scans utilize raster representations to maintain high fidelity in visualizing tissue densities and anomalies, where pixel resolution directly correlates with diagnostic accuracy. In computer-generated imagery (CGI), ray-traced raster outputs enable photorealistic rendering in films; for instance, Pixar's RenderMan produces frames at 2K resolution (approximately 2048x1152 pixels) or higher, simulating realistic lighting and materials through per-pixel ray calculations, as demonstrated in productions like Cars.[116][117][118]

Raster graphics also offer performance advantages for fixed-size displays, as their pre-defined pixel grid eliminates the need for real-time recalculation or rasterization, enabling direct mapping to screen pixels for efficient rendering and immediate visual output. This static nature ensures consistent photorealistic display without computational overhead, particularly beneficial for high-resolution monitors or projectors where vector scaling could introduce delays or inconsistencies.[114][1]

Challenges and Modern Mitigations
One of the primary challenges in raster graphics is scalability, as these images are resolution-dependent and consist of fixed pixel grids. When enlarged beyond their native resolution, raster images exhibit pixelation, where the discrete nature of pixels becomes apparent, leading to blocky or blurry appearances that degrade visual fidelity. For example, an uncompressed 4K RGB raster image at 3840×2160 pixels requires approximately 24 MB of storage, and demands escalate rapidly for higher resolutions like 8K, contributing to file bloat that complicates storage, editing, and transmission in resource-constrained environments.[1][119]

Raster graphics also suffer from aliasing artifacts, such as jaggies—jagged edges on diagonals and curves caused by undersampling continuous geometry onto discrete pixels—and compression-induced noise, including blockiness or ringing from lossy encoding to manage file sizes. These issues are particularly evident in dynamic applications like gaming or animation, where rapid scaling or minification exacerbates moiré patterns and texture shimmering. A key mitigation for aliasing in real-time raster rendering is mipmapping, which precomputes a pyramid of filtered texture versions at halving resolutions (e.g., 1/2, 1/4 of the original) and selects the appropriate level based on screen-space derivatives during rasterization, effectively applying low-pass filtering to suppress high-frequency details that cause artifacts without excessive computational overhead.[44][120][121]

Contemporary solutions leverage computational advances to overcome these limitations. Procedural generation techniques, such as those using layered fractal noise (e.g., Perlin or value noise with multiple octaves and persistence factors around 0.5), enable infinite detail in raster terrains by dynamically synthesizing scalable content, avoiding fixed-resolution bloat while integrating real-world data for realism in applications like video games.[122] GPU-accelerated upscaling further addresses scalability; NVIDIA's Deep Learning Super Sampling (DLSS), launched in 2019, renders raster frames at lower resolutions and uses AI-driven reconstruction with temporal data to upscale to native quality, boosting performance by up to 2x while reducing aliasing through integrated anti-aliasing.[123] This technology has evolved through subsequent versions, with DLSS 4 (as of January 2025) introducing multi-frame generation for even greater performance gains in real-time rendering. AI-based super-resolution models like ESRGAN (2018) enhance this by training generative adversarial networks on perceptual losses to infer high-fidelity details, producing sharper, texture-rich upscales from low-resolution rasters with minimal artifacts, outperforming traditional bicubic interpolation in visual quality metrics.[124]

For sustainability, modern formats prioritize efficient storage without compromising raster quality. The High Efficiency Image File Format (HEIF), standardized by MPEG as ISO/IEC 23008-12 in 2017, employs HEVC-based compression to achieve 25–50% smaller file sizes than JPEG for equivalent perceptual quality, supporting features like transparency and animations while optimizing for cloud delivery through reduced bandwidth needs. These mitigations collectively enable raster graphics to handle high-resolution demands in contemporary workflows, from mobile devices to large-scale simulations.[125]
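Mipmapping as described above can be sketched by repeatedly box-filtering an image down by a factor of two; this minimal version assumes a single-channel image stored as nested lists and simply drops odd edge rows and columns, whereas GPUs handle those cases more carefully.

```python
def half_size(img):
    """Downsample a grayscale image by averaging each 2x2 block."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2     # drop odd edge rows/cols
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def build_mipmaps(img):
    """Mip chain: repeatedly halve until a single texel remains."""
    levels = [img]
    while len(levels[-1]) > 1 and len(levels[-1][0]) > 1:
        levels.append(half_size(levels[-1]))
    return levels

chain = build_mipmaps([[float((x + y) % 2) for x in range(8)] for y in range(8)])
print([(len(l), len(l[0])) for l in chain])   # [(8, 8), (4, 4), (2, 2), (1, 1)]
```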
References
- https://en.wiktionary.org/wiki/raster