Scan line
from Wikipedia
Scanlines on a Mitsubishi CS-40307 CRT color television. The fine dots through the bright scanlines are due to the shadow mask.
PAL video signal scan line. From the left: horizontal sync pulse, back porch with color burst, signal itself, front porch, sync pulse, back porch with color burst, video portion of the next scan line. The signals from multiple lines are overlaid, showing shaded areas instead of a single curve.

A scan line (also scanline) is one line, or row, in a raster scanning pattern, such as a line of video on a cathode-ray tube (CRT) display of a television set or computer monitor.[1]

On CRT screens the horizontal scan lines are visually discernible, even when viewed from a distance, as alternating colored lines and black lines, especially when a progressive scan signal with below maximum vertical resolution is displayed.[2] This is sometimes used today as a visual effect in computer graphics.[3]

The term is used, by analogy, for a single row of pixels in a raster graphics image.[4] Scan lines are important in representations of image data, because many image file formats have special rules for data at the end of a scan line. For example, there may be a rule that each scan line starts on a particular boundary (such as a byte or word; see for example BMP file format). This means that even otherwise compatible raster data may need to be analyzed at the level of scan lines in order to convert between formats.
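As an illustration of such a boundary rule, the following sketch computes the padded length of one BMP scan line. It is a minimal sketch assuming the common 4-byte (DWORD) row alignment used by the BMP format; the function name bmp_row_stride is illustrative and not taken from any particular library.

```python
# A minimal illustration of the scan-line boundary rule mentioned above: in the
# BMP format each stored row is padded so its length is a multiple of 4 bytes.
# This sketch only computes the padded row size; it does not parse real files.

def bmp_row_stride(width_pixels: int, bits_per_pixel: int) -> int:
    """Bytes occupied by one scan line in a BMP file, including padding."""
    raw_bytes = (width_pixels * bits_per_pixel + 7) // 8  # round up to whole bytes
    return (raw_bytes + 3) // 4 * 4                       # round up to 4-byte boundary

print(bmp_row_stride(101, 24))  # 101 * 3 = 303 raw bytes -> padded to 304
print(bmp_row_stride(100, 8))   # 100 raw bytes, already a multiple of 4 -> 100
```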

from Grokipedia
A scan line, also known as a scanline, is a single horizontal row of pixels or luminous points in a raster scanning pattern, used in display technologies to construct images by sequentially illuminating lines from top to bottom across a screen. In cathode-ray tube (CRT) displays and video systems, an electron beam traverses each scan line from left to right, generating varying voltage levels to represent brightness and color, before rapidly returning to the start of the next line in a process known as flyback, which includes blanking intervals to prevent unwanted artifacts. The total number of scan lines determines vertical resolution; for example, the NTSC television standard employs 525 scan lines per frame, with approximately 480 active lines carrying image data after accounting for vertical blanking. In modern raster displays, including LCD and LED panels, scan lines emulate this sequential process through row-by-row activation, maintaining compatibility with interlaced video signals that alternate odd and even lines in formats like 525/60 to achieve effective frame rates of 30 Hz while reducing bandwidth. This line-by-line rendering forms the basis of image reconstruction in both analog and digital video, where each scan line contributes to the overall frame as a linear stream of electrical pulses encoding brightness and color information.

Beyond displays, the concept of scan lines extends to computer graphics algorithms for efficient rendering and hidden surface removal, where images are processed one horizontal line at a time to determine visible pixels, intersections, and depths for polygons or curved surfaces. These scan-line algorithms exploit spatial coherence by maintaining active edge tables and updating intersections as the virtual scan progresses downward, enabling smooth display of complex 3D scenes without processing every pixel individually. Such methods, originating in early computer graphics research, remain foundational for optimizing rasterization in real-time applications like video games and simulations.

Fundamentals

Definition

A scan line, in the context of raster-based imaging and display systems, is defined as a single horizontal row of pixels or luminous points generated sequentially from left to right during a raster scanning pass, representing one line of a video frame or digital image. This row is typically referenced by an integer vertical coordinate (y-value) within the frame buffer, where values such as color or intensity are stored and processed to form part of the overall visual output. In video systems, a scan line corresponds to a narrow horizontal strip of the optical image that is scanned to produce an electrical signal encoding brightness variations along that line. Unlike a full frame, which comprises the complete set of all such horizontal rows aggregated vertically to create the entire two-dimensional image, a scan line constitutes only one of those rows. For instance, in the 1080p resolution standard—commonly used in high-definition television—the frame consists of exactly 1080 scan lines, each contributing to the vertical resolution of the display. This distinction highlights how raster scanning, the overarching process of sequentially traversing these lines from top to bottom, builds the frame one scan line at a time. When multiple scan lines are rendered and combined vertically, they aggregate to form a coherent complete frame, with each line's data refreshed in sequence to maintain visual continuity on the display. This layered accumulation ensures that the discrete horizontal elements coalesce into a seamless raster representation, fundamental to both analog video signals and digital frame buffers.
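As a minimal illustration of how a scan line is addressed by its integer y-coordinate in a digital frame buffer, the following sketch assumes a frame stored as a NumPy array with 1080 rows; the resolution and variable names are illustrative.

```python
import numpy as np

# A frame buffer for a 1080-line image: height x width x 3 RGB values.
height, width = 1080, 1920
frame = np.zeros((height, width, 3), dtype=np.uint8)

y = 540                          # integer vertical coordinate of one scan line
scan_line = frame[y]             # a single horizontal row of `width` pixels
scan_line[:] = (255, 255, 255)   # writing the row updates that line of the frame

print(scan_line.shape)           # (1920, 3): one row of pixel values
```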

Raster Scanning Principle

In raster scanning, the process begins with horizontal deflection, where a scanning source, such as an electron beam or light source, moves systematically from left to right across the display surface to trace a single scan line. Upon reaching the end of the line, the source undergoes a brief retrace period before vertical deflection repositions it downward to the start of the next line, repeating this sequence to cover the entire surface from top to bottom. This sequential tracing ensures uniform coverage, forming a grid-like pattern essential for raster image formation. Synchronization is maintained through horizontal and vertical sync pulses, which precisely time the deflections to align the scanning source's position with the intended image data. Horizontal sync pulses signal the completion of each line and initiate the next, while vertical sync pulses coordinate the return to the top after all lines are traced, preventing misalignment or distortion in the overall pattern. As the source traces each scan line, it modulates its intensity or color based on input signals, exciting phosphors or illuminating pixels sequentially to build the two-dimensional image point by point. The cumulative effect across all lines reconstructs the full visual content, with persistence in the display medium sustaining the image until the next frame. The scan pattern resembles a series of parallel horizontal lines progressing downward, akin to reading text on a page, where each line represents the basic unit of the raster output.
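The traversal order described above can be sketched in a few lines of code; the function raster_scan and its retrace callbacks below are illustrative placeholders, not an interface of any real display hardware.

```python
# A minimal sketch of raster scan order: left to right along each scan line,
# then down to the next line, covering the whole surface.

def raster_scan(width, height, draw_point, horizontal_retrace=None, vertical_retrace=None):
    for y in range(height):            # top-to-bottom: one scan line per y
        for x in range(width):         # left-to-right trace of the current line
            draw_point(x, y)
        if horizontal_retrace:
            horizontal_retrace(y)      # source returns to the left edge
    if vertical_retrace:
        vertical_retrace()             # source returns to the top for the next frame

# Example: record the visiting order on a tiny 3x2 surface.
order = []
raster_scan(3, 2, lambda x, y: order.append((x, y)))
print(order)  # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```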

Historical Development

Origins in Television

The concept of scan lines originated in the early development of television technology during the 1920s and 1930s, building on mechanical scanning precursors that laid the groundwork for electronic raster scanning. In 1884, German inventor Paul Nipkow patented the Nipkow disk, a rotating disk with spiral apertures designed to mechanically scan an image by sequentially exposing portions of it to light-sensitive elements, enabling the first theoretical electromechanical television system for image dissection and reconstruction. This mechanical approach influenced early experimental systems but was limited by its low resolution and mechanical complexity. Pioneers like Vladimir Zworykin advanced to fully electronic methods; in 1923, Zworykin, working at Westinghouse, invented the iconoscope, an electronic camera tube that used a photoemissive mosaic to capture and scan images line by line via an electron beam, marking a shift from mechanical to electronic raster scanning. Similarly, American inventor Philo Farnsworth demonstrated the first fully electronic television transmission on September 7, 1927, using his image dissector tube to scan and transmit a simple straight-line image, eliminating moving parts and enabling higher fidelity real-time imaging.

Key milestones in the adoption of scan lines for broadcast television highlighted the transition from experimental mechanical systems to standardized electronic ones. The BBC conducted early mechanical television trials using John Logie Baird's 30-line system, with regular experimental broadcasts beginning on September 30, 1929, from Long Acre in London; these transmissions used a Nipkow-style disk to scan 30 horizontal lines per frame at 12.5 frames per second, providing rudimentary moving images to a small audience of enthusiasts. By 1936, the BBC transitioned to electronic scanning with the launch of the world's first regular high-definition television service on November 2 from Alexandra Palace, employing a 405-line system developed by Marconi-EMI; this used cathode-ray tubes for both scanning the image at the transmitter and displaying it at the receiver, achieving sharper resolution with 405 horizontal scan lines interlaced at 50 fields per second. These developments prioritized raster scanning, where an electron beam traces horizontal lines across the screen from top to bottom to build the image progressively.

Post-World War II standardization further entrenched scan lines as the core of television imaging, with international agreements establishing compatible electronic systems to facilitate global broadcasting. In the United States, the National Television System Committee (NTSC) finalized standards in 1941 for a 525-line system operating at approximately 30 frames per second, which saw widespread commercial adoption starting in 1946 after wartime suspension, using interlaced scanning to reduce bandwidth while maintaining visual persistence. In Europe, post-war efforts led to the 625-line standard, adopted by systems like PAL in 1967 and SECAM in 1967, which scanned 625 lines at 25 frames per second to balance resolution and transmission efficiency across diverse infrastructures. The initial purpose of scan lines in these systems was to enable real-time image transmission by breaking down visual information into sequential horizontal lines, leveraging the human eye's persistence of vision to reconstruct a coherent, flicker-free moving picture from rapidly refreshed scans.

Adoption in Computing

The adoption of scan line concepts in computing emerged in the late 1960s, marking a shift from post-World War II vector displays—such as those in the Whirlwind computer (1951) and the SAGE system (1958)—to raster scan systems that drew inspiration from television's sequential line scanning for more efficient image rendering. Early raster experiments at Bell Labs, led by A. Michael Noll under Peter B. Denes, utilized a Honeywell DDP-224 computer to generate scanned displays, starting with 252 scan lines aligned to the Picturephone standard and later expanding to 525 lines to align with the U.S. television format. This software-driven approach to scan conversion represented a pivotal adaptation, enabling the simulation of filled areas and shading that vector systems struggled to achieve economically.

By the 1970s, raster scan integration accelerated with the rise of home computers, where affordability drove the use of consumer television sets as displays. The Altair 8800 (1975), one of the first commercially successful microcomputers, lacked built-in video output but supported add-on interfaces like Don Lancaster's TV Typewriter (1974), a low-cost kit that generated raster video signals for standard TV monitors, displaying 16 lines of 32 characters via character-based raster scanning. These interfaces leveraged the NTSC standard's 525 scan lines, allowing hobbyists to repurpose readily available televisions for computing output without specialized hardware. This compatibility became a hallmark of early personal computing, as seen in subsequent systems like the Apple I (1976) and Apple II (1977), which directly output to NTSC-compatible TVs for raster-based text and graphics.

The transition to raster scanning profoundly influenced computer graphics by facilitating the development of bitmapped displays, where memory bits mapped directly to screen pixels along scan lines. The Xerox Alto (1973), an influential prototype personal workstation, featured one of the first commercial raster displays with 875 total scan lines (808 visible) at 1024x808 resolution, enabling interactive windows, icons, and bitmapped fonts through scan line addressing. This shift from vector-drawn lines to pixel grids supported complex, filled imagery and laid the groundwork for modern graphical user interfaces, as raster systems proved more scalable for real-time rendering in resource-constrained environments.

Technical Implementation

In Cathode-Ray Tube Displays

In cathode-ray tube (CRT) displays, scan lines are generated through the precise control of an electron beam emitted by an electron gun at the rear of the tube. This beam is accelerated toward a phosphor-coated screen and deflected horizontally and vertically using electromagnetic deflection coils, known as the yoke, which surround the tube's neck. The horizontal deflection coils produce a rapidly varying magnetic field that sweeps the beam from left to right across the screen at the line rate, tracing out each horizontal scan line, while the vertical deflection coils move the beam downward to the next line more slowly. For standards like NTSC, the horizontal scan frequency is approximately 15.734 kHz, enabling the beam to complete thousands of lines per second to form a visible raster.

During the vertical retrace, when the beam returns to the top of the screen after completing all lines in a frame, vertical flyback blanking is applied by suppressing the electron gun's emission or reducing the beam intensity, preventing visible retrace lines from appearing on the screen. This blanking ensures a clean transition between frames without artifacts from the non-image-bearing return path. The number of scan lines directly determines the vertical resolution in CRT displays; for NTSC, there are 525 total scan lines per frame, but only about 480 are active and visible, establishing a vertical resolution of roughly 480 lines. In low-resolution modes, such as early video game console output with fewer effective lines, individual scan lines become more visible to the viewer due to the wider spacing relative to the screen height and the beam's finite thickness. Additionally, interline flicker can occur as a visible artifact when differences in brightness between adjacent scan lines cause perceived shimmering in static horizontal features.

In color CRTs, a shadow mask—a thin metal sheet with precisely punched apertures positioned behind the screen—ensures proper alignment by directing the beams from the separate red, green, and blue electron guns to their corresponding phosphor dots or stripes along each scan line. Misalignment of the mask relative to the beams or phosphors can result in color fringing or purity errors, where incorrect colors appear along scan lines, degrading image quality. The mask captures a significant portion of the beam (up to 80%), which inherently reduces brightness but maintains color accuracy during horizontal scanning.

The horizontal scan frequency $f_h$ is derived from the total number of scan lines per frame and the frame rate. For a general CRT system, $f_h = N_l \times f_r$, where $N_l$ is the number of scan lines per frame and $f_r$ is the frame rate in Hz. To arrive at this, note that each frame requires the beam to trace $N_l$ horizontal lines, and frames are produced at rate $f_r$, so the number of lines per second (i.e., $f_h$) is their product. For NTSC, with $N_l = 525$ and $f_r \approx 29.97$ Hz, $f_h = 525 \times 29.97 \approx 15{,}734$ Hz, confirming the standard value. This equation holds for both progressive and interlaced modes when $N_l$ accounts for the full frame. In progressive scanning modes, all scan lines are drawn sequentially in each frame, which can make line visibility more uniform compared to other methods.
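A quick numerical check of the relation $f_h = N_l \times f_r$ using the NTSC figures quoted above; the exact NTSC frame rate is 30000/1001 Hz, and the function name below is illustrative.

```python
# A minimal check of f_h = N_l * f_r for the NTSC case described in the text.

def horizontal_scan_frequency(lines_per_frame: float, frame_rate_hz: float) -> float:
    """Lines traced per second: each frame contributes lines_per_frame lines."""
    return lines_per_frame * frame_rate_hz

ntsc_fh = horizontal_scan_frequency(525, 30000 / 1001)  # ~29.97 Hz frame rate
print(f"{ntsc_fh:.3f} Hz")  # ~15734.266 Hz, the standard NTSC line rate
```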

Progressive and Interlaced Scanning

Progressive scanning involves drawing all scan lines of a video frame sequentially from top to bottom in a single pass, resulting in a complete image at the specified frame rate, such as 1080p at 60 frames per second. This method provides full vertical resolution and detail in every frame, enhancing motion clarity by eliminating temporal offsets between lines, and the total scanning rate is calculated as the frame rate multiplied by the number of scan lines. For instance, in a 1080p system at 60 Hz, the scanning processes 1080 lines × 60 frames/second = 64,800 lines per second, supporting smooth playback without artifacts.

In contrast, interlaced scanning alternates between odd-numbered and even-numbered scan lines across two separate fields to form each complete frame, as seen in formats like 480i and 1080i. Each field contains half the total lines, transmitted at twice the frame rate to reduce flicker, historically enabling bandwidth savings in television broadcasting by delivering only partial images more frequently. However, this approach can introduce drawbacks such as combing artifacts during motion, where stationary objects appear sharp but moving elements show jagged edges due to the half-line temporal displacement between fields.

The relationship between field and frame rates in interlaced systems is given by field rate = frame rate × 2, meaning a 30 frames-per-second signal produces 60 fields per second. This structure halves bandwidth requirements compared to progressive scanning at an equivalent full-refresh rate, as interlacing achieves the same flicker-reducing field rate (e.g., 60 Hz) while scanning only half the lines per field; for a system like early NTSC, progressive scanning at 60 full frames per second would require 525 × 60 = 31,500 lines per second, whereas interlaced scanning uses 262.5 lines per field × 60 fields per second = 15,750 lines per second, effectively reducing the scanning bandwidth by 50%.
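The arithmetic above can be expressed as a short sketch comparing progressive and interlaced line-scanning rates; the function names are illustrative.

```python
# A minimal sketch of the line-rate arithmetic described in the text.

def progressive_lines_per_second(lines_per_frame: int, frame_rate_hz: float) -> float:
    # Every line of every frame is drawn.
    return lines_per_frame * frame_rate_hz

def interlaced_lines_per_second(lines_per_frame: int, field_rate_hz: float) -> float:
    # Each field carries half the frame's lines; field rate = 2 * frame rate.
    return (lines_per_frame / 2) * field_rate_hz

print(progressive_lines_per_second(1080, 60))  # 64800.0 lines/s for 1080p at 60 Hz
print(progressive_lines_per_second(525, 60))   # 31500.0 lines/s for hypothetical 525-line progressive
print(interlaced_lines_per_second(525, 60))    # 15750.0 lines/s for 525-line interlaced at 60 fields/s
```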

Applications

In Video Standards

In video standards, scan lines form the foundational structure for analog television broadcasting, defining the vertical resolution and timing of image display across major global systems. The NTSC (National Television System Committee) standard, adopted in the United States and parts of the Americas, utilizes 525 total scan lines per frame, delivered as two interlaced fields of 262.5 lines each at 60 fields per second, resulting in approximately 30 frames per second. This interlaced approach alternates odd and even lines to reduce bandwidth while maintaining perceived motion smoothness. Similarly, the PAL (Phase Alternating Line) standard, prevalent in Western Europe, Australia, and much of Asia and Africa, employs 625 scan lines per frame in two interlaced fields of 312.5 lines each at 50 fields per second, yielding 25 frames per second. The SECAM (Séquentiel Couleur À Mémoire) system, a variant used primarily in France, Russia, and former Soviet states, also features 625 scan lines per frame with 50 interlaced fields per second, though it differs in color encoding by transmitting color-difference components sequentially rather than simultaneously as in PAL.

Key signal components synchronize the scanning process in these standards. The horizontal sync pulse, a negative-going signal marking the start of each scan line, has a duration of approximately 4.7 μs in both NTSC and PAL systems, ensuring precise horizontal retracing of the electron beam in cathode-ray tube displays. Following the sync pulse, during the back porch of the horizontal blanking interval, a color burst—a short reference signal of 8 to 10 cycles at the color subcarrier frequency (3.579545 MHz for NTSC, 4.433619 MHz for PAL)—is inserted to synchronize the color subcarrier's frequency and phase at the receiver. In SECAM, the color burst is absent due to its sequential color transmission, relying instead on frequency modulation of the chrominance signals across alternate lines.

The evolution from analog to digital video standards has preserved the scan line concept while enhancing resolution and flexibility. In the United States, the ATSC (Advanced Television Systems Committee) standard for digital television, implemented since 2009, supports high-definition formats such as 1080 lines at 30 or 60 frames per second, alongside legacy-compatible 480 interlaced lines, enabling sharper imagery without interlacing artifacts in progressive modes. Globally, digital transitions such as Europe's DVB-T mirror this shift, standardizing higher scan line counts for broadcast compatibility.
| Standard | Total Scan Lines | Fields/Second | Scan Type | Primary Regions | Example Resolution Variant |
|---|---|---|---|---|---|
| NTSC | 525 | 60 | Interlaced | United States, parts of the Americas, Japan | 480i (active lines) |
| PAL | 625 | 50 | Interlaced | Western Europe, Australia | 576i (active lines) |
| SECAM | 625 | 50 | Interlaced | France, Russia, former Soviet states | 576i (active lines) |
| ATSC (HD) | 720 or 1080 | Varies (e.g., 60 frames/s progressive, 60 fields/s interlaced) | Progressive or Interlaced | United States | 720p, 1080i |

In Computer Graphics and Rendering

In computer graphics, the scan-line rendering algorithm processes images row by row to efficiently determine visible surfaces for polygons, making it suitable for rasterization on limited hardware. Introduced in early work on curved surface display, the algorithm scans from the top of the frame to the bottom, exploiting vertical coherence to reuse computations across adjacent lines. This approach contrasts with pixel-by-pixel methods by focusing on horizontal spans (segments between edge intersections) per line, reducing redundancy in edge traversal and filling operations.

The core steps involve constructing an edge table from polygon vertices, sorted by minimum y-coordinate to detect intersections efficiently. For each scanline y, edges crossing that y are transferred to an active edge table (AET), sorted by x-intercept, and updated incrementally using slopes for the next line. Span filling then draws pixels between paired AET entries, with visibility resolved via per-span Z-buffer comparisons (storing depth values along the line for occlusion testing) or painter's algorithm ordering (drawing back-to-front spans). This line-at-a-time processing enhances efficiency over full-frame methods, particularly for scenes with coherent geometry, as demonstrated in extensions for antialiased rendering.

In early 2D video games, sprites were aligned to scanlines to match hardware rendering pipelines that composed frames line by line, limiting the number of active sprites per line (e.g., 8 on the NES) to manage timing and memory during beam-synchronized drawing. Modern retro game emulators replicate this by applying scanline effects through GPU shaders, overlaying semi-transparent horizontal lines to simulate CRT phosphor decay and beam scanning, enhancing the authentic low-resolution appearance on LCD displays.

Image file formats such as BMP and TIFF incorporate scanline padding to ensure efficient memory access and compatibility with hardware alignment requirements. In the BMP format, each scanline's length is rounded up to a multiple of 4 bytes (DWORD alignment) by adding zero bytes at the end, preventing misalignment in 32-bit systems; the padded length in bytes is given by $\left\lceil \frac{\text{width} \times \text{bytes per pixel}}{4} \right\rceil \times 4$, where width is the image width in pixels and bytes per pixel depends on bit depth (e.g., 3 for 24-bit RGB). TIFF follows similar rules for uncompressed data, padding each scanline to the next byte boundary, though many implementations extend this to 4-byte alignment for compatibility with BMP-like workflows and to optimize decoding on word-aligned processors.
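To make the active-edge-table procedure described earlier in this section concrete, the following is a minimal sketch of scan-line filling for a single simple polygon with integer vertex coordinates, without Z-buffering or antialiasing; the function scanline_fill and its data layout are illustrative rather than drawn from any particular implementation.

```python
# A minimal sketch of scan-line polygon filling with an active edge table (AET),
# assuming a simple polygon given as a list of (x, y) vertices with integer y.

def scanline_fill(vertices, set_pixel):
    """Fill a simple polygon by scanning one horizontal line at a time."""
    # Edge table: one record per non-horizontal edge, keyed by its minimum y,
    # storing the top y, the x at the minimum y, and the inverse slope.
    edges = []
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:
            continue  # horizontal edges are skipped; spans cover them anyway
        if y0 > y1:
            (x0, y0), (x1, y1) = (x1, y1), (x0, y0)
        edges.append({"y_min": y0, "y_max": y1, "x": float(x0),
                      "inv_slope": (x1 - x0) / (y1 - y0)})

    if not edges:
        return
    y = min(e["y_min"] for e in edges)
    y_end = max(e["y_max"] for e in edges)
    active = []  # active edge table for the current scan line

    while y < y_end:
        # Move edges whose top endpoint lies on this line into the AET.
        active += [e for e in edges if e["y_min"] == y]
        # Drop edges the scan line has passed.
        active = [e for e in active if e["y_max"] > y]
        # Sort by current x-intercept and fill spans between pairs.
        active.sort(key=lambda e: e["x"])
        for left, right in zip(active[::2], active[1::2]):
            for x in range(round(left["x"]), round(right["x"])):
                set_pixel(x, y)
        # Incrementally update intersections for the next scan line.
        for e in active:
            e["x"] += e["inv_slope"]
        y += 1

# Example: fill a small triangle into a set of lit pixel coordinates.
lit = set()
scanline_fill([(1, 1), (8, 3), (2, 7)], lambda x, y: lit.add((x, y)))
print(sorted(lit))
```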

Modern Relevance

In Digital Displays

In thin-film transistor liquid crystal displays (TFT-LCDs), the concept of scan lines has been adapted into virtual addressing mechanisms where horizontal gate lines, known as scanning lines, sequentially activate rows of thin-film transistors (TFTs) to control pixel illumination. Each scanning line connects to the gates of the TFTs in a specific row, allowing data signals from vertical source lines to update pixel states row by row, effectively creating virtual scan lines without physical electron beams. This row-addressing approach enables efficient matrix control in flat-panel technologies, supporting high-resolution images by limiting the number of active elements per scan cycle.

Progressive scanning has become the standard in modern digital monitors and televisions, including organic light-emitting diode (OLED) panels, where all lines of a frame are rendered sequentially from top to bottom. For instance, 4K Ultra High Definition (UHD) displays operate at 3840 × 2160 resolution with 2160 lines per frame, providing smooth motion without the temporal offsets of interlaced formats. This sequential line-by-line update aligns with the original raster principle from analog systems, ensuring compatibility while leveraging digital control for higher frame rates and reduced artifacts.

Digital interfaces such as HDMI and DisplayPort transmit video data in raster scan order, delivering pixel information sequentially along each scan line to maintain temporal and spatial coherence. In these protocols, encoded pixel streams are sent line by line within frames, with timing signals synchronizing the receiver's row updates to reconstruct the image progressively. For legacy interlaced content, modern displays employ deinterlacing algorithms that interpolate missing lines from adjacent fields, converting odd-even field pairs into full progressive frames to suit non-interlaced panels. These algorithms, often implemented in hardware such as FPGAs, analyze motion vectors and spatial patterns to minimize artifacts in broadcast or archived video.

Resolution scaling in LCD and OLED technologies further refines perceived scan line density through subpixel rendering, which treats the individual red, green, and blue subpixels within each pixel as independent elements to enhance horizontal resolution. By optimizing modulation across subpixels, this technique increases apparent line sharpness and density, effectively tripling horizontal detail in some configurations without altering physical scan line counts. Subpixel rendering is particularly effective in portable and high-density displays, where it improves text and edge clarity by accounting for the subpixel layout during rasterization.
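As a minimal sketch of the simplest kind of deinterlacing mentioned above (line interpolation from a single field, without motion analysis), the following assumes a field stored as a 2D NumPy array of luma values; the function deinterlace_bob and its parameters are illustrative, not taken from any display or FPGA implementation.

```python
import numpy as np

# A minimal "bob" deinterlacing sketch: one field holds only the odd or even
# scan lines of a frame, and the missing lines are interpolated from neighbours.

def deinterlace_bob(field: np.ndarray, parity: str = "even") -> np.ndarray:
    """Expand one field (rows x cols of luma values) into a full progressive frame."""
    rows, cols = field.shape
    frame = np.zeros((rows * 2, cols), dtype=field.dtype)
    offset = 0 if parity == "even" else 1
    frame[offset::2] = field  # place the transmitted scan lines
    # Fill the absent lines by averaging the field lines above and below.
    for y in range(1 - offset, rows * 2, 2):
        above = frame[max(y - 1, offset)]
        below = frame[min(y + 1, rows * 2 - 2 + offset)]
        frame[y] = ((above.astype(np.int32) + below) // 2).astype(field.dtype)
    return frame

# Example: a 3-line even field becomes a 6-line progressive frame.
field = np.array([[10, 10], [50, 50], [90, 90]], dtype=np.uint8)
print(deinterlace_bob(field, parity="even"))
```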

Legacy Effects and Artifacts

In retro gaming emulation, scan lines persist as visible effects through software simulations that replicate the horizontal lines characteristic of cathode-ray tube (CRT) displays. These are achieved via CRT shaders in emulator front ends such as RetroArch, which apply post-processing filters to add authentic scan line patterns, such as darkened horizontal bands between lit lines, enhancing the nostalgic aesthetic of low-resolution games without altering core gameplay. For instance, shaders like CRT-Guest and CRT-Royale generate these lines by modulating brightness and geometry per scan line, often at integer multiples of the original resolution to preserve pixel integrity.

Persistent artifacts from scan line-based systems include interlacing combing, where fast motion in upconverted interlaced video produces jagged, comb-like distortions along vertical edges due to the temporal offset between odd and even fields. This effect becomes prominent when legacy NTSC or PAL content is upscaled for modern progressive displays, as the half-frame delay between scan line fields (e.g., 1/60th of a second in NTSC) misaligns moving objects across lines. Interlaced scanning contributes to such artifacts by prioritizing temporal resolution over spatial uniformity.

Another legacy artifact ties to the flicker fusion threshold, the frequency at which intermittent light appears steady, influenced by CRT scan line refresh rates; rates below 50-60 Hz often caused perceptible flicker in older displays due to phosphor persistence decay between vertical scans. Studies on CRT image regeneration showed that phosphor types like P-12 required higher refresh rates (around 60-72 Hz) to exceed this threshold and eliminate flicker, particularly with sequential horizontal scan orders common in 1980s systems.

Culturally, scan lines influence glitch art and the demoscene, where artists manipulate them for aesthetic disruption, such as overlaying horizontal noise or distortions to evoke analog failure. In the demoscene, scan line effects like per-line color cycling simulate CRT behaviors in real-time demos, building on 1980s hardware limitations. For example, Commodore 64 games from that era typically rendered 200 visible scan lines in high-resolution mode within a 262-line field, enabling effects like raster interrupts for dynamic visuals in titles such as those by Maniacs of Noise.
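The darkened-band effect that CRT shaders approximate can be sketched as a simple image operation: integer-upscale a low-resolution frame and dim alternate output rows. The function apply_scanlines and its parameters below are illustrative, not taken from any emulator or shader package.

```python
import numpy as np

# A minimal software scan line overlay: after integer upscaling, every other
# output row is darkened to suggest the gaps between CRT scan lines.

def apply_scanlines(frame: np.ndarray, scale: int = 2, darken: float = 0.5) -> np.ndarray:
    """Integer-upscale an (H, W, 3) frame and dim alternate output rows."""
    upscaled = np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)
    out = upscaled.astype(np.float32)
    out[1::scale] *= darken          # dim one row out of every `scale` rows
    return out.astype(frame.dtype)

# Example: a 2x2 white image becomes a 4x4 image with dimmed rows 1 and 3.
tiny = np.full((2, 2, 3), 255, dtype=np.uint8)
print(apply_scanlines(tiny)[:, 0, 0])  # row brightness pattern: 255 127 255 127
```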
