Video
from Wikipedia

A one-minute animated video showing an example of a media production process

Video is an electronic medium for the recording, copying, playback, broadcast, and display of moving-image media.[1] Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which, in turn, were replaced by flat-panel displays.

Video systems vary in display resolution, aspect ratio, refresh rate, color reproduction, and other qualities. Both analog and digital video can be carried on a variety of media, including radio, magnetic tape, optical discs, computer files, and network streaming.

Etymology


The word video comes from the Latin video, "I see," the first-person singular present indicative of videre, "to see".[2]

History


Analog video

NTSC composite video signal (analog)

Video developed from facsimile systems developed in the mid-19th century. Mechanical video scanners, such as the Nipkow disk, were patented as early as 1884, but it took several decades before practical video systems could be developed. Whereas the medium of film records using a sequence of miniature photographic images visible to the naked eye, video encodes images electronically, turning them into analog or digital electronic signals for transmission and recording.[3]

Video was originally exclusively a live technology, first developed for mechanical television systems. These were quickly replaced by cathode-ray tube (CRT) television systems. Live video cameras used an electron beam, which would scan a photoconductive plate with the desired image and produce a voltage signal proportional to the brightness in each part of the image. The signal could then be sent to televisions, where another beam would receive and display the image.[4] Charles Ginsburg led an Ampex research team in developing one of the first practical video tape recorders (VTR). In 1951, the first of these captured live images from television cameras by writing the camera's electrical signal onto magnetic videotape. VTRs sold for around US$50,000 in 1956, and videotapes cost US$300 per one-hour reel.[5] However, prices gradually dropped over the years, and in 1971, Sony began selling videocassette recorder (VCR) decks and tapes to the consumer market.[6]

Digital video


Digital video is capable of higher quality and, eventually, much lower cost than its analog predecessor. After the commercial introduction of the DVD in 1997 and the Blu-ray Disc in 2006, sales of videotape and recording equipment fell. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit, and transmit digital video, further reducing the cost of video production and allowing programmers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. The development of high-resolution video cameras with improved dynamic range and broader color gamuts, along with the introduction of high-dynamic-range digital intermediate data formats with improved color depth, has caused digital video technology to converge with film technology. Since 2013, the use of digital cameras in Hollywood has surpassed the use of film cameras.[7]

Characteristics


Frame rate


Frame rate—the number of still pictures per unit of time—ranges from six or eight frames per second (frame/s or fps) for older mechanical cameras to 120 or more for new professional cameras. The PAL and SECAM standards specify 25 fps, while NTSC specifies 29.97 fps.[8] Film is shot at a slower frame rate of 24 frames per second, which slightly complicates the process of transferring film to video. The minimum frame rate to achieve persistence of vision (the illusion of a moving image) is about 16 frames per second.[9]
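
A short Python sketch of this arithmetic, using the exact rational form of the NTSC rate and a commonly cited 3:2 pulldown pattern for the film-to-video transfer mentioned above (illustrative only):

```python
from fractions import Fraction

# NTSC's nominal 30 fps was slowed by a factor of 1000/1001 with the
# introduction of color, giving the familiar 29.97 fps figure.
ntsc = Fraction(30000, 1001)
print(float(ntsc))             # 29.97002997...
print(float(1 / ntsc) * 1000)  # frame duration: ~33.37 ms

# Telecine ("3:2 pulldown") spreads 4 film frames over 10 video fields
# so that 24 fps film plays at the 60-field NTSC rate.
fields_per_film_frame = [2, 3, 2, 3]
assert sum(fields_per_film_frame) == 4 * 60 // 24  # 10 fields
```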

Interlacing vs. progressive-scan systems


Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is the optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.[10][11]

In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.[10][11]

NTSC, PAL, and SECAM are interlaced formats. In video resolution notation, 'i' denotes interlaced scanning. For example, PAL video format is often described as 576i50, where 576 indicates the number of visible horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.[11][12]

When displaying a natively interlaced signal on a progressive scan device, the overall spatial resolution is degraded by simple line doubling—artifacts, such as flickering or comb effects in moving parts of the image, appear unless special signal processing eliminates them.[10][13] A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an LCD television, digital video projector, or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.[11][12][13]
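
A minimal NumPy sketch of field separation and two simple deinterlacing strategies on a toy frame; "weave" and "bob" are standard names for these basic methods, though real deinterlacers are considerably more sophisticated:

```python
import numpy as np

# Toy 8x8 "frame"; rows stand in for horizontal scan lines.
frame = np.arange(64).reshape(8, 8)

# Interlaced capture: even-indexed lines form one field, odd-indexed
# lines the other.
top_field = frame[0::2]
bottom_field = frame[1::2]

# "Weave" deinterlacing: interleave both fields back into one frame.
# Clean for static content, but combs where motion occurred between
# the two field capture times.
woven = np.empty_like(frame)
woven[0::2], woven[1::2] = top_field, bottom_field

# "Bob" deinterlacing: line-double a single field, trading vertical
# resolution for freedom from comb artifacts.
bobbed = np.repeat(top_field, 2, axis=0)

assert woven.shape == bobbed.shape == frame.shape
assert np.array_equal(woven, frame)  # no motion in this toy example
```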

Aspect ratio

Comparison of common cinematography and traditional television (green) aspect ratios

In video, an aspect ratio is the proportional relationship between the width and height of a video screen and video picture elements. All popular video formats are landscape, with a traditional television screen having an aspect ratio of 4:3, or about 1.33:1. High-definition televisions have an aspect ratio of 16:9, or about 1.78:1. The ratio of a full 35mm film frame with its sound track (the "Academy ratio") is 1.375:1.[14][15]

Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. The 720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display.[14][15]
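
A small worked example of this relationship, assuming the commonly cited NTSC-derived pixel aspect ratios of 8:9 (for 4:3 display) and 32:27 (for 16:9 display); exact PAR values vary by convention:

```python
from fractions import Fraction

def display_aspect(width, height, par):
    """Display aspect ratio = storage aspect ratio x pixel aspect ratio."""
    return Fraction(width, height) * par

# The same 720x480 raster shown two ways:
print(display_aspect(720, 480, Fraction(8, 9)))    # 4/3  -> 4:3 display
print(display_aspect(720, 480, Fraction(32, 27)))  # 16/9 -> 16:9 display
```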

The popularity of video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical-video viewing in her 2015 Internet Trends Report, noting that it had grown from 5% of viewing in 2010 to 29% in 2015. Vertical-video ads are watched in their entirety nine times more frequently than those in landscape ratios.[16]

Color model and depth

Example of U-V color plane, Y value=0.5

The color model maps encoded color values to the visible colors reproduced by the system. There are several such representations in common use: typically, YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television, and YCbCr is used for digital video.[17][18]

The number of distinct colors a pixel can represent depends on the color depth expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block, and the same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2-pixel blocks (4:2:2) or 75% using 4-pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.[12][17][18]
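
A toy NumPy sketch of the block averaging described above, applied to a single 4×4 chroma plane (real codecs use different filters, but the data-reduction arithmetic is the same):

```python
import numpy as np

# One 4x4 chroma plane (Cb or Cr); the luma plane would stay full size.
chroma = np.array([[10., 12., 20., 22.],
                   [11., 13., 21., 23.],
                   [30., 32., 40., 42.],
                   [31., 33., 41., 43.]])
h, w = chroma.shape

# 4:2:2 - average horizontal pairs: half the chroma samples remain.
chroma_422 = chroma.reshape(h, w // 2, 2).mean(axis=2)

# 4:2:0 - average each 2x2 block: a quarter of the samples remain.
chroma_420 = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(chroma_422.size / chroma.size)  # 0.5  -> 50% reduction
print(chroma_420.size / chroma.size)  # 0.25 -> 75% reduction
```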

Quality


Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in the ITU-R recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video, followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying."
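
A minimal PSNR implementation for 8-bit frames, with synthetic data standing in for a reference and an impaired copy:

```python
import numpy as np

def psnr(reference, impaired, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - impaired.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * np.log10(peak ** 2 / mse)

# Synthetic reference frame and a copy with small uniform noise.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(480, 720))
noisy = np.clip(ref + rng.integers(-5, 6, size=ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))  # roughly 38 dB for +/-5 noise
```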

Compression (digital only)


Uncompressed video delivers maximum quality, but at a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern compression standards are MPEG-2, used for DVD, Blu-ray, and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP), and the Internet.[19][20]
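
To see why compression matters, a back-of-the-envelope calculation of the uncompressed data rate for 1080p30 video, assuming 8-bit 4:2:0 sampling (12 bits per pixel on average); the comparison bitrate is a typical figure, not a fixed property of the codec:

```python
# Uncompressed data rate for 1080p30 with 8-bit 4:2:0 sampling
# (8 luma bits + 4 average chroma bits = 12 bits per pixel).
width, height, fps, bits_per_pixel = 1920, 1080, 30, 12
rate = width * height * fps * bits_per_pixel
print(rate / 1e6, "Mbit/s")  # about 746 Mbit/s

# For comparison, a typical H.264 distribution encode of the same
# material might run on the order of 4-8 Mbit/s, a roughly 100:1
# reduction achieved largely through interframe (GOP-based) coding.
```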

Stereoscopy


Stereoscopic video for 3D film and other applications can be displayed using several different methods:[21][22]

  • Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
  • Anaglyph 3D, where one channel is overlaid with two color-coded layers. This left and right layer technique is occasionally used for network broadcasts or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
  • One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that synchronize to the video to alternately block the image for each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications, such as in a Cave Automatic Virtual Environment, but reduces effective video framerate by a factor of two.

Formats


Different layers of video transmission and storage each provide their own set of formats to choose from.

For transmission, there is a physical connector and signal protocol (see List of video connectors). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution, and color space.

Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format, of which a number are available.

Analog video


Analog video is a video signal represented by one or more analog signals. Analog color video signals include luminance (Y) and chrominance (C). When combined into one channel, as in NTSC, PAL, and SECAM among others, the result is called composite video. Analog video may also be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats.

Analog video is used in both consumer and professional television production applications.

Digital video


A number of digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), and DisplayPort.

Transport medium


Video can be transmitted or transported in a variety of ways, including wireless terrestrial television as an analog or digital signal, or coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single or dual coaxial cable system using the serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards.

Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream, SMPTE 2022 and SMPTE 2110.

Display standards


Digital television


Digital television broadcasts use MPEG-2 and other video coding formats; regional standards include ATSC, DVB, and ISDB.

Analog television


Analog television broadcast standards include NTSC, PAL, and SECAM.

An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.
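
A small calculation of the blanking overhead in the two main analog line standards, using the active-line counts commonly cited for their digitized forms (analog active-line counts differ slightly):

```python
# Total vs. visible scan lines per frame; the difference is the
# vertical blanking interval.
systems = {
    "525-line (NTSC)": (525, 480),
    "625-line (PAL/SECAM)": (625, 576),
}
for name, (total, visible) in systems.items():
    print(f"{name}: {total - visible} blanking lines per frame")
```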

Computer displays


Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate.

Recording

A VHS video cassette tape

Early television was almost exclusively a live medium, with some programs recorded to film for historical purposes using Kinescope. The analog video tape recorder was commercially introduced in 1951. Numerous recording formats followed; all formats of note were sold to and used by broadcasters, video producers, or consumers, or were important historically.[23][24]

Digital video tape recorders offered improved quality compared to analog recorders.[24][26]

Optical storage media offered an alternative, especially in consumer applications, to bulky tape formats.[23][27]

Digital encoding formats


A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder. The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video.[28]

from Grokipedia
Video refers to the electronic technology for capturing, recording, processing, transmitting, reproducing, and displaying sequences of moving visual images, typically synchronized with audio. It originated in the early 20th century with television and evolved through videotape formats, optical discs, digital broadcasting, and internet-based streaming to become a core medium for entertainment, education, communication, and information sharing worldwide.

The development of video technology began with early experiments in mechanical television in the 1920s, transitioning to electronic systems in the 1930s that enabled practical television broadcasting. The introduction of videotape in the 1950s revolutionized recording and playback, allowing content to be stored and reused without live transmission. Digital video emerged in the late 20th century, offering superior quality, compression, and editing capabilities through standards like MPEG, leading to widespread adoption in consumer devices such as camcorders, DVDs, and digital televisions.

Modern video encompasses a broad range of resolutions and formats, from standard definition to 4K, 8K, and beyond, with high dynamic range and wider color gamuts enhancing visual fidelity. Video delivery has shifted dramatically to digital platforms, with streaming services enabling on-demand access over the internet, supplanting physical media and broadcast television in many contexts. Video now serves diverse applications, including film and television production, education, video conferencing, virtual reality, and short-form content on social media, fundamentally shaping contemporary media consumption and communication.

Overview

Definition

Video is an electronic technology for capturing, recording, processing, transmitting, reproducing, and displaying sequences of moving visual images, typically synchronized with audio. The term "video" specifically refers to this electronic medium, distinguishing it from film recorded on photographic stock through photochemical processes. While both create the illusion of motion through rapid display of still images, video relies on electronic signals rather than chemical development and projection of film frames.

Video is composed of individual frames—still images—displayed in sequence at a specific frame rate to produce the perception of continuous motion. Common frame rates include values such as 24, 30, or 60 frames per second, with higher rates generally providing smoother motion. In analog video, images are represented by continuous electrical signals that scan the display area line by line, forming a pattern of scan lines. In digital video, images are sampled into a rectangular grid of pixels, each assigned discrete values for color and brightness, allowing for precise manipulation, compression, and storage. Audio accompaniment is synchronized with the image sequence to create a cohesive presentation, though video can exist without sound in some applications.

Etymology

The term video derives from the Latin verb vidēre, meaning "to see," specifically its first-person singular present indicative form video, meaning "I see." The word entered English in the 1930s as a technical term in the field of television engineering, where it initially referred to the visual component of television signals in contrast to the auditory component termed "audio." This usage appeared in phrases such as "video signal" and "video frequency" to describe the electrical representation of moving images.

By the 1950s, with the introduction of magnetic tape recording systems, "video" expanded to denote recorded moving images, as in "videotape" or "video recording." In modern usage, "video" has broadened to encompass any electronic reproduction of moving visual sequences, including digital files, online clips, and streaming media. It is distinguished from terms like "television" (originally referring to distant transmission of images, from Greek tele "far" and Latin visio "seeing"), "cinema" or "film" (emphasizing projected motion pictures on photographic film), and "moving picture" (a more literal early descriptor). The term "video" now often implies shorter-form, electronic, or digital formats, particularly in contexts like online platforms, whereas "film" retains connotations of longer-form cinematic works.

History

Early developments

The early developments in video technology originated in the 19th century with optical devices that exploited persistence of vision to create the illusion of motion from a sequence of static images. The phenakistoscope, invented by Joseph Plateau in 1833, consisted of a slotted disc with sequential drawings that, when spun and viewed in a mirror, produced animated motion. Similarly, the zoetrope, developed by William George Horner in 1834, used a cylindrical drum with slits and interior images to achieve the same effect, enabling short animated loops. These mechanical precursors demonstrated the perceptual basis for moving images but lacked electronic capture or transmission.

In the 1920s, the first practical television systems employed mechanical scanning. John Logie Baird in the United Kingdom demonstrated the first working television system in 1925, transmitting low-resolution moving silhouette images using a Nipkow disc scanner and neon lamp display. In 1926, Baird achieved the transmission of recognizable human faces and in 1928 demonstrated color television mechanically. Concurrently, Charles Francis Jenkins in the United States developed similar mechanical systems, transmitting simple images as early as 1923 and establishing experimental broadcasts by 1925. These mechanical approaches relied on rotating discs to scan and reproduce images sequentially but were limited by mechanical complexity and low resolution.

The transition to electronic television occurred in the late 1920s and 1930s with key breakthroughs in cathode-ray tube technology. Vladimir Zworykin, working initially at Westinghouse and later RCA, invented the iconoscope camera tube in 1923, which used an electron beam to scan a photosensitive surface and convert light into electrical signals; he patented an improved version in 1938. Philo Farnsworth independently developed an all-electronic system, transmitting the first electronic image (a straight line) in 1927 and a complete human image in 1928. By 1931, Farnsworth had demonstrated full electronic transmission of motion pictures. These electronic systems offered superior image quality and stability compared to mechanical scanning.

Regular broadcasting began in the late 1930s. In the United Kingdom, the BBC launched the world's first regular high-definition television service in 1936 using an electronic system developed by Marconi-EMI, initially alternating with Baird's mechanical system before fully adopting electronic technology. In the United States, RCA and NBC introduced regular electronic television broadcasts in 1939 following public demonstrations at the New York World's Fair, using 441-line standards. During World War II, television development largely halted as manufacturing resources shifted to military priorities such as radar, though limited broadcasts continued in major cities for civilian information and morale. These foundational experiments established the electronic basis for video, paving the way for postwar expansion.

Analog video era

The analog video era (roughly the 1950s through the 1990s) was the period when analog signal standards formed the foundation for broadcast television, professional video production, and the emergence of home video worldwide.

The widespread adoption of color television was a defining milestone. In the United States, the NTSC color standard was approved by the Federal Communications Commission in December 1953. This system added a color subcarrier to the existing 525-line, 60-field black-and-white signal, ensuring backward compatibility so color broadcasts could be viewed on monochrome receivers without alteration. NTSC became the primary color standard in North America, Japan, and much of South America. In Europe and elsewhere, alternative systems were developed to address limitations in NTSC, particularly color stability under varying transmission conditions. The PAL system, developed primarily in West Germany, was first implemented in 1967 and became dominant in Western Europe, much of Asia, Africa, and Australia. The Séquentiel de couleur à mémoire (SECAM) system, developed in France, was introduced in the late 1960s and adopted in France, the Soviet Union, Eastern Europe, and parts of Africa.

A transformative development in professional video was the introduction of videotape recording. In April 1956, Ampex publicly demonstrated the VR-1000, the first practical videotape recorder, using 2-inch quadruplex tape. The machine employed four rotating video heads to record high-bandwidth analog signals transversely across the tape, enabling reliable recording and immediate playback of television programs. This eliminated dependence on live transmission, allowed time-shifting and archiving, and rapidly became the global standard for broadcast facilities through the 1960s and 1970s.

The consumer video era began in the mid-1970s with the introduction of videocassette recorders (VCRs). Sony launched Betamax in 1975, initially offering one-hour recording capacity and superior picture quality due to a higher bandwidth and better chrominance handling. In 1976, JVC introduced the VHS (Video Home System) format, which supported longer recording times (two hours initially, later extended to four and six hours), used less expensive tape and hardware, and was aggressively licensed to multiple manufacturers. The resulting format war continued through the 1980s, with VHS ultimately achieving market dominance due to longer recording duration, broader availability of pre-recorded movies, lower prices, and greater licensing support. By the mid-1980s, VHS had become the prevailing consumer standard in most countries.

Concurrent with these developments, cable television expanded dramatically, especially in the United States. Originally built in the late 1940s and 1950s to deliver broadcast signals to areas with poor reception, cable evolved in the 1970s and 1980s into multi-channel platforms through the use of satellite-delivered programming. The launch of HBO in 1972 and CNN in 1980 marked the beginning of national pay-TV and 24-hour news channels, greatly increasing programming diversity and viewer choice beyond traditional terrestrial broadcasting.

The analog video era reached its height in the 1980s, with mature broadcast standards, widespread professional and consumer recording capability, and expanding distribution channels. The transition to digital video began in earnest in the late 1980s and 1990s.

Digital video transition

The transition from analog to digital video occurred primarily during the 1990s and early 2000s, fundamentally changing recording, storage, editing, and distribution practices by replacing tape-based analog signals with digital data streams.

The DV (Digital Video) format, introduced in 1995 by a consortium of manufacturers including Sony, Panasonic, JVC, and others, marked a key milestone in consumer and professional digital video recording. DV used intraframe compression to store 720×480 (NTSC) or 720×576 (PAL) resolution video at 25 Mbit/s on small 6.35 mm tape cassettes, offering near-lossless quality, timecode support, and direct digital transfer to editing systems without the generation loss common in analog formats.

DVD (Digital Versatile Disc) was launched in Japan in November 1996 and in the United States in 1997, providing a high-capacity optical medium capable of storing compressed MPEG-2 video and audio. DVDs offered significantly improved picture and sound quality over VHS, interactive menus, and multiple language tracks, rapidly becoming the standard for home video distribution and accelerating the adoption of digital playback in consumer electronics.

Digital television broadcasting standards emerged during this period. In the United States, the FCC adopted the ATSC (Advanced Television Systems Committee) standard in 1996, enabling high-definition broadcasts with MPEG-2 video and Dolby Digital audio. In Europe and other regions, the DVB (Digital Video Broadcasting) family of standards, developed from the early 1990s and deployed in the late 1990s onward, covered satellite (DVB-S), cable (DVB-C), and terrestrial (DVB-T) delivery, facilitating the gradual replacement of analog broadcasts with digital ones.

Early adoption of nonlinear editing systems also characterized the period, with platforms such as Avid Media Composer gaining widespread use in professional production from the early 1990s. These computer-based systems allowed editors to access, rearrange, and process digitized video clips randomly rather than linearly on tape, significantly speeding workflows and enabling more creative flexibility. This transition established digital video as the dominant format across consumer, professional, and broadcast applications, setting the stage for later high-definition and streaming advancements.

Modern digital and streaming era

The modern digital and streaming era, beginning around the early 2010s, marked a shift toward higher resolutions, streaming services, and IP-based delivery dominating video consumption.

Ultra-high-definition (UHD) standards emerged, with 4K (3840×2160 pixels) gaining widespread adoption following ITU-R Recommendation BT.2020 in 2012, which defined parameters for 4K and 8K production and display. 8K resolution (7680×4320 pixels) followed, with initial demonstrations at events like the 2016 Rio Olympics and consumer availability increasing in the late 2010s. High-dynamic-range (HDR) technologies, including HDR10 (standardized in 2015) and proprietary systems like Dolby Vision, expanded contrast ratios and color gamuts, significantly improving image quality on compatible displays.

Streaming services experienced rapid growth, with Netflix transitioning to global dominance through original programming starting in 2013 and reaching over 200 million subscribers by 2020. YouTube, already established since 2005, became the primary platform for user-generated and professional video, with features expanding for live events and gaming. Live streaming platforms such as Twitch and Facebook Live further popularized real-time video interaction.

This era saw video delivery move almost entirely to IP networks, with adaptive bitrate streaming enabling seamless playback across varying connection speeds. Mobile video consumption surged as smartphones with high-resolution screens and faster mobile networks (4G and later 5G) became the primary viewing devices, with reports indicating that mobile accounted for over 50% of video watch time in many markets by the mid-2010s. Cloud-based production and distribution advanced, with services like Amazon Web Services and Microsoft Azure providing scalable encoding, storage, and content delivery networks (CDNs) for global reach, reducing reliance on physical media and on-premises infrastructure. These developments collectively established streaming as the dominant mode of video access, fundamentally changing production, distribution, and viewing habits worldwide.

Analog video

Signal standards

Analog video signals are governed by television standards that specify key parameters such as the number of scan lines, field rate, and color encoding method. The three primary analog color standards are NTSC, PAL, and SECAM, each developed to balance technical constraints with broadcast requirements in different regions.

NTSC (National Television System Committee), used in North America, Japan, and parts of South America, employs 525 interlaced scan lines with approximately 60 fields per second (59.94 fields, or 29.97 frames per second, adjusted to accommodate the color subcarrier). Color is encoded using quadrature modulation of a subcarrier at approximately 3.58 MHz superimposed on the luminance signal. The standard allocates 6 MHz of bandwidth per channel for broadcast.

PAL (Phase Alternating Line), dominant in Europe, Australia, Asia, and Africa, uses 625 interlaced scan lines and 50 fields per second (25 frames per second), aligned with 50 Hz mains power. Color encoding alternates the phase of the color subcarrier (approximately 4.43 MHz) on successive lines to reduce color errors from signal distortions. PAL broadcasts typically require 7 MHz or 8 MHz of channel bandwidth depending on the variant.

SECAM, developed in France and adopted in parts of Europe, Africa, and the Middle East, also uses 625 lines and 50 fields per second. Unlike NTSC and PAL, SECAM transmits color components sequentially on alternate lines using frequency modulation rather than quadrature modulation, with subcarriers around 4.25 MHz and 4.41 MHz for the two color-difference signals. SECAM systems generally use 8 MHz channel bandwidth.

Analog video signals can be transmitted in different formats that vary in quality and separation of components. Composite video combines luminance (brightness) and chrominance (color) into a single signal, making it simple but prone to artifacts such as color bleeding and dot crawl. S-Video separates luminance and chrominance onto two wires, improving sharpness and color fidelity over composite. Component video separates the signal into three channels—luminance (Y) and two color-difference signals—offering the highest quality among analog formats by avoiding cross-modulation between brightness and color information.

Recording formats

The recording of video in the analog era relied primarily on magnetic tape. These formats defined professional and consumer video recording from the 1950s through the 1990s, with distinct standards for bandwidth, tape width, cassette design, and recording quality.

Professional analog videotape formats included the 2-inch quadruplex format (Quad), introduced by Ampex in 1956. Quad used 2-inch-wide tape and transverse rotating heads to record high-quality video for broadcast television, becoming the standard for television production and archiving until the 1980s. The format offered excellent image quality and editability but required large, expensive machines and open-reel tapes.

In the 1970s, smaller cassette formats emerged for professional and institutional use. Sony's U-matic, launched in 1971, was the first videocassette format, using 3/4-inch tape in a cassette. It became widely adopted in electronic news gathering (ENG) and educational settings due to its portability and reliability compared to open-reel machines.

The late 1970s and 1980s saw a competition between consumer formats. Sony introduced Betamax in 1975, offering superior picture quality and initially better audio, using 1/2-inch tape. However, JVC's VHS, released in 1976, provided longer recording times and was licensed more widely to manufacturers. The resulting "format war" ended with VHS dominating the consumer market by the mid-1980s, largely due to longer play times, lower costs, and greater availability of pre-recorded tapes.

Sony later developed the Video8 format in 1985, using 8mm tape for compact camcorders. Video8 was followed by Hi8 in 1989, which improved resolution and color performance while retaining the compact cassette. These 8mm formats proved popular for home video recording due to their small size and portability.

Transmission

Analog video transmission in the broadcast era relied on three primary methods: terrestrial broadcasting, cable distribution, and satellite transmission, all using radio-frequency carriers to deliver signals to receivers.

In terrestrial broadcasting, the video signal was amplitude-modulated onto an RF carrier using vestigial sideband (VSB) modulation, which transmitted the full upper sideband and a vestige of the lower sideband to reduce bandwidth while maintaining compatibility with receivers designed for double-sideband modulation. The audio signal was frequency-modulated on a separate carrier. Channel bandwidths were standardized at 6 MHz for NTSC in North America and Japan, and 7 or 8 MHz for PAL and SECAM in Europe and elsewhere. Signals were transmitted in VHF (54–216 MHz) and UHF (470–890 MHz) frequency bands from high-power transmitters, enabling reception via antennas over tens of kilometers depending on terrain and power.

Cable television distribution delivered analog video signals through coaxial cable networks. At the headend, broadcast or satellite-fed signals were received, processed, and remodulated onto RF carriers within a broadband frequency spectrum typically spanning 54–550 MHz or higher. Each channel occupied a 6 MHz or 8 MHz slot, with the cable system using amplifiers, taps, and splitters to maintain signal integrity over long distances while minimizing interference in the shielded environment. This allowed cable systems to carry dozens of channels simultaneously, far exceeding the capacity of terrestrial broadcasting in a given area.

Analog satellite transmission utilized frequency modulation for the video signal, which provided superior signal-to-noise performance over long distances compared to amplitude modulation. Signals were uplinked to geostationary satellites on C-band (approximately 4–8 GHz) or Ku-band (11–18 GHz) frequencies, then downlinked to receiving dishes. The wide bandwidth of FM carriers (often 20–36 MHz per channel) supported high-quality video and multiple audio subcarriers. This method was widely used for direct reception in rural areas and for distribution to cable headends before digital compression became dominant.

These analog transmission methods predominated from the mid-20th century until the gradual shift to digital broadcasting in the late 1990s and 2000s.

Digital video

Digitization and sampling

Video digitization is the process of converting continuous analog video signals into discrete digital representations through sampling and quantization. This conversion enables storage, processing, transmission, and display in digital systems while preserving as much image fidelity as possible.

The Nyquist-Shannon sampling theorem is fundamental to digitization, requiring that the sampling frequency be at least twice the highest frequency component in the analog signal to avoid aliasing and allow accurate reconstruction. In analog video, luminance (brightness) signals typically have higher bandwidth than chrominance (color) signals, so sampling rates are chosen accordingly. For example, ITU-R BT.601 specifies a luminance sampling frequency of 13.5 MHz for standard-definition video, which is more than twice the typical analog bandwidth of approximately 5-6 MHz. Chrominance signals are sampled at lower rates in many systems to optimize data efficiency.

Following sampling, quantization assigns discrete numerical values to the sampled amplitudes, introducing quantization error. Bit depth determines the number of quantization levels: 8-bit systems provide 256 levels per channel, while 10-bit systems offer 1024 levels, reducing banding in gradients and improving dynamic range. Higher bit depths are common in professional and HDR applications.

Chroma subsampling exploits the human visual system's greater sensitivity to luminance than to color, reducing data for color information without significant perceived loss. The notation (e.g., 4:4:4, 4:2:2, 4:2:0) indicates relative sampling rates for luma and chroma components. In 4:4:4, chroma is sampled at the same rate as luma both horizontally and vertically, preserving full color resolution. In 4:2:2, chroma is subsampled horizontally by a factor of 2, halving the chroma horizontal resolution. In 4:2:0, chroma is subsampled by 2 both horizontally and vertically, further reducing data while maintaining compatibility with common codecs and distribution formats. These schemes are widely used in digital video standards, with 4:2:2 common in broadcast production and 4:2:0 prevalent in consumer distribution.

Resulting digital resolutions vary depending on the original signal and the applied sampling parameters, but are standardized in specifications like BT.601 and BT.709 to ensure interoperability.
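
A quick check of what the 13.5 MHz figure implies per scan line, using the standard line durations of the two systems (the 720 active-sample count is the BT.601 figure):

```python
# Samples per scan line implied by the 13.5 MHz luma sampling rate.
f_s = 13.5e6  # Hz

line_625 = 64e-6                     # 625-line systems: 64 us per line
line_525 = 1 / (525 * 30000 / 1001)  # 525-line systems: ~63.56 us

print(round(f_s * line_625))  # 864 total samples per line
print(round(f_s * line_525))  # 858 total samples per line
# Both rasters keep 720 active luma samples; the remainder fall in
# the horizontal blanking interval.
```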

Compression and codecs

Video compression is essential to manage the large data volumes of digital video, enabling practical storage, transmission, and playback. Compression techniques exploit redundancies in spatial, temporal, and psychovisual domains to reduce file sizes while preserving perceived quality. Lossy compression, the dominant approach in video applications, discards non-essential information based on the limits of human perception, achieving high compression ratios at the cost of irreversible data loss. Lossless compression, in contrast, preserves all original data perfectly but typically yields lower compression ratios, making it suitable only for specific professional workflows where exact fidelity is mandatory.

Video compression operates using two primary coding modes: intraframe and interframe. Intraframe coding processes each picture independently, applying techniques similar to still-image compression such as block-based transforms (typically the discrete cosine transform or its integer approximations), quantization, and entropy coding. Interframe coding, also known as predictive coding, exploits temporal correlation by predicting current frames from previously decoded reference frames, using motion estimation and compensation to reduce redundant information across time. Most codecs combine both modes, with periodic intra-coded pictures (I-frames) serving as access points and predicted pictures (P- or B-frames) providing the bulk of compression efficiency.

The evolution of video coding standards has been driven by successive standardization efforts to improve compression efficiency. MPEG-1 (1993) established the foundational hybrid transform/motion-compensation architecture for Video CD applications. MPEG-2 (1995) extended this framework with support for interlaced video and higher resolutions, becoming the basis for DVD and digital television. MPEG-4 Part 2 (1999) introduced improved tools for low-bitrate coding and object-based representation. H.264/AVC (2003), jointly developed by ITU-T VCEG and ISO/IEC MPEG, delivered approximately twice the compression efficiency of MPEG-2 and became the prevailing standard for Blu-ray, broadcast, and early online streaming.

Subsequent generations brought further gains. HEVC/H.265 (2013) achieved roughly 50% bitrate reduction compared to H.264 at comparable quality, enabling widespread 4K video distribution. VP9 (2013), developed by Google as an open-source codec, provided performance competitive with HEVC while remaining royalty-free. AV1 (2018), developed by the Alliance for Open Media (including Google, Netflix, Amazon, and others), offers 20–30% better compression efficiency than HEVC and VP9 at the same quality level and is fully royalty-free, making it increasingly adopted for streaming platforms.

The fundamental trade-off in lossy compression is between bitrate and quality: higher bitrates permit better visual fidelity, while aggressive compression at lower bitrates introduces artifacts such as blockiness, blurring, or ringing. Newer codecs shift this trade-off, maintaining acceptable quality at substantially reduced bitrates compared to predecessors, which has been critical for enabling high-resolution streaming over limited bandwidth. These codecs are applied across various container formats and resolutions.
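
As a practical illustration, a sketch of invoking two of these codecs through the widely used ffmpeg command-line tool via Python. It assumes ffmpeg is installed with libx264 and libx265 support; the filenames and parameter values are placeholders, not recommendations:

```python
import subprocess

source = "input.mov"  # placeholder filename

# H.264/AVC encode: constant-quality mode (CRF) with a GOP length of
# 250 frames between keyframes.
subprocess.run(["ffmpeg", "-i", source, "-c:v", "libx264",
                "-crf", "23", "-g", "250", "out_h264.mp4"], check=True)

# HEVC/H.265 encode: a higher CRF value is conventionally used to get
# roughly comparable quality at a substantially lower bitrate.
subprocess.run(["ffmpeg", "-i", source, "-c:v", "libx265",
                "-crf", "28", "out_hevc.mp4"], check=True)
```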

Digital formats and resolutions

Digital video formats and resolutions have evolved to meet demands for higher quality, efficiency, and compatibility across devices and platforms. These standards define pixel dimensions, container structures, and codecs for storing and distributing moving images.

Common consumer resolutions include standard definition (SD), with typical dimensions of 720×480 (NTSC) or 720×576 (PAL) pixels. High-definition standards introduced 720p (1280×720 progressive) and 1080p (1920×1080 progressive), along with 1080i (1920×1080 interlaced) for broadcast applications. Ultra-high definition encompasses 4K (3840×2160) and 8K (7680×4320), offering significantly more detail for large screens and immersive viewing.

Container formats encapsulate video streams, audio tracks, subtitles, and metadata into a single file. MP4 (MPEG-4 Part 14) is widely adopted for online streaming and mobile devices due to its broad compatibility. MKV (Matroska) supports multiple audio and subtitle tracks, making it popular for archival and flexible distribution. MOV (QuickTime) is commonly used in Apple ecosystems and professional workflows.

Professional formats prioritize high-quality, low-loss or intra-frame compression for post-production. Apple ProRes offers visually lossless compression with variants like ProRes 422 and ProRes 4444 for different quality needs. Avid DNxHD (and its successor DNxHR) provides intermediate compression suitable for editing workflows. Sony's XAVC includes long-GOP and intra-frame options for broadcast and cinema applications.

Aspect ratios determine the proportional relationship between width and height. The traditional 4:3 ratio dominated early television and computer displays. The 16:9 ratio became standard for high-definition television and modern displays to accommodate cinematic content. Vertical video in 9:16 orientation has gained prominence for mobile-first platforms like TikTok. Most digital formats incorporate compression codecs to reduce file size while maintaining quality.

Video production

Capture and recording

Video capture and recording is the process of using cameras to convert light from a scene into electronic signals representing moving images, typically accompanied by audio, which are then stored or transmitted.

The core component of video capture is the image sensor, which converts incoming light into electrical charges. Two main sensor types have dominated video technology: the charge-coupled device (CCD) and the CMOS sensor. CCD sensors, invented in 1969, accumulate charge in photosites and transfer it sequentially to an output amplifier, historically providing excellent image quality with low noise and high sensitivity, making them common in early camcorders. CMOS sensors, which amplify and digitize signals at each pixel, have become the standard in nearly all modern cameras due to lower power consumption, faster readout speeds enabling higher frame rates, reduced manufacturing costs, and the ability to integrate additional circuitry directly on the chip for on-sensor processing.

Video cameras span a range of types tailored to different use cases. Professional cinema and broadcast cameras, such as those produced by ARRI, RED, Sony, and Blackmagic Design, typically feature large-format sensors, interchangeable lenses, modular designs, and support for raw recording to maximize image quality and post-production flexibility. Consumer and prosumer devices include dedicated camcorders, mirrorless and DSLR cameras, and action cameras like those from GoPro, which balance portability, ease of use, and capable video features for everyday or enthusiast recording. Smartphones have emerged as ubiquitous video capture devices, integrating compact CMOS sensors with multiple lenses, image stabilization, and computational processing to deliver footage suitable for social media and casual production.

Frame rate, expressed as frames per second (fps, with p denoting progressive scan), is a fundamental parameter that influences motion rendering and visual style. 24p is the standard for cinematic productions, mimicking the motion blur and look of traditional film. 30p (or 25p in 50 Hz regions) is common for television and general video content, while 50p or 60p provides smoother motion for sports, gaming, and action sequences. High-speed capture at 120p, 240p, or thousands of fps enables slow motion when footage is played back at standard rates, with specialized high-speed cameras used in scientific, industrial, and entertainment applications.

Audio capture occurs simultaneously with video in most cameras through built-in microphones or external inputs, with synchronization maintained by recording both to the same file or medium using the camera's internal clock. In professional environments, timecode or genlock systems ensure precise alignment across multiple cameras and separate audio recorders. Captured video footage is generally transferred to editing software for assembly and refinement.

Editing

Video editing is the process of manipulating video footage and audio to create a coherent narrative or presentation by selecting, arranging, and modifying clips.

Early video editing was linear, performed on analog videotape where shots were assembled in sequential order. Any change required re-recording subsequent portions of the tape, making revisions time-consuming and restrictive. Nonlinear editing, enabled by digital technology, allows random access to footage stored on hard drives or other media. Editors can rearrange, trim, or duplicate clips without affecting the original source material, offering greater flexibility and efficiency. This approach became dominant in the 1990s with the rise of digital video and remains the standard today.

Modern editing occurs within software using a timeline interface, a visual representation of the sequence where clips are placed on horizontal tracks. Video clips occupy video tracks, while audio tracks handle sound elements, allowing layered editing. Editors make cuts to split or remove portions of clips, trim start and end points to refine timing, and insert or overwrite material as needed. Transitions are applied between clips to control the flow, commonly including straight cuts (abrupt change), dissolves (gradual fade from one shot to another), fades to or from black, and wipes. These help establish pacing and tone.

Audio editing occurs simultaneously or in dedicated stages within the editing software, involving balancing levels across tracks, panning for spatial placement, synchronizing dialogue with lip movements, and incorporating music or sound effects to enhance the overall presentation.

Professional video editing software includes Adobe Premiere Pro, widely used for its integration with other creative tools and support for various formats; Apple Final Cut Pro, favored for its performance on Apple hardware and magnetic timeline that automatically adjusts clip positions; Blackmagic Design DaVinci Resolve, notable for its free version and built-in tools for color grading alongside editing; and Avid Media Composer, a long-standing industry standard in film and television production for its robust media management and collaborative workflows. Advanced visual effects are generally applied in later post-production phases.

Post-production and effects

Post-production is the phase of video creation that occurs after principal photography or recording, where footage is assembled, refined, and enhanced to produce the final product. This stage encompasses a range of technical and creative processes, including editing, visual effects integration, color work, and sound design, transforming disparate elements into a cohesive and polished work.

Color correction and color grading represent central aspects of post-production. Color correction addresses technical inconsistencies in footage, such as exposure variations, white balance issues, and color casts arising from different lighting conditions or cameras, ensuring uniformity across shots. Color grading then applies artistic adjustments to achieve a desired aesthetic, enhancing mood, style, or narrative intent through techniques like secondary color selection, curves manipulation, and look-up table (LUT) application. Tools such as DaVinci Resolve and Adobe Premiere Pro are widely used for these tasks.

Visual effects (VFX) and compositing involve combining multiple image sources into a single scene. Compositing layers live-action footage with computer-generated imagery (CGI), matte paintings, or other assets, using techniques like rotoscoping, keying, and color matching to integrate them seamlessly. Motion graphics, including animated titles, lower thirds, and transitions, are created using software like Adobe After Effects to add dynamic visual elements that support storytelling or branding.

Sound design and Foley contribute significantly to the immersive quality of video. Sound design encompasses the creation, editing, and mixing of audio elements, including dialogue cleanup, music integration, and ambient effects. Foley artists recreate everyday sounds—such as footsteps, door creaks, or cloth rustles—in a controlled studio environment to replace or augment on-set audio, adding realism and texture that location recording often lacks. Automated dialogue replacement (ADR) may also be performed to rerecord spoken lines for clarity or performance adjustments.

Distribution and display

Broadcasting

Broadcasting is the traditional method of distributing video content to a wide audience through terrestrial transmitters, cable systems, or satellites. The shift from analog to digital broadcasting represented a major evolution in video distribution, allowing for higher picture and sound quality, more efficient use of spectrum, and additional services such as multiple channels and electronic program guides. The United States completed its digital television transition on June 12, 2009, when full-power analog broadcasts ceased, adopting the ATSC standard.

Digital broadcast standards vary by region. The ATSC (Advanced Television Systems Committee) standard, developed in North America, uses 8VSB modulation and originally employed MPEG-2 compression for video, supporting resolutions up to 1080i. ATSC has been the basis for digital terrestrial broadcasting in the United States, Canada, Mexico, and parts of the Caribbean. The successor ATSC 3.0 introduces IP-based delivery, HEVC compression, and support for 4K UHD, HDR, and immersive audio.

In Europe, Australia, and many other regions, the Digital Video Broadcasting (DVB) family of standards prevails. DVB-T (and its successor DVB-T2) is used for terrestrial transmission, employing OFDM modulation for robustness against multipath interference. DVB-C and DVB-C2 apply to cable systems, while DVB-S and DVB-S2 serve satellite distribution. These standards support MPEG-2 and later H.264 and HEVC codecs.

In Japan, Brazil, and several South American countries, the ISDB (Integrated Services Digital Broadcasting) standard is employed, with ISDB-T for terrestrial broadcasting featuring segmented OFDM and a one-segment mode for mobile reception. ISDB-S and ISDB-C cover satellite and cable applications, respectively.

Cable television systems distribute video signals through coaxial or hybrid fiber-coaxial networks, typically using QAM (quadrature amplitude modulation) to carry multiple digital channels in MPEG transport stream form. These systems often adopt DVB-C in Europe or similar proprietary formats in North America, providing hundreds of channels and on-demand content.

Internet Protocol Television (IPTV) represents a hybrid approach within broadcasting infrastructure, delivering video over managed broadband networks using IP packets rather than traditional RF modulation. IPTV enables features like video on demand, time-shifted viewing, and interactive services, often integrating with broadband offerings. While terrestrial, cable, and satellite remain primary distribution methods for live and scheduled programming, newer streaming platforms have emerged as complementary alternatives for on-demand video access.

Physical media

Physical media for video storage and playback have historically provided consumers with tangible formats for recording, distributing, and viewing moving images, starting with magnetic tape and transitioning to optical discs.

VHS (Video Home System) was the first widely successful consumer videotape format, introduced by JVC in 1976. It used 1/2-inch magnetic tape housed in cassettes, recording analog video with approximately 240 lines of horizontal resolution and typical capacities allowing 2 hours of recording on a standard T-120 cassette in standard play mode (longer in LP and EP modes). VHS prevailed over competing formats like Betamax due to longer recording times and broader licensing, dominating through the 1980s and 1990s.

The introduction of DVD (Digital Versatile Disc) in 1996 marked the shift to optical discs for video. DVDs used a red laser to read data from a reflective disc, offering single-layer capacity of 4.7 GB and dual-layer capacity of 8.5 GB, sufficient for up to 133 minutes of compressed standard-definition video with improved picture and sound quality compared to tape. DVD quickly supplanted VHS as the primary physical format for pre-recorded movies and home video distribution due to its durability, random access, and higher fidelity.

Blu-ray Disc, launched commercially in 2006 by the Blu-ray Disc Association, employed a blue-violet laser to achieve higher data density. Standard single-layer discs hold 25 GB, while dual-layer discs hold 50 GB, enabling high-definition video (up to 1080p) with advanced codecs like H.264/AVC or VC-1, along with high-quality audio and interactive features. This format became the successor to DVD for high-definition home video releases.

Ultra HD Blu-ray, introduced in 2015, extended the format to support 4K resolution (3840×2160), high dynamic range (HDR), wider color gamut, and higher frame rates. It uses discs with capacities of 50 GB (dual-layer), 66 GB (triple-layer), and 100 GB (quad-layer), accommodating higher bitrate content with codecs like HEVC for significantly enhanced visual quality compared to standard Blu-ray.

Physical media sales for video peaked in the mid-2000s with DVD and began declining in the 2010s as streaming and downloads became dominant, with revenue dropping substantially by the late 2010s and continuing to contract thereafter.
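
The capacity figures above translate directly into runtime at a given average bitrate; a small sketch of that arithmetic (bitrates chosen to match the commonly cited figures, not format limits):

```python
def runtime_minutes(capacity_gb, avg_mbit_s):
    """Approximate runtime a disc holds at a given average bitrate."""
    return capacity_gb * 8e9 / (avg_mbit_s * 1e6) / 60

print(round(runtime_minutes(4.7, 4.7)))  # single-layer DVD: ~133 min
print(round(runtime_minutes(50, 25)))    # dual-layer Blu-ray: ~267 min
```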

Streaming and online platforms

Streaming video over the internet has revolutionized content delivery by enabling on-demand and live access to video without requiring full file downloads, relying on progressive downloading or dedicated streaming protocols.

Adaptive bitrate streaming is central to modern internet video delivery, allowing video players to dynamically adjust quality based on the user's available bandwidth and device capabilities to minimize buffering and optimize the viewing experience. Two primary standards dominate: Apple's HTTP Live Streaming (HLS) and MPEG-DASH (Dynamic Adaptive Streaming over HTTP). HLS segments video into small chunks (typically 2–10 seconds) available at multiple bitrates, with playlists (M3U8 files) directing the player to switch segments as network conditions change. MPEG-DASH similarly uses segmented media and manifest files (MPD) to support adaptive switching, offering greater flexibility and codec-agnostic design as an open international standard. Both protocols operate over standard HTTP, enabling efficient caching by content delivery networks (CDNs) and broad compatibility across devices.

Major online platforms have shaped video consumption through specialized models. YouTube, launched in 2005 and acquired by Google in 2006, became the leading platform for user-generated and professional video, supporting uploads up to 8K resolution and serving billions of hours watched daily. Netflix transitioned from DVD rentals to internet streaming in 2007, pioneering large-scale on-demand video with original content production and personalized recommendations, reaching over 260 million subscribers worldwide. Twitch, focused on live streaming since 2011, emphasizes real-time interaction in gaming and creative content, while TikTok (launched internationally in 2017) popularized short-form vertical videos, driving engagement through algorithmic recommendation.

Live streaming platforms enable real-time broadcasts, typically involving ingestion via protocols such as RTMP from creators to servers, followed by transcoding and distribution to viewers using adaptive protocols like HLS or DASH. Platforms implement WebRTC or low-latency variants of HLS and DASH for interactive applications where sub-second delays are critical.

Bandwidth variability and buffering remain key challenges in internet video streaming. Fluctuating network conditions can cause rebuffering events when data arrival rates fall below consumption rates, prompting players to downshift quality or pause playback to accumulate a buffer. Effective solutions include predictive bandwidth estimation, larger buffers for high-quality streaming, and CDNs that reduce latency and improve reliability through edge caching.
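
A minimal sketch of the playlist side of HLS: building a master playlist for a hypothetical three-rendition bitrate ladder. The #EXT-X-STREAM-INF tag and its BANDWIDTH and RESOLUTION attributes are defined by the HLS specification; the renditions themselves are invented for illustration:

```python
# Renditions: (bandwidth in bits/s, resolution, media playlist URI).
ladder = [
    (800_000, "640x360", "360p.m3u8"),
    (2_500_000, "1280x720", "720p.m3u8"),
    (5_000_000, "1920x1080", "1080p.m3u8"),
]

lines = ["#EXTM3U"]
for bandwidth, resolution, uri in ladder:
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},"
                 f"RESOLUTION={resolution}")
    lines.append(uri)
print("\n".join(lines))
# The player picks one rendition to start and switches between these
# entries as its bandwidth estimate changes.
```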

Applications

Entertainment

Video serves as the cornerstone of modern entertainment, enabling a wide range of formats from feature films to television series and bite-sized digital content.

Filmmaking has been transformed by digital video, which has largely supplanted traditional film stock in production pipelines. Digital cinema cameras capture high-resolution footage that allows for immediate review, flexible post-production workflows, and cost-effective distribution to theaters and home viewers. This transition has democratized high-quality production, enabling independent filmmakers to create cinematic works without the expense of physical film stock and processing.

Television series production relies on video for both scripted dramas and unscripted programming, with crews using professional cameras to record scenes in studios or on location. High-definition and 4K formats deliver detailed visuals to audiences, while multi-camera setups facilitate efficient recording of sitcoms and live events. Serialized storytelling in video form supports ongoing narratives that build audiences over seasons, often released weekly or in full seasons for streaming.

Music videos represent a distinct entertainment genre where audio tracks are paired with moving images to enhance artistic expression or promote songs. Emerging prominently in the 1980s through channels dedicated to the format, music videos combine performance footage, narrative elements, and experimental visuals to create short works that function as promotional tools and artistic statements.

Short-form video content has risen as a major entertainment category in the 2010s and 2020s, particularly on online platforms. Creators produce brief clips—typically seconds to a few minutes long—featuring comedy, dance challenges, tutorials, and everyday moments, fostering rapid sharing and audience engagement. This format allows for quick consumption and high engagement, contributing to new forms of celebrity and cultural trends.

Video on demand (VOD) has fundamentally altered viewing habits by shifting consumption from scheduled broadcasts to user-controlled access. Services provide extensive libraries of films, series, and original productions available anytime across devices, promoting binge-watching behaviors where viewers consume multiple episodes consecutively. This model has reduced reliance on traditional television timetables, increased completion rates for series, and encouraged personalized recommendations based on viewing history.

Education

Video has become an integral component of educational practices, enabling scalable, flexible, and engaging learning experiences across academic and professional contexts.

E-learning platforms and massive open online courses (MOOCs) rely heavily on video lectures as the primary delivery medium. These pre-recorded videos allow learners to access structured instruction from leading institutions and experts at their own pace, supporting distance education on a global scale. Platforms such as Coursera, edX, and Khan Academy center their offerings around video content, often supplemented by quizzes, readings, and discussion forums to reinforce comprehension.

Lecture capture systems record live classroom sessions using cameras and microphones, making the footage available for on-demand viewing. This technology supports student review, revision, and accessibility for those with disabilities or scheduling conflicts, while also facilitating blended and hybrid learning models.

The flipped classroom model uses video lectures for initial content delivery outside class time, allowing in-person sessions to focus on active learning, problem-solving, discussions, and application of concepts. This model shifts passive listening from the classroom to home viewing, promoting deeper engagement during face-to-face interactions. Studies have shown improved student engagement and outcomes in subjects ranging from the sciences to the humanities when video is used this way.

In professional training, video-based simulations provide immersive, risk-free environments for skill development. These include interactive scenarios for fields such as healthcare (e.g., procedural training), aviation, and emergency response, where learners can practice responses to realistic situations with immediate feedback. Video training modules also support onboarding and ongoing professional development across organizations.

Communication

Video has become a fundamental tool for personal, professional, and social communication, enabling real-time and visual interaction that conveys non-verbal cues, facial expressions, and context more effectively than text or audio alone.

Video conferencing platforms, such as Zoom and Microsoft Teams, support synchronous communication for business meetings, remote work, education, and personal connections. These tools allow multiple participants to engage in face-to-face discussions across distances, often with features like screen sharing, virtual backgrounds, and recording capabilities, making them essential for global collaboration and maintaining relationships when physical meetings are not possible.

Social media platforms have integrated video sharing as a primary mode of interaction, allowing users to upload, view, and respond to videos that range from short clips to longer content. Platforms like Instagram, TikTok, and Facebook enable individuals to share personal moments, creative expressions, and opinions with friends, followers, and broader audiences, fostering community building, cultural exchange, and viral spread of ideas through likes, comments, and shares.

Live streaming extends video communication to real-time broadcasting, used for personal updates, social events, and professional purposes such as journalism. Services like YouTube Live and Facebook Live permit users to broadcast events or news as they unfold, engaging viewers through live chat and immediate feedback. In journalism, live video has supported on-the-ground reporting and citizen journalism, providing instant visual accounts of events to global audiences.

Video communication has expanded access to information and interaction but also raises considerations around digital access, privacy, and the potential for misinformation in unedited or live content. Video entertainment, while related, primarily focuses on passive viewing of pre-produced content rather than interactive exchange.

Surveillance and security

Video technology is extensively used in surveillance and security to monitor, record, and analyze activities in real time or retrospectively, enhancing safety, deterrence, and incident response.

Closed-circuit television (CCTV) systems, initially analog and now predominantly digital, form the backbone of many security setups. These systems use cameras to capture video feeds that are transmitted to a limited number of monitors or recording devices, often in a control room, without public broadcast. Traditional analog CCTV relied on coaxial cabling for transmission, while modern IP (Internet Protocol) cameras convert video to digital data streams, enabling network transmission, remote viewing, scalability, and integration with video management software for storage and analysis.

Advanced video analytics, including facial recognition, have become integral to modern surveillance systems. Facial recognition systems process video feeds to detect and identify individuals by comparing facial features against databases, aiding in access control, threat detection, and investigations. Such technologies are deployed in public spaces, transportation hubs, and critical infrastructure, though their use raises concerns about privacy and accuracy.

Law enforcement agencies increasingly employ body-worn cameras (bodycams) to record interactions with the public, providing objective documentation of events for accountability and evidence. Similarly, dashboard cameras (dashcams) in patrol vehicles capture footage of traffic stops, pursuits, and incidents from the vehicle's perspective. These devices typically record in high definition, with features like automatic activation on lights/sirens and secure storage to prevent tampering. In some cases, surveillance systems may interface with communication networks to alert security personnel or transmit live feeds during incidents.