Image editing

Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photo-chemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs or edit illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate, enhance, and transform images. Many image editing programs are also used to render or create computer art from scratch. The term "image editing" usually refers only to the editing of 2D images, not 3D ones.
Basics of image editing
Raster images are stored on a computer in the form of a grid of picture elements, or pixels. These pixels contain the image's color and brightness information. Image editors can change the pixels to enhance the image in many ways. The pixels can be changed as a group or individually by the sophisticated algorithms within the image editors. This article mostly refers to bitmap graphics editors, which are often used to alter photographs and other raster graphics. However, vector graphics software, such as Adobe Illustrator, CorelDRAW, Xara Designer Pro or Inkscape, is used to create and modify vector images, which are stored as descriptions of lines, Bézier curves, and text instead of pixels. It is easier to rasterize a vector image than to vectorize a raster image; how to go about vectorizing a raster image is the focus of much research in the field of computer vision. Vector images can be modified more easily because they contain descriptions of the shapes for easy rearrangement. They are also scalable, being rasterizable at any resolution.
Automatic image enhancement
Camera or computer image editing programs often offer basic automatic image enhancement features that correct color hue and brightness imbalances, as well as other image editing features such as red eye removal, sharpness adjustments, zoom features and automatic cropping. These are called automatic because they generally happen without user interaction, or are offered with a single click of a button or a selection from a menu. Additionally, some automatic editing features offer a combination of editing actions with little or no user interaction.[1]
Software apps offering automatic enhancement of images or photos include Adobe, Fotor, Picsart, Radiant Photo, Skylum and Imagen.
Super-resolution imaging
Image editing that affects content
Listed below are some of the most used capabilities of the better graphics manipulation programs. The list is by no means all-inclusive, and there are myriad choices associated with applying most of these features.
Selection
One of the prerequisites for many of the applications mentioned below is a method of selecting part(s) of an image, so that a change can be applied selectively without affecting the entire picture. Most graphics programs have several means of accomplishing this, such as:
- a marquee tool for selecting rectangular or other regular polygon-shaped regions,
- a lasso tool for freehand selection of a region,
- a magic wand tool that selects objects or regions in the image defined by proximity of color or luminance,
- vector-based pen tools,
as well as more advanced facilities such as edge detection, masking, alpha compositing, and color and channel-based extraction. The border of a selected area in an image is often animated with the marching ants effect to help the user to distinguish the selection border from the image background.
Layers

Another feature common to many graphics applications is that of layers, which are analogous to sheets of transparent acetate (each containing separate elements that make up a combined picture), stacked on top of each other. Each layer can be individually positioned, altered, and blended with the layers below, without affecting any of the elements on the other layers. This fundamental workflow has become the norm for the majority of programs on the market today, and enables maximum flexibility for the user while maintaining non-destructive editing principles and ease of use.
Image size alteration
Image editors can resize images in a process often called image scaling, making them larger or smaller. High-resolution cameras can produce large images, which are often reduced in size for Internet use. Image editor programs use a mathematical process called resampling to calculate new pixel values whose spacing is larger or smaller than the original pixel values. Images for Internet use are kept small, say 640 × 480 pixels, which equals about 0.3 megapixels.
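As a concrete illustration of resampling, the following minimal Python sketch assumes the Pillow library is available; the filenames are placeholders:

```python
from PIL import Image

img = Image.open("photo.jpg")                            # a large camera original
small = img.resize((640, 480), resample=Image.LANCZOS)   # resample onto a smaller pixel grid
small.save("photo_small.jpg")
```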
Cropping an image
Digital editors are used to crop images. Cropping creates a new image by selecting a desired rectangular portion from the image being cropped. The unwanted part of the image is discarded. Image cropping does not reduce the resolution of the area cropped. Best results are obtained when the original image has a high resolution. A primary reason for cropping is to improve the image composition in the new image.
Cutting out a part of an image from the background
Using a selection tool, the outline of the figure or element in the picture is traced/selected, and then the background is removed. Depending on how intricate the "edge" is, this may be more or less difficult to do cleanly. For example, individual hairs can require a lot of work. Hence the use of the "green screen" technique (chroma key), which allows the background to be removed easily.
Removal of unwanted elements
Most image editors can be used to remove unwanted elements, such as stray branches, using a "clone" tool. Removing these distracting elements draws focus to the subject, improving overall composition.
Images can also be digitally altered in a commercial context, such as in fashion magazines and on billboards. Models can be made to look thinner, or have their wrinkles and eye bags digitally removed so they appear flawless.[7]
Image editing that affects appearance
Change color depth
Image editing software can change the color depth of images. Common color depths are 2, 4, 16, 256, 65,536 and 16.7 million colors. The JPEG and PNG image formats are capable of storing 16.7 million colors (equal to 256 luminance values per color channel). In addition, grayscale images of 8 bits or less can be created, usually via conversion and down-sampling from a full-color image. Grayscale conversion is useful for reducing the file size dramatically when the original photographic print was monochrome, but a color tint has been introduced due to aging effects.
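A hedged sketch of such conversions, assuming the Pillow library (filenames are placeholders): convert("L") produces an 8-bit grayscale image, and quantize() reduces the palette to at most 256 colors.

```python
from PIL import Image

img = Image.open("scan.png")                         # placeholder input
gray = img.convert("L")                              # 8-bit grayscale: 256 luminance levels
indexed = img.convert("RGB").quantize(colors=256)    # 8-bit indexed color
gray.save("scan_gray.png")
indexed.save("scan_256.png")
```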
Contrast change and brightening
Image editors have provisions to change the contrast of images and to brighten or darken them. Underexposed images can often be improved this way. Recent advances have allowed more intelligent exposure correction, whereby only pixels below a particular luminosity threshold are brightened, lifting underexposed shadows without affecting the rest of the image. The exact transformation applied to each color channel can vary from editor to editor. GIMP applies the following formula:[8]
if (brightness < 0.0)
    value = value * (1.0 + brightness);             /* scale down toward black */
else
    value = value + ((1.0 - value) * brightness);   /* scale up toward white */

/* steepen or flatten the tone curve around the 0.5 midpoint */
value = (value - 0.5) * (tan ((contrast + 1) * PI/4) ) + 0.5;
where value is the input color value in the 0..1 range and brightness and contrast are in the −1..1 range.
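For experimentation, the formula can be translated into NumPy as follows (an illustrative sketch, not GIMP's actual code path; the final clipping step is an addition to keep results in range):

```python
import numpy as np

def brightness_contrast(value, brightness=0.0, contrast=0.0):
    """Apply the GIMP-style transform above; value in 0..1,
    brightness and contrast in -1..1."""
    value = np.asarray(value, dtype=np.float64)
    if brightness < 0.0:
        value = value * (1.0 + brightness)            # darken toward black
    else:
        value = value + (1.0 - value) * brightness    # brighten toward white
    # tan() maps contrast in -1..1 to a slope from 0 to infinity around mid-gray
    value = (value - 0.5) * np.tan((contrast + 1.0) * np.pi / 4.0) + 0.5
    return np.clip(value, 0.0, 1.0)                   # clipping added for safety
```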
Gamma correction
In addition to the capability of changing the images' brightness and/or contrast in a non-linear fashion, most current image editors provide an opportunity to manipulate the images' gamma value.
Gamma correction is particularly useful for bringing details out of shadows that would be hard to see on most computer monitors. In some image editing software this is called "curves", usually a tool found in the color menu, and no reference to "gamma" appears anywhere in the program or its documentation. Strictly speaking, the curves tool usually does more than simple gamma correction, since one can construct complex curves with multiple inflection points, but when no dedicated gamma correction tool is provided, it can achieve the same effect.
Color adjustments
The color of images can be altered in a variety of ways. Colors can be faded in and out, and tones can be changed using curves or other tools. The color balance can be improved, which is important if the picture was shot indoors with daylight film, or shot on a camera with the white balance incorrectly set. Special effects, like sepia tone and grayscale, can be added to an image. In addition, more complicated procedures, such as the mixing of color channels, are possible using more advanced graphics editors.
The red-eye effect, which occurs when flash photos are taken when the pupil is too widely open (so that light from the flash that passes into the eye through the pupil reflects off the fundus at the back of the eyeball), can also be eliminated at this stage.
Dynamic blending
Advanced Dynamic Blending is a concept introduced by photographer Elia Locardi in his blog Blame The Monkey to describe the photographic process of capturing multiple bracketed exposures of a landscape or cityscape over a specific span of time in a changing natural or artificial lighting environment. Once captured, the exposure brackets are manually blended together into a single High Dynamic Range image using post-processing software. Dynamic Blending images serve to display a consolidated moment. This means that while the final image may be a blend of a span of time, it visually appears to represent a single instant.[9][10][11]
Histogram
Image editors have provisions to create an image histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made. Improvements in picture brightness and contrast can thus be obtained.[12]
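Such a histogram can be computed directly; a minimal sketch assuming the Pillow and NumPy libraries (the filename is a placeholder):

```python
import numpy as np
from PIL import Image

gray = np.asarray(Image.open("photo.jpg").convert("L"))   # 8-bit luminance values
counts, _ = np.histogram(gray, bins=256, range=(0, 256))
# counts[v] is the number of pixels whose brightness equals v (0-255)
```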
Noise reduction
Image editors may feature a number of algorithms which can add or remove noise in an image. Some JPEG artifacts can be removed; dust and scratches can be removed and an image can be de-speckled. Noise reduction merely estimates the state of the scene without the noise and is not a substitute for obtaining a "cleaner" image. Excessive noise reduction leads to a loss of detail, and its application is hence subject to a trade-off between the undesirability of the noise itself and that of the reduction artifacts.
Noise tends to invade images when pictures are taken in low light settings. A new picture can be given an 'antiqued' effect by adding uniform monochrome noise.
Selective color change
Some image editors have color swapping abilities to selectively change the color of specific items in an image, given that the selected items are within a specific color range.

Image orientation
Image editors can rotate an image in any direction and to any degree. Mirror images can be created, and images can be horizontally flipped or vertically flopped. A small rotation of several degrees is often enough to level the horizon, correct verticality (of a building, for example), or both. Rotated images usually require cropping afterwards, in order to remove the resulting gaps at the image edges.
Perspective control and distortion
Some image editors allow the user to distort (or "transform") the shape of an image. While this might also be useful for special effects, it is the preferred method of correcting the typical perspective distortion that results from photographs being taken at an oblique angle to a rectilinear subject. Care is needed while performing this task, as the image is reprocessed using interpolation of adjacent pixels, which may reduce overall image definition. The effect mimics the use of a perspective control lens, which achieves a similar correction in-camera without loss of definition.
Lens correction
Photo manipulation packages have functions to correct images for various lens distortions, including pincushion, fisheye, and barrel distortions. The corrections are in most cases subtle, but can improve the appearance of some photographs.
Enhancing images
In computer graphics, the enhancement of an image is the process of improving the quality of a digitally stored image by manipulating the image with software. It is quite easy, for example, to make an image lighter or darker, or to increase or decrease contrast. Advanced photo enhancement software also supports many filters for altering images in various ways.[13] Programs specialized for image enhancement are sometimes called image editors.
Sharpening and softening images
Graphics programs can be used to both sharpen and blur images in a number of ways, such as unsharp masking or deconvolution.[14] Portraits often appear more pleasing when selectively softened (particularly the skin and the background) to make the subject stand out. This can be achieved with a camera by using a large aperture, or in the image editor by making a selection and then blurring it. Edge enhancement is an extremely common technique used to make images appear sharper, although purists frown on the result as appearing unnatural.

Another form of image sharpening involves increasing local contrast. This is done by finding the average color of the pixels around each pixel in a specified radius, and then contrasting each pixel with that average color. This effect makes the image seem clearer, seemingly adding details. It is widely used in the printing and photographic industries for increasing local contrast and sharpening images.
Editing of multiple images
[edit]Selecting and merging of images
Many graphics applications are capable of merging one or more individual images into a single file. The orientation and placement of each image can be controlled.
Selecting a raster image that is not rectangular requires separating the edges from the background, also known as silhouetting. This is the digital analog of cutting out the image from a physical picture. Clipping paths may be used to add silhouetted images to vector graphics or page layout files that retain vector data. Alpha compositing allows for soft translucent edges when selecting images. There are a number of ways to silhouette an image with soft edges, including selecting the image or its background by sampling similar colors, selecting the edges by raster tracing, or converting a clipping path to a raster selection. Once the image is selected, it may be copied and pasted into another section of the same file, or into a separate file. The selection may also be saved in what is known as an alpha channel.
A popular way to create a composite image is to use transparent layers. The background image is used as the bottom layer, and the image with parts to be added is placed in a layer above it. Using an image layer mask, all but the parts to be merged are hidden from the layer, giving the impression that these parts have been added to the background layer. Performing a merge in this manner preserves all of the pixel data on both layers, making future changes to the merged image easier.
Slicing of images
A more recent tool in digital image editing software is the image slicer. Parts of images for graphical user interfaces or web pages are easily sliced, labeled and saved separately from whole images so the parts can be handled individually by the display medium. This is useful to allow dynamic swapping via interactivity or animating parts of an image in the final presentation.
Special effects
Image editors usually have a list of special effects that can create unusual results. Images may be skewed and distorted in various ways. Scores of special effects can be applied to an image, including various forms of distortion, artistic effects, geometric transforms and texture effects,[15] or combinations thereof.

Using custom Curves settings[16] in image editors such as Photoshop, one can mimic the "pseudo-solarisation" effect, better known in photographic circles as the Sabattier effect.

Stamp Clone Tool
The Clone Stamp tool selects and samples an area of the picture, and then uses these pixels to paint over any marks. The Clone Stamp tool acts like a brush, so the size can be changed, allowing cloning from just one pixel wide to hundreds. The opacity can be changed to produce a subtle clone effect. There is also a choice between aligned and non-aligned sampling of the source area. In Photoshop this tool is called Clone Stamp, but it may also be called a Rubber Stamp tool.
Printing
Controlling the print size and quality of digital images requires an understanding of the pixels-per-inch (ppi) variable that is stored in the image file and sometimes used to control the size of the printed image. Within Adobe Photoshop's Image Size dialog, the image editor allows the user to manipulate both pixel dimensions and the size of the image on the printed document. These parameters work together to produce a printed image of the desired size and quality. Pixels per inch of the image, pixel per inch of the computer monitor, and dots per inch on the printed document are related, but in use are very different. The Image Size dialog can be used as an image calculator of sorts. For example, a 1600 × 1200 image with a resolution of 200 ppi will produce a printed image of 8 × 6 inches. The same image with 400 ppi will produce a printed image of 4 × 3 inches. Change the resolution to 800 ppi, and the same image now prints out at 2 × 1.5 inches. All three printed images contain the same data (1600 × 1200 pixels), but the pixels are closer together on the smaller prints, so the smaller images will potentially look sharp when the larger ones do not. The quality of the image will also depend on the capability of the printer.
See also
- Color space
- Comparison of raster graphics editors
- Computer graphics
- Digital darkroom
- Digital image processing
- Digital painting
- Digital photograph restoration
- Graphic art software
- Graphics file format summary
- Homomorphic filtering
- Image development (visual arts)
- Image distortion
- Image processing
- Image retrieval
- Image warping
- Inpainting
- Photograph manipulation
References
- ^ Fating, Abhinav (2024-02-07). "Best AI photo enhancer in 2024 - our top picks for upscaling, editing, and more". PC Guide. Retrieved 2024-12-23.
- ^ Johnson, Justin; Alahi, Alexandre; Fei-Fei, Li (2016-03-26). "Perceptual Losses for Real-Time Style Transfer and Super-Resolution". arXiv:1603.08155 [cs.CV].
- ^ Grant-Jacob, James A; Mackay, Benita S; Baker, James A G; Xie, Yunhui; Heath, Daniel J; Loxham, Matthew; Eason, Robert W; Mills, Ben (2019-06-18). "A neural lens for super-resolution biological imaging". Journal of Physics Communications. 3 (6): 065004. Bibcode:2019JPhCo...3f5004G. doi:10.1088/2399-6528/ab267d. ISSN 2399-6528.
- ^ Blau, Yochai; Michaeli, Tomer (2018). The perception-distortion tradeoff. IEEE Conference on Computer Vision and Pattern Recognition. pp. 6228–6237. arXiv:1711.06077. doi:10.1109/CVPR.2018.00652.
- ^ Zeeberg, Amos (2023-08-23). "The AI Tools Making Images Look Better". Quanta Magazine. Retrieved 2023-08-28.
- ^ Cohen, Joseph Paul; Luck, Margaux; Honari, Sina (2018). "Distribution Matching Losses Can Hallucinate Features in Medical Image Translation". In Alejandro F. Frangi; Julia A. Schnabel; Christos Davatzikos; Carlos Alberola-López; Gabor Fichtinger (eds.). Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. 21st International Conference, Granada, Spain, September 16–20, 2018, Proceedings, Part I. Lecture Notes in Computer Science. Vol. 11070. pp. 529–536. arXiv:1805.08841. doi:10.1007/978-3-030-00928-1_60. ISBN 978-3-030-00927-4. S2CID 43919703. Retrieved 1 May 2022.
- ^ Cage, Carolyn. "Confessions of a retoucher: how the modelling industry is harming women". The Sydney Morning Herald.
- ^ GIMP source code for brightness and contrast image filtering.
- ^ Lazzell, Jeff. "A Dynamic Blending and Post Processing Workshop With Travel Photographer Elia Locardi". blog.xritephoto.com. Archived from the original on September 11, 2016. Retrieved Sep 11, 2016.
- ^ Cedric, De Boom. "Blending moments in time". cedricdeboom.github.io. Archived from the original on October 8, 2016. Retrieved Sep 11, 2016.
- ^ "HDR photography with Elia Locardi". www.cnet.com. Archived from the original on September 22, 2016. Retrieved Sep 11, 2016.
- ^ Detailed introduction to histograms Archived 2017-03-17 at the Wayback Machine and the levels Archived 2017-03-18 at the Wayback Machine and curves Archived 2017-03-17 at the Wayback Machine functionality available in most image editing software; PhotoshopEssentials.com
- ^ Implementations include Imagic Photo Archived 2013-01-03 at archive.today, Viesus Archived 2022-02-09 at the Wayback Machine, and Topaz Archived 2018-05-13 at the Wayback Machine
- ^ Implementations include FocusMagic Archived 2008-05-09 at the Wayback Machine, and Photoshop Archived 2008-06-03 at the Wayback Machine
- ^ JPFix. "Skin Improvement Technology". Archived from the original on 2016-03-05. Retrieved 2008-08-23.
- ^ Guyer, Jeff (24 September 2013). "The Sabattier Effect". Digital Photography School. Archived from the original on 2019-01-09. Retrieved 2019-01-09.
- "Fantasy, fairy tale and myth collide in images: By digitally altering photos of landscapes, artist Anthony Goicolea creates an intriguing world", The Vancouver Sun (British Columbia); June 19, 2006.
- "It's hard to tell where pixels end and reality begins", The San Francisco Chronicle; September 26, 2006.
- "Virtual Art: From Illusion to Immersion", MIT Press 2002; Cambridge, Massachusetts
Image editing
Fundamentals
Digital image basics
Digital images are fundamentally categorized into raster and vector formats, each defined by distinct structural principles that influence their creation and manipulation. Raster images, also known as bitmap images, consist of a grid of individual pixels, where each pixel represents a discrete sample of color or intensity from the original scene, making them ideal for capturing and editing complex visual details like photographs.[8] In contrast, vector images are constructed using mathematical equations to define paths, shapes, and curves, allowing for infinite scalability without loss of quality, though they are less suited for pixel-level editing tasks common in image manipulation workflows.[9] Raster images form the primary focus of digital image editing due to their pixel-based nature, which enables precise alterations at the elemental level.

The pixel serves as the basic unit of a raster image, functioning as the smallest addressable element that holds color and intensity information, typically arranged in a two-dimensional array to form the complete image.[10] Resolution, measured in pixels per inch (PPI) for digital displays or dots per inch (DPI) for printing, quantifies the density of these pixels and directly affects the perceived sharpness and detail of an image; higher PPI or DPI values yield finer detail but increase file size and processing demands.[11] Bit depth refers to the number of bits used to represent the color or grayscale value of each pixel, determining the range of tonal variations possible; for instance, 8 bits per channel allows 256 levels per color component, while 16 bits enables over 65,000, enhancing editing flexibility by preserving subtle gradients and reducing banding artifacts during adjustments. Image dimensions, expressed as width by height in pixels (e.g., 1920 × 1080), dictate the total pixel count and thus the intrinsic detail capacity of the raster image, profoundly shaping editing workflows by influencing scalability, computational load, and output suitability.[12] Larger dimensions support more intricate edits and higher-quality exports but demand greater storage and processing resources, potentially slowing operations like filtering or compositing, whereas smaller dimensions streamline workflows at the cost of reduced detail upon enlargement.[10]

The origins of digital image editing trace back to the 1960s, with pioneering work in computer graphics laying the groundwork for digital manipulation. Ivan Sutherland's Sketchpad system, developed in 1963 as part of his PhD thesis at MIT, introduced interactive graphical interfaces using a light pen to draw and edit vector-based diagrams on a display, marking an early milestone in human-computer visual interaction that influenced subsequent raster image technologies.[13]
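The pixel grid, bit depth, and dimension concepts above map directly onto array structures; a minimal NumPy sketch (values are illustrative):

```python
import numpy as np

# An 8-bit RGB image of 1920 x 1080 pixels as a height x width x channels grid.
img = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(img.shape)   # (1080, 1920, 3): dimensions and channel count
print(img.dtype)   # uint8: 8 bits per channel, 256 levels (0..255)

# Widening to 16 bits per channel preserves subtle gradients during heavy edits.
img16 = img.astype(np.uint16) * 257    # maps 0..255 exactly onto 0..65535
```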
Color models and representations
In digital image editing, color models define how colors are numerically represented and manipulated within an image's pixels, enabling precise control over visual elements. These models vary in their approach to encoding color, with some suited to display technologies and others to printing processes. Understanding these representations is essential for tasks like color correction, compositing, and ensuring consistency across devices.

The RGB color model is an additive color system used primarily for digital displays and on-screen editing. It combines red, green, and blue light to produce a wide range of colors, where each pixel's color is determined by the intensity values of its three channels, typically ranging from 0 to 255 in 8-bit images. This model is fundamental in software like Adobe Photoshop because it aligns with how computer monitors emit light, allowing editors to directly manipulate hues through channel adjustments.[14]

In contrast, the CMYK model operates on subtractive color principles, ideal for printing applications where inks absorb light from a white substrate. It uses cyan, magenta, yellow, and black (key) components to simulate colors by subtracting wavelengths from reflected light, making it the standard for professional print workflows to achieve accurate reproduction on paper or other media. Editors convert RGB images to CMYK during prepress to preview print outcomes, as the gamuts differ significantly.[15]

The HSV color model provides a perceptual representation that aligns more closely with human vision, organizing colors in a cylindrical coordinate system of hue (color type), saturation (intensity), and value (brightness). Developed by Alvy Ray Smith in 1978, it facilitates intuitive editing operations, such as adjusting saturation without altering brightness, which is particularly useful for selective color enhancements in images.[16]

Color space conversions are critical in image editing to adapt representations between models, often involving mathematical transformations to preserve perceptual accuracy. For instance, converting an RGB image to grayscale computes luminance as a weighted sum that approximates human sensitivity to green over red and blue:

$Y = 0.299R + 0.587G + 0.114B$

This formula, derived from ITU-R BT.601 standards for video encoding, ensures the resulting monochrome image retains natural tonal balance.

Bit depth determines the precision of color representation per channel, directly impacting the dynamic range and editing flexibility. In 8-bit images, each RGB channel supports 256 discrete levels (2^8), yielding about 16.7 million possible colors but risking banding in smooth gradients during heavy adjustments. 16-bit images expand this to 65,536 levels per channel (2^16), providing over 281 trillion colors and greater latitude for non-destructive edits like exposure recovery, as the expanded range minimizes quantization artifacts.[17]

Historically, the Adobe RGB (1998) color space emerged as an advancement over standard RGB to address limitations in gamut for professional photography and printing. Specified by Adobe Systems in 1998, it offers a wider color gamut, encompassing about 50% more colors than sRGB, particularly in greens and cyans, enabling editors to capture and preserve subtle tones from high-end cameras without clipping during workflows.[18]
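The BT.601 weighted sum above can be applied directly to an RGB array; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def to_grayscale(rgb):
    """BT.601 weighted sum; rgb is an (H, W, 3) array with values 0..255."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b     # green weighted highest
    return np.rint(y).astype(np.uint8)
```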
File formats and storage
Image file formats play a crucial role in image editing by determining how data is stored, compressed, and preserved for manipulation. These formats vary in their support for quality retention, transparency, and metadata, influencing editing workflows and final output compatibility. Editors must select formats that balance file size, fidelity, and functionality, such as lossless options for iterative changes versus lossy for distribution.[19]

Common formats include JPEG, which employs lossy compression to reduce file sizes significantly, making it ideal for web images where moderate quality loss is acceptable. In contrast, PNG uses lossless compression, preserving all original data while supporting alpha transparency for seamless compositing in editing software. TIFF offers high-quality storage with support for editable layers and multiple color depths, suitable for professional pre-press and archival purposes. RAW files capture unprocessed sensor data directly from cameras, providing maximum flexibility for post-processing adjustments like exposure and white balance. AVIF, introduced in 2019 by the Alliance for Open Media, uses the AV1 video codec for both lossy and lossless compression, achieving high efficiency with support for transparency and high dynamic range (HDR), making it suitable for modern web and mobile applications as of 2025.[20][21][22][23]

Compression in image formats falls into two main types: lossless, which allows exact reconstruction of the original image without data loss, as in PNG and uncompressed TIFF; and lossy, which discards redundant information to achieve smaller files but introduces artifacts, such as blocking in JPEG where visible 8x8 pixel grid patterns appear in uniform areas due to discrete cosine transform processing. These artifacts degrade image quality upon repeated saves, emphasizing the need for lossless formats during editing to avoid cumulative degradation.[19][24]

Many formats embed metadata standards to store additional information. EXIF, developed for digital photography, records camera-specific details like model, aperture, shutter speed, and GPS coordinates, aiding editors in replicating shooting conditions. IPTC provides editorial metadata, including captions, keywords, and copyright notices, facilitating asset management in professional workflows.[25][26]

The evolution of image formats has addressed efficiency and modern needs. WebP, introduced by Google in 2010, combines lossy and lossless compression with transparency support, achieving up to 34% smaller files than JPEG or PNG for web applications. HEIF, standardized by MPEG in 2017, enables high-efficiency storage of images and sequences using HEVC compression, supporting features like multiple images per file and becoming default on devices like iPhones for reduced storage without quality compromise.[27][28] The table below summarizes these formats; a short save-time code sketch follows it.

| Format | Compression Type | Key Features | Typical Use |
|---|---|---|---|
| JPEG | Lossy | Small files, no transparency | Web photos |
| PNG | Lossless | Transparency, exact fidelity | Graphics, logos |
| TIFF | Lossless (or lossy variants) | Layers, high bit-depth | Printing, archiving |
| RAW | Uncompressed | Sensor data, non-destructive edits | Professional photography |
| WebP | Lossy/Lossless | Efficient web compression, transparency | Online media |
| HEIF | Lossy (HEVC-based) | Multi-image support, small size | Mobile devices |
| AVIF | Lossy/Lossless (AV1-based) | High compression efficiency, transparency, HDR support | Web and mobile images |
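At save time, the format choice determines which of these trade-offs apply. A minimal sketch with the Pillow library (assumed; filenames are placeholders, and WebP support depends on how Pillow was built):

```python
from PIL import Image

img = Image.open("master.tif")               # placeholder master file
img.save("web.jpg", quality=85)              # lossy JPEG: small, may show artifacts
img.save("asset.png")                        # lossless PNG: exact fidelity
img.save("modern.webp", lossless=True)       # WebP in its lossless mode
```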
Tools and Techniques
Image editing software overview
Image editing software encompasses a range of applications designed to manipulate digital images, primarily categorized into raster and vector editors based on their handling of image data. Raster editors work with pixel-based images, allowing detailed modifications to photographs and complex visuals, while vector editors focus on scalable graphics defined by mathematical paths, ideal for logos and illustrations that require resizing without quality loss.[8] This distinction emerged in the late 1980s as computing power advanced, enabling specialized tools for different creative needs.[29]

Pioneering raster software includes Adobe Photoshop, first released on February 19, 1990, which revolutionized photo retouching and compositing with tools for layer management and color adjustments.[30] For vector graphics, Adobe Illustrator debuted on March 19, 1987, providing precision drawing capabilities that became essential for print and web design.[31] Open-source alternatives like GIMP, initiated in 1996 as a free raster editor, offered accessible alternatives to proprietary tools, supporting community-driven development for tasks such as painting and filtering.[32] These categories have evolved to include hybrid features, but their core focuses remain distinct.

Key advancements in functionality include non-destructive editing, introduced in Adobe Lightroom upon its release on February 19, 2007, which allows adjustments without altering original files through parametric edits stored separately.[33][34] The shift toward accessible platforms accelerated with mobile and web-based tools; Pixlr, launched in 2008, provides browser-based raster editing with effects and overlays for quick enhancements.[35] Similarly, Canva, released in 2013, integrates simple image editing into a drag-and-drop design ecosystem, emphasizing templates and collaboration for non-professionals.[36]

Cloud integration further transformed workflows, exemplified by Adobe Creative Cloud's launch on May 11, 2012, enabling seamless syncing of assets across devices and subscriptions for updated software.[37] Recent accessibility trends incorporate AI assistance, such as Adobe Sensei, unveiled on November 3, 2016, which automates tasks like object selection and content-aware fills to democratize advanced editing. More recent AI integrations, such as Adobe Firefly launched in 2023, have introduced generative AI capabilities for creating and editing image content based on text prompts.[38][39] These developments have broadened image editing from specialized desktop applications to inclusive, cross-platform ecosystems.
Basic tools and interfaces
The user experience in image editing has evolved significantly since the 1970s, when digital image processing primarily relied on command-line tools in research and space applications, such as those developed for medical imaging and remote Earth sensing. These early systems required users to input textual commands to manipulate pixel data, lacking visual feedback and making iterative editing cumbersome. The transition to graphical user interfaces (GUIs) began in the mid-1970s with innovations at Xerox PARC, including the 1975 Gypsy editor, which introduced bitmap-based WYSIWYG (what-you-see-is-what-you-get) editing with mouse-driven interactions for the first time. This paved the way for more intuitive designs, culminating in MacPaint's release in 1984 alongside the Apple Macintosh, which established enduring GUI standards for bitmap graphics editing through its icon-based tools and direct manipulation on screen. MacPaint's influence extended to consumer software, demonstrating how pixel-level control could be accessible via simple mouse gestures rather than code.

Modern image editing software employs standardized GUI components to facilitate efficient workflows. The canvas, or document window, acts as the primary workspace displaying the active image file, often supporting tabbed or floating views for multiple documents. Toolbars, typically positioned along the screen's edges, house selectable icons for core functions, while options bars dynamically display settings for the active tool, such as size or opacity. Panels provide contextual controls; for instance, the layers panel organizes stacked image elements for non-destructive editing, allowing users to toggle visibility, reorder, or blend layers without altering the original pixels. Undo and redo histories, usually accessible via menus or keyboard shortcuts, maintain a chronological record of actions, enabling step-by-step reversal or reapplication of changes to support experimentation.

Essential tools form the foundation of hands-on editing and are universally present across major applications. The brush tool simulates traditional painting by applying colors or patterns to the canvas, often with customizable hardness, flow, and pressure sensitivity for tablet users to vary stroke width and opacity based on pen force. The eraser tool removes pixels or reveals underlying layers, mimicking physical erasure with similar adjustable properties. The move tool repositions selected elements or entire layers, while the zoom tool scales the view for precise work, typically supporting keyboard modifiers for fit-to-screen or actual-size displays. These tools often include modes like pressure sensitivity, which enhances natural drawing by responding to input device dynamics, a feature refined in professional software since the 1990s.

Basic workflows in image editing begin with opening files from supported formats, followed by iterative application of tools on the canvas, and conclude with saving versions to preserve non-destructive edits. Saving supports multiple formats and versioning to track changes, preventing data loss during sessions. For efficiency with large volumes, batch processing introduces automation, allowing users to apply predefined actions, such as resizing or color adjustments, to multiple files sequentially without manual intervention per image. This capability, integral to professional pipelines, streamlines repetitive tasks while maintaining consistency across outputs.
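A batch workflow of the kind described can be sketched in a few lines of Python, assuming the Pillow library; the folder names and the resize action are placeholders:

```python
from pathlib import Path
from PIL import Image

src, dst = Path("input"), Path("output")     # placeholder folder names
dst.mkdir(exist_ok=True)
for path in src.glob("*.jpg"):
    with Image.open(path) as img:
        img.thumbnail((1024, 1024))          # one predefined action: bounded resize
        img.save(dst / path.name, quality=90)
```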
Selection and masking methods
Selection tools in image editing software enable users to isolate specific regions of an image for targeted modifications, forming the foundation for precise edits without affecting the entire composition. Common tools include the marquee, which creates geometric selections such as rectangular or elliptical shapes by defining straight boundaries around areas. The lasso tool allows freehand drawing of irregular selections, while its polygonal variant uses straight-line segments for more controlled outlines; the magnetic lasso variant enhances accuracy by snapping to edges detected via algorithms that identify contrast boundaries in the image.[40] These edge detection methods typically rely on gradient-based techniques to locate transitions between pixels, improving selection adherence to object contours.

The magic wand tool selects contiguous pixels based on color similarity to a clicked point, employing a flood-fill algorithm that propagates from the seed pixel to neighboring ones within a specified tolerance.[41] Mathematically, this thresholding process includes pixels where the absolute color difference from the reference, often measured in RGB space as

$\sqrt{(R - R_0)^2 + (G - G_0)^2 + (B - B_0)^2}$

falls below a user-defined tolerance value, enabling rapid isolation of uniform areas like skies or solid objects.[42] Anti-aliased and contiguous options further refine the selection by smoothing jagged edges and limiting spread to adjacent pixels, respectively.[43]

Masking techniques build on selections to achieve non-destructive isolation, preserving the original image data for reversible edits. Layer masks apply grayscale values to control layer visibility, where white reveals content, black conceals it, and intermediate tones create partial transparency, allowing iterative adjustments without pixel alteration. Clipping masks constrain the visibility of a layer to the non-transparent shape of the layer below, facilitating composite effects like texture overlays limited to specific forms.[44] Alpha channels store selection data as dedicated grayscale channels within the image file, serving as reusable masks that define transparency for export formats like PNG and enabling complex, multi-layered isolations.[45]

Refinement methods enhance selection accuracy and integration, particularly for complex boundaries. Feathering softens selection edges by expanding or contracting the boundary with a gradient fade, typically adjustable in pixel radius, to blend edited areas seamlessly and avoid harsh transitions.[46] AI-driven quick selection tools, such as Adobe Photoshop's Object Selection introduced in 2019, leverage machine learning models to detect and outline subjects automatically from rough bounding boxes or brushes, incorporating edge refinement for subjects like people or objects with minimal manual input.[47] These advancements, powered by Adobe Sensei AI, analyze image semantics to propagate selections intelligently, reducing time for intricate isolations compared to manual tools.[48]
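The flood-fill selection described above can be sketched in plain Python with NumPy (the function name and tolerance default are illustrative):

```python
import numpy as np
from collections import deque

def magic_wand(img, seed, tolerance=32.0):
    """Flood fill from a seed (y, x): select contiguous pixels whose RGB
    distance to the seed color is within tolerance. img is (H, W, 3) uint8."""
    h, w, _ = img.shape
    ref = img[seed].astype(float)
    selected = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or selected[y, x]:
            continue
        if np.linalg.norm(img[y, x].astype(float) - ref) > tolerance:
            continue
        selected[y, x] = True
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return selected    # boolean mask, usable like an alpha-channel selection
```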
Cropping and resizing
Cropping is a fundamental technique in image editing that involves selecting and retaining a specific portion of an image while discarding the rest, primarily to improve composition, remove unwanted elements, or adjust the aspect ratio. This process enhances visual focus by emphasizing key subjects and eliminating distractions, often guided by compositional principles such as the rule of thirds, which divides the image into a 3x3 grid and positions subjects along the lines or intersections for balanced appeal.[49][50] Preserving the original aspect ratio during cropping ensures the image maintains its intended proportions, preventing distortion when preparing for specific outputs like prints or social media formats.[51]

Non-destructive cropping allows editors to apply changes without permanently altering the original image data, enabling adjustments or resets at any time through features like adjustable crop overlays in software such as Adobe Photoshop.[51] This method supports iterative composition refinement by retaining cropped pixels outside the visible area for potential later use. Canvas extension complements cropping by increasing the image boundaries to add space around the existing content, aiding composition by providing room for repositioning elements or integrating additional details without scaling the core image.[52] Trimming, conversely, refines edges by removing excess canvas after extension, ensuring a tight fit to the composed frame.[52]

The practice of cropping originated in darkroom photography, where photographers physically masked negatives or prints to isolate sections, influencing digital standards established in the 1990s with the advent of software like Adobe Photoshop, which digitized these workflows for precise, layer-based control.[5]

Resizing alters the overall dimensions of an image, either enlarging or reducing it, which necessitates interpolation to estimate pixel values at new positions and minimize quality degradation such as blurring or aliasing. Nearest-neighbor interpolation, the simplest method, assigns to each output pixel the value of the closest input pixel, resulting in fast computation but potential jagged edges, particularly during enlargement.[53] Bilinear interpolation improves smoothness by averaging the four nearest input pixels weighted by their fractional distances, using the formula

$f(x, y) = (1-a)(1-b)\,f(x_0, y_0) + a(1-b)\,f(x_0{+}1, y_0) + (1-a)b\,f(x_0, y_0{+}1) + ab\,f(x_0{+}1, y_0{+}1)$

where $a$ and $b$ are the fractional offsets in the x and y directions, respectively.[53] Bicubic interpolation further refines this by considering a 4x4 neighborhood of 16 pixels and applying cubic polynomials for sharper results, though it demands more processing power and may introduce minor ringing artifacts.[53] These methods relate to image resolution, where resizing impacts pixel density, but careful selection preserves perceptual quality across scales.
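The bilinear formula translates directly into code; a minimal NumPy sketch for sampling a grayscale array at fractional coordinates (the function name is illustrative):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Blend the four pixels surrounding fractional (x, y),
    weighted by the fractional offsets a and b."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    a, b = x - x0, y - y0                     # fractional offsets in x and y
    return ((1 - a) * (1 - b) * img[y0, x0] + a * (1 - b) * img[y0, x1] +
            (1 - a) * b * img[y1, x0] + a * b * img[y1, x1])
```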
Object removal and cloning
Object removal and cloning are essential techniques in image editing for erasing unwanted elements from an image while preserving visual coherence, often by duplicating and blending pixels from donor regions to fill the targeted area. These methods rely on manual or automated sampling of source pixels to replace the removed content, ensuring seamless integration with the surrounding texture and structure. Unlike simple cropping, which alters the overall frame, these tools focus on localized content manipulation within the image canvas.[54]

The clone stamp tool, a foundational manual cloning method, allows users to sample pixels from a source area (donor) and paint them directly onto a target region to cover unwanted objects. Introduced in early versions of Adobe Photoshop around 1990, it copies exact pixel values without alteration, making it ideal for duplicating patterns or removing distractions like wires or blemishes in uniform areas. To use it, the editor sets a sample point using Alt-click (on Windows) or Option-click (on Mac), then brushes over the target, with options like opacity and flow controlling the application strength. This direct copying can sometimes result in visible repetition if the source is overused, but it provides precise control for texture matching in repetitive scenes such as skies or foliage.[54][55]

The healing brush tool extends cloning by sampling from a source area but blending the copied pixels with the target's lighting, color, and texture for more natural results. Debuting in Photoshop 7.0 in 2002, it uses Adobe's texture synthesis to match not just pixels but also tonal variations, reducing artifacts in complex areas like skin or fabric. Similar to the clone stamp, it requires manual source selection, but the blending occurs automatically during application, making it superior for repairs where exact duplication would appear unnatural. For instance, it effectively removes scars from portraits by borrowing nearby skin texture while adapting to local shadows.[56]

Spot healing, an automated variant, simplifies the process for small blemishes by sampling pixels from the immediate surrounding area without manual source selection. Introduced in Photoshop CS2 in 2005, the spot healing brush analyzes a radius around the target (typically 20-50 pixels) to blend content seamlessly, leveraging basic inpainting to fill spots like dust or acne. It excels in homogeneous regions but may struggle with edges or patterns, where manual healing is preferred. The tool's sample all layers option allows non-destructive edits on layered files.[57]

Content-aware fill represents a significant advancement in automated object removal, introduced by Adobe in Photoshop CS5 in 2010, using advanced inpainting to synthesize fills based on surrounding context rather than simple sampling. After selecting and deleting an object (e.g., via Lasso tool), the Edit > Fill command with Content-Aware mode generates plausible content by analyzing global image statistics and textures, often removing people or logos from backgrounds with minimal seams. This feature, powered by patch-based algorithms, outperforms manual cloning for large areas by propagating structures like lines or gradients intelligently.
For example, it can extend a grassy field to replace a removed signpost, drawing from distant similar patches.[58]

At the algorithmic core of these tools, particularly healing and content-aware methods, lie patch-based synthesis techniques that fill missing regions by copying and blending overlapping patches from known image areas. Seminal work by Efros and Leung in 1999 introduced non-parametric texture synthesis, where pixels or small patches are grown iteratively by finding the best-matching neighborhood from the input sample, preserving local statistics without parametric models. This approach laid the groundwork for exemplar-based inpainting, as refined by Criminisi et al. in 2004, which prioritizes structural elements like edges during patch selection using a confidence-based priority function: for a patch centered at a point $p$ on the fill front, the priority is

$P(p) = C(p)\,D(p)$

where $C(p)$ is the confidence term measuring the reliability of the data surrounding the patch and $D(p)$ is the isophote-driven data term (e.g., edge strength via Sobel gradients), ensuring linear structures propagate first.

To minimize visible seams in synthesized regions, graph cuts optimize patch boundaries by finding low-energy cuts in an overlap graph. Kwatra et al. in 2003 developed this for texture synthesis, modeling the overlap as a graph where nodes are pixels and edges are weighted by difference in intensity or gradient; the minimum cut (via max-flow) selects the optimal seam, reducing discontinuities. In inpainting, this integrates with patch synthesis to blend multi-pixel overlaps, as in Criminisi's method, where post-copy graph cuts refine boundaries for artifact-free results. These algorithms enable tools like content-aware fill to handle irregular shapes efficiently, with computational complexity scaling with patch size (typically 9x9 to 21x21 pixels) and image resolution.[59][60][61]
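By contrast, the manual clone stamp's pixel copying is simple enough to sketch directly; in this illustrative NumPy function (the names are placeholders, and the brush is assumed to lie within the image bounds), a circular region is sampled from one point and painted over another:

```python
import numpy as np

def clone_stamp(img, src_center, dst_center, radius):
    """Copy a circular brush of pixels from src_center onto dst_center.
    img is an (H, W, 3) array; centers are (y, x) and assumed in-bounds."""
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    brush = yy**2 + xx**2 <= radius**2               # circular brush mask
    sy, sx = src_center
    dy, dx = dst_center
    src = img[sy - radius:sy + radius + 1, sx - radius:sx + radius + 1]
    dst = img[dy - radius:dy + radius + 1, dx - radius:dx + radius + 1]
    dst[brush] = src[brush]                          # paint sampled pixels over target
    return img
```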
Layer-based compositing is a fundamental technique in digital image editing that allows users to stack multiple image elements on separate layers, enabling non-destructive manipulation and precise control over composition. Introduced in Adobe Photoshop 3.0 in 1994, this feature revolutionized workflows by permitting editors to overlay, blend, and adjust components without altering underlying data, facilitating complex assemblies in professional environments such as graphic design and photography.[62][63]

Layers come in several types, each serving distinct purposes in compositing. Pixel layers hold raster image data, supporting direct painting and editing with tools or filters to build or modify visual content. Adjustment layers apply tonal and color corrections non-destructively atop other layers, preserving the original pixels below. Shape layers store vector-based graphics, ensuring crisp scalability for logos or illustrations integrated into raster compositions. Smart objects embed linked or embedded content, such as images or vectors, allowing repeated scaling and transformations without quality loss, which is essential for maintaining resolution in iterative editing.[63]

Blending modes determine how layers interact, altering the appearance of stacked elements through mathematical operations on pixel values normalized between 0 and 1. The Normal mode simply overlays the top layer's color onto the base, replacing pixels directly without computation. In Multiply mode, the result darkens the image by multiplying the base and blend colors, yielding black for black inputs and unchanged colors for white; the formula is

$C_{\mathrm{result}} = C_{\mathrm{base}} \times C_{\mathrm{blend}}$

Screen mode lightens the composition by inverting and multiplying the colors, producing white for white inputs and unchanged for black; its formula is

$C_{\mathrm{result}} = 1 - (1 - C_{\mathrm{base}}) \times (1 - C_{\mathrm{blend}})$

These modes enable effects like simulating light interactions or creating depth in composites.[64]

Opacity settings on layers control transparency from 0 (fully transparent) to 1 (opaque), modulating the blend's influence via the equation

$C_{\mathrm{result}} = \alpha\,C_{\mathrm{blend}} + (1 - \alpha)\,C_{\mathrm{base}}$

This allows subtle integration of elements. Layers can be organized into groups for hierarchical management, collapsing related components to streamline navigation in complex projects. Masking within layers, often using grayscale thumbnails, hides or reveals portions non-destructively, similar to selection-based masking techniques but applied per layer for targeted compositing.[63][65]

In professional workflows, layer-based compositing supports iterative refinement, version control, and collaboration by isolating edits, reducing file sizes through smart objects, and enabling rapid previews of stacked designs, core to industries like advertising and film post-production.[63][66]
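A minimal NumPy sketch of the normal, multiply, and screen modes together with opacity mixing, for layers normalized to 0..1 (the function name is illustrative):

```python
import numpy as np

def blend(base, top, mode="normal", opacity=1.0):
    """Blend two layers with values normalized to 0..1."""
    if mode == "multiply":
        result = base * top                            # darkens the composite
    elif mode == "screen":
        result = 1.0 - (1.0 - base) * (1.0 - top)      # lightens the composite
    else:
        result = top                                   # "normal": top replaces base
    return opacity * result + (1.0 - opacity) * base   # opacity mixing
```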
Appearance Enhancement
Color correction and balance
Color correction and balance in image editing involves techniques to ensure accurate representation of colors as intended, removing unwanted casts and achieving neutrality across the image. This process is essential for maintaining color fidelity, particularly when images are captured under varying lighting conditions or displayed on different devices. White balance is a foundational method that compensates for the color temperature of the light source, making neutral tones appear truly white or gray.[67]

White balance adjustments can be performed automatically by software algorithms that analyze the image to detect and neutralize dominant color casts, often based on scene statistics or predefined lighting presets. Manual correction typically employs an eyedropper tool to sample a neutral gray area in the image, which sets the balance point for the entire photo. Additionally, sliders for temperature (measured in Kelvin, shifting from cool blue to warm orange) and tint (adjusting green-magenta shifts) allow fine-tuned control over the overall color neutrality. These methods are widely implemented in tools like Adobe Camera Raw, where the white balance tool directly influences RGB channel balances.[67][68]

Histogram-based adjustments, such as levels and curves, provide precise control over color distribution by mapping input pixel values to output values, enhancing overall balance without altering the image's core content. Levels adjustments target shadows, midtones, and highlights by setting black and white points on the histogram, which clips or expands tonal ranges to redistribute colors more evenly. Curves offer greater flexibility, representing the tonal mapping as a diagonal line on a graph where users add control points to create custom adjustments; the curve is interpolated using spline methods to ensure smooth transitions, defined mathematically as

$\mathrm{output} = f(\mathrm{input})$

where $f$ is a spline-interpolated function. This approach allows targeted color corrections across the tonal spectrum, such as balancing subtle hue shifts in landscapes.[69][70]

Selective color adjustments enable editors to isolate and modify specific hue ranges, such as enhancing skin tones by fine-tuning reds and yellows while preserving other colors. This technique works by defining color sliders for primary ranges (e.g., reds, blues) and secondary composites (e.g., skin), adjusting their saturation, lightness, and balance relative to CMYK or RGB components. It is particularly useful for correcting localized color issues, like yellowish casts on portraits, without global impacts.[71][72]

To achieve device-independent color balance, standards like ICC profiles are employed, which embed metadata describing how colors should be rendered across input, display, and output devices. The International Color Consortium released the first ICC specification in 1994, establishing a standardized file format for color transformations that ensures consistent fidelity from capture to reproduction. These profiles reference device-specific color spaces, often built on models like CIE Lab for perceptual uniformity.[73][74]
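Eyedropper-style gray-point balancing can be sketched as per-channel gain correction; a minimal NumPy illustration (the names and the 0..1 normalization are assumptions):

```python
import numpy as np

def gray_point_balance(img, sample_rgb):
    """Scale channels so a sampled region that should be neutral becomes neutral.
    img is a float RGB array in 0..1; sample_rgb is the picked region's mean color."""
    sample = np.asarray(sample_rgb, dtype=float)
    gains = sample.mean() / sample           # per-channel correction factors
    return np.clip(img * gains, 0.0, 1.0)
```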
Contrast, brightness, and gamma adjustments
Contrast, brightness, and gamma adjustments are fundamental techniques in image editing that modify the tonal distribution of an image to enhance visibility, correct exposure errors, and achieve desired aesthetic moods without altering color hue or saturation.[75] These adjustments primarily target luminance values across shadows, midtones, and highlights, allowing editors to balance the overall lightness or darkness while preserving perceptual uniformity.[76] In digital workflows, they are often implemented via sliders or curves in software interfaces, enabling non-destructive edits that can be fine-tuned to avoid loss of detail.

Brightness and contrast adjustments typically employ linear transformations to shift tonal values uniformly across the image.[75] The brightness slider adds or subtracts a constant value to each pixel's luminance, effectively lightening or darkening the entire image in a linear manner, while the contrast slider stretches or compresses the range around the mid-gray point by multiplying deviations from 128 (in 8-bit scale), increasing separation between light and dark areas.[75] However, excessive use risks clipping, where highlight values exceed the maximum (e.g., 255 in 8-bit) or shadows fall below zero, resulting in loss of detail as clipped areas become uniformly white or black.[75] To mitigate this, editors often preview adjustments using histograms, which visualize tonal distribution and warn of impending clipping.

Gamma correction provides a nonlinear adjustment to refine tonal reproduction, particularly for matching image data to display characteristics or perceptual response.[77] It applies the transformation given by the equation

$V_{\mathrm{out}} = V_{\mathrm{in}}^{\gamma}$

where $\gamma$ (gamma) is typically 2.2 for standard sRGB workflows, effectively decoding gamma-encoded images to linear light or vice versa to ensure accurate luminance rendering.[76] This nonlinear mapping preserves midtone details better than linear shifts, as it emphasizes perceptual uniformity by allocating more bit depth to darker tones, which the human eye perceives more gradually.[77] In practice, gamma adjustments in editing software allow fine control over the overall tone curve, improving visibility in underexposed shadows or taming harsh highlights without uniform linear changes.[69]

In RAW image editing, exposure compensation simulates in-camera adjustments by applying a linear multiplier to the raw sensor data, scaling photon counts to recover or enhance overall lightness before demosaicing.[78] Tools like Adobe Camera Raw's exposure slider adjust values in stops (e.g., +1.0 EV doubles brightness), leveraging the higher dynamic range of RAW files (often 12-14 bits) to minimize noise introduction compared to JPEG edits. This method is particularly useful for correcting underexposure, as it preserves latent detail in highlights that might otherwise be lost in processed formats.[78]

The use of gamma in digital image tools traces its origins to the 1980s, when CRT monitors' inherent nonlinear response, approximating a power function with exponent around 2.5, necessitated correction to achieve linear light output for accurate reproduction.[77] Standardization to gamma 2.2 in the 1990s, influenced by early digital video and graphics standards, carried over to modern software, ensuring compatibility across displays and workflows.[79] This historical adaptation continues to shape tools like Photoshop's gamma sliders, bridging analog display limitations with digital precision.[77]
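A minimal NumPy sketch of the power-law transform above, for images normalized to 0..1 (the function name is illustrative):

```python
import numpy as np

def adjust_gamma(img, gamma=2.2):
    """Power-law transform V_out = V_in ** gamma on a 0..1 image.
    gamma > 1 darkens midtones; gamma < 1 lifts them."""
    return np.clip(img, 0.0, 1.0) ** gamma
```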
Sharpening, softening, and noise reduction

Sharpening techniques enhance the perceived detail in images by increasing contrast along edges and textures, a process essential for compensating for the limitations of capture devices. The unsharp mask method, originally developed in the 1930s to improve X-ray image reproduction and later used in analog photography to enhance fine detail in high-contrast reproductions such as maps, was carried over to digital image processing in software such as Adobe Photoshop beginning in the 1990s.[80][81] The technique creates a mask by blurring the original image with a low-pass filter, typically Gaussian, then subtracts the blur from the original to isolate high-frequency edge detail, which is scaled by a controllable amount and added back to amplify transitions without altering overall brightness.[81] Mathematically, the sharpened output image is given by $g(x, y) = f(x, y) + k\,[f(x, y) - f_b(x, y)]$, where $f$ is the input image, $f_b$ is the blurred version, and $k$ (often between 0.5 and 2) controls the sharpening strength; this convolution-based approach highlights edges by emphasizing differences in pixel intensities.[81] A related edge-detection method employs the Laplacian filter, a second-order derivative operator that computes the divergence of the image gradient to detect rapid intensity changes, defined as $\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$.
The Laplacian kernel, such as the 3×3 matrix with a center weight of -4 and surrounding weights of 1, is convolved with the image, and the filtered output is combined with the original (subtracted, under this sign convention) to produce a sharpened result, effectively boosting edge responses.[82] The rise of charge-coupled device (CCD) sensors, which dominated digital cameras in the 1990s and produced images with softer edges than film due to finite sensor resolution and anti-aliasing, made digital sharpening indispensable for restoring perceptual acuity in post-processing workflows.[83]

Softening, conversely, applies blur effects to diffuse detail, either for artistic effect or to correct over-sharpening. Gaussian blur uses a rotationally symmetric kernel based on the Gaussian function to spread pixel values smoothly, preserving isotropy and minimizing artifacts in corrective applications such as reducing moiré patterns.[84] Motion blur simulates linear movement by averaging pixels along a directional vector, useful for artistic effects or as a model for deblurring compensation, while radial blur creates circular diffusion from a center point to mimic spinning or zooming, often employed in creative compositing.[85]

Noise reduction addresses imperfections such as sensor grain or compression artifacts by suppressing random variations while retaining structural content. Median filtering, introduced to image processing by Huang et al. in 1979, replaces each pixel with the median value of its neighborhood, excelling at removing impulsive "salt-and-pepper" noise from digital sensors without blurring edges as severely as linear filters.[86] For the more complex Gaussian or Poisson noise common in CCD captures, wavelet denoising decomposes the image into wavelet coefficients, applies soft-thresholding to shrink noise-dominated small coefficients toward zero, and reconstructs the signal; this method, pioneered by Donoho in 1995, achieves near-optimal risk bounds while preserving textures and edges in noisy images.[87]
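The unsharp mask formula and median filtering above can be sketched directly with SciPy's ndimage filters; the sigma, amount, and window-size values are illustrative, assuming an 8-bit grayscale array.

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """g = f + k*(f - blur(f)): Gaussian low-pass blur, subtract to
    isolate high-frequency detail, then add it back scaled by k."""
    f = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(f, sigma=sigma)
    sharpened = f + amount * (f - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

def median_denoise(img, size=3):
    """Replace each pixel with the median of its size-by-size neighborhood,
    removing salt-and-pepper noise while largely preserving edges."""
    return ndimage.median_filter(img, size=size)
```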
Advanced editing
Perspective and distortion correction
Perspective and distortion correction addresses geometric aberrations in images caused by lens imperfections or camera positioning, restoring accurate spatial relationships for applications such as photography and document processing. These corrections are essential in image editing software to mitigate effects such as barrel distortion, where straight lines bow outward, and pincushion distortion, where they curve inward, both arising from the radial properties of lenses.[88]

Lens correction typically employs polynomial models to remap distorted pixels to their ideal positions. The Brown-Conrady radial distortion model, a foundational approach, approximates the distortion with an even-order polynomial, $r_d = r_u\,(1 + K_1 r_u^2 + K_2 r_u^4 + \cdots)$, where $r_d$ is the distorted radial distance from the image center, $r_u$ is the undistorted distance, and the $K_i$ are coefficients fitted via calibration; negative $K_1$ values model barrel distortion, while positive ones model pincushion. This model, introduced by Duane C. Brown in 1966, enables precise compensation by inverting the transformation during editing.[88]

Perspective warp techniques correct viewpoint-induced distortions, such as converging lines in architectural shots, by aligning vanishing points and applying mesh-based transformations. Vanishing point detection identifies the convergence of parallel lines, often using line clustering or Hough transforms, to estimate the image's projective geometry; a subsequent homography or mesh warp then rectifies the view to a frontal plane. For instance, mesh transformations divide the image into a grid and adjust control points to conform to detected perspective planes, ensuring smooth deformation without artifacts.[89][90]

Keystone correction specifically targets trapezoidal distortions in scanned documents or projected images, where off-axis capture causes top-bottom asymmetry. Algorithms detect document boundaries or vanishing points to compute a projective transformation, pre-warping the image to yield a rectangular output; for example, camera-assisted methods use region-growing on projected patterns to infer the screen-to-camera mapping and apply an inverse distortion. This is particularly useful for mobile scanning, improving text readability for OCR.[91][89]

Software tools such as Adobe Camera Raw, introduced in 2003, incorporate profile-based auto-correction by matching lens metadata to pre-calibrated distortion profiles, automatically applying polynomial adjustments for common camera-lens combinations. These features streamline the workflow, combining manual sliders for fine-tuning with automated detection for efficiency.[92]
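As a sketch of the radial model in practice, the corrected image can be built by inverse mapping: for each output pixel, evaluate the polynomial to find the corresponding source location in the distorted image and resample there. This minimal Python/SciPy version assumes a 2D grayscale array and illustrative coefficients; real tools apply per-lens calibrated profiles and handle color channels separately.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_radial(img, k1, k2=0.0):
    """Correct radial lens distortion by inverse mapping under the model
    r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = max(cx, cy)                           # normalize radii to ~[0, 1]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xx - cx) / norm, (yy - cy) / norm  # undistorted coordinates
    r2 = xn**2 + yn**2
    scale = 1.0 + k1 * r2 + k2 * r2**2           # radial polynomial factor
    xs = xn * scale * norm + cx                  # distorted source x
    ys = yn * scale * norm + cy                  # distorted source y
    # Bilinear resampling at the computed source positions.
    return map_coordinates(img, [ys, xs], order=1, mode='nearest')
```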