Digital imaging
Digital imaging or digital image acquisition is the creation of a digital representation of the visual characteristics of an object,[1] such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to digitally propagate copies of the original subject indefinitely without any loss of image quality.
Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).
History
Before digital imaging, the first photograph ever produced, View from the Window at Le Gras, was taken in 1826 by the Frenchman Joseph Nicéphore Niépce. When Niépce was 28, he discussed with his brother Claude the possibility of reproducing images with light. He began focusing on his new innovations in 1816, although he was at the time more interested in creating an engine for a boat. The brothers pursued that invention for quite some time, and Claude moved to England to promote it. Niépce was then able to concentrate on the photograph, and in 1826 he finally produced his first photograph, a view through his window. The photograph took eight hours or more of exposure to light.[2]
The first digital image was produced in 1920 by the Bartlane cable picture transmission system, developed by the British inventors Harry G. Bartholomew and Maynard D. McFarlane. The process consisted of "a series of negatives on zinc plates that were exposed for varying lengths of time, thus producing varying densities".[3] The Bartlane cable picture transmission system generated at both its transmitter and its receiver end a punched data card or tape that was recreated as an image.[4]
In 1957, Russell A. Kirsch produced a device that generated digital data that could be stored in a computer; this used a drum scanner and photomultiplier tube.[3]
Digital imaging was developed in the 1960s and 1970s, largely to avoid the operational weaknesses of film cameras, for scientific and military missions including the KH-11 program. As digital technology became cheaper in later decades, it replaced the old film methods for many purposes.
In the early 1960s, while developing compact, lightweight, portable equipment for the onboard nondestructive testing of naval aircraft, Frederick G. Weighart[5] and James F. McNulty (U.S. radio engineer)[6] at Automation Industries, Inc., then in El Segundo, California, co-invented the first apparatus to generate a digital image in real time, a fluoroscopic digital radiograph. Square wave signals were detected on the fluorescent screen of a fluoroscope to create the image.
Digital image sensors
The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969.[7] While researching MOS technology, they realized that an electric charge was the analog of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next.[8] The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.[9]
Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD).[10] It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980.[10][11] It was a photodetector structure with low lag, low noise, high quantum efficiency and low dark current.[10] In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.[10]
The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels.[12][13] The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985.[14] The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993.[10] By 2007, sales of CMOS sensors had surpassed CCD sensors.[15]
Digital image compression
An important development in digital image compression technology was the discrete cosine transform (DCT).[16] DCT compression is used in JPEG, which was introduced by the Joint Photographic Experts Group in 1992.[17] JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet.[18]
Digital cameras
These different scanning ideas were the basis of the first designs of the digital camera. Early cameras took a long time to capture an image and were poorly suited for consumer purposes.[3] It was not until the adoption of the CCD (charge-coupled device) that the digital camera really took off. The CCD became part of the imaging systems used in telescopes and in the first black-and-white digital cameras in the 1980s.[3] Color was eventually added to the CCD and is a usual feature of cameras today.
Changing environment
Great strides have been made in the field of digital imaging. Negatives and exposure are foreign concepts to many, and since the first digital image in 1920 the field has seen ever cheaper equipment, increasingly powerful yet simple software, and the growth of the Internet.[19]
The constant advancement and production of physical equipment and hardware related to digital imaging has affected the environment surrounding the field. From cameras and webcams to printers and scanners, the hardware is becoming sleeker, thinner, faster, and cheaper. As the cost of equipment decreases, the market for new enthusiasts widens, allowing more consumers to experience the thrill of creating their own images.
Everyday personal laptops, family desktops, and company computers are able to handle photographic software. Our computers are more powerful machines with increasing capacities for running programs of any kind—especially digital imaging software. And that software is quickly becoming both smarter and simpler. Although functions on today's programs reach the level of precise editing and even rendering 3-D images, user interfaces are designed to be friendly to advanced users as well as first-time fans.
The Internet allows editing, viewing, and sharing digital photos and graphics. A quick browse around the web can easily turn up graphic artwork from budding artists, news photos from around the world, corporate images of new products and services, and much more. The Internet has clearly proven itself a catalyst in fostering the growth of digital imaging.
Online photo sharing of images changes the way we understand photography and photographers. Online sites such as Flickr, Shutterfly, and Instagram give billions the capability to share their photography, whether they are amateurs or professionals. Photography has gone from being a luxury medium of communication and sharing to more of a fleeting moment in time. Subjects have also changed. Pictures used to be primarily taken of people and family. Now, we take them of anything. We can document our day and share it with everyone with the touch of our fingers.[20]
In 1826 Niépce was the first to develop a photograph that used light to reproduce an image, and photography has advanced drastically since. Everyone is now a photographer in their own way, whereas during the 1800s and early 1900s lasting photographs were an expense that consumers and producers valued highly. A magazine article on five ways the digital camera changed us states: "The impact on professional photographers has been dramatic. Once upon a time a photographer wouldn't dare waste a shot unless they were virtually certain it would work." The use of digital imaging (photography) has changed the way we interact with our environment. Part of the world is now experienced through visual images and lasting memories, and photography has become a new form of communication with friends, family, and loved ones around the world that does not require face-to-face interaction. Through photography it is easy to see people you have never met and feel their presence without them being nearby; Instagram, for example, is a form of social media on which anyone can shoot, edit, and share photos of whatever they want with friends and family. Facebook, Snapchat, Vine, and Twitter are also ways people express themselves with little or no words and capture every moment that is important to them. Lasting memories that were once hard to capture are now easy to make, because everyone can take pictures and edit them on a phone or laptop. Photography has become a new way to communicate, and its use is increasing rapidly, which has affected the world around us.[21]
A study done by Basey, Maines, Francis, and Melbourne found that drawings used in class have a significant negative effect on lower-order content in students' lab reports, on students' perspectives of labs, on excitement, and on the time efficiency of learning, whereas documentation-style learning has no significant effects on students in these areas. The study also found that students were more motivated and excited to learn when using digital imaging.[22]
Field advancements
In the field of education:
- As digital projectors, screens, and graphics find their way to the classroom, teachers and students alike are benefitting from the increased convenience and communication they provide, although their theft can be a common problem in schools.[23] In addition, acquiring a basic digital imaging education is becoming increasingly important for young professionals. Reed, a design production expert from Western Washington University, stressed the importance of using "digital concepts to familiarize students with the exciting and rewarding technologies found in one of the major industries of the 21st century".[24]
In the field of medical imaging:
- Medical imaging, a branch of digital imaging that seeks to assist in the diagnosis and treatment of diseases, is growing at a rapid rate. A recent study by the American Academy of Pediatrics suggests that proper imaging of children who may have appendicitis may reduce the number of appendectomies needed. Further advancements include highly detailed and accurate imaging of the brain, lungs, tendons, and other parts of the body, images that can be used by health professionals to better serve patients.[25]
- According to Vidar, as more countries take up this new way of capturing images, image digitization in medicine has been found to be increasingly beneficial for both patients and medical staff. Positive ramifications of going paperless and heading towards digitization include an overall reduction in the cost of medical care, as well as increased global, real-time accessibility of these images.
- There is a standard called Digital Imaging and Communications in Medicine (DICOM) that is changing the medical world as we know it. DICOM is not only a system for taking high-quality images of the aforementioned internal organs, but is also helpful in processing those images. It is a universal standard that incorporates image processing, sharing, and analysis for the convenience of patient comfort and understanding. This service is all-encompassing and is becoming a necessity.[26]
In the field of technology, digital image processing has become more useful than analog image processing given modern technological advancements. Key techniques include the following:
- Image sharpening and restoration –
- Image sharpening and restoration is the processing of images captured by a modern camera to improve them, or to manipulate them toward a desired result. It comprises zooming, blurring, sharpening, grayscale-to-color conversion, image recovery, and image recognition (a minimal sharpening sketch appears after this list).
- Facial recognition –
- Face recognition is a computer technology that determines the positions and sizes of human faces in arbitrary digital images. It detects facial features and ignores everything else, such as buildings, trees, and bodies.
- Remote sensing –
- Remote sensing is the small- or large-scale acquisition of information about an object or phenomenon, using recording or real-time sensing devices that are not in physical or close contact with the object. In practice, remote sensing is stand-off data collection using a variety of devices to gather information about a specific object or area.
- Pattern recognition –
- Pattern recognition is a field of study within image processing. In pattern recognition, image processing is used to identify elements in images, and machine learning is then used to train a system to recognize variations in patterns. Pattern recognition is used in computer-aided diagnosis, handwriting recognition, image recognition, and more.
- Color processing –
- Color processing comprises the processing of colored images and the different color spaces that are used. It also involves the study of how color images are transmitted, stored, and encoded.
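As referenced in the sharpening entry above, here is a minimal sketch of kernel-based sharpening in Python. It assumes NumPy and Pillow are available; the file name photo.jpg is a placeholder, and the 3×3 kernel shown is one common choice rather than a canonical definition.

```python
import numpy as np
from PIL import Image

# A common 3x3 sharpening kernel: boost the center pixel and subtract
# the four direct neighbors, which exaggerates local contrast at edges.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float64)

def sharpen(gray: np.ndarray) -> np.ndarray:
    """Convolve a 2-D grayscale array with KERNEL (edge-padded)."""
    h, w = gray.shape
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):                      # accumulate the 9 shifted,
        for dx in range(3):                  # weighted copies of the image
            out += KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255).astype(np.uint8)

# "photo.jpg" is a placeholder path used for illustration only.
img = np.asarray(Image.open("photo.jpg").convert("L"))
Image.fromarray(sharpen(img)).save("photo_sharpened.png")
```

Blurring works the same way with an averaging kernel; only the 3×3 weights change.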
Augmented reality
Digital Imaging for Augmented Reality (DIAR) is a comprehensive field within the broader context of Augmented Reality (AR) technologies. It involves the creation, manipulation, and interpretation of digital images for use in augmented reality environments. DIAR plays a significant role in enhancing the user experience, providing realistic overlays of digital information onto the real world, thereby bridging the gap between the physical and the virtual realms.[27][28]
DIAR is employed in numerous sectors including entertainment, education, healthcare, military, and retail. In entertainment, DIAR is used to create immersive gaming experiences and interactive movies. In education, it provides a more engaging learning environment, while in healthcare, it assists in complex surgical procedures. The military uses DIAR for training purposes and battlefield visualization. In retail, customers can virtually try on clothes or visualize furniture in their home before making a purchase.[29]
With continuous advancements in technology, the future of DIAR is expected to witness more realistic overlays, improved 3D object modeling, and seamless integration with the Internet of Things (IoT). The incorporation of haptic feedback in DIAR systems could further enhance the user experience by adding a sense of touch to the visual overlays. Additionally, advancements in artificial intelligence and machine learning are expected to further improve the context-appropriateness and realism of the overlaid digital images.[30]
Theoretical application
Although theories are quickly becoming realities in today's technological society, the range of possibilities for digital imaging is wide open. One major application that is still in the works is that of child safety and protection. How can we use digital imaging to better protect our kids? Kodak's program, Kids Identification Digital Software (KIDS) may answer that question. The beginnings include a digital imaging kit to be used to compile student identification photos, which would be useful during medical emergencies and crimes. More powerful and advanced versions of applications such as these are still developing, with increased features constantly being tested and added.[31]
But parents and schools aren't the only ones who see benefits in databases such as these. Criminal investigation offices, such as police precincts, state crime labs, and even federal bureaus have realized the importance of digital imaging in analyzing fingerprints and evidence, making arrests, and maintaining safe communities. As the field of digital imaging evolves, so does our ability to protect the public.[32]
Digital imaging can be closely related to the social presence theory, especially when referring to the social media aspect of images captured by our phones. There are many different definitions of the social presence theory, but two that clearly define it are "the degree to which people are perceived as real" (Gunawardena, 1995) and "the ability to project themselves socially and emotionally as real people" (Garrison, 2000). Digital imaging allows people to manifest their social lives through images in order to give a sense of their presence to the virtual world. The presence of those images acts as an extension of oneself to others, giving a digital representation of what one is doing and who one is with. Digital imaging in the sense of cameras on phones helps facilitate this effect of presence with friends on social media. Alexander (2012) states, "presence and representation is deeply engraved into our reflections on images...this is, of course, an altered presence...nobody confuses an image with the represented reality. But we allow ourselves to be taken in by that representation, and only that 'representation' is able to show the liveliness of the absentee in a believable way." Therefore, digital imaging allows ourselves to be represented in a way that reflects our social presence.[33]
Photography is a medium used to capture specific moments visually. Through photography our culture has been given the chance to send information (such as appearance) with little or no distortion. The Media Richness Theory provides a framework for describing a medium's ability to communicate information without loss or distortion. This theory has provided the chance to understand human behavior in communication technologies. The article written by Daft and Lengel (1984,1986) states the following:
Communication media fall along a continuum of richness. The richness of a medium comprises four aspects: the availability of instant feedback, which allows questions to be asked and answered; the use of multiple cues, such as physical presence, vocal inflection, body gestures, words, numbers and graphic symbols; the use of natural language, which can be used to convey an understanding of a broad set of concepts and ideas; and the personal focus of the medium (pp. 83).
The more a medium is able to communicate the accurate appearance, social cues, and other such characteristics, the richer it becomes. Photography has become a natural part of how we communicate. For example, most phones have the ability to send pictures in text messages. The apps Snapchat and Vine have become increasingly popular for communicating. Sites like Instagram and Facebook have also allowed users to reach a deeper level of richness because of their ability to reproduce information (Sheer, V. C. (January–March 2011). "Teenagers' use of MSN features, discussion topics, and online friendship development: the impact of media richness and communication control". Communication Quarterly, 59(1)).
Methods
A digital photograph may be created directly from a physical scene by a camera or similar device. Alternatively, a digital image may be obtained from another image in an analog medium, such as photographs, photographic film, or printed paper, by an image scanner or similar device. Many technical images—such as those acquired with tomographic equipment, side-scan sonar, or radio telescopes—are actually obtained by complex processing of non-image data. Weather radar maps as seen on television news are a commonplace example. The digitalization of analog real-world data is known as digitizing, and involves sampling (discretization) and quantization. Projectional imaging of digital radiography can be done by X-ray detectors that directly convert the image to digital format. Alternatively, phosphor plate radiography is where the image is first taken on a photostimulable phosphor (PSP) plate which is subsequently scanned by a mechanism called photostimulated luminescence.
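As a toy illustration of the sampling (discretization) and quantization steps just described, and not of any particular device's pipeline, the following Python sketch digitizes a continuous one-dimensional brightness profile into 8-bit samples:

```python
import numpy as np

def analog_brightness(x):
    """Stand-in for a continuous analog signal: brightness along a scan line."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * 3 * x)  # values in [0, 1]

# Sampling: read the continuous signal at evenly spaced discrete positions.
num_samples = 16
positions = np.linspace(0.0, 1.0, num_samples, endpoint=False)
samples = analog_brightness(positions)

# Quantization: map each sample onto one of 2**8 = 256 discrete levels.
bit_depth = 8
levels = 2 ** bit_depth
digital = np.round(samples * (levels - 1)).astype(np.uint8)

print(digital)  # the digitized "scan line" as 8-bit pixel values
```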
Finally, a digital image can also be computed from a geometric model or mathematical formula. In this case, the name image synthesis is more appropriate, and it is more often known as rendering.
Digital image authentication is an issue[34] for the providers and producers of digital images such as health care organizations, law enforcement agencies, and insurance companies. There are methods emerging in forensic photography to analyze a digital image and determine if it has been altered.
Digital imaging previously depended on chemical and mechanical processes; now all of these processes have been converted to electronics. A few things need to take place for digital imaging to occur. First, light energy is converted to electrical energy: think of a grid with millions of little solar cells, each generating a specific electrical charge depending on the light that strikes it. The charges from each of these "solar cells" are transported and communicated to the firmware, which interprets the color and other light qualities. Pixels come next: their varying intensities create different colors, forming a picture or image. Finally, the firmware records the information for a future date and for reproduction.
Advantages
There are several benefits of digital imaging. First, the process enables easy access to photographs and word documents. Google is at the forefront of this 'revolution,' with its mission to digitize the world's books. Such digitization will make the books searchable, thus making participating libraries, such as Stanford University and the University of California Berkeley, accessible worldwide.[35] Digital imaging also benefits the medical world because it "allows the electronic transmission of images to third-party providers, referring dentists, consultants, and insurance carriers via a modem".[35] The process "is also environmentally friendly since it does not require chemical processing".[35] Digital imaging is also frequently used to help document and record historical, scientific and personal life events.[36]
Benefits also exist regarding photographs. Digital imaging will reduce the need for physical contact with original images.[37] Furthermore, digital imaging creates the possibility of reconstructing the visual contents of partially damaged photographs, thus eliminating the potential that the original would be modified or destroyed.[37] In addition, photographers will be "freed from being 'chained' to the darkroom," will have more time to shoot and will be able to cover assignments more effectively.[38] Digital imaging means that "photographers no longer have to rush their film to the office, so they can stay on location longer while still meeting deadlines".[39]
Another advantage of digital photography is that it has been extended to camera phones. We are able to take cameras with us wherever we go and to send photos instantly to others. Camera phones are easy for people to use, and they help in the process of self-identification for the younger generation.[40]
Criticisms
Critics of digital imaging cite several negative consequences. An increased "flexibility in getting better quality images to the readers" will tempt editors, photographers and journalists to manipulate photographs.[38] In addition, "staff photographers will no longer be photojournalists, but camera operators... as editors have the power to decide what they want 'shot'".[38]
References
- ^ Federal Agencies Digital Guidelines Initiative Glossary
- ^ Brown, Barbara N. (November 2002). "GCI/HRC Research World's First Photograph". Abbey Newsletter. Vol. 26, no. 3. Archived from the original on 2019-08-03.
- ^ a b c d Trussell, H. & Vrhel, M. (2008). "Introduction". Fundamentals of Digital Imaging: 1–6.
- ^ "The Birth of Digital Phototelegraphy", the papers of Technical Meeting in History of Electrical Engineering IEEE, Vol. HEE-03, No. 9-12, pp 7-12 (2003)
- ^ U.S. Patent 3,277,302, titled "X-Ray Apparatus Having Means for Supplying An Alternating Square Wave Voltage to the X-Ray Tube", granted to Weighart on October 4, 1964, showing its patent application date as May 10, 1963 and at lines 1-6 of its column 4, also, noting James F. McNulty's earlier filed co-pending application for an essential component of invention
- ^ U.S. Patent 3,289,000, titled "Means for Separately Controlling the Filament Current and Voltage on a X-Ray Tube", granted to McNulty on November 29, 1966 and showing its patent application date as March 5, 1963
- ^ James R. Janesick (2001). Scientific charge-coupled devices. SPIE Press. pp. 3–4. ISBN 978-0-8194-3698-6.
- ^ Williams, J. B. (2017). The Electronics Revolution: Inventing the Future. Springer. pp. 245–8. ISBN 978-3-319-49088-5.
- ^ Boyle, William S; Smith, George E. (1970). "Charge Coupled Semiconductor Devices". Bell Syst. Tech. J. 49 (4): 587–593. Bibcode:1970BSTJ...49..587B. doi:10.1002/j.1538-7305.1970.tb01790.x.
- ^ a b c d e Fossum, Eric R.; Hondongwa, D. B. (2014). "A Review of the Pinned Photodiode for CCD and CMOS Image Sensors". IEEE Journal of the Electron Devices Society. 2 (3): 33–43. doi:10.1109/JEDS.2014.2306412.
- ^ U.S. Patent 4,484,210: Solid-state imaging device having a reduced image lag
- ^ Fossum, Eric R. (12 July 1993). "Active pixel sensors: Are CCDS dinosaurs?". In Blouke, Morley M. (ed.). Charge-Coupled Devices and Solid State Optical Sensors III. Vol. 1900. International Society for Optics and Photonics. pp. 2–14. Bibcode:1993SPIE.1900....2F. CiteSeerX 10.1.1.408.6558. doi:10.1117/12.148585. S2CID 10556755.
- ^ Fossum, Eric R. (2007). "Active Pixel Sensors" (PDF). Eric Fossum. S2CID 18831792.
- ^ Matsumoto, Kazuya; et al. (1985). "A new MOS phototransistor operating in a non-destructive readout mode". Japanese Journal of Applied Physics. 24 (5A): L323. Bibcode:1985JaJAP..24L.323M. doi:10.1143/JJAP.24.L323. S2CID 108450116.
- ^ "CMOS Image Sensor Sales Stay on Record-Breaking Pace". IC Insights. May 8, 2018. Retrieved 6 October 2019.
- ^ Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing. 1 (1): 4–5. Bibcode:1991DSP.....1....4A. doi:10.1016/1051-2004(91)90086-Z.
- ^ "T.81 – DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES – REQUIREMENTS AND GUIDELINES" (PDF). CCITT. September 1992. Retrieved 12 July 2019.
- ^ "The JPEG image format explained". BT.com. BT Group. 31 May 2018. Retrieved 5 August 2019.
- ^ Reed, Mike (2002). "Graphic arts, digital imaging and technology education". THE Journal. 21 (5): 69+. Retrieved 28 June 2012.(subscription required)
- ^ Murray, Susan (August 2008). "Digital Images, Photo-Sharing, and Our Shifting Notions of Everyday Aesthetics". Journal of Visual Culture. 7 (2): 147–163. doi:10.1177/1470412908091935. S2CID 194064049.(subscription required)
- ^ de Castella, T. (12 January 2012). "Five ways the digital camera changed us". BBC.
- ^ "Impacts of Digital Imaging versus Drawing on Student Learning in Undergraduate Biodiversity Labs" (PDF). eric.ed.gov. Retrieved 22 December 2016.
- ^ Richardson, Ronny (2003). "Digital imaging: The wave of the future". THE Journal. 31 (3). Retrieved 28 June 2012.
- ^ Reed, Mike (2002). "Graphic arts, digital imaging and technology education". THE Journal. 21 (5): 69+. Retrieved 28 June 2012.
- ^ Bachur, R. G.; Hennelly, K.; Callahan, M. J.; Chen, C.; Monuteaux, M. C. (2012). "Diagnostic Imaging and Negative Appendectomy Rates in Children: Effects of Age and Gender". Pediatrics. 129 (5): 877–884. doi:10.1542/peds.2011-3375. PMID 22508920. S2CID 18881885.
- ^ Pianykh, Oleg S. (2009). Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide. Boston, Mass.: Springer. pp. 3–5. ISBN 978-3-642-10849-5.
- ^ Lui, Tsz-Wai (2020). Augmented reality and virtual reality: Changing realities in a dynamic world. Cham. ISBN 978-3-030-37868-4.
- ^ Prodromou, Theodosia (2020-01-01). Augmented Reality in Educational Settings. BRILL. doi:10.1163/9789004408845. ISBN 978-90-04-40883-8. S2CID 226667545.
- ^ Piroozfar, Poorang (2018). The application of Augmented Reality (AR) in the Architecture Engineering and Construction (AEC) industry.
- ^ Huang, Weidong (2012). Human Factors in Augmented Reality Environments. Springer Science & Business Media.
- ^ Willis, William (1997). "Digital imaging is innovative, useful, and now within educators' reach". THE Journal. 25 (2): 24+. Retrieved 28 June 2012.
- ^ Cherry, Michael; Edward Imwinkelried (2006). "A cautionary note about fingerprint analysis and reliance on digital technology". Judicature. 89 (6): 334+. Retrieved 28 June 2012.
- ^ Alexander, J. C. (2012). Iconic Power: Materiality and meaning in social life. New York: Palgrave Macmillan.
- ^ "Digital image authentication for evidence" (PDF). Archived from the original (PDF) on 2016-03-07. Retrieved 2011-03-05.
- ^ a b c Michels, S. (December 30, 2009). "Google's Goal: Digitize Every Book Ever Printed". PBS Newshour. Archived from the original on 29 September 2012. Retrieved 2 October 2012.
- ^ Gustavson, T. (2009). Camera: A history of photography from daguerreotype to digital. New York: Sterling Innovation.
- ^ a b Frey S (1999). "Digital Imaging as a Tool for Preservation". IADA Preprints: 191–4.
- ^ a b c Parker D (1988). "Ethical Implications of Electronic Still Cameras and Computer Digital Imaging in the Print Media". Journal of the Mass Media. 3 (2): 47–59. doi:10.1080/08900528809358322.
- ^ Fahmy S, Smith CZ (2003). "Photographers Note Digital's Advantages, Disadvantages". Newspaper Research Journal. 24 (2): 82–96. doi:10.1177/073953290302400206. S2CID 107853874.
- ^ Gai, B. (2009). "A World Through the Camera Phone Lens: A Case Study of Beijing Camera Phone Use". Knowledge, Technology & Policy. 22 (3): 195–204. doi:10.1007/s12130-009-9084-x. S2CID 109060999.
External links
- Latest Tech in Digital Imaging Speed: 12,000 Pages Per Hour.
- Rochester Institute of Technology. Digital Imaging and Remote Sensing Lab
- Cornell University. Digital imaging tutorial Archived 2004-02-11 at the Wayback Machine
- Digital Imaging FAQ/Frequently Asked Questions. Digital Imaging FAQ Archived 2015-09-22 at the Wayback Machine
- Dartmouth, Hany Farid. Digital Image Forensics
- Lectures on Image Processing, by Alan Peters. Vanderbilt University. Updated 7 January 2016.
- How Digital Cameras Work. HowStuffWorks. http://electronics.howstuffworks.com/cameras-photography/digital/digital-camera.htm
Fundamentals
Definition and Principles
Digital imaging is the process of capturing, storing, processing, and displaying visual information using computers, where continuous analog scenes are converted into discrete numerical representations composed of pixels.[8] This differs from analog imaging, which relies on continuous signals, as digital imaging employs analog-to-digital converters (ADCs) to sample and quantize analog inputs into binary data suitable for computational manipulation.[9] ADCs perform this conversion through sampling, which captures signal values at discrete intervals, followed by quantization, which maps those values to a finite set of digital levels, and encoding into binary format.[9]

At its core, a digital image consists of pixels, the fundamental units representing sampled points of color or intensity arranged in a two-dimensional grid with Cartesian coordinates.[8] Most digital images are raster-based, formed by a fixed array of pixels where each holds a specific value, making them resolution-dependent and ideal for capturing detailed photographs or scanned visuals.[10] In contrast, vector imaging represents graphics through mathematical equations defining lines, curves, and shapes, enabling infinite scalability without quality loss and suiting applications like logos or illustrations.[10]

Color and intensity in digital images are encoded using standardized models to replicate visual perception. The RGB model, an additive system for digital displays, combines red, green, and blue channels to produce a wide gamut of colors, with full intensity yielding white.[11] CMYK, a subtractive model for printing, uses cyan, magenta, yellow, and black inks to absorb light and form colors, though it covers a narrower gamut than RGB.[11] Grayscale representations simplify this to a single channel of intensity values ranging from black to white, often used for monochrome images or to emphasize luminance.[12]

The mathematical foundations of digital imaging ensure faithful representation without distortion. The Nyquist-Shannon sampling theorem establishes that the sampling frequency must be at least twice the highest spatial frequency in the original signal (f_s ≥ 2f_max) to allow perfect reconstruction and prevent aliasing, where high frequencies masquerade as lower ones.[13] This criterion implies a sampling interval no greater than half the period of the maximum frequency component, directly informing pixel density for adequate resolution.[8] Bit depth further refines precision by defining the number of discrete intensity levels per pixel; an 8-bit image per channel offers 256 levels, providing basic dynamic range for standard displays, whereas a 16-bit image expands to 65,536 levels, enhancing gradient smoothness and capturing subtler tonal variations in high-contrast scenes.[14]
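The two quantities above, minimum sampling rate and levels per bit depth, reduce to simple arithmetic; a brief Python sketch with example values:

```python
# Nyquist criterion: to capture detail up to f_max (cycles per mm)
# without aliasing, sample at f_s >= 2 * f_max.
f_max = 10.0                  # finest spatial frequency present (example value)
f_s_required = 2 * f_max      # minimum sampling frequency, samples per mm
print(f"sample at >= {f_s_required:.0f} samples/mm")

# Bit depth: discrete intensity levels per channel = 2**bits.
for bits in (8, 16):
    print(f"{bits}-bit channel -> {2 ** bits} levels")
# 8-bit  -> 256 levels
# 16-bit -> 65536 levels
```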
Core Components

Digital imaging systems rely on an integrated pipeline that transforms analog visual data into digital form and manages its processing, storage, and display. This pipeline generally starts with capture from sensors, proceeds through analog-to-digital converters (ADCs) that sample and quantize the signal into discrete pixel values, followed by digital signal processors (DSPs) for initial handling such as noise reduction and color correction, and culminates in output via interfaces like USB for data transfer or HDMI for video display.[15] The architecture ensures efficient data flow, with ADCs typically employing pipeline designs for high-speed conversion rates up to 100 MS/s in imaging applications.[16]

Key hardware components include input devices, storage media, and output displays, each playing a critical role in the creation and handling of digital images. Scanners serve as essential input devices by optically capturing printed images or documents and converting them into digital formats through line-by-line sensor readout, enabling the digitization of physical media for further processing.[17] Storage media such as hard disk drives (HDDs), solid-state drives (SSDs), and memory cards (e.g., SD cards) store the resulting image data; HDDs provide high-capacity archival storage via magnetic platters, while SSDs and memory cards offer faster read/write speeds using flash memory, making them ideal for portable imaging workflows.[18] Displays, particularly liquid crystal displays (LCDs) and organic light-emitting diode (OLED) panels, render digital images for viewing; LCDs use backlighting and liquid crystals to modulate light for color reproduction, whereas OLEDs emit light directly from organic compounds, achieving superior contrast ratios exceeding 1,000,000:1 and wider viewing angles.[19]

Software elements, including file formats and basic editing tools, standardize and facilitate the manipulation of digital image data. Common image file formats structure pixel data with metadata; for instance, JPEG employs lossy compression via discrete cosine transform to reduce file size while preserving perceptual quality, PNG uses lossless deflate compression with alpha channel support for transparency, and TIFF supports multiple layers and uncompressed data for professional archiving.[20] Basic software tools, such as raster graphics editors, enable viewing and simple editing of these files by operating on pixel grids; examples include Adobe Photoshop for layer-based adjustments and the open-source GIMP for cropping, resizing, and filtering operations.[21]

Resolution metrics quantify the quality and fidelity of digital images across spatial and temporal dimensions. Spatial resolution measures the detail captured or displayed, often expressed as pixels per inch (PPI) for screens, indicating pixel density, or dots per inch (DPI) for printing, where higher values like 300 DPI ensure sharp reproduction of fine details.[22] In video imaging, temporal resolution refers to the frame rate, typically 24–60 frames per second, which determines smoothness and the ability to capture motion without artifacts like blurring.[23] These components collectively operationalize pixel-based representations from foundational principles, forming the backbone of digital imaging systems.
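The spatial-resolution metrics above are likewise simple to compute; a short Python sketch (the panel dimensions are example values, not a reference device):

```python
import math

# Pixel density of a display: pixels along the diagonal divided by
# the diagonal length in inches.
width_px, height_px = 3840, 2160        # example 4K panel
diagonal_in = 27.0                      # example diagonal size
ppi = math.hypot(width_px, height_px) / diagonal_in
print(f"{ppi:.0f} PPI")                 # about 163 PPI

# Largest print that stays sharp at 300 DPI for this pixel count.
print(f"{width_px / 300:.1f} x {height_px / 300:.1f} inches at 300 DPI")
```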
Historical Development

Early Innovations
The origins of digital imaging trace back to the mid-20th century, with pioneering efforts to convert analog photographs into digital form for computer processing. In 1957, Russell A. Kirsch and his colleagues at the National Institute of Standards and Technology (NIST), then known as the National Bureau of Standards, developed the first drum scanner, a rotating cylinder device that mechanically scanned images using a light source and photomultiplier tube to produce electrical signals converted into binary data. This innovation produced the world's first digital image: a 176 by 176 pixel grayscale photograph of Kirsch's three-month-old son, Walden, scanned from a printed photo mounted on the drum. The resulting 30,976-pixel image demonstrated the feasibility of digitizing visual content, laying the groundwork for image processing algorithms despite its low resolution by modern standards.[4]

During the 1960s and 1970s, NASA's space exploration programs accelerated the adoption of digital imaging techniques, particularly for handling vast amounts of visual data from remote probes. The Ranger 7 mission, launched on July 28, 1964, marked a significant milestone as the first successful U.S. lunar probe to transmit close-up images of the Moon's surface, capturing 4,316 photographs in its final 17 minutes before impact on July 31. These analog video signals were received on Earth and digitized using early computer systems at the Jet Propulsion Laboratory (JPL), where custom image processing software enhanced contrast and reconstructed the data into usable digital formats, totaling over 17,000 images across the Ranger series. This effort established JPL's Image Processing Laboratory as a hub for digital techniques, addressing challenges like signal noise and data volume that foreshadowed compression needs in later systems. Concurrently, frame grabbers emerged as key hardware in the 1960s and 1970s to capture and digitize analog video frames into computer memory, enabling real-time image analysis in scientific applications; early examples included IBM's 1963 Scanistor, a scanning storage tube for converting video to digital signals.[24][25][26]

Institutional advancements in the 1960s further propelled digital imaging through dedicated research facilities at leading universities. At MIT, Project MAC (Multi-Access Computer), established in 1963, integrated computer graphics research, building on Ivan Sutherland's 1963 Sketchpad system, which introduced interactive vector graphics on the TX-2 computer and influenced early digital display technologies. Similarly, Stanford University fostered graphics innovation through its ties to industry and research initiatives, including work at the Stanford Artificial Intelligence Laboratory (SAIL), founded in 1963, where experiments in raster graphics and image synthesis began in the mid-1960s using systems like the PDP-6. These labs emphasized algorithmic foundations for rendering and manipulation, transitioning from line drawings to pixel-based representations.[27]

A pivotal transition from analog to digital capture occurred with the invention of the charge-coupled device (CCD) in 1969 by Willard Boyle and George E. Smith at Bell Laboratories. While brainstorming semiconductor memory alternatives, they conceived the CCD as a light-sensitive array that shifts charge packets corresponding to photons, enabling electronic image sensing without mechanical scanning. This breakthrough, detailed in their 1970 paper, allowed for high-sensitivity digital readout of images, revolutionizing acquisition by replacing bulky vidicon tubes in cameras and paving the way for compact sensors in subsequent decades. Boyle and Smith shared the 2009 Nobel Prize in Physics for this contribution, which fundamentally impacted space and consumer imaging.[28]
Key Technological Milestones

In the 1980s, digital imaging transitioned from experimental prototypes to early commercial viability. Sony introduced the Mavica in 1981, recognized as the world's first electronic still video camera, which captured analog images on a 2-inch video floppy disk and displayed them on a television screen, marking a pivotal shift away from film-based photography.[29] This innovation laid groundwork for portable electronic capture, though it relied on analog signals rather than fully digital processing. Concurrently, Kodak advanced digital camera technology through engineer Steven Sasson's prototype, with the company securing U.S. Patent 4,131,919 in 1978 for an electronic still camera that used a charge-coupled device (CCD) sensor to produce a 0.01-megapixel black-and-white image stored on cassette tape, though widespread commercialization was delayed.[30]

The 1990s saw the rise of consumer-accessible digital cameras and foundational standards that enabled broader adoption. Casio's QV-10, launched in 1995, became the first consumer digital camera with a built-in LCD screen for instant review, featuring a 0.3-megapixel resolution and swivel design that popularized point-and-shoot digital photography for everyday users.[31] This model, priced affordably at around $650, spurred market growth with 2 MB of built-in internal flash memory, allowing storage of approximately 96 images at its resolution. Complementing hardware advances, the Joint Photographic Experts Group (JPEG) finalized its image compression standard in 1992 (ISO/IEC 10918-1), based on discrete cosine transform algorithms, which dramatically reduced file sizes for color and grayscale images while maintaining visual quality, becoming essential for digital storage and web distribution.[32]

By the 2000s, digital imaging integrated deeply into mobile devices, with sensor technologies evolving for efficiency. Apple's iPhone, released in 2007, embedded a 2-megapixel camera into a smartphone, revolutionizing imaging by combining capture, editing, and sharing in a single device, which accelerated the decline of standalone digital cameras as mobile photography captured over 90% of images by the decade's end.[33] Parallel to this, complementary metal-oxide-semiconductor (CMOS) sensors gained dominance over CCDs by the mid-2000s, offering lower power consumption, faster readout speeds, and on-chip processing that reduced costs and enabled compact designs in consumer electronics.[34]

The 2010s and 2020s brought exponential improvements in resolution and intelligence, driven by computational methods. Smartphone sensors exceeded 100 megapixels by 2020, exemplified by Samsung's ISOCELL HM1 in the Galaxy S20 Ultra, which used pixel binning to deliver high-detail images from smaller pixels, enhancing zoom and low-light capabilities without proportionally increasing sensor size. Google's Pixel series, starting in 2016, pioneered AI-driven computational photography with features like HDR+ for multi-frame noise reduction and dynamic range enhancement, leveraging machine learning algorithms to produce professional-grade results from modest hardware.[35]
Acquisition Technologies

Image Sensors
Image sensors are semiconductor devices that convert incident light into electrical signals, forming the foundation of digital image acquisition through the photoelectric effect, where photons generate electron-hole pairs in a photosensitive material such as silicon.[36] This process relies on the absorption of photons with energy above the silicon bandgap (approximately 1.1 eV), producing charge carriers that are collected and measured to represent light intensity.[37] The efficiency of this conversion is quantified by quantum efficiency (QE), defined as the ratio of electrons generated to incident photons, typically ranging from 20% to 90% depending on wavelength and sensor design, with peak QE around 550 nm for visible light.[38]

The primary types of image sensors are charge-coupled devices (CCDs) and complementary metal-oxide-semiconductor (CMOS) sensors. CCDs, invented in 1969 by Willard Boyle and George E. Smith at Bell Laboratories, operate by transferring accumulated charge packets across an array of capacitors to a single output amplifier, enabling high-quality imaging with uniform response.[39] In contrast, CMOS sensors integrate amplification and processing circuitry directly on the chip, allowing for parallel readout from multiple pixels and lower power consumption.[40] Within CMOS architectures, active-pixel sensors (APS) incorporate a source-follower amplifier in each pixel to buffer the signal, reducing noise during readout compared to passive-pixel sensors (PPS), which rely solely on a photodiode and access transistor without per-pixel amplification, resulting in higher susceptibility to noise.

For color imaging, most sensors employ a color filter array, such as the Bayer filter, patented by Bryce E. Bayer at Eastman Kodak in 1976, which overlays a mosaic of red, green, and blue filters on the pixel array in a 50% green, 25% red, and 25% blue pattern to mimic human vision sensitivity.[41] This arrangement captures single-color information per pixel, with interpolation used to reconstruct full-color images.

Noise in image sensors arises from multiple sources, including shot noise, which is Poisson-distributed and stems from the random arrival of photons and dark current electrons, and thermal noise (Johnson-Nyquist noise), generated by random electron motion in resistive elements, particularly prominent at higher temperatures.[42] Key performance metrics include fill factor, the ratio of photosensitive area to total pixel area, often below 50% in early CMOS designs due to on-chip circuitry but improved via microlens arrays that focus light onto the photodiode, potentially increasing effective fill factor by up to three times.[43][44] Dynamic range, measuring the span from minimum detectable signal to saturation, typically achieves 12-14 stops in modern sensors, balancing signal-to-noise ratio and well capacity.[45]

CMOS sensors have evolved significantly since the 1990s, offering advantages in power efficiency (often milliwatts versus watts for CCDs) and integration of analog-to-digital converters on-chip, with backside-illuminated (BSI) CMOS designs, introduced commercially by Sony in 2009, flipping the silicon to expose the photodiode directly to light, thereby enhancing QE by 2-3 times and reducing crosstalk.[40][46]
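The sensor statistics described above, quantum efficiency and Poisson-distributed shot noise, can be simulated directly. Below is a minimal NumPy sketch with illustrative parameters not tied to any specific sensor; it uses the fact that thinning Poisson photon arrivals by the quantum efficiency yields another Poisson process with mean QE × photons:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative parameters only.
mean_photons = 1000.0   # mean photons striking one pixel per exposure
qe = 0.6                # quantum efficiency: electrons generated per photon

# Photon arrival is Poisson (shot noise); converting each photon to an
# electron with probability qe gives Poisson counts with mean qe * photons.
electrons = rng.poisson(qe * mean_photons, size=100_000)

signal = electrons.mean()
noise = electrons.std()
print(f"signal ~ {signal:.0f} e-, shot noise ~ {noise:.1f} e-, "
      f"SNR ~ {signal / noise:.1f}")  # SNR ~ sqrt(600) ~ 24.5
```

This square-root behavior is why doubling the collected light improves the signal-to-noise ratio by only a factor of about 1.4.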
Digital Cameras and Scanners

Digital cameras are complete imaging devices that integrate image sensors with optical systems, electronics, and user interfaces to capture still and moving images. They encompass various types tailored to different user needs and applications. Digital single-lens reflex (DSLR) cameras use a mirror and optical viewfinder to provide a real-time preview of the scene through the lens, allowing for precise composition and focus before capture.[47] Mirrorless cameras, lacking the mirror mechanism, offer a more compact design while using electronic viewfinders or rear LCD screens for preview, often resulting in faster autofocus and quieter operation compared to DSLRs.[48] Compact point-and-shoot cameras prioritize portability and simplicity, featuring fixed lenses and automated settings for everyday photography without the need for interchangeable components.[47] Smartphone cameras, embedded in mobile devices, leverage computational photography techniques to produce high-quality images from small sensors, enabling advanced features like hyperspectral imaging for applications in medicine and agriculture.[49] Action cameras, such as those from GoPro, are rugged, waterproof devices designed for extreme environments, capturing wide-angle video and photos during activities like sports or underwater exploration.[50]

Central to digital cameras are optical features that control light intake and focus. Lenses determine the focal length, which dictates the angle of view and subject magnification; shorter focal lengths provide wider perspectives, while longer ones offer narrower fields with greater zoom.[51] The aperture, measured in f-stops, regulates the amount of light entering the camera; lower f-numbers like f/2.8 allow more light for low-light conditions and shallower depth of field, enhancing creative control over background blur.[52] Autofocus systems enhance usability: phase-detection autofocus, common in DSLRs and high-end mirrorless models, splits incoming light to quickly determine focus direction and distance, enabling rapid locking on subjects.[53] In contrast, contrast-detection autofocus, often used in live view or compact cameras, analyzes image sharpness by detecting contrast edges, which can be slower but effective for static scenes.[54] Image stabilization mitigates blur from hand movement; optical image stabilization (OIS) shifts lens elements to counteract shake, while in-body image stabilization (IBIS) moves the sensor itself, providing broader compatibility across lenses.[55]

Data handling in digital cameras supports flexible capture and sharing workflows. Burst modes allow continuous shooting at high frame rates, such as up to 40 frames per second in RAW burst on advanced models, ideal for capturing fast action like sports.[56] RAW format preserves the full 14-bit sensor data without processing, offering maximum post-capture editing flexibility, whereas JPEG applies in-camera compression for smaller files suitable for quick sharing but with reduced dynamic range.[57] Modern cameras integrate wireless capabilities, including Wi-Fi for high-speed image transfer to computers or cloud storage and Bluetooth for low-energy connections to smartphones, facilitating seamless remote control and instant uploads via apps like SnapBridge.[58]

Scanners are specialized devices for converting physical media into digital images, primarily through linear or area sensors that systematically capture reflected or transmitted light.
Flatbed scanners, the most common type for general use, feature a flat glass platen where documents or photos are placed face-down, with a moving light source and sensor array scanning line by line to produce high-resolution digital files. They are widely applied in document digitization projects, such as archiving cultural heritage materials, where they handle bound books or fragile items without damage by avoiding mechanical feeding.[59] Drum scanners, historically significant for professional prepress work, wrap originals around a rotating drum illuminated by LED or laser sources, achieving superior color accuracy and resolution for high-end reproductions like artwork or film.[60] 3D scanners employ structured light or laser triangulation to capture surface geometry, generating point clouds that form digital 3D models for applications in reverse engineering or cultural preservation.[61][62] In document digitization, these devices enable the preservation of historical records by creating searchable, accessible digital archives, often integrated with optical character recognition for text extraction.[63]
Processing Techniques

Image Compression
Image compression is a fundamental technique in digital imaging that reduces the size of image files by eliminating redundancy while aiming to preserve visual quality, addressing the challenges posed by large pixel data volumes in storage and transmission.[64] It operates on the principle of encoding image data more efficiently, often leveraging mathematical transforms and statistical properties of pixel values. Two primary categories exist: lossless compression, which allows exact reconstruction of the original image, and lossy compression, which discards less perceptible information to achieve higher reduction ratios.[64]

Lossless compression techniques ensure no data loss, making them suitable for applications requiring pixel-perfect fidelity, such as medical imaging or archival storage. A prominent example is the Portable Network Graphics (PNG) format, which employs the DEFLATE algorithm, a combination of LZ77 dictionary coding for redundancy reduction and Huffman coding for entropy encoding of symbols based on their frequency. Huffman coding assigns shorter binary codes to more frequent symbols, optimizing bit usage without altering the image content; for instance, PNG achieves compression ratios of 2:1 to 3:1 for typical photographic images while remaining fully reversible. Other lossless methods include run-length encoding (RLE) for simple images and arithmetic coding, but DEFLATE's integration in PNG has made it widely adopted due to its balance of efficiency and computational simplicity.[65]

In contrast, lossy compression prioritizes significant size reduction for bandwidth-constrained scenarios like web delivery, accepting some quality degradation. The Joint Photographic Experts Group (JPEG) standard, formalized in 1992, exemplifies this through its baseline algorithm, which divides images into 8x8 pixel blocks and applies the discrete cosine transform (DCT) to convert spatial data into frequency coefficients.[66] The DCT concentrates energy in low-frequency components, enabling coarse quantization of high-frequency details that are less visible to the human eye, followed by Huffman or arithmetic entropy encoding to further minimize bits.[66] This process yields compression ratios up to 20:1 with acceptable quality, though artifacts like blocking, visible edges between blocks, emerge at higher ratios due to quantization errors.[66] JPEG variants, such as JFIF (JPEG File Interchange Format) for container structure and EXIF for metadata embedding, extend its utility in consumer photography.[66]

Advancing beyond DCT, the JPEG 2000 standard (ISO/IEC 15444-1) introduces wavelet transforms for superior performance, particularly in progressive and scalable decoding.[67] The discrete wavelet transform (DWT) decomposes the image into subbands using biorthogonal filters (e.g., 9/7-tap for lossy coding), separating low- and high-frequency content across multiple resolution levels without block boundaries.[67] Quantization and embedded block coding with optimized truncation (EBCOT) then encode coefficients, supporting both lossy (via irreversible wavelets) and lossless (via reversible integer wavelets) modes; JPEG 2000 typically outperforms JPEG by 20-30% in compression efficiency at equivalent quality levels, reducing artifacts like ringing or blocking.[67]

Modern standards like High Efficiency Image Format (HEIF, ISO/IEC 23008-12) build on High Efficiency Video Coding (HEVC/H.265) for even greater efficiency, achieving up to 50% file size reduction over JPEG at similar quality by using intra-frame prediction, transform coding, and advanced entropy encoding within an ISO base media file format container.[68][69] HEIF supports features like image bursts and transparency, with HEVC's block partitioning and deblocking filters minimizing artifacts, making it ideal for mobile and high-resolution imaging.[69] Other contemporary formats include WebP, developed by Google and standardized by the IETF (RFC 9649 in 2024), which uses VP8 or VP9 intra-frame coding for lossy compression and a custom lossless algorithm, achieving 25-34% smaller files than JPEG at comparable quality levels while supporting animation and transparency.[70] Similarly, AVIF (AV1 Image File Format, ISO/IEC 23000-22 finalized in 2020) leverages the AV1 video codec within the HEIF container for royalty-free encoding, offering 30-50% file size reductions over JPEG through advanced block partitioning, intra prediction, and transform coding, with broad support for HDR and wide color gamuts; it excels in web and mobile applications with minimal artifacts at high compression ratios.[71]

Quality assessment in image compression relies on metrics that balance rate (bits per pixel) and distortion. Peak Signal-to-Noise Ratio (PSNR) quantifies reconstruction fidelity by comparing the maximum signal power to mean squared error (MSE) between original and compressed images, expressed in decibels; higher values (e.g., >30 dB) indicate better quality, though PSNR correlates imperfectly with human perception. Underpinning these is rate-distortion theory, pioneered by Claude Shannon, which defines the rate-distortion function R(D) as the infimum of mutual information rates needed to achieve average distortion D, guiding optimal trade-offs in lossy schemes.

| Standard | Transform Type | Compression Type | Typical Ratio (at ~30-40 dB PSNR) | Key Artifacts |
|---|---|---|---|---|
| JPEG | DCT | Lossy | 10:1 to 20:1 | Blocking |
| PNG | DEFLATE (LZ77 + Huffman) | Lossless | 2:1 to 3:1 | None |
| JPEG 2000 | DWT (Wavelet) | Lossy/Lossless | 15:1 to 25:1 | Ringing |
| HEIF/HEVC | HEVC Intra | Lossy | 20:1 to 50:1 | Minimal |
| WebP | VP8/VP9 Intra | Lossy/Lossless | 15:1 to 30:1 | Minimal |
| AVIF | AV1 Intra | Lossy/Lossless | 20:1 to 50:1 | Minimal |
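PSNR as defined above is straightforward to compute. Below is a minimal NumPy sketch that measures the distortion introduced by a toy quantizer (keeping only the top 4 bits of each 8-bit pixel, a crude stand-in for a codec's quantization stage, applied here to random data rather than a real image):

```python
import numpy as np

def psnr(original: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Toy "compression": zero the low 4 bits, i.e. quantize 256 levels to 16.
coarse = image & 0xF0

print(f"PSNR = {psnr(image, coarse):.1f} dB")  # roughly 29 dB for this quantizer
```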
