Video camera tube
from Wikipedia
Vidicon tube, 2⁄3 inch (17 mm) in diameter
A display of numerous video camera tubes from the 1930s and 1940s, photographed in 1954, with iconoscope inventor Vladimir K. Zworykin.

Video camera tubes are devices based on the cathode-ray tube that were used in television cameras to capture television images, prior to the introduction of charge-coupled device (CCD) image sensors in the 1980s. Several different types of tubes were in use from the early 1930s, and as late as the 1990s.

In these tubes, an electron beam is scanned across an image of the scene to be broadcast, focused on a target. This generates a current that depends on the brightness of the image on the target at the scan point. The beam spot is tiny compared to the size of the target, allowing 480–486 horizontal scan lines per image in the NTSC format, 576 lines in PAL,[1] and as many as 1035 lines in Hi-Vision.
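The readout described above can be sketched as a toy model: a 2-D brightness pattern on the target is turned into a 1-D video signal by reading one horizontal line after another. The grid size and brightness values below are illustrative, not real tube parameters.

```python
# A minimal sketch (toy values) of raster scanning: the beam visits each
# horizontal line of the target in turn, top to bottom, producing a signal
# proportional to the local brightness at each scan point.
LINES, POINTS_PER_LINE = 4, 6          # real systems use e.g. 576 lines (PAL)
target = [[(r * POINTS_PER_LINE + c) / 23 for c in range(POINTS_PER_LINE)]
          for r in range(LINES)]        # brightness at each scan point, 0..1

signal = [b for line in target for b in line]  # line-by-line readout

assert len(signal) == LINES * POINTS_PER_LINE
assert signal[0] == 0.0 and signal[-1] == 1.0
```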

Cathode-ray tube


Any vacuum tube which operates using a focused beam of electrons, originally called cathode rays, is known as a cathode-ray tube (CRT). These are usually seen as display devices as used in older (i.e., non-flat panel) television receivers and computer displays. The camera pickup tubes described in this article are also CRTs, but they display no image.[2]

Early research


In June 1908, the scientific journal Nature published a letter in which Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), discussed how a fully electronic television system could be realized by using cathode-ray tubes (or "Braun" tubes, after their inventor, Karl Braun) as both imaging and display devices.[3] He noted that the "real difficulties lie in devising an efficient transmitter", and that it was possible that "no photoelectric phenomenon at present known will provide what is required".[3] A cathode-ray tube was successfully demonstrated as a displaying device by the German Professor Max Dieckmann in 1906; his experimental results were published by the journal Scientific American in 1909.[4] Campbell-Swinton later expanded on his vision in a presidential address given to the Röntgen Society in November 1911. The photoelectric screen in the proposed transmitting device was a mosaic of isolated rubidium cubes.[5][6] His concept for a fully electronic television system was later popularized as the "Campbell-Swinton Electronic Scanning System" by Hugo Gernsback and H. Winfield Secor in the August 1915 issue of the popular magazine Electrical Experimenter[7] and by Marcus J. Martin in the 1921 book The Electrical Transmission of Photographs.[8][9][10]

In a letter to Nature published in October 1926, Campbell-Swinton also announced the results of some "not very successful experiments" he had conducted with G. M. Minchin and J. C. M. Stanton. They had attempted to generate an electrical signal by projecting an image onto a selenium-coated metal plate that was simultaneously scanned by a cathode ray beam.[11][12] These experiments were conducted before March 1914, when Minchin died,[13] but they were later repeated by two different teams in 1937, by H. Miller and J. W. Strange from EMI,[14] and by H. Iams and A. Rose from RCA.[15] Both teams succeeded in transmitting "very faint" images with Campbell-Swinton's original selenium-coated plate, but much better images were obtained when the metal plate was covered with zinc sulphide or selenide,[14] or with aluminum or zirconium oxide treated with caesium.[15] These experiments would form the basis of the future vidicon. A description of a CRT imaging device also appeared in a patent application filed by Edvard-Gustav Schoultz in France in August 1921, and published in 1922,[16] although a working device was not demonstrated until some years later.[15]

Experiments with image dissectors

Farnsworth image dissector tube, 1931

An image dissector is a camera tube that creates an "electron image" of a scene from photocathode emissions (electrons) which pass through a scanning aperture to an anode, which serves as an electron detector.[17][1] Among the first to design such a device were German inventors Max Dieckmann and Rudolf Hell,[12][18] who had titled their 1925 patent application Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television).[19] The term may apply specifically to a dissector tube employing magnetic fields to keep the electron image in focus,[1] an element lacking in Dieckmann and Hell's design, and in the early dissector tubes built by American inventor Philo Farnsworth.[12][20]

Dieckmann and Hell submitted their application to the German patent office in April 1925, and a patent was issued in October 1927.[19] Their experiments on the image dissector were announced in the September 1927 issue of the popular magazine Discovery[21][22] and in the May 1928 issue of the magazine Popular Radio.[23] However, they never transmitted a clear and well focused image with such a tube.[citation needed]

In January 1927, American inventor and television pioneer Philo T. Farnsworth applied for a patent for his Television System that included a device for "the conversion and dissecting of light".[20] Its first moving image was successfully transmitted on September 7, 1927,[24] and a patent was issued in 1930.[20] Farnsworth quickly made improvements to the device, among them introducing an electron multiplier made of nickel[25][26] and using a "longitudinal magnetic field" in order to sharply focus the electron image.[27] The improved device was demonstrated to the press in early September 1928.[12][28][29] The introduction of a multipactor in October 1933[30][31] and a multi-dynode "electron multiplier" in 1937[32][33] made Farnsworth's image dissector the first practical version of a fully electronic imaging device for television.[34] It had very poor light sensitivity, and was therefore primarily useful only where illumination was exceptionally high (typically over 685 cd/m2).[35][36][37] However, it was ideal for industrial applications, such as monitoring the bright interior of an industrial furnace. Due to their poor light sensitivity, image dissectors were rarely used in television broadcasting, except to scan film and other transparencies.[citation needed]

In April 1933, Farnsworth submitted a patent application also entitled Image Dissector, but which actually detailed a CRT-type camera tube.[38] This is among the first patents to propose the use of a "low-velocity" scanning beam, and RCA had to buy it in order to sell image orthicon tubes to the general public.[39] However, Farnsworth never transmitted a clear and well focused image with such a tube.[40][41]

Dissectors were used only briefly for research in television systems before being replaced by much more sensitive tubes based on the charge-storage phenomenon, such as the iconoscope, during the 1930s. Although camera tubes based on image dissector technology quickly and completely fell out of use in television broadcasting, they continued to be used for imaging in early weather satellites and the Lunar lander, and for star attitude tracking on the Space Shuttle and the International Space Station.

Operation


The optical system of the image dissector focuses an image onto a photocathode mounted inside a high vacuum. As light strikes the photocathode, electrons are emitted in proportion to the intensity of the light (see photoelectric effect). The entire electron image is deflected and a scanning aperture permits only those electrons emanating from a very small area of the photocathode to be captured by the detector at any given time. The output from the detector is an electric current whose magnitude is a measure of the brightness of the corresponding area of the image. The electron image is periodically deflected horizontally and vertically ("raster scanning") such that the entire image is read by the detector many times per second, producing an electrical signal that can be conveyed to a display device, such as a CRT monitor, to reproduce the image.[17][1]
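The deflection-and-aperture arrangement above can be illustrated with a toy model: instead of moving a beam, the whole electron image is shifted so that each photocathode point in turn lines up with the fixed aperture, and the detector current tracks that point's emission. The grid size and emission values are assumed for illustration.

```python
# Hedged sketch of dissector readout: deflecting the electron image by
# (dx, dy) brings one photocathode point in front of the fixed aperture.
H, W = 3, 4
emission = [[r * W + c for c in range(W)] for r in range(H)]  # photoelectrons/s per point

def detector_current(dx, dy):
    """Current through the fixed aperture when the image is deflected by (dx, dy)."""
    return emission[dy][dx]

# Raster deflection reads the full image, point by point, many times per second.
frame = [detector_current(dx, dy) for dy in range(H) for dx in range(W)]
assert frame == list(range(H * W))
```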

The image dissector has no "charge storage" characteristic; the vast majority of electrons emitted by the photocathode are excluded by the scanning aperture,[18] and thus wasted rather than being stored on a photo-sensitive target.
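The sensitivity penalty of skipping charge storage can be estimated with a back-of-the-envelope calculation (assumed numbers): since only the electrons passing the aperture at each instant contribute, the usable fraction of photocathode emission is roughly one divided by the number of scan points per frame.

```python
# Why the dissector is insensitive, in one line of arithmetic: without
# charge storage, the usable fraction of emitted photoelectrons is about
# the aperture area over the photocathode area, i.e. 1 / (scan points).
lines, points_per_line = 500, 600          # assumed scan resolution
usable_fraction = 1 / (lines * points_per_line)
assert usable_fraction < 1e-5              # ~99.999% of photoelectrons wasted
```

A charge-storage tube, by contrast, integrates each point's emission over the whole frame, recovering roughly this factor in sensitivity.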

Charge-storage tubes


Iconoscope

A graphic from Kálmán Tihanyi's "Radioskop" patent from 1926 (part of the UNESCO's Memory of the World Programme)[42]
Zworykin holding an iconoscope tube
Diagram of the iconoscope, from Zworykin's 1931 patent

The early electronic camera tubes (like the image dissector) suffered from a fatal flaw: as the subject was scanned, only the tiny piece of light being viewed at the instant the scanning system passed over it contributed to the signal. A practical camera tube needed a different technological approach, which later became known as the charge-storage camera tube. It was based on a new physical phenomenon that was discovered and patented in Hungary in 1926, but became widely understood and recognised only from around 1930.[43]

An iconoscope is a camera tube that projects an image on a special charge storage plate containing a mosaic of electrically isolated photosensitive granules separated from a common plate by a thin layer of isolating material, somewhat analogous to the human eye's retina and its arrangement of photoreceptors. Each photosensitive granule constitutes a tiny capacitor that accumulates and stores electrical charge in response to the light striking it. An electron beam periodically sweeps across the plate, effectively scanning the stored image and discharging each capacitor in turn such that the electrical output from each capacitor is proportional to the average intensity of the light striking it between each discharge event.[44][45]
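The integrate-then-discharge cycle described above can be sketched in a few lines: each granule is a tiny capacitor charging at a rate set by its illumination, and the beam's visit both reads and resets it. The photocurrents and frame time are toy values.

```python
# Toy model of the charge-storage principle: each mosaic granule integrates
# photocurrent for a whole frame; the scanning beam then reads out and
# resets the accumulated charge.
photocurrent = [0.2, 0.5, 1.0]     # light-dependent charging rate per granule
frame_time = 1.0                   # time between successive beam visits

charges = [i * frame_time for i in photocurrent]   # charge stored per granule

def read_and_discharge(k):
    q, charges[k] = charges[k], 0.0   # beam discharge produces the signal
    return q

signal = [read_and_discharge(k) for k in range(len(charges))]
assert signal == [0.2, 0.5, 1.0]   # output tracks average light per granule
assert charges == [0.0, 0.0, 0.0]  # mosaic is reset for the next frame
```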

After Hungarian engineer Kálmán Tihanyi studied Maxwell's equations, he discovered a hitherto unknown physical phenomenon, which led to a breakthrough in the development of electronic imaging devices. He named it the charge-storage principle. The problem of low light sensitivity, resulting in low electrical output from transmitting or camera tubes, would be solved with the introduction of charge-storage technology by Tihanyi beginning in 1925.[46] His solution was a camera tube that accumulated and stored electrical charges (photoelectrons) within the tube throughout each scanning cycle. The device was first described in a patent application he filed in Hungary in March 1926 for a television system he dubbed Radioskop.[42] After further refinements included in a 1928 patent application,[46] Tihanyi's patent was declared void in Great Britain in 1930,[47] and so he applied for patents in the United States. Tihanyi's charge-storage idea remains a basic principle in the design of imaging devices for television to the present day.

In 1924, while employed by the Westinghouse Electric Corporation in Pittsburgh, Pennsylvania, Russian-born American engineer Vladimir Zworykin presented a project for a totally electronic television system to the company's general manager.[48][49] In July 1925, Zworykin submitted a patent application titled Television System that included a charge storage plate constructed of a thin layer of isolating material (aluminum oxide) sandwiched between a screen (300 mesh) and a colloidal deposit of photoelectric material (potassium hydride) consisting of isolated globules.[50] The following description can be read on page 2, lines 1 to 9: "The photoelectric material, such as potassium hydride, is evaporated on the aluminum oxide, or other insulating medium, and treated so as to form a colloidal deposit of potassium hydride consisting of minute globules. Each globule is very active photoelectrically and constitutes, to all intents and purposes, a minute individual photoelectric cell". Its first image was transmitted in late summer of 1925,[12] and a patent was issued in 1928.[50] However, the quality of the transmitted image failed to impress H.P. Davis, the general manager of Westinghouse, and Zworykin was asked "to work on something useful".[12] A patent for a television system was also filed by Zworykin in 1923, but this filing is not a definitive reference because extensive revisions were done before a patent was issued fifteen years later[39] and the file itself was divided into two patents in 1931.[51][52]

The first practical iconoscope was constructed in 1931 by Sanford Essig, when he accidentally left a silvered mica sheet in the oven too long. Upon examination with a microscope, he noticed that the silver layer had broken up into a myriad of tiny isolated silver globules.[53] He also noticed that, "the tiny dimension of the silver droplets would enhance the image resolution of the iconoscope by a quantum leap".[18] As head of television development at Radio Corporation of America (RCA), Zworykin submitted a patent application in November 1931, and it was issued in 1935.[45] Nevertheless, Zworykin's team was not the only engineering group working on devices that used a charge storage plate. In 1932, the EMI engineers Tedham and McGee under the supervision of Isaac Shoenberg applied for a patent for a new device they dubbed the "Emitron".[54] A 405-line broadcasting service employing the Emitron began at studios in Alexandra Palace in 1936, and patents were issued in the United Kingdom in 1934 and in the US in 1937.[55]

The iconoscope was presented to the general public at a press conference in June 1933,[56] and two detailed technical papers were published in September and October of the same year.[57][58][59] Unlike the Farnsworth image dissector, the Zworykin iconoscope was much more sensitive, useful with an illumination on the target between 40 and 215 lux (4–20 ft-c). It was also easier to manufacture and produced a very clear image.[citation needed] The iconoscope was the primary camera tube used by RCA broadcasting from 1936 until 1946, when it was replaced by the image orthicon tube.[60][61]

Super-Emitron and image iconoscope


The original iconoscope was noisy, had a high ratio of interference to signal, and ultimately gave disappointing results, especially when compared to the high definition mechanical scanning systems then becoming available.[62][63] The EMI team under the supervision of Isaac Shoenberg analyzed how the Emitron (or iconoscope) produces an electronic signal and concluded that its real efficiency was only about 5% of the theoretical maximum. This is because secondary electrons released from the mosaic of the charge storage plate when the scanning beam sweeps across it may be attracted back to the positively charged mosaic, thus neutralizing many of the stored charges.[64] Lubszynski, Rodda, and McGee realized that the best solution was to separate the photo-emission function from the charge storage one, and so communicated their results to Zworykin.[63][64]

The new video camera tube developed by Lubszynski, Rodda and McGee in 1934 was dubbed "the super-Emitron". This tube is a combination of the image dissector and the Emitron. It has an efficient photocathode that transforms the scene light into an electron image; the latter is then accelerated towards a target specially prepared for the emission of secondary electrons. Each individual electron from the electron image produces several secondary electrons after reaching the target, so that an amplification effect is produced. The target is constructed of a mosaic of electrically isolated metallic granules separated from a common plate by a thin layer of isolating material, so that the positive charge resulting from the secondary emission is stored in the granules. Finally, an electron beam periodically sweeps across the target, effectively scanning the stored image, discharging each granule, and producing an electronic signal as in the iconoscope.[65][66][67]

The super-Emitron was between ten and fifteen times more sensitive than the original Emitron and iconoscope tubes and, in some cases, this ratio was considerably greater.[64] It was used for an outside broadcast by the BBC, for the first time, on Armistice Day 1937, when the general public could watch on a television set as the King laid a wreath at the Cenotaph. This was the first time that anyone could broadcast a live street scene from cameras installed on the roof of neighboring buildings.[68]

Meanwhile, in 1934, Zworykin shared some patent rights with the German licensee company Telefunken.[69] The image iconoscope (Superikonoskop in Germany) was produced as a result of the collaboration. This tube is essentially identical to the super-Emitron, but its target is constructed of a thin layer of isolating material placed on top of a conductive base; the mosaic of metallic granules is absent. The production and commercialization of the super-Emitron and image iconoscope in Europe were not affected by the patent war between Zworykin and Farnsworth, because Dieckmann and Hell had priority in Germany for the invention of the image dissector, having submitted a patent application for their Lichtelektrische Bildzerlegerröhre für Fernseher (Photoelectric Image Dissector Tube for Television) in Germany in 1925,[19] two years before Farnsworth did the same in the United States.[20]

The image iconoscope (Superikonoskop) became the industrial standard for public broadcasting in Europe from 1936 until 1960, when it was replaced by the vidicon and Plumbicon tubes. Indeed, it was the representative of the European tradition in electronic tubes, competing against the American tradition represented by the image orthicon.[70][71] The German company Heimann produced the Superikonoskop for the 1936 Berlin Olympic Games,[72] and later produced and commercialized it from 1940 to 1955. Finally, the Dutch company Philips produced and commercialized the image iconoscope and multicon from 1952 until 1963,[71][73] when it was replaced by the much better Plumbicon.[74][75]

Operation


The super-Emitron is a combination of the image dissector and the Emitron. The scene image is projected onto an efficient continuous-film semitransparent photocathode that transforms the scene light into an electron image; the latter is then accelerated (and focused) via electromagnetic fields towards a target specially prepared for the emission of secondary electrons. Each individual electron from the electron image produces several secondary electrons after reaching the target, so that an amplification effect is produced, and the resulting positive charge is proportional to the integrated intensity of the scene light. The target is constructed of a mosaic of electrically isolated metallic granules separated from a common plate by a thin layer of isolating material, so that the positive charge resulting from the secondary emission is stored in the capacitor formed by the metallic granule and the common plate. Finally, an electron beam periodically sweeps across the target, effectively scanning the stored image and discharging each capacitor in turn such that the electrical output from each capacitor is proportional to the average intensity of the scene light between each discharge event (as in the iconoscope).[65][66][67]
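The two gain mechanisms described above, secondary-emission amplification at the target and frame-long charge storage, multiply together. A toy calculation with assumed numbers (not measured tube data) makes the point:

```python
# Sketch of the super-Emitron's amplification (assumed, illustrative numbers):
# stored charge per image point = (photoelectrons arriving per frame)
#                               x (secondaries ejected per arriving electron).
photoelectrons_per_frame = 40   # reaching one target point during one frame
secondary_yield = 5             # secondaries ejected per arriving image electron

stored_charge = photoelectrons_per_frame * secondary_yield
assert stored_charge == 200     # 5x what the same point would store without
                                # secondary-emission gain
```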

The image iconoscope is essentially identical to the super-Emitron, but its target is constructed of a thin layer of isolating material placed on top of a conductive base; the mosaic of metallic granules is absent. Therefore, secondary electrons are emitted from the surface of the isolating material when the electron image reaches the target, and the resulting positive charges are stored directly on the surface of the isolating material.[70]

Orthicon and CPS Emitron


The original iconoscope was very noisy[62] due to the secondary electrons released from the photoelectric mosaic of the charge storage plate when the scanning beam swept across it.[64] An obvious solution was to scan the mosaic with a low-velocity electron beam which produced less energy in the neighborhood of the plate, such that no secondary electrons were emitted at all. That is, an image is projected onto the photoelectric mosaic of a charge storage plate, so that positive charges are produced and stored there due to photo-emission and capacitance, respectively. These stored charges are then gently discharged by a low-velocity electron scanning beam, preventing the emission of secondary electrons.[76][18] Not all the electrons in the scanning beam are absorbed by the mosaic, because the stored positive charges are proportional to the integrated intensity of the scene light. The remaining electrons are then deflected back into the anode,[38][44] captured by a special grid,[77][78][79] or deflected back into an electron multiplier.[80]
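The role of beam velocity can be sketched with a crude model of secondary-emission yield versus landing energy. Below the "first crossover" energy a surface ejects fewer than one secondary per arriving primary, so a slow beam deposits charge instead of splashing out secondaries. The crossover value and the linear yield curve here are assumptions for illustration, not measured data.

```python
# Hedged toy model: secondary-emission yield vs. beam landing energy.
FIRST_CROSSOVER_EV = 30.0    # assumed; real values vary by surface material

def secondary_yield(landing_energy_ev):
    # crude linear stand-in for the real yield curve
    return max(0.0, landing_energy_ev) / FIRST_CROSSOVER_EV

assert secondary_yield(1500.0) > 1.0   # fast beam: net secondary emission (noise)
assert secondary_yield(5.0) < 1.0      # slow beam: electrons simply absorbed
```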

Low-velocity scanning beam tubes have several advantages: there are low levels of spurious signals and high efficiency of light-to-signal conversion, so that the signal output is maximal. However, there are serious problems as well, because the electron beam spreads and accelerates in a direction parallel to the target when it scans the image's borders and corners, so that it produces secondary electrons and one gets an image that is well focused in the center but blurry at the borders.[41][81] Henroteau was among the first inventors to propose, in 1929, the use of low-velocity electrons for stabilizing the potential of a charge storage plate,[82] but Lubszynski and the EMI team were the first to transmit a clear and well focused image with such a tube.[40] Another improvement is the use of a semitransparent charge storage plate. The scene image is then projected onto the back side of the plate, while the low-velocity electron beam scans the photoelectric mosaic on the front side. This configuration allows the use of a straight camera tube, because the scene to be transmitted, the charge storage plate, and the electron gun can be aligned one after the other.[18]

CPS Emitron television camera

The first fully functional low-velocity scanning beam tube, the CPS Emitron, was invented and demonstrated by the EMI team under the supervision of Sir Isaac Shoenberg.[83] In 1934, the EMI engineers Blumlein and McGee filed for patents for television transmitting systems where a charge storage plate was shielded by a pair of special grids, a negative (or slightly positive) grid lay very close to the plate, and a positive one was placed further away.[77][78][79] The velocity and energy of the electrons in the scanning beam were reduced to zero by the decelerating electric field generated by this pair of grids, and so a low-velocity scanning beam tube was obtained.[76][84] The EMI team kept working on these devices, and Lubszynski discovered in 1936 that a clear image could be produced if the trajectory of the low-velocity scanning beam was nearly perpendicular (orthogonal) to the charge storage plate in a neighborhood of it.[40][85] The resulting device was dubbed the cathode potential stabilized Emitron, or CPS Emitron.[76][86] The industrial production and commercialization of the CPS Emitron had to wait until the end of the Second World War;[84] it was widely used in the UK until 1963, when it was replaced by the much better Plumbicon.[74][75]

On the other side of the Atlantic, the RCA team led by Albert Rose began working in 1935 on a low-velocity scanning beam device they came to dub the orthicon.[87][88] Iams and Rose solved the problem of guiding the beam and keeping it in focus by installing specially designed deflection plates and deflection coils near the charge storage plate to provide a uniform axial magnetic field.[41][80][89] The orthicon's performance was similar to that of the image iconoscope,[90] but it was also unstable under sudden flashes of bright light, producing "the appearance of a large drop of water evaporating slowly over part of the scene".[18]

Image orthicon

Schematic of image orthicon tube
A 1960s-era RCA Radiotron image orthicon TV camera tube

The image orthicon (sometimes abbreviated IO) was common in American broadcasting from 1946 until 1968.[61] A combination of the image dissector and orthicon technologies, it replaced the iconoscope in the United States; the iconoscope had required a great deal of light to work adequately.[91]

The image orthicon tube was developed at RCA by Albert Rose, Paul K. Weimer, and Harold B. Law. It represented a considerable advance in the television field, and after further development work, RCA created original models between 1939 and 1940.[61] The National Defense Research Committee entered into a contract with RCA where the NDRC paid for its further development. Upon RCA's development of the more sensitive image orthicon tube in 1943, RCA entered into a production contract with the U.S. Navy, the first tubes being delivered in January 1944.[92] RCA began production of image orthicons for civilian use in the second quarter of 1946.[61][93]

While the iconoscope and the intermediate orthicon used capacitance between a multitude of small but discrete light sensitive collectors and an isolated signal plate for reading video information, the image orthicon employed direct charge readings from a continuous electronically charged collector. The resultant signal was immune to most extraneous signal crosstalk from other parts of the target, and could yield extremely detailed images. Image orthicon cameras were still being used by NASA for capturing Apollo/Saturn rockets nearing orbit, although the television networks had phased the cameras out.[94][failed verification]

An image orthicon camera can take television pictures by candlelight because of the more ordered light-sensitive area and the presence of an electron multiplier at the base of the tube, which operates as a high-efficiency amplifier. It also has a logarithmic light sensitivity curve similar to the human eye. However, it tends to flare in bright light, causing a dark halo to be seen around the object; this anomaly was referred to as blooming in the broadcast industry when image orthicon tubes were in operation.[95] Image orthicons were used extensively in early color television cameras such as the RCA TK-40/41, where the increased sensitivity of the tube was essential to overcome the very inefficient, beam-splitting optical system of the camera.[95][96]

The image orthicon tube was at one point colloquially referred to as an Immy. Harry Lubcke, the then-President of the Academy of Television Arts & Sciences, decided to have their award named after this nickname. Since the statuette was female, it was feminized into Emmy.[97] The image orthicon was used until the end of black and white television production in the 1960s.[98]

Operation


An image orthicon consists of three parts: a photocathode with an image store (target), a scanner that reads this image (an electron gun), and a multistage electron multiplier.[99]

In the image store, light falls upon the photocathode which is a photosensitive plate at a very negative potential (approx. -600 V), and is converted into an electron image (a principle borrowed from the image dissector). This stream of electrons is then accelerated towards the target (a very thin glass plate acting as a semi-isolator) at ground potential (0 V), and passes through a very fine wire mesh (nearly 200 or 390[100] wires per cm), very near (a few hundredths of a cm) and parallel to the target, acting as a screen grid at a slightly positive voltage (approx +2 V). Once the image electrons reach the target, they cause a splash of electrons by the effect of secondary emission. On average, each image electron ejects several splash electrons (thus adding amplification by secondary emission), and these excess electrons are soaked up by the positive mesh, effectively removing electrons from the target and causing a positive charge on it in relation to the incident light on the photocathode. The result is an image painted in positive charge, with the brightest portions having the largest positive charge.[101]

A sharply focused beam of electrons (a cathode ray) is generated by the electron gun at ground potential and accelerated by the anode (the first dynode of the electron multiplier) around the gun at a high positive voltage (approx. +1500 V). Once it exits the electron gun, its inertia makes the beam move away from the dynode towards the back side of the target. At this point the electrons lose speed and get deflected by the horizontal and vertical deflection coils, effectively scanning the target. Thanks to the axial magnetic field of the focusing coil, this deflection is not in a straight line, so when the electrons reach the target they do so perpendicularly, avoiding a sideways component. The target is nearly at ground potential with a small positive charge, thus when the electrons reach the target at low speed they are absorbed without ejecting more electrons. This adds negative charge to the positive charge until the region being scanned reaches some threshold negative charge, at which point the scanning electrons are reflected by the negative potential rather than absorbed (in this process the target recovers the electrons needed for the next scan). These reflected electrons return down the cathode-ray tube toward the first dynode of the electron multiplier surrounding the electron gun, which is at high potential. The number of reflected electrons is a linear measure of the target's original positive charge, which, in turn, is a measure of brightness.[102]
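The return-beam readout just described can be reduced to a toy arithmetic model: the beam delivers a fixed number of electrons per scanned point, the target absorbs just enough to neutralize its stored positive charge, and the remainder bounces back to the multiplier. All quantities below are illustrative units, not real electron counts.

```python
# Minimal sketch (toy units) of image orthicon return-beam readout:
# brighter points have stored more positive charge, absorb more of the
# beam, and therefore return fewer electrons to the multiplier.
beam_electrons = 100.0                      # electrons delivered per scanned point
positive_charge = [0.0, 30.0, 60.0]         # dark, mid, bright image points

returned = [beam_electrons - q for q in positive_charge]
assert returned == [100.0, 70.0, 40.0]      # linear in charge, inverted in brightness
```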

Dark halo

Dark halo around bright rocket flame in the television image of John Glenn's Mercury-Atlas 6 liftoff, 1962.

The mysterious dark "orthicon halo" around bright objects in an orthicon-captured image (also known as "blooming") is based on the fact that the IO relies on the emission of photoelectrons, but very bright illumination can produce more of them locally than the device can successfully deal with. At a very bright point on a captured image, a great preponderance of electrons is ejected from the photosensitive plate. So many may be ejected that the corresponding point on the collection mesh can no longer soak them up, and thus they fall back to nearby spots on the target instead, much as water splashes in a ring when a rock is thrown into it. Since the resultant splashed electrons do not contain sufficient energy to eject further electrons where they land, they will instead neutralize any positive charge that has built up in that region. Since darker images produce less positive charge on the target, the excess electrons deposited by the splash will be read as a dark region by the scanning electron beam.[citation needed]
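The splash mechanism can be sketched in one dimension with assumed numbers: a saturated bright point sheds its excess electrons onto its neighbors, cancelling part of their stored positive charge, which the readout then renders as a dark ring around the highlight.

```python
# Toy 1-D model of the orthicon halo. Index 2 is a very bright point whose
# charge exceeds what the mesh can absorb here; the excess electrons fall
# back onto the neighboring points and cancel their positive charge.
charge = [10.0, 10.0, 80.0, 10.0, 10.0]   # stored positive charge per point
mesh_capacity = 50.0                       # assumed local absorption limit

excess = max(0.0, charge[2] - mesh_capacity)   # electrons that splash back
charge[2] = mesh_capacity
for k in (1, 3):                               # neighbors lose positive charge
    charge[k] -= excess / 2

assert charge == [10.0, -5.0, 50.0, -5.0, 10.0]  # dark halo flanks the highlight
```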

This effect was actually cultivated by tube manufacturers to a certain extent, as a small, carefully controlled amount of the dark halo has the effect of crispening the visual image due to the contrast effect, giving the illusion that the image is more sharply focused than it actually is. The later vidicon tube and its descendants (see below) do not exhibit this effect, and so could not be used for broadcast purposes until special detail correction circuitry could be developed.[103]

Vidicon


A vidicon tube is a video camera tube design in which the target material is a photoconductor. The vidicon was developed in 1950 at RCA by P. K. Weimer, S. V. Forgue and R. R. Goodrich as a simple alternative to the structurally and electrically complex image orthicon.[98][104][105][106] While the initial photoconductor used was selenium, other targets—including silicon diode arrays—have been used. Vidicons with these targets are known as Si-vidicons or Ultricons.[107][108]

Schematic of vidicon tube

The vidicon is a storage-type camera tube in which a charge-density pattern is formed by the imaged scene radiation on a photoconductive surface which is then scanned by a beam of low-velocity electrons. This surface is on a glass plate and is also called the target.[100][109] More specifically, this glass plate is covered in a transparent, electrically conductive, indium tin oxide (ITO) layer, on top of which the photoconductive surface is formed by depositing photoconductive material which can be applied as small squares with insulation between the squares. The photoconductor is normally an insulator but becomes partially conductive when struck by electrons.[100] The output of the tube comes from the ITO layer.[107]

The target is kept at a positive voltage of 30 volts and the cathode in the tube at negative 30 volts. The cathode releases electrons, which are modulated by grid G1 and accelerated by grid G2, forming an electron beam. Magnetic coils deflect, focus, and align the electron beam so it can scan the surface of the target. The beam deposits electrons on the target, and when enough photons strike the target, a difference in current is produced between its two electrically conductive layers; this current, flowing through a load resistor, appears as a voltage. The fluctuating voltage created at the target is coupled to a video amplifier[100] and used to reproduce the scene being imaged; in other words, it is the video output. The electrical charge produced by an image remains in the faceplate until it is scanned or until the charge dissipates. Special vidicons can have resolutions of up to 5,000 TV lines.[110]
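The readout described above can be sketched numerically. The following is a minimal illustrative model, not device data: the load resistor and photoconductor resistances below are assumptions chosen only to show how brighter light (lower photoconductor resistance) yields a larger signal voltage across the load.

```python
# Illustrative sketch of vidicon signal formation; all component values are
# assumptions, not measured device parameters.

V_TARGET = 30.0    # target bias relative to cathode, volts (from the text)
R_LOAD = 50e3      # assumed load resistor, ohms
R_DARK = 1e12      # assumed photoconductor resistance in darkness, ohms
R_LIGHT = 1e8      # assumed resistance under full illumination, ohms

def video_voltage(relative_brightness):
    """Signal voltage for one target element; brightness in [0.0, 1.0]."""
    # Brighter light lowers the photoconductor's resistance (log interpolation).
    r = R_DARK * (R_LIGHT / R_DARK) ** relative_brightness
    i_signal = V_TARGET / r          # average current the beam must resupply
    return i_signal * R_LOAD         # voltage developed across the load

print(video_voltage(0.0))   # dark element: microvolt-scale output
print(video_voltage(1.0))   # bright element: millivolt-scale output
```

The orders of magnitude (tens of nanoamperes to microamperes of signal current) are in the range real vidicons produced, which is why a high-gain video amplifier follows the target.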

By using a pyroelectric material such as triglycine sulfate (TGS) as the target, a vidicon sensitive over a broad portion of the infrared spectrum[111] is possible. This technology was a precursor to modern microbolometer technology, and mainly used in firefighting thermal cameras.[112]

Close-up of an RCA vidicon tube, showing the electron gun.

Prior to the design and construction of the Galileo probe to Jupiter in the late 1970s and early 1980s, NASA used vidicon cameras on nearly all of its unmanned deep-space probes equipped with remote-sensing capability.[113] Vidicon tubes were also used aboard the first three Landsat Earth-imaging satellites launched in 1972, as part of each spacecraft's Return Beam Vidicon (RBV) imaging system.[114][115][116] The Uvicon, a UV-sensitive variant of the vidicon, was also used by NASA for ultraviolet imaging.[117]

Vidicon tubes were popular in the 1970s and 1980s, after which they were rendered obsolete by solid-state image sensors, first the charge-coupled device (CCD) and later the CMOS sensor.

All vidicon and similar tubes are prone to image lag, better known as ghosting, smearing, burn-in, comet tails, luma trails or luminance blooming. Image lag is visible as noticeable (usually white or colored) trails that appear after a bright object (such as a light or reflection) has moved, leaving a trail that eventually fades into the image.[118] It cannot be avoided or eliminated, as it is inherent to the technology. The degree to which the image is affected depends on the properties and capacitance of the target material (known as the storage effect), as well as the resistance of the electron beam used to scan the target. The higher the capacitance of the target, the more charge it can hold and the longer the trail takes to disappear. The remaining charge on the target eventually dissipates, making the trail disappear.[119]
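The fade of these trails can be pictured with a simple RC-decay sketch (an illustrative assumption, not a measured tube characteristic): the leftover target charge decays exponentially with a time constant set by the target's capacitance and resistance, so a higher-capacitance target holds its trail for more frames.

```python
import math

FRAME_TIME = 1 / 30   # seconds per frame at NTSC rates

def residual_charge(frames_elapsed, tau=0.1):
    """Fraction of trail charge remaining; tau is an assumed RC constant (s)."""
    return math.exp(-frames_elapsed * FRAME_TIME / tau)

# A larger time constant (higher target capacitance) means a slower fade:
for n in (1, 5, 15):
    print(n, residual_charge(n))
```

With the assumed 0.1 s constant, roughly a third of the trail charge survives after three frames, which matches the visible several-frame smear the text describes.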

Vidicons can be damaged by high-intensity light exposure.[120] Image burn-in occurs when an image is captured by a vidicon for a long time; it appears as a persistent outline of the image after the scene changes, and the outline fades over time. Vidicons can also be damaged by direct exposure to the sun, which causes them to develop dark spots.[121][122] Vidicons often used antimony trisulfide as the photoconductive material,[107] but such tubes were not very successful because of image lag, as seen in the RCA TK-42 color camera.[106]

Si-vidicon (1969)

[edit]

Si-vidicons, silicon vidicons[123] or Epicons[124] are vidicons using arrays of silicon diodes for the target; they were introduced in 1969 for the Picturephone.[125] They are very resistant to burn-in and have low image lag and very high sensitivity, but they are not considered suitable for broadcast TV production because they suffer from high image blooming and image non-uniformity. The targets in these tubes are made on silicon substrates using semiconductor device fabrication processes and require 10 volts to operate.[124] These tubes could be paired with an image intensifier, in which case they were known as silicon intensified tubes (SITs); these had an additional photocathode in front of the target that produced large numbers of electrons when struck by photons, which were accelerated toward the target with several hundred volts. Such tubes were used for tracking satellite debris.[107]

Plumbicon (1965)

[edit]
Not-to-scale schematic of a Plumbicon tube (the width of the tube is exaggerated compared to the length)

Plumbicon is a registered trademark of Philips from 1963 for its lead(II) oxide (PbO) target vidicons.[126] It was demonstrated in 1965 at the NAB Show.[127][128] Used frequently in broadcast camera applications, these tubes have low output but a high signal-to-noise ratio. They have excellent resolution compared to image orthicons, but lack the artificially sharp edges of IO tubes, which caused some of the viewing audience to perceive Plumbicon images as softer. CBS Labs invented the first outboard edge-enhancement circuits to sharpen the edges of Plumbicon-generated images.[129][130][131] Philips received the 1966 Technology & Engineering Emmy Award for the Plumbicon.[132] Targets in Plumbicons have two layers: a pure PbO layer and a doped PbO layer. Pure PbO is an intrinsic (i-type) semiconductor, and a layer of it is doped to create p-type PbO, thus creating a semiconductor junction.[133] The PbO is in crystalline form.[134]

Plumbicons were the first commercially successful version of the vidicon. They were smaller, had lower noise, higher sensitivity and resolution, and less image lag than vidicons,[106] and were a defining factor in the development of color TV cameras.[98] The most widely used camera tubes in TV production were the Plumbicon and the Saticon.[107] Compared to Saticons, Plumbicons have much higher resistance to burn-in and to comet and trailing artifacts from bright lights in the shot; Saticons, though, usually have slightly higher resolution. After 1980 and the introduction of the diode-gun Plumbicon tube, the resolution of both types was so high, compared to the maximum limits of the broadcasting standard, that the Saticon's resolution advantage became moot. While broadcast cameras migrated to solid-state charge-coupled devices, Plumbicon tubes remained a staple imaging device in the medical field.[129][130][131] High-resolution Plumbicons were made for the HD-MAC standard.[135] Since PbO is not stable in air, depositing it on the target is challenging.[136] Vistacons developed by RCA[137] and Leddicons made by EEV[138] also use PbO in their targets.[98]

Until 2016, Narragansett Imaging was the last company making Plumbicons, using factories Philips built in Rhode Island, USA. While still a part of Philips, the company purchased EEV's (English Electric Valve) lead oxide camera tube business, and gained a monopoly in lead-oxide tube production.[129][130][131] Lead oxide tubes were also made by Matsushita.[139][140]

Saticon (1973)

[edit]

Saticon is a registered trademark of Hitachi from 1973, also produced by Thomson and Sony. It was developed in a joint effort by Hitachi and NHK Science & Technology Research Laboratories (NHK is the Japan Broadcasting Corporation) and introduced in 1973.[141][142] Its surface consists of selenium with trace amounts of arsenic and tellurium added (SeAsTe) to make the signal more stable; the "SAT" in the name is derived from SeAsTe.[143] Saticon tubes have an average light sensitivity equivalent to that of 64 ASA film.[144] Compared to the Plumbicon, the Saticon has a less advantageous operating temperature range and more image lag.[107] The target in a Saticon has a transparent, electrically conductive tin oxide layer, followed by a SeAsTe layer, a SeAs layer, and an antimony trisulfide layer that faces the electron beam.[141]

A high-gain avalanche rushing amorphous photoconductor (HARP) made of amorphous selenium (a-Se) can be used to increase light sensitivity to up to 10 times that of conventional Saticons; Saticons with this kind of target are known as HARPICONs. The target in a HARPICON is made up of ITO (indium tin oxide), CeO2 (cerium oxide), selenium doped with arsenic and lithium fluoride, selenium doped with arsenic and tellurium, amorphous selenium doped with arsenic, and antimony trisulfide.[145][146][147][144] Saticons were made for the Sony HDVS system, used to produce early analog high-definition television using multiple sub-Nyquist sampling encoding (MUSE).[144]

Pasecon (1972)

[edit]

Originally developed by Toshiba in 1972 as chalnicon, Pasecon is a registered trademark of Heimann GmbH from 1977. Its surface consists of cadmium selenide trioxide (CdSeO3). Due to its wide spectral response, it is labelled as panchromatic selenium vidicon, hence the acronym 'pasecon'.[143][148] It is not considered suitable for broadcast TV production, as it suffers from high image lag.[107]

Newvicon (1974)

[edit]

Newvicon is a registered trademark of Matsushita from 1973,[149] introduced in 1974.[150][151] Newvicon tubes were characterized by high light sensitivity. Their surface consists of a combination of zinc selenide (ZnSe) and zinc cadmium telluride (ZnCdTe).[143] They are not considered suitable for broadcast TV production, as they suffer from high image lag and non-uniformity.[107]

Trinicon (1971)

[edit]

Trinicon is a registered trademark of Sony from 1971.[152] It uses a vertically striped RGB color filter over the faceplate of an otherwise standard vidicon imaging tube to segment the scan into corresponding red, green and blue segments. Only one tube was used in the camera, instead of a tube for each color, as was standard for color cameras used in television broadcasting. It is used mostly in low-end consumer cameras, such as the HVC-2200 and HVC-2400 models, though Sony also used it in some moderate cost professional cameras in the 1970s and 1980s, such as the DXC-1600 series.[153]

Although the idea of using color stripe filters over the target was not new, the Trinicon was the only tube to use the primary RGB colors. This necessitated an additional electrode buried in the target to detect where the scanning electron beam was relative to the stripe filter. Previous color-stripe systems had used filter colors chosen so that the circuitry could separate the colors purely from the relative amplitudes of the signals.[154] As a result, the Trinicon featured a larger dynamic range of operation.

Sony later combined the Saticon tube with the Trinicon's RGB color filter, providing low-light sensitivity and superior color. This type of tube was known as the SMF Trinicon tube, or Saticon Mixed Field. SMF Trinicon tubes were used in the HVC-2800 and HVC-2500 consumer cameras, the DXC-1800 and BVP-1 professional cameras, as well as the first Betamovie camcorders. Toshiba offered a similar tube in 1974,[155] and Hitachi also developed a similar Saticon with a color filter in 1981.[156]

Light biasing

[edit]

All the vidicon type tubes except the vidicon itself were able to use a light biasing technique to improve the sensitivity and contrast. The photosensitive target in these tubes suffered from the limitation that the light level had to rise to a particular level before any video output resulted. Light biasing was a method whereby the photosensitive target was illuminated from a light source just enough that no appreciable output was obtained, but such that a slight increase in light level from the scene was enough to provide discernible output. The light came from either an illuminator mounted around the target, or in more professional cameras from a light source on the base of the tube and guided to the target by light piping. The technique would not work with the baseline vidicon tube because it suffered from the limitation that as the target was fundamentally an insulator, the constant low light level built up a charge which would manifest itself as a form of fogging. The other types had semiconducting targets which did not have this problem.

Color cameras

[edit]

Early color cameras used the obvious technique of using separate red, green and blue image tubes in conjunction with a color separator, a technique still in use with 3CCD solid state cameras today. It was also possible to construct a color camera that used a single image tube. One technique has already been described (Trinicon above). A more common technique and a simpler one from the tube construction standpoint was to overlay the photosensitive target with a color striped filter having a fine pattern of vertical stripes of green, cyan and clear filters (i.e. green; green and blue; and green, blue and red) repeating across the target. The advantage of this arrangement was that for virtually every color, the video level of the green component was always less than the cyan, and similarly the cyan was always less than the white. Thus the contributing images could be separated without any reference electrodes in the tube. If the three levels were the same, then that part of the scene was green. This method suffered from the disadvantage that the light levels under the three filters were almost certain to be different, with the green filter passing not more than one third of the available light.
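Assuming ideal filters and equal channel gains (a simplification of the scheme just described, with a function name of our own choosing), each green stripe samples G, each cyan stripe samples G + B, and each clear stripe samples R + G + B, so the primaries fall out of two subtractions:

```python
# Hypothetical decoder for the green/cyan/clear stripe scheme described above,
# assuming ideal filters and equal gains in all three sample paths.

def decode_stripes(green_sample, cyan_sample, clear_sample):
    g = green_sample                  # green stripe passes only G
    b = cyan_sample - green_sample    # cyan stripe passes G + B
    r = clear_sample - cyan_sample    # clear stripe passes R + G + B
    return r, g, b

# For a pure green patch all three samples match, as the text notes:
print(decode_stripes(0.4, 0.4, 0.4))  # (0.0, 0.4, 0.0)
```

This also illustrates why no reference electrode is needed: the ordering green ≤ cyan ≤ clear identifies the stripes from amplitude alone.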

Variations on this scheme exist, the principal one being to use two filters with color stripes overlaid such that the colors form vertically oriented lozenge shapes overlaying the target. The method of extracting the color is similar however.

Field-sequential color system

[edit]

During the 1930s and 1940s, field-sequential color systems were developed which used synchronized motor-driven color-filter disks at the camera's image tube and at the television receiver. Each disk consisted of red, blue, and green transparent color filters. In the camera, the disk was in the optical path, and in the receiver, it was in front of the CRT. Disk rotation was synchronized with vertical scanning so that each vertical scan in sequence was for a different primary color. This method allowed regular black-and-white image tubes and CRTs to generate and display color images. A field-sequential system developed by Peter Goldmark for CBS was demonstrated to the press on September 4, 1940,[157][158][159] and was first shown to the general public on January 12, 1950.[160] Guillermo González Camarena independently developed a field-sequential color disk system in Mexico in the early 1940s, for which he requested a patent in Mexico on August 19, 1940, and in the US in 1941.[161] González Camarena produced his color television system in his laboratory Gon-Cam for the Mexican market and exported it to the Columbia College of Chicago, which regarded it as the best system in the world.[162][163]

Magnetic focusing in typical camera tubes

[edit]

The phenomenon known as magnetic focusing was discovered by A. A. Campbell-Swinton in 1896. He found that a longitudinal magnetic field generated by an axial coil can focus an electron beam.[164] This phenomenon was immediately corroborated by J. A. Fleming, and Hans Busch gave a complete mathematical interpretation in 1926.[165]

Diagrams in this article show that the focus coil surrounds the camera tube; it is much longer than the focus coils for earlier TV CRTs. Camera-tube focus coils, by themselves, have essentially parallel lines of force, very different from the localized semi-toroidal magnetic field geometry inside a TV receiver CRT focus coil. The latter is essentially a magnetic lens; it focuses the "crossover" (between the CRT's cathode and G1 electrode, where the electrons pinch together and diverge again) onto the screen.

The electron optics of camera tubes differ considerably. Electrons inside these long focus coils take helical paths as they travel along the length of the tube. The local axis of one of those helices follows a line of force of the magnetic field. In transit, the helical detail of the motion essentially does not matter: assuming the electrons start from a point, they will converge to a point again at a distance determined by the strength of the field, so focusing a tube with this kind of coil is simply a matter of trimming the coil's current. In effect, the electrons travel along the lines of force, although helically in detail.
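This refocusing condition can be made quantitative. In the standard helical-motion picture, an electron completes one cyclotron turn in period T = 2πm/(eB) while drifting axially at speed v, so a point source refocuses after an axial distance vT. The sketch below solves for the field that refocuses a beam over a given length; the 300 V and 10 cm figures are illustrative assumptions, not values from the text.

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def focus_field(accel_volts, focus_distance_m, turns=1):
    """Axial field (tesla) refocusing a point source after `turns` helix turns."""
    # Non-relativistic beam speed from eV = (1/2) m v^2.
    v = math.sqrt(2 * E_CHARGE * accel_volts / M_ELECTRON)
    # Refocus when focus_distance = turns * v * T, with T = 2*pi*m/(e*B).
    return turns * 2 * math.pi * M_ELECTRON * v / (E_CHARGE * focus_distance_m)

# Assumed example: a 300 V beam refocused over a 10 cm path needs a few mT:
print(focus_field(300, 0.10))
```

Fields of a few millitesla (tens of gauss) are the right order for long focus coils, and trimming the coil current simply scales B, shifting the focal distance.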

These focus coils are essentially as long as the tubes themselves, and surround the deflection yoke (coils). Deflection fields bend the lines of force (with negligible defocusing), and the electrons follow the lines of force.

In a conventional magnetically deflected CRT, such as in a TV receiver or computer monitor, the vertical deflection coils are essentially equivalent to coils wound around a horizontal axis. That axis is perpendicular to the neck of the tube; the lines of force are basically horizontal. (In detail, coils in a deflection yoke extend some distance beyond the neck of the tube and lie close to the flare of the bulb, giving them a distinctive appearance.)

In a magnetically focused camera tube (there are electrostatically focused vidicons), the vertical deflection coils are above and below the tube, instead of being on both sides of it. One might say that this sort of deflection starts to create S-bends in the lines of force, but doesn't become anywhere near to that extreme.

Size

[edit]

The size of video camera tubes is simply the overall outside diameter of the glass envelope. This differs from the size of the sensitive area of the target which is typically two thirds of the size of the overall diameter. Tube sizes are always expressed in inches for historical reasons. A one-inch camera tube has a sensitive area of approximately two thirds of an inch on the diagonal or about 16 mm.

Although the video camera tube is now technologically obsolete, the size of solid-state image sensors is still expressed as the equivalent size of a camera tube. For this purpose a new term was coined: the optical format. The optical format is approximately the true diagonal of the sensor multiplied by 3⁄2. The result is expressed in inches and is usually, though not always, rounded to a convenient fraction (hence the approximation). For instance, a 6.4 mm × 4.8 mm (0.25 in × 0.19 in) sensor has a diagonal of 8.0 mm (0.31 in) and therefore an optical format of 8.0 × 3⁄2 = 12 mm (0.47 in), which is rounded to the convenient imperial fraction of 1⁄2 inch (13 mm). The parameter is also the source of the "Four Thirds" in the Four Thirds system and its Micro Four Thirds extension: the imaging area of the sensor in these cameras is approximately that of a 4⁄3-inch (3.4 cm) video camera tube at approximately 22 millimetres (0.87 in).[166]
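The arithmetic is easy to reproduce; this snippet (the function name is ours, for illustration) applies the diagonal-times-3⁄2 rule to the 6.4 mm × 4.8 mm example sensor:

```python
import math

def optical_format_mm(width_mm, height_mm):
    """Optical format: sensor diagonal multiplied by the fraction 3/2."""
    return math.hypot(width_mm, height_mm) * 1.5

fmt = optical_format_mm(6.4, 4.8)
print(fmt)          # 12.0 mm
print(fmt / 25.4)   # about 0.47 in, conventionally quoted as a 1/2-inch format
```

The final rounding to a marketing-friendly inch fraction is a convention, not a computation, which is why quoted formats only approximate the true diagonal.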

Although the optical format size bears no relationship to any physical parameter of the sensor, its use means that a lens that would have been used with (say) a 4⁄3-inch camera tube will give roughly the same angle of view when used with a solid-state sensor with an optical format of 4⁄3 of an inch.

Late use and decline

[edit]

The lifespan of video-tube technology reached into the 1990s, when high-definition 1035-line tubes were used in the early MUSE HD broadcasting system. While CCDs were tested for this application, as of 1993 broadcasters still found them inadequate due to difficulty achieving the necessary high resolution without compromising image quality with undesirable side effects.[167]

Modern charge-coupled device (CCD) and CMOS-based sensors offer many advantages over their tube counterparts: no image lag, high overall picture quality, high light sensitivity and dynamic range, a better signal-to-noise ratio, and significantly higher reliability and ruggedness. Other advantages include the elimination of the high- and low-voltage power supplies required for the electron beam and heater filament, elimination of the drive circuitry for the focusing coils, no warm-up time, and significantly lower overall power consumption. Despite these advantages, the acceptance and incorporation of solid-state sensors into television and video cameras was not immediate. Early sensors had lower resolution and performance than camera tubes and were initially relegated to consumer-grade video-recording equipment.[167]

Also, video tubes had progressed to a high standard of quality and were standard-issue equipment to networks and production entities. Those entities had a substantial investment not only in tube cameras but also in the ancillary equipment needed to correctly process tube-derived video. A switch-over to solid-state image sensors rendered much of that equipment (and the investment behind it) obsolete and required new equipment optimized to work well with solid-state sensors, since the old equipment was optimized for tube-sourced video.

Due to their relative insensitivity to radiation compared to semiconductor-based devices, video camera tubes are still occasionally used in high-radiation environments such as nuclear power plants.[citation needed]

See also

[edit]

References

[edit]
from Grokipedia
A video camera tube, also known as a pickup tube or camera tube, is an analog vacuum tube device employed in early television cameras from the 1920s to the 1980s to convert incoming optical images into electrical video signals by means of photoconductive or photoemissive materials scanned by an electron beam. These tubes represented a pivotal advancement in electronic imaging, enabling live television broadcasting by replacing mechanical scanning systems and film-based capture with fully electronic processes.

The development of video camera tubes began in the 1920s amid the race to invent practical television technology. Key early inventions included Philo T. Farnsworth's image dissector in 1927, which dissected the image into electronic lines but suffered from low light sensitivity, and Vladimir Zworykin's iconoscope, conceived in 1923 (patented 1938), the first charge-storage tube, which stored and amplified the image signal on a mosaic target. By the 1930s, refinements like the Super-Ikonoskop (an improved iconoscope variant) were used for landmark broadcasts, such as the 1936 Berlin Olympics, demonstrating enhanced sensitivity for low-light conditions. The image orthicon, introduced by RCA in 1946, became the broadcast standard in the United States through the 1960s due to its high sensitivity and ability to handle 500 picture elements per raster height. Among the most versatile types was the vidicon, developed in the 1950s by RCA and others, which used a photoconductive target for compact, low-cost operation in industrial, scientific, and space applications, including NASA's Apollo missions, where it captured lunar surface images despite extreme conditions.
These tubes operated on principles of charge storage: light exposure created an electrostatic charge pattern on the target, which an electron beam scanned to generate a varying voltage output corresponding to the image brightness, typically requiring high-voltage supplies of 5–25 kV and achieving resolutions up to 9.9 line pairs per millimeter. By the late 1980s, video camera tubes were largely supplanted by solid-state charge-coupled devices (CCDs), which offered greater reliability, smaller size, and immunity to magnetic fields, though tubes persisted in niche high-end studio uses into the 1990s for their superior performance in certain scenarios.

Basic Principles

Cathode-ray tube foundations

The cathode-ray tube (CRT) is a vacuum tube that generates and controls a beam of electrons to interact with a target surface, forming the foundational technology for various display and imaging devices. It was invented in 1897 by German physicist Karl Ferdinand Braun, who developed the first practical CRT as a means to visualize electrical oscillations, earning it the name "Braun tube." This innovation built upon earlier cathode-ray experiments by other scientists, but Braun's design incorporated deflection mechanisms to direct the electron beam precisely.

Key components of a CRT include the electron gun, which produces and focuses the electron beam; the accelerating anode, which accelerates the electrons; and deflection plates or coils that steer the beam electromagnetically or electrostatically. In display-oriented CRTs, the beam ultimately strikes a phosphor-coated screen to produce visible light, whereas pickup tubes for imaging applications replace this with a photosensitive target to generate electrical signals. The entire assembly operates within a high-vacuum envelope to prevent scattering of the beam by residual gas molecules.

Electrons in a CRT are emitted from the cathode primarily through thermionic emission, where thermal energy from a heated filament overcomes the work function of the cathode material, or alternatively via photoelectric emission, using light to eject electrons. Both methods require a high-vacuum environment, typically at pressures below 10⁻⁶ torr, to ensure mean free paths long enough for electrons to travel without collisions. The accelerating anode applies a high voltage to the electron beam, imparting kinetic energy given by E = eV, where e is the elementary charge (1.602 × 10⁻¹⁹ C) and V is the accelerating voltage, typically ranging from 5 to 30 kV in early CRTs. This equation derives from conservation of energy, equating the potential-energy loss to the kinetic-energy gain as electrons move through the electric field.
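As a quick numerical check of the E = eV relation for the quoted 5–30 kV range (the helper names below are ours, for illustration), the energy and the non-relativistic beam speed follow directly:

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def beam_energy(volts):
    """Kinetic energy E = eV gained through the accelerating potential, joules."""
    return E_CHARGE * volts

def beam_speed(volts):
    """Non-relativistic speed from eV = (1/2) m v^2 (a rough guide near 30 kV,
    where relativistic corrections start to matter)."""
    return math.sqrt(2 * E_CHARGE * volts / M_ELECTRON)

print(beam_energy(5_000))   # joules at 5 kV
print(beam_speed(5_000))    # roughly 4e7 m/s
```

Even at the low end of the range the electrons reach a sizable fraction of the speed of light, which is why short transit times and precise deflection timing are achievable.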
Braun's CRT served as the basis for early oscilloscopes, enabling the visualization of electrical waveforms by deflecting the beam in response to input signals, and later for television displays, where scanned beams created raster images. These applications established the principles of electron-beam control that were adapted for image scanning in camera tubes.

Image capture and scanning mechanisms

In video camera tubes, the process begins with the formation of an optical image on a photosensitive target: incoming light from the scene is focused by an objective lens onto the target's surface, creating a spatial pattern of light-intensity variations. This light interacts with the target material to generate either photoelectrons through photoemission or charge carriers via photoconductivity, storing a charge distribution that corresponds to the image. The stored charges are then read out by an electron beam, generated from a cathode and accelerated toward the target, which scans the surface to discharge or modulate the charges, converting the optical information into an electrical signal.

The scanning mechanism employs a raster pattern, in which the electron beam is deflected horizontally and vertically to systematically cover the entire target area line by line, mimicking the display raster in television receivers. Horizontal deflection occurs at a line frequency of approximately 15.75 kHz for NTSC systems, tracing each line across the image width, while vertical deflection advances the beam to the next line, completing a full frame in about 1/30 second at 30 frames per second, often using interlaced fields to reduce flicker. Deflection is achieved through magnetic or electrostatic fields applied to coils or plates, ensuring precise synchronization with the broadcast standard. The electron beam operates as a scanning tool akin to that in cathode-ray tubes, depositing or extracting electrons to probe the target without permanently altering it.

Signal generation relies on two primary principles: photoemission, where light ejects electrons from the target surface into the vacuum, and photoconductivity, where light increases the electrical conductivity of the target by generating electron-hole pairs. Target materials typically include alkali metals, such as cesium, for efficient photoemission due to their low work function, or semiconductors such as lead oxide or antimony trisulfide for photoconductivity, which allow charge storage proportional to illumination.
As the scanning beam encounters areas of varying charge, it produces a modulated current or voltage output directly proportional to the local light intensity, forming the core video signal. Basic signal processing involves amplifying the beam-modulated current from the target, converting it to a voltage that varies with brightness, and adding synchronization pulses (horizontal pulses at the end of each line and vertical pulses between fields) to maintain timing for display reconstruction. These sync pulses, typically negative-going and below the blanking level, ensure the receiver's scanning matches the camera's, with the overall signal bandwidth limited by the raster resolution. Challenges in this analog scanning include lag, where residual charges cause image smearing from slow decay in photoconductive targets; bloom, an overflow of charge from bright areas spilling into adjacent regions, leading to halos; and geometric distortion, such as pincushion or barrel effects from non-uniform beam deflection, all of which degrade image fidelity and require corrective circuitry or target-design adjustments.
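The scanning figures quoted above are mutually consistent, which a one-line check confirms: the NTSC total line count follows from the line and frame rates.

```python
# Consistency check on the NTSC raster numbers quoted above.
LINE_FREQ_HZ = 15_750   # horizontal line frequency
FRAME_RATE_HZ = 30      # frames per second (two interlaced fields each)

lines_per_frame = LINE_FREQ_HZ / FRAME_RATE_HZ
print(lines_per_frame)  # 525.0, the NTSC total (visible plus blanking) lines
```

Of those 525 lines, only the roughly 480–486 active lines mentioned earlier in this article carry picture content; the rest fall in the vertical blanking interval.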

Historical Development

Early research and prototypes

The pioneering work on video camera tubes began in the early 1900s with efforts to combine mechanical scanning systems with cathode-ray tubes (CRTs) for image transmission. In 1907, Russian scientist Boris Rosing filed patents for a system that utilized a mechanical scanner at the transmitter end paired with a CRT receiver, marking one of the first attempts to electrically transmit images over distance using photoelectric cells to detect light variations. Rosing's demonstrations in subsequent years, including transmissions of simple static images like shapes and grids, laid foundational concepts for electronic imaging, though the systems remained hybrid mechanical-electronic.

Advancing toward fully electronic solutions, Vladimir Zworykin, a Russian-born engineer working at Westinghouse Laboratories, filed a U.S. patent application on December 29, 1923, for the iconoscope, an early charge-storage video camera tube designed to capture and transmit images without mechanical parts. Although the prototype was not constructed until later, Zworykin's design theoretically addressed charge storage through electron bombardment of a photosensitive mosaic, representing a significant conceptual leap despite initial skepticism from his employers.

Independently, American inventor Philo Farnsworth conceptualized the image dissector tube in 1927, an all-electronic camera that used electron beams to scan and dissect light images without storage. Farnsworth achieved the first transmission demonstration of this device on September 7, 1927, successfully sending a simple straight-line image across rooms in his San Francisco laboratory. He followed with a public press demonstration on September 3, 1928, transmitting a static dollar-sign image, which validated the feasibility of electronic scanning. By 1930, Farnsworth demonstrated a complete all-electronic television system capable of transmitting dynamic scenes over distances, solidifying his contributions to practical video capture.
Parallel research emerged elsewhere in the late 1920s, with established radio firms beginning to integrate these ideas into early electronic-television experiments by the early 1930s, prototyping video tubes for broadcast trials. These early prototypes faced substantial hurdles, including low light sensitivity that necessitated extremely bright illumination, many times stronger than ordinary lighting, to produce usable signals. Synchronization between scanning mechanisms and receivers proved challenging, leading to image distortion and instability in hybrid systems. Additionally, the overall low electrical output from photosensitive surfaces limited resolution and reliability, prompting ongoing refinements in photoelectric emission and signal amplification.

Image dissector innovations

Philo Taylor Farnsworth invented the image dissector tube in 1927, marking it as the first fully electronic video camera tube and a pivotal departure from mechanical scanning methods. Farnsworth filed a patent application for the device on January 7, 1927, envisioning a system that converted optical images into electronic signals without moving parts for image dissection. The tube's core design incorporated a photocathode, typically coated with a cesium compound, which emitted photoelectrons in proportion to incident light intensity, forming an electron image. Electrostatic lenses focused this electron image, while magnetic deflection fields scanned it across a small aperture, allowing sequential sampling of the image elements to generate a video signal. The image dissector's non-storage scanning mechanism offered distinct advantages, including high resolution and freedom from image lag or retention. Early implementations achieved resolutions around 240 lines, with the tube's design enabling potential for up to 1000 lines or more in optimized versions due to precise deflection and aperture control. Without charge accumulation or persistence, the instantaneous scanning eliminated artifacts like glow or smear, providing clear, real-time reproduction ideal for applications requiring sharp, lag-free imagery. Despite these strengths, the tube suffered from severe limitations in sensitivity. Its photocathode required intense illumination—often from hot arc lamps—to produce usable emission, rendering it impractical for typical indoor or natural lighting conditions. The absence of image storage meant photoelectrons were discarded except at the instant they passed the aperture, so no signal was retained between scans, and continuous high-speed deflection was needed to maintain video output. Key experimental milestones underscored the dissector's promise amid these challenges. On September 7, 1927, Farnsworth transmitted the first image—a simple straight line—using the prototype at his Green Street laboratory in San Francisco.
By 1928, laboratory demonstrations advanced to more detailed subjects, such as a dollar sign projected for investors and the press, validating the all-electronic approach. RCA engineers trialed the technology following a 1930 visit to Farnsworth's lab, during which Vladimir Zworykin examined the device; this contributed to a patent interference suit resolved in Farnsworth's favor in 1935, after which RCA licensed his patents despite prioritizing alternative tubes due to the dissector's sensitivity issues. Compared to contemporary mechanical systems like the Nipkow disk, which used rotating perforated disks for sequential image sampling, the image dissector was purely electronic, with no mechanical wear, enabling faster scan rates and inherently higher resolution without physical limitations.

Iconoscope and early charge-storage designs

The iconoscope, the first practical charge-storage video camera tube, was developed by Vladimir Zworykin during his time at Westinghouse from 1923 to 1925, with further refinement after he joined RCA in 1929. Zworykin filed initial patents for the device in 1923 and 1925, describing a system that used an electron beam to scan a photosensitive target, marking a shift from mechanical scanning to fully electronic image capture. By 1929, prototypes demonstrated viable image formation, laying the groundwork for electronic television broadcasting despite initial challenges in achieving consistent performance. The core of the iconoscope was its mosaic target, consisting of millions of tiny silver beads on one side of a thin mica sheet, with the opposite side coated in silver to form a conductive signal plate. During fabrication, the mosaic was exposed to cesium vapor and oxygen, creating a photosensitive surface of cesium oxides on each bead that enabled photoelectric emission. This structure functioned as an array of isolated capacitors, each capable of storing charge independently to represent picture elements. The optical image was focused onto the mosaic via a lens, liberating photoelectrons from illuminated areas and leaving behind a latent positive charge pattern proportional to the light intensity. In operation, the charge-storage principle relied on this positive charge image persisting until scanned by a high-velocity electron beam from the tube's electron gun, which discharged each element in turn. The resulting flow of charge, capacitively coupled to the signal plate, generated a video output current corresponding to the stored charge pattern, allowing integration of light over the frame time for enhanced sensitivity compared to non-storage devices like the image dissector. The beam's raster was aligned with the optically focused image so that the entire field was read out sequentially. However, the design produced only low output currents, necessitating high-gain amplification.
The iconoscope provided a significant sensitivity improvement over the image dissector, operating effectively at illumination levels of 10-100 lux—far lower than the thousands of lux required by dissectors—enabling practical use in studio settings without excessive lighting. Despite this advance, it exhibited spurious shading signals arising from irregularities in the mosaic structure and from secondary-electron redistribution during scanning, manifesting as persistent spurious signals or "scintillation" in the output image. Early variants addressed these issues through refinements to the mosaic fabrication, such as improved bead uniformity and surface treatments, which reduced spurious signals and enhanced overall signal fidelity. The iconoscope's commercial debut occurred in 1936, powering the first regular electronic television broadcasts by RCA in the United States and by the BBC in the United Kingdom, the latter using the closely related Emitron tubes developed by EMI. These transmissions marked the transition to practical electronic TV, with RCA's July 7, 1936 demonstration showcasing live studio programming to a small audience of receivers. The tube's adoption solidified its role in early television broadcasting until more advanced designs emerged in the 1940s.
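The storage advantage can be made concrete with a back-of-envelope calculation: a storage target integrates photocharge for the entire frame time, while a non-storage tube such as the image dissector uses only the light arriving during each element's brief dwell time, so the theoretical gain equals the number of picture elements. A minimal sketch with illustrative raster numbers (not figures from the source):

```python
# Illustrative sketch: theoretical sensitivity gain of a charge-storage
# tube over a non-storage (instantaneous-sampling) tube. The storage
# target integrates photocharge for the full frame time, whereas a
# non-storage tube only "sees" each picture element during its dwell time.

def storage_gain(lines: int, elements_per_line: int) -> int:
    """Ideal gain = frame time / per-element dwell time = number of elements."""
    return lines * elements_per_line

# Assumed illustrative raster: 400 lines of 500 elements each.
print(storage_gain(400, 500))  # 200000
```

In practice the iconoscope realized only a small fraction of this ideal figure because of losses at the mosaic, which is why later storage tubes such as the orthicon and image orthicon still offered large further sensitivity gains.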

Advanced Storage Tubes

Super-Emitron and image iconoscope

The Super-Emitron, developed by the British firm EMI in 1934, represented a significant advancement in charge-storage camera tubes by adding an image section that amplified the charge stored on the target, boosting the output signal well beyond that of the basic Emitron design. Patented by engineers Hans G. Lubszynski and Sidney Rodda under UK Patent No. 442666 (filed May 1934, granted February 1936), the tube separated the photoemissive photocathode from the storage mosaic, allowing photoelectrons to be focused and accelerated onto the target to build a charge image more efficiently. This configuration built on the Emitron's storage principle but addressed its low signal output by exploiting secondary-emission amplification directly at the target. In operation, an optical image is projected onto a translucent photocathode, liberating photoelectrons that are electron-optically imaged and accelerated onto the mosaic target; each arriving photoelectron liberates several secondary electrons from the target surface, so the stored positive charge pattern is several times larger than photoemission from the target alone would produce, providing significant signal amplification compared to the unamplified Emitron. A scanning electron beam then reads out the charge image element by element. The mosaic is scanned at 50 fields per second (interlaced for 25 frames per second) to match European broadcast standards. This amplification enabled greater sensitivity, reducing the required scene illumination by a substantial margin and allowing for more natural lighting in transmissions. A parallel development occurred in Germany with the image iconoscope (also known as the Super-Ikonoskop), introduced around 1936 by Telefunken and the German Post Office Ministry, which employed a similar image-section amplification approach to boost signal strength.
This tube was notably deployed for the live television coverage of the 1936 Berlin Olympic Games, where German broadcasters used multiple iconoscope-based cameras, including amplified variants, to transmit events to public viewing halls across the country, marking one of the earliest major outdoor broadcasts. The enhanced output facilitated reliable signal transmission over coaxial cables and radio links under varying lighting conditions. Despite these improvements, the Super-Emitron and image iconoscope introduced added complexity due to the precise alignment required between the image section and the scanning section, increasing manufacturing challenges and tube fragility. Additionally, the secondary emission process could lead to halo artifacts around bright image elements, as charges spread laterally on the target before readout, causing slight blurring or glow in high-contrast scenes. These limitations, while manageable, paved the way for further refinements in subsequent designs.

Orthicon and CPS Emitron

The orthicon represented a significant advancement in return-beam storage tubes during the late 1930s, transitioning from earlier charge-storage designs by employing a low-velocity scanning beam to read out the stored image charge. Invented at RCA Laboratories by Albert Rose, with contributions from Harley Iams, the orthicon was conceived around 1937 and publicly announced in June 1939. In operation, photoelectrons deposit charge on a low-conductivity target, similar to prior storage principles; however, the scanning beam approaches the target at low velocity (typically a few volts), allowing excess electrons not needed to neutralize the stored charge to return toward the electron gun, where their modulated current generates the video signal. This return-beam modulation, proportional to the stored charge variations, minimized secondary emission artifacts and provided high signal fidelity. The orthicon's design offered key advantages in sensitivity and noise performance over predecessors like the iconoscope, achieving 10 to 50 times greater light sensitivity while reducing spurious signals through efficient charge readout. The low-velocity scanning ensured that beam current modulation directly reflected image brightness without high-velocity impacts degrading the target, resulting in lower noise levels as the return beam's variations carried the primary signal with minimal unmodulated background interference. Capable of resolving 200 to 400 lines, the orthicon supported emerging broadcast standards and demonstrated practical utility in low-light conditions, with effective sensitivity down to approximately 10 lux in optimized setups. During World War II, modified orthicon tubes were deployed in U.S. military aircraft and guided munitions, enabling real-time imaging from high-altitude flights and bomb noses for tactical applications.
Parallel to RCA's work, British engineers at EMI developed the CPS (Cathode Potential Stabilised) Emitron in the early 1940s as a return-beam storage tube variant, building on the Emitron lineage but incorporating low-velocity scanning akin to the orthicon. Under the supervision of Isaac Shoenberg, the EMI team refined the design post-war, with the CPS Emitron entering BBC service around 1946; it featured a stabilized cathode potential that suppressed stray electron effects, reduced lag, and enhanced image stability during readout. Like the orthicon, the CPS Emitron used a returning beam modulated by the target charge for signal generation, with the stabilized potential distinguishing it from its predecessors. This design improved sensitivity to 1-10 lux levels, matching or exceeding the orthicon in low-light scenarios while providing the resolution required for the British 405-line broadcast standard. The CPS Emitron saw prominent deployment in BBC outside broadcasts, including the 1948 London Olympics, where its high sensitivity and low-noise output facilitated reliable coverage in varied lighting.

Image orthicon

The image orthicon, developed by researchers at RCA including Albert Rose, P. K. Weimer, and H. B. Law, represented a major advancement in television camera tube technology during the 1940s. Presented in a seminal 1946 paper, the tube integrated principles from the image converter and the earlier orthicon design, achieving approximately 100 times the sensitivity of the iconoscope through electron multiplication and low-velocity scanning. This innovation enabled superior low-light performance, making it suitable for professional broadcast applications where earlier tubes struggled with dim scenes. The structure of the image orthicon comprises three primary sections: the image section, featuring a photocathode that emits photoelectrons focused electrostatically onto a storage target; the storage section, where incident photoelectrons accumulate charge on the target surface in proportion to light intensity; and the gun section, which generates a low-velocity scanning beam to read out the stored charge via the return-beam technique adapted from the orthicon. An integral electron multiplier amplifies the returning beam current, enhancing signal strength without additional external amplification. This design allowed for precise charge storage and readout, supporting high-fidelity pickup in demanding environments. In performance, the image orthicon excelled with sensitivity enabling operation at approximately 0.1 lux illumination on the photocathode, corresponding to effective low-light capture equivalent to candlelight conditions, and horizontal resolution of up to 700 TV lines in optimal setups. It became the standard pickup tube in television studios from the late 1940s through the 1960s, powering live broadcasts such as the U.S. presidential conventions, where RCA's TK-10 and TK-30 cameras captured events in real time for national transmission.
Despite its achievements, the image orthicon had notable limitations, including the need for high operating voltages for photoelectron acceleration and multiplication. Additionally, its reliance on magnetic focusing coils made it highly sensitive to external magnetic fields, necessitating careful shielding and alignment in studio setups to prevent image distortion.

Photoconductive Pickup Tubes

Vidicon and silicon variants

The vidicon represents a significant advancement in photoconductive pickup tubes, developed by RCA engineers P. K. Weimer, S. V. Forgue, and R. R. Goodrich in the early 1950s as a simpler alternative to earlier designs. Its core component is a thin photoconductive layer of antimony trisulfide (Sb₂S₃) evaporated onto a transparent conductive faceplate, forming the image-sensing target. This configuration allowed for reliable image capture in a more compact form factor than the photoemissive tubes that preceded it. In its operational principle, light from the scene illuminates the Sb₂S₃ target, generating electron-hole pairs that reduce the material's resistivity and create a latent charge image stored as a potential pattern across the layer. A low-velocity beam, generated by an electron gun, scans the rear surface of the target in a raster pattern, neutralizing the charge differentially based on local potential; this process yields an output signal current proportional to the incident light intensity at each point, which is then amplified to form the video signal. The design's reliance on photoconduction without secondary emission or electron multiplication distinguishes it from more complex tubes, enabling straightforward signal generation. Key advantages of the vidicon include its compact dimensions—typically fitting within a 1-inch diameter envelope—low power requirements, as it operates without the high voltages needed for electron multipliers, and inherent ruggedness that made it ideal for portable and industrial television cameras. These traits facilitated widespread adoption in non-broadcast applications, such as closed-circuit television and surveillance, where simplicity and durability were prioritized over ultra-high sensitivity. A notable evolution, the silicon vidicon (or Si-vidicon), emerged in the late 1960s with a target comprising an array of reverse-biased silicon photodiodes, replacing the Sb₂S₃ layer to enhance performance characteristics. This upgrade provided superior sensitivity in the blue spectrum—extending effective response to shorter wavelengths—and improved resistance to image burn-in, making it suitable for harsh environments.
The tube saw critical deployment in space missions, including the Apollo program's television cameras, where its stability under radiation and thermal extremes proved essential for reliable imagery transmission. Representative performance metrics for vidicon and silicon variants include horizontal resolutions of 500–800 TV lines and sensitivities on the order of 100–500 µA per lumen of faceplate illumination for acceptable signal-to-noise ratios, underscoring their balance of practicality and image quality in everyday use.
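The readout cycle described above can be caricatured as an RC discharge: each target element behaves as a small capacitor that the beam recharges once per frame, scene light lowers the element's resistance so it discharges further between scans, and the recharge current forms the video signal. A toy model under assumed component values (none of the numbers below come from the source):

```python
import math

# Toy model of vidicon photoconductive readout: each picture element is a
# capacitor C discharging through a light-dependent resistance R during the
# frame time T; the beam then recharges it to the target voltage, and that
# recharge charge constitutes the video signal. All values are illustrative.

C = 2e-12          # element capacitance, farads (assumed)
T = 1.0 / 30.0     # frame time for 30 fps scanning, seconds
V0 = 10.0          # target voltage restored by the beam, volts (assumed)

def signal_charge(r_ohms: float) -> float:
    """Charge the beam must redeposit after one frame of decay through R."""
    v_remaining = V0 * math.exp(-T / (r_ohms * C))
    return C * (V0 - v_remaining)

# Brighter light -> lower resistance -> deeper discharge -> larger signal.
for r in (1e12, 1e11, 1e10):   # dark, dim, bright (assumed resistances)
    print(f"R={r:.0e} ohm -> signal charge {signal_charge(r):.2e} C")
```

The monotonic relation between illumination (via resistance) and recharge charge is the essential mechanism; a real tube adds nonlinearities such as the Sb₂S₃ layer's sub-unity gamma, which this sketch ignores.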

Plumbicon and lead-oxide tubes

The Plumbicon, a high-performance photoconductive pickup tube, was developed by Philips Research Laboratories in Eindhoven, the Netherlands, with research initiated in the early 1960s and the tube trademarked in 1963. It was first presented publicly in April 1964 and demonstrated more widely in 1965, marking a significant advancement in pickup-tube technology for broadcast applications. The key innovation lies in its photoconductive target, an evaporated microcrystalline layer of lead monoxide (PbO), approximately 10-20 microns thick, structured as a p-i-n junction to enable efficient charge storage and readout. This lead-oxide target provided superior performance compared to earlier antimony trisulfide-based vidicons by minimizing lag and enhancing overall image quality. The Plumbicon's primary benefits stem from the PbO target's properties, including low dark current and high-speed response independent of light intensity. Image retention, or lag, is minimal, with persistence negligible and signal stabilization occurring within 3-10 frames, often effectively less than one frame for practical video rates due to less than 10% retention after a single frame exposure. This low lag enables smooth rendering of fast-moving subjects without smearing, a common issue in prior tubes. The tube exhibits a near-linear gamma response of 0.8 to 1, closely approximating unity and ideal for accurate color reproduction without extensive electronic correction. Resolution reaches 50% modulation depth at 5 MHz, comparable to larger image orthicons, while sensitivity averages 300-400 µA per lumen under standard illumination (e.g., a 2870 K source), supporting effective operation at scene illuminations of 50-200 lux with typical f/2.8 lenses. These characteristics made the Plumbicon particularly suited for professional studio and electronic news gathering (ENG) cameras, where reliable performance in varied lighting was essential. Lead-oxide tube variants evolved from the original Plumbicon design to address specific needs, such as enhanced low-light performance.
Low-light-level (LLL) versions extended sensitivity for operations down to approximately 10 lux, incorporating optimizations like higher target voltages or refined PbO doping to maintain low noise and lag in dim conditions, though these were less common than standard models. By the 1970s, Plumbicons had become a broadcast standard, powering three-tube color cameras in studios and field production, where their superior color fidelity—due to balanced spectral response and linear transfer characteristics—outperformed vidicons in rendering accurate hues and contrasts without distortion. Despite these advantages, Plumbicon tubes faced practical limitations, including higher manufacturing costs due to the specialized PbO evaporation process and reliance on Philips' production facilities, which constrained supply in the mid-1960s. Additionally, the lead-oxide target showed some vulnerability to environmental factors, such as dust-induced spots or operational constraints like avoiding inverted mounting, though humidity sensitivity was mitigated in sealed designs. These factors contributed to their premium positioning in professional rather than consumer applications.

Saticon, Newvicon, and other alloys

The Saticon, introduced by Hitachi in 1973 in collaboration with NHK, featured a photoconductive target composed of a selenium-arsenic-tellurium (Se-As-Te) alloy. Arsenic doping was incorporated to prevent crystallization of the amorphous selenium layer, enhancing reliability, while tellurium doping near the signal electrode extended sensitivity into the red spectrum, addressing limitations in earlier selenium-based tubes. The Newvicon, developed by Matsushita Electric (now Panasonic) and introduced in 1974, utilized an advanced alloy formulation to further minimize image lag compared to prior designs, while achieving high sensitivity across the visible spectrum, operable down to approximately 20 lux illumination. This alloy optimized charge storage and discharge characteristics, making it suitable for broadcast applications requiring low-light performance without excessive afterimages. Other alloy-based tubes included the Pasecon, developed by Heimann around 1972, which employed a cadmium selenide (CdSe) target for improved panchromatic response in photoconductive operation. The Trinicon, introduced by Sony in 1971, combined a vidicon-type photoconductive target with an integral striped color filter, providing a foundation for single-tube color capture. These alloy tubes offered balanced spectral response, particularly enhancing red response over lead-oxide predecessors like the Plumbicon, alongside high resolution of approximately 700 TV lines, enabling sharp imagery in professional setups. By the late 1970s, Saticon and Newvicon tubes saw widespread adoption in Japanese television production, powering high-end broadcast cameras such as Ikegami's HL-77 and contributing to electronic news gathering (ENG) workflows.

Color Imaging Tubes

Field-sequential color adaptation

The field-sequential color adaptation enables video camera tubes to produce color images through time-multiplexed capture, utilizing a single tube equipped with a rotating color disk divided into red, green, and blue filter segments positioned in front of the tube's faceplate. The disk synchronizes precisely with the camera's field-scanning rate, allowing the tube to image the scene through one filter per field; at standard scanning rates this yields approximately 20 fields per second per color, with the sequential fields combined at the display via persistence of vision to form a full-color frame. This mechanical approach modifies existing storage or photoconductive tubes, such as the orthicon or vidicon, without altering their internal electron optics or target structure. In the 1950s, CBS Laboratories pioneered practical implementations of this system for broadcast television, adapting orthicon-type tubes with a color wheel rotating at 1440 rpm to match 144-field-per-second scans, enabling live color transmissions totaling over 100 hours of experimental broadcasts in 1951. These experiments demonstrated viable color pickup but were ultimately abandoned due to excessive bandwidth demands from the elevated field rates required to suppress flicker, and due to inherent incompatibilities with standard receivers, which could not decode the sequential signals without additional hardware. Compared to multi-tube simultaneous systems, the field-sequential method provided notable advantages in simplicity and cost, as it leveraged a single pickup tube and basic mechanical synchronization rather than complex dichroic prisms and multiple aligned tubes, making it more accessible for early color experimentation. However, it introduced significant drawbacks, including motion-induced color breakup, where rapid object movement caused visible fringing or "rainbow" artifacts as the eye failed to fully integrate the separated color fields, alongside the need for specialized receivers equipped with matching color wheels to avoid decoding errors.
The technique experienced revivals in the late 1960s and early 1970s for specialized portable applications, particularly with tubes adapted for field-sequential operation, such as the secondary electron conduction (SEC) tube used in NASA's Apollo missions, where a compact spinning color wheel enabled color video from space under low-light conditions. These implementations highlighted the method's suitability for lightweight, single-tube designs in mobile or constrained environments, though persistent motion artifacts limited broader adoption until solid-state sensors emerged.
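The mechanical synchronization constraint is simple to quantify: the wheel must advance exactly one filter segment per scanned field. A minimal sketch, assuming a six-segment disk (two sets of red, green, and blue filters per revolution) as in the CBS experiments:

```python
# Illustrative sketch (not from the original source): relating color-wheel
# speed to field rate in a field-sequential camera, assuming a disk whose
# segments each cover exactly one scanned field.

def wheel_rpm(field_rate_hz: float, segments_per_rev: int) -> float:
    """Revolutions per minute so that one filter segment passes per field."""
    revs_per_second = field_rate_hz / segments_per_rev
    return revs_per_second * 60.0

# CBS experimental system: 144 fields/s with an assumed six-segment disk
# (two sets of red, green, and blue filters per revolution).
print(wheel_rpm(144, 6))   # 1440.0, matching the rpm figure quoted above
```

The same relation explains why higher field rates (needed to suppress flicker) directly inflate both the mechanical demands on the wheel and the transmission bandwidth.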

Simultaneous color systems

Simultaneous color systems in video camera tubes captured red, green, and blue (RGB) components concurrently, enabling high-fidelity color reproduction without the temporal artifacts inherent in sequential methods. These designs emerged prominently in the 1960s as broadcast television transitioned to color, offering superior motion handling and resolution compared to field-sequential adaptations, which relied on rotating filters to alternate color capture field by field. Three-tube systems dominated professional applications, utilizing dichroic prisms to split incoming light into red, green, and blue components, directing each beam to a dedicated pickup tube. Typically, three matched Plumbicon tubes—one for each RGB channel—processed the separated light, generating independent color signals that were combined downstream for encoding into standards like NTSC or PAL. Developed by Philips in the early 1960s, these configurations became the broadcast standard from the mid-1960s through the 1980s, prized for their precise color separation and minimal light loss due to the prisms' wavelength-selective coatings. For instance, the Philips PC60 camera employed 30 mm Plumbicon tubes with a 20 mm image diagonal, achieving horizontal resolutions up to 3 MHz (approximately 400 lines) per color channel at 65-75% modulation. Single-tube simultaneous systems addressed the complexity and alignment challenges of multi-tube setups by integrating color separation directly onto the photoconductive target. Sony's Trinicon, introduced in 1971, featured a vidicon-derived tube with a vertically striped RGB color filter deposited over the faceplate, allowing a single electron beam to scan and retrieve mixed color signals from the patterned target. This design simplified optics and reduced size, making it suitable for portable and consumer-grade cameras like the 1974 DXC-1600, while maintaining compatibility with vidicon scanning electronics.
Later variants, such as the MF Trinicon, enhanced low-light performance through refined target materials, delivering horizontal resolutions around 300-400 lines in practical use. These simultaneous systems provided full resolution for each color channel—typically 400-600 lines—avoiding the halved resolution and color fringing seen in sequential approaches, and were integral to studio cameras until the mid-1980s shift to charge-coupled devices (CCDs). Signal-to-noise ratios exceeded 34 dB for luminance in Plumbicon-based three-tube setups, ensuring robust performance in controlled lighting. Evolutions included four-tube designs, where a dedicated luminance tube augmented the three color channels, boosting overall sharpness and detail without compromising color fidelity; the EMI 2001 camera exemplified this in the 1960s, using four Plumbicons for exceptional picture quality in studio productions.
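Downstream of the prism and tubes, the three channel outputs were matrixed into a luminance signal for composite encoding. As an illustrative calculation (not a description of any specific camera's circuitry), the standard NTSC/Rec. 601 weighting looks like this:

```python
# Illustrative sketch: forming the luminance (Y) signal from the three
# tube outputs, using the standard Rec. 601/NTSC weighting coefficients.
# Values are normalized so 1.0 = full-scale output of a channel.

def luminance(r: float, g: float, b: float) -> float:
    """Weighted sum used when encoding RGB camera signals for broadcast."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luminance(1.0, 1.0, 1.0))  # white: ~1.0 (coefficients sum to unity)
print(luminance(0.0, 1.0, 0.0))  # pure green dominates luminance: 0.587
```

The heavy green weighting is one reason four-tube designs dedicated a separate tube to luminance: sharpness perceived by viewers tracks Y far more than the individual color channels.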

Technical Features

Magnetic focusing and deflection

Magnetic focusing in video camera tubes utilizes solenoidal coils to produce a uniform axial magnetic field along the tube's axis, which causes the electron beam to follow helical paths and converge to a fine spot on the photoconductive target, ensuring sharp imaging. Typical field strengths range from 70 to 100 gauss, depending on the tube type and operating voltage, to maintain beam convergence without excessive aberration. The focal length f of this system is given approximately by f ≈ 8mV / (eB²L), where V is the accelerating voltage of the beam, B is the magnetic field strength, L is the effective length of the field region, m is the electron mass, and e is the electron charge; this relation derives from the Lorentz force acting on the beam electrons via the paraxial ray equation, balancing their axial velocity against the field's rotational effect. Deflection of the focused beam to scan the target in a raster pattern is achieved through transverse magnetic fields generated by pairs of saddle-shaped coils positioned around the tube neck. These coils are driven by sawtooth current waveforms: horizontal deflection at approximately 15.75 kHz and vertical at 60 Hz for standard NTSC systems, producing peak fields of about 20-25 gauss to sweep the beam across the target area without significant defocusing. The sawtooth profile ensures linear scanning, with the rapid flyback occurring during the horizontal or vertical blanking intervals to avoid artifacts. The focusing and deflection systems are integrated into a single yoke assembly encircling the tube's neck, combining the focusing solenoid with orthogonal deflection coils for precise beam control and alignment. Adjustments within the yoke, such as varying coil currents or adding small transverse fields, correct for beam-landing errors by compensating radial components in the focusing field, ensuring uniform beam spot size across the scan.
While some early camera tubes relied on electrostatic focusing and deflection via internal electrodes for simplicity in small-scale designs, magnetic systems predominated in later tubes like the image orthicon and Plumbicon due to their scalability for larger diameters and superior handling of high-voltage beams. Electrostatic methods suffered from focus inconsistencies and linearity issues under deflection, whereas magnetic approaches provided more stable performance over extended tube lengths. Key challenges in these systems include field curvature, which leads to off-axis defocusing as the beam edges experience weaker effective fields, and pincushion distortion, where scan lines bow inward at the edges due to nonlinear field interactions. These are mitigated through yoke geometry optimization and auxiliary correction coils that introduce compensating fields, maintaining geometric fidelity in the scanned image.
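The thin-lens relation above can be evaluated numerically to get a feel for the magnitudes involved; the example values below are illustrative assumptions, not figures from the source:

```python
# Illustrative sketch: evaluating the thin magnetic lens approximation
# f = 8mV / (e * B^2 * L) quoted above. Example values are assumed.

ELECTRON_MASS = 9.109e-31      # kg
ELECTRON_CHARGE = 1.602e-19    # C

def magnetic_focal_length(voltage_v: float, field_t: float, length_m: float) -> float:
    """Thin magnetic lens focal length in metres.

    voltage_v: beam accelerating voltage (V)
    field_t:   axial field strength (tesla; 1 gauss = 1e-4 T)
    length_m:  effective length of the field region (m)
    """
    return (8 * ELECTRON_MASS * voltage_v) / (ELECTRON_CHARGE * field_t**2 * length_m)

# Assumed example: a 300 V beam in a 20 gauss field acting over 5 cm.
f = magnetic_focal_length(300, 20e-4, 0.05)
print(f"focal length ≈ {f * 100:.1f} cm")  # prints: focal length ≈ 6.8 cm
```

The inverse-square dependence on B is why modest current changes in the focus coil shift focus strongly, and the linear dependence on V is why focus must be retrimmed when the accelerating voltage changes.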

Physical dimensions and formats

Video camera tubes were constructed with varying physical dimensions to suit different applications, from large studio setups to portable field equipment. Early broadcast image orthicons, such as the RCA 7295-A, had diameters up to 4.5 inches, allowing for expansive target areas that supported high-resolution imaging in controlled environments. In studio and professional settings, more compact tubes like vidicons and Plumbicons typically featured 1-inch target formats, with outer bulb diameters of approximately 28.6 mm to balance performance and integration into camera housings. Portable variants, including 2/3-inch vidicons, measured approximately 17 mm in diameter, facilitating lighter, more maneuverable systems for field use. Tube lengths generally spanned 6 to 9 inches, comprising a bulb section for the photoconductive target and an elongated neck housing the electron gun, though overall assemblies could extend to 12-18 inches when including mounting flanges. For instance, standard 1-inch vidicon tubes had maximum lengths of 162 mm and diameters of 28.6 mm, enclosed in vacuum-sealed glass envelopes to maintain internal integrity. These envelopes often incorporated C-mount interfaces aligned with EIAJ compatibility standards, ensuring standardized lens attachment and mechanical interchangeability across broadcast equipment. During the 1970s and 1980s, miniaturization advanced significantly for portable applications, reducing Plumbicon and vidicon sizes to 2/3-inch and even 1/2-inch formats, which weighed under 25 pounds in full camera configurations and enabled shoulder-mounted portability. Larger diameters, such as those in 4.5-inch orthicons, delivered superior resolution through bigger photosensitive areas, while smaller formats prioritized compactness for camcorders and field reporting, directly influencing camera size and deployment flexibility. The dimensions also dictated the scale of external magnetic focusing and deflection coils needed to steer the electron beam precisely across the target.

Decline and Modern Context

Late industrial and broadcast uses

In the 1980s, broadcast studios continued to rely on video camera tubes for their established performance in professional environments, with major networks maintaining use of Plumbicon tubes in select applications due to their high sensitivity and image fidelity. These tubes were particularly valued in studio settings where consistent quality outweighed the shift toward newer technologies, as seen in RCA's TK-44A color cameras that remained in service into the decade. Industrial applications extended the lifespan of vidicon variants into specialized fields, including medical endoscopes, where vidicon tubes provided reliable imaging in constrained environments before being displaced by charge-coupled devices in the late 1980s. In military surveillance, silicon-intensified target (SIT) vidicon tubes were employed in space-based systems for their radiation hardness and ability to capture images in low-light orbital conditions, as documented in U.S. space surveillance workshops. Niche uses persisted in film-scanning telecine machines, where high-resolution Plumbicon tubes delivered superior detail for converting motion picture film to video signals well into the 1990s, leveraging their photoconductive targets for accurate color reproduction. The longevity of these tubes in such high-end scenarios stemmed from their proven reliability, especially in low-light conditions, where Plumbicons offered high sensitivity and low lag, requiring minimal illumination for effective operation. This made them ideal for demanding broadcast and industrial tasks until digital alternatives matured. The final major deployments of tubes occurred in camcorders during the late 1980s and early 1990s, particularly in ENG (electronic news gathering) models like Sony's BVP-3, which utilized Saticon or Plumbicon tubes for broadcast-quality footage before the widespread adoption of CCD sensors around 1994.

Shift to solid-state sensors

The charge-coupled device (CCD), a solid-state image sensor, emerged as a pivotal innovation in 1969, when physicists Willard Boyle and George E. Smith invented it at Bell Laboratories while exploring semiconductor memory concepts adaptable for imaging. This breakthrough addressed key limitations of vacuum-tube sensors by transferring light-induced charge electronically for readout, with no electron-beam scanning required. In the 1970s, semiconductor manufacturers advanced the technology through successive prototypes, producing experimental imagers early in the decade and commercial linear arrays by 1974, which demonstrated viability for imaging and scientific applications. These early devices offered advantages such as resistance to burn-in and image lag, issues that plagued photoconductive tubes, along with greater robustness from the absence of fragile glass envelopes and high-voltage supplies, and seamless integration with digital processing circuits. The shift accelerated in consumer markets in the 1980s, as CCDs enabled compact, portable video equipment; Sony's CCD-V8, launched in 1985, was the first 8mm camcorder with a 250,000-pixel CCD sensor, rapidly popularizing handheld recording over bulky tube-based systems. By the late 1980s, professional broadcasters had adopted CCDs widely, with Ikegami and other manufacturers incorporating frame interline transfer (FIT) CCDs for high-resolution studio and field production, supplanting tube cameras at major networks. Key drivers included CCDs' progressively lower production costs through semiconductor scaling, superior reliability from solid-state construction without vacuum seals or electron guns prone to failure, and operational life exceeding the roughly 7-10 years typical for tubes under intensive use. Tube techniques left a legacy on CCD evolution, particularly in scanning methods: frame-transfer CCD architectures, which rapidly shift exposed charges to a masked storage area for readout, minimize exposure interruptions without mechanical shutters. This conceptual bridge eased the transition, allowing early CCDs to mimic analog video formats while paving the way for later digital standards.
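
The frame-transfer readout described above can be sketched in a few lines. The following toy model (an illustration of the principle, not a device simulation; the function and variable names are invented here) shifts a frame of collected charge row by row into a light-shielded storage array, then reads the storage section out serially:

```python
import numpy as np

def frame_transfer_readout(photosites: np.ndarray) -> list[np.ndarray]:
    """Toy model of frame-transfer CCD readout (illustrative sketch only)."""
    photosites = photosites.copy()
    rows, cols = photosites.shape
    storage = np.zeros((rows, cols), dtype=photosites.dtype)
    # Fast parallel transfer: each clock pulse moves every row one step
    # toward the storage section, so after `rows` clocks the whole frame
    # sits under the light shield and exposure can resume.
    for _ in range(rows):
        storage = np.vstack([photosites[-1:], storage[:-1]])
        photosites = np.vstack(
            [np.zeros((1, cols), dtype=photosites.dtype), photosites[:-1]]
        )
    # Slow serial readout from the shielded storage, one row at a time,
    # while the photosites collect the next frame undisturbed.
    return [storage[r].copy() for r in range(rows)]
```

Because the transfer into storage takes only a few row-shift clocks, the photosites are blind for a tiny fraction of the frame time, which is why no mechanical shutter is needed.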
As of 2025, video camera tubes are fully obsolete in commercial, broadcast, and consumer contexts, persisting only in vintage-equipment restoration and, rarely, in extreme environments such as high-radiation settings where CCDs degrade more quickly.

References
