Number One Electronic Switching System
from Wikipedia
View of 1AESS frames

The Number One Electronic Switching System (1ESS) was the first large-scale stored program control (SPC) telephone exchange or electronic switching system in the Bell System. It was manufactured by Western Electric and first placed into service in Succasunna, New Jersey, in May 1965.[1] The switching fabric was composed of a reed relay matrix controlled by wire spring relays which in turn were controlled by a central processing unit (CPU).

The 1AESS central office switch was a plug compatible, higher capacity upgrade from 1ESS with a faster 1A processor that incorporated the existing instruction set for programming compatibility, and used smaller remreed switches, fewer relays, and featured disk storage.[2] It was in service from 1976 to 2017.

Switching fabric


The voice switching fabric plan was similar to that of the earlier 5XB switch in being bidirectional and in using the call-back principle. The largest full-access matrix switches in the system (the 12A line grids had partial access), however, were 8x8 rather than 10x10 or 20x16. Thus they required eight stages rather than four to achieve large enough junctor groups in a large office. Because crosspoints were more expensive in the new system but switches were cheaper, system cost was minimized by organizing fewer crosspoints into more switches. The fabric was divided into Line Networks and Trunk Networks of four stages each, and partially folded to allow connecting line-to-line or trunk-to-trunk without exceeding eight stages of switching.

In the traditional implementation of a nonblocking minimal spanning switch able to connect N input customers to N output customers simultaneously—with the connections initiated in any order—the connection matrix scales on the order of N² crosspoints. This being impractical, statistical theory is used instead to design hardware that can connect most of the calls and block others when traffic exceeds the design capacity. These blocking switches are the most common in modern telephone exchanges. They are generally implemented as smaller switch fabrics in cascade. In many, a randomizer is used to select the start of a path through the multistage fabric so that the statistical properties predicted by the theory can be realized. In addition, if the control system is able to rearrange the routing of existing connections on the arrival of a new connection, a nonblocking matrix can be achieved with fewer switch points.
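To make the statistical design concrete, here is a minimal Monte Carlo sketch of randomized path hunting through a toy blocking fabric, written in Python. The link count, holding behavior, and hunt loop are illustrative assumptions, not 1ESS parameters.

import random

# Toy model of a blocking multistage fabric: a call needs one idle
# middle-stage link, and the hunt starts at a random position so the
# load spreads the way the randomizer described above intends.
MIDDLE_LINKS = 8
CALLS = 10_000
HOLD_PROB = 0.8          # chance an existing call keeps holding each step

def run() -> None:
    random.seed(1)
    held = [False] * MIDDLE_LINKS
    blocked = 0
    for _ in range(CALLS):
        # existing calls release at random, approximating call churn
        for i in range(MIDDLE_LINKS):
            if held[i] and random.random() > HOLD_PROB:
                held[i] = False
        # randomized starting point for the path hunt
        start = random.randrange(MIDDLE_LINKS)
        for i in range(MIDDLE_LINKS):
            j = (start + i) % MIDDLE_LINKS
            if not held[j]:
                held[j] = True
                break
        else:
            blocked += 1     # no idle path: the call is blocked
    print(f"blocked {blocked} of {CALLS} attempts")

run()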

Line and trunk networks


Each four stage Line Network (LN) or Trunk Network (TN) was divided into Junctor Switch Frames (JSF) and either Line Switch Frames (LSF) in the case of a Line Network, or Trunk Switch Frames (TSF) in the case of a Trunk Network. Links were designated A, B, C, and J for Junctor. A Links were internal to the LSF or TSF; B Links connected LSF or TSF to JSF, C were internal to JSF, and J links or Junctors connected to another net in the exchange.

All JSFs had a unity concentration ratio, that is the number of B links within the network equalled the number of junctors to other networks. Most LSFs had a 4:1 Line Concentration Ratio (LCR); that is the lines were four times as numerous as the B links. In some urban areas 2:1 LSF were used. The B links were often multipled to make a higher LCR, such as 3:1 or (especially in suburban 1ESS) 5:1. Line Networks always had 1024 Junctors, arranged in 16 grids that each switched 64 junctors to 64 B links. Four grids were grouped for control purposes in each of four LJFs.
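The concentration arithmetic above reduces to simple ratios. Below is a small Python sketch using only figures quoted in the text (1024 junctors per Line Network, 16 grids of 64); the helper name is ours.

# Line Concentration Ratio (LCR) = lines per B link; a Line Network
# always has 1024 junctors, and the JSF keeps B links equal to junctors.
def concentration_ratio(outer: int, inner: int) -> float:
    return outer / inner

junctors = 16 * 64                          # = 1024 per Line Network
b_links = junctors                          # unity ratio at the JSF

print(concentration_ratio(4096, b_links))   # 4.0 -> the common 4:1 LSF
print(concentration_ratio(2048, b_links))   # 2.0 -> urban 2:1 LSF
# Multipling B links together lowers their effective count, which is
# how the higher 3:1 or 5:1 ratios mentioned above were obtained.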

TSF had a unity concentration, but a TN could have more TSFs than JSFs. Thus their B links were usually multipled to make a Trunk Concentration Ratio (TCR) of 1.25:1 or 1.5:1, the latter being especially common in 1A offices. TSFs and JSFs were identical except for their position in the fabric and the presence of a ninth test access level or no-test level in the JSF. Each JSF or TSF was divided into 4 two-stage grids.

Early TNs had four JSF, for a total of 16 grids, 1024 J links and the same number of B links, with four B links from each Trunk Junctor grid to each Trunk Switch grid. Starting in the mid-1970s, larger offices had their B links wired differently, with only two B links from each Trunk Junctor Grid to each Trunk Switch Grid. This allowed a larger TN, with 8 JSF containing 32 grids, connecting 2048 junctors and 2048 B links. Thus the junctor groups could be larger and more efficient. These TN had eight TSF, giving the TN a unity trunk concentration ratio.

Within each LN or TN, the A, B, C and J links were counted from the outer termination to the inner. That is, for a trunk, the trunk Stage 0 switch could connect each trunk to any of eight A links, which in turn were wired to Stage 1 switches to connect them to B links. Trunk Junctor grids also had Stage 0 and Stage 1 switches, the former to connect B links to C links, and the latter to connect C to J links also called Junctors. Junctors were gathered into cables, 16 twisted pairs per cable constituting a Junctor Subgroup, running to the Junctor Grouping Frame where they were plugged into cables to other networks. Each network had 64 or 128 subgroups, and was connected to each other network by one or (usually) several subgroups.

The original 1ESS Ferreed switching fabric was packaged as separate 8x8 switches or other sizes, tied into the rest of the speech fabric and control circuitry by wire wrap connections.[3][4][5] The transmit/receive path of the analog voice signal is through a series of magnetic-latching reed switches (very similar to latching relays).[6]

The much smaller Remreed crosspoints, introduced at about the same time as 1AESS, were packaged as grid boxes of four principal types. Type 10A Junctor Grids and 11A Trunk Grids were a box about 16x16x5 inches (40x40x12 cm) with sixteen 8x8 switches inside. Type 12A Line Grids with 2:1 LCR were only about 5 inches (12 cm) wide, with eight 4x4 Stage 0 line switches with ferrods and cutoff contacts for 32 lines, connected internally to four 4x8 Stage 1 switches connecting to B-links. Type 14A Line Grids with 4:1 LCR were about 16x12x5 inches (40x30x12 cm) with 64 lines, 32 A-links and 16 B-links. The boxes were connected to the rest of the fabric and control circuitry by slide-in connectors. Thus the worker had to handle a much bigger, heavier piece of equipment, but did not have to unwrap and rewrap dozens of wires.

Fabric error


The two controllers in each Junctor Frame had no-test access to their Junctors via their F-switch, a ninth level in the Stage 1 switches which could be opened or closed independently of the crosspoints in the grid. When setting up each call through the fabric, but before connecting the fabric to the line and/or trunk, the controller could connect a test scan point to the talk wires in order to detect potentials. Current flowing through the scan point would be reported to the maintenance software, resulting in a "False Cross and Ground" (FCG) teleprinter message listing the path. Then the maintenance software would tell the call completion software to try again with a different junctor.

With a clean FCG test, the call completion software told the "A" relay in the trunk circuit to operate, connecting its transmission and test hardware to the switching fabric and thus to the line. Then, for an outgoing call, the trunk's scan point would scan for the presence of an off-hook line. If the short was not detected, the software would command the printing of a "Supervision Failure" (SUPF) message and try again with a different junctor. A similar supervision check was performed when an incoming call was answered. Any of these tests could reveal the presence of a bad crosspoint.
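The test-and-retry sequence can be sketched as a short control flow. The Python below is a hedged illustration; the class and field names (Path, Trunk, foreign_potential) are invented, not actual 1ESS software interfaces.

from dataclasses import dataclass

@dataclass
class Path:
    junctor: int
    foreign_potential: bool = False   # would the FCG scan see current?

@dataclass
class Trunk:
    off_hook: bool = True             # what supervision should detect

def complete_call(path: Path, trunk: Trunk) -> str:
    if path.foreign_potential:        # FCG: current through the scan point
        print(f"FCG junctor={path.junctor}")   # teleprinter report
        return "retry"                # maintenance asks for a new junctor
    # A relay operates: trunk hardware is now connected to the fabric
    if not trunk.off_hook:            # SUPF: supervision sees no short
        print(f"SUPF junctor={path.junctor}")
        return "retry"
    return "connected"

# try junctors until one passes both checks
for j in range(3):
    if complete_call(Path(junctor=j, foreign_potential=(j == 0)), Trunk()) == "connected":
        print(f"connected via junctor {j}")
        break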

Staff could study a mass of printouts to find which links and crosspoints (out of, in some offices, a million crosspoints) were causing calls to fail on first tries. In the late 1970s, teleprinter channels were gathered together in Switching Control Centers (SCC), later Switching Control Center System, each serving a dozen or more 1ESS exchanges and using their own computers to analyze these and other kinds of failure reports. They generated a so-called histogram (actually a scatterplot) of parts of the fabric where failures were particularly numerous, usually pointing to a particular bad crosspoint, even if it failed sporadically rather than consistently. Local workers could then busy out the appropriate switch or grid and replace it.
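The clustering the SCC computers performed amounts to tallying failure reports by fabric coordinate. A minimal sketch, with an invented report-tuple format standing in for the parsed teleprinter messages:

from collections import Counter

# (message type, frame, grid, crosspoint) -- illustrative parsed reports
reports = [
    ("FCG", 2, 1, 37), ("FCG", 2, 1, 37), ("SUPF", 0, 3, 12),
    ("FCG", 2, 1, 37), ("SUPF", 2, 1, 37), ("FCG", 1, 0, 5),
]

# count failures per fabric element so a sporadic bad crosspoint
# stands out even if it never fails twice in a row
hot_spots = Counter((frame, grid, xpt) for _, frame, grid, xpt in reports)
for (frame, grid, xpt), n in hot_spots.most_common(3):
    print(f"frame {frame} grid {grid} crosspoint {xpt}: {n} failures")
# staff would then busy out and replace the worst offender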

When a test access crosspoint itself was stuck closed, it would cause sporadic FCG failures all over both grids that were tested by that controller. Since the J links were externally connected, switchroom staff discovered that such failures could be found by making busy both grids, grounding the controller's test leads, and then testing all 128 J links, 256 wires, for a ground.

Given the restrictions of 1960s hardware, some failures were unavoidable. Even when such a failure was detected, the system was designed to connect the calling party to the wrong person rather than to a disconnect, intercept, etc.[7]

Scan and distribute


The computer received input from peripherals via magnetic scanners, composed of ferrod sensors, similar in principle to magnetic core memory except that the output was controlled by control windings analogous to the windings of a relay. Specifically, the ferrod was a transformer with four windings. Two small windings ran through holes in the center of a rod of ferrite. A pulse on the Interrogate winding was induced into the Readout winding, if the ferrite was not magnetically saturated. The larger control windings, if current was flowing through them, saturated the magnetic material, hence decoupling the Interrogate winding from the Readout winding which would return a Zero signal. The Interrogate windings of 16 ferrods of a row were wired in series to a driver, and the Readout windings of 64 ferrods of a column were wired to a sense amp. Check circuits ensured that an Interrogate current was indeed flowing.
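The interrogate/readout behavior reduces to a saturation test. A minimal Python model of one ferrod, with the polarity convention chosen for illustration:

def ferrod_read(control_current_flowing: bool) -> int:
    """Scan result for one ferrod: the Interrogate pulse couples to the
    Readout winding only when no control current saturates the rod."""
    saturated = control_current_flowing   # control winding saturates core
    return 0 if saturated else 1          # saturation decouples windings

print(ferrod_read(control_current_flowing=True))    # 0: e.g. loop current
print(ferrod_read(control_current_flowing=False))   # 1: no loop current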

Scanners were Line Scanners (LSC), Universal Trunk Scanners (USC), Junctor Scanners (JSC) and Master Scanners (MS). The first three only scanned for supervision, while Master Scanners did all other scan jobs. For example, a DTMF Receiver, mounted in a Miscellaneous Trunk frame, had eight demand scan points, one for each frequency, and two supervisory scan points, one to signal the presence of a valid DTMF combination so the software knew when to look at the frequency scan points, and the other to supervise the loop. The supervisory scan point also detected Dial Pulses, with software counting the pulses as they arrived. Each digit when it became valid was stored in a software hopper to be given to the Originating Register.
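Dial-pulse counting from periodic supervision scans can be sketched as follows; the tick interval, gap threshold, and sample encoding are illustrative assumptions, not 1ESS timings.

def count_digits(samples, interdigit_gap=4):
    """samples: one supervision reading per scan tick (1 = loop closed);
    a run of closed readings at least interdigit_gap long ends a digit."""
    digits, pulses, quiet, prev = [], 0, 0, 1
    for s in samples:
        if prev == 1 and s == 0:    # loop just opened: one dial pulse
            pulses += 1
        quiet = quiet + 1 if s == 1 else 0
        if pulses and quiet >= interdigit_gap:
            digits.append(pulses)   # digit complete ("0" would be 10 pulses)
            pulses = 0
        prev = s
    return digits

# a "3" then a "2": three breaks, a gap, two breaks, a gap
print(count_digits([1,0,1,0,1,0, 1,1,1,1, 0,1,0, 1,1,1,1]))   # [3, 2]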

Ferrods were mounted in pairs, usually with different control windings, so one could supervise the switchward side of a trunk and the other the side toward the distant office. Components inside the trunk pack, including diodes, determined, for example, whether it performed reverse battery signaling as an incoming trunk, or detected reverse battery from a distant office, i.e. whether it was an outgoing trunk.

Line ferrods were also provided in pairs, of which the even numbered one had contacts brought out to the front of the package in lugs suitable for wire wrap so the windings could be strapped for loop start or ground start signaling. The original 1ESS packaging had all the ferrods of an LSF together, and separate from the line switches, while the later 1AESS had each ferrod at the front of the steel box containing its line switch. Odd numbered line equipment could not be made ground start, their ferrods being inaccessible.

The computer controlled the magnetic latching relays by Signal Distributors (SD) packaged in the Universal Trunk frames, Junctor frames, or Miscellaneous Trunk frames, according to which they were numbered as USD, JSD or MSD. SDs were originally contact trees of 30-contact wire spring relays, each driven by a flipflop. Each magnetic latching relay had one transfer contact dedicated to sending a pulse back to the SD on each operate and release. The pulser in the SD detected this pulse to determine that the action had occurred, or else alerted the maintenance software to print an FSCAN report. In later 1AESS versions, SDs were solid state, with several SD points per circuit pack, generally on the same shelf as or an adjacent shelf to the trunk pack.

A few peripherals that needed quicker response time, such as Dial Pulse Transmitters, were controlled via Central Pulse Distributors, which otherwise were mainly used for enabling (alerting) a peripheral circuit controller to accept orders from the Peripheral Unit Address Bus.

1ESS computer


The duplicate Harvard architecture central processor or CC (Central Control) for the 1ESS operated at approximately 200 kHz. It comprised five bays, each two meters high and totaling about four meters in length per CC. Packaging was in cards approximately 4x10 inches (10x25 centimeters) with an edge connector in the back. Backplane wiring was cotton covered wire-wrap wires, not ribbons or other cables. CPU logic was implemented using discrete diode–transistor logic. One hard plastic card commonly held the components necessary to implement, for example, two gates or a flipflop.

A great deal of logic was given over to diagnostic circuitry. CPU diagnostics could be run that attempted to identify failing cards. For single-card failures, first-attempt repair success rates of 90% or better were common. Multiple-card failures were not uncommon, and in those cases the first-attempt repair success rate dropped rapidly.

The CPU design was quite complex, using three-way interleaving of instruction execution (later called an instruction pipeline) to improve throughput. Each instruction went through an indexing phase, an actual instruction execution phase, and an output phase. While one instruction was in its indexing phase, the previous instruction was in its execution phase, and the instruction before that was in its output phase.
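A toy trace of that three-phase overlap, with invented instruction labels:

PHASES = ("index", "execute", "output")

def pipeline_trace(instructions):
    # at tick t, instruction i is in phase (t - i), if that phase exists
    for t in range(len(instructions) + len(PHASES) - 1):
        active = [f"{instructions[t - p]}:{name}"
                  for p, name in enumerate(PHASES)
                  if 0 <= t - p < len(instructions)]
        print(f"tick {t}: " + "  ".join(active))

pipeline_trace(["I0", "I1", "I2"])
# tick 0: I0:index
# tick 1: I1:index  I0:execute
# tick 2: I2:index  I1:execute  I0:output
# tick 3: I2:execute  I1:output
# tick 4: I2:output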

In many instructions of the instruction set, data could be optionally masked and/or rotated. Single instructions existed for such esoteric functions as "find the first set bit (the rightmost bit that is set) in a data word, optionally reset the bit, and report the position of the bit". Having this function as an atomic instruction, rather than implementing it as a subroutine, dramatically sped up scanning for service requests or idle circuits. The central processor was implemented as a hierarchical state machine.
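In modern software the same primitive looks like the sketch below; in 1ESS it was a single atomic instruction, which is the point of the passage. The bit-twiddling idiom here is the standard one, not the 1ESS encoding.

def find_first_set(word: int, reset: bool = False):
    """Return (position of the rightmost set bit, possibly-updated word),
    or (None, word) if no bit is set."""
    if word == 0:
        return None, word
    pos = (word & -word).bit_length() - 1   # isolate the rightmost 1
    if reset:
        word &= word - 1                    # clear that bit
    return pos, word

busy_map = 0b10110000                       # e.g. idle-circuit flags
pos, busy_map = find_first_set(busy_map, reset=True)
print(pos, bin(busy_map))                   # 4 0b10100000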

Memory card for 64 words of 44 bits

Memory had a 44-bit word length for program stores, of which six bits were for Hamming error correction and one was used for an additional parity check. This left 37 bits for the instruction, of which usually 22 bits were used for the address. This was an unusually wide instruction word for the time.

Program stores also contained permanent data, and could not be written online. Instead, the aluminum memory cards, also called twistor planes,[5] had to be removed in groups of 128 so their permanent magnets could be written offline by a motorized writer, an improvement over the non-motorized single-card writer used in Project Nike. All memory frames, all busses, and all software and data were fully dual modular redundant. The dual CCs operated in lockstep, and the detection of a mismatch triggered an automatic sequencer to change the combination of CC, busses and memory modules until a configuration was reached that could pass a sanity check. Busses were twisted pairs, one pair for each address, data or control bit, connected at the CC and at each store frame by coupling transformers, and ending in terminating resistors at the last frame.
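The arithmetic behind the word layout checks out: six Hamming bits cover the 44 positions (2^6 = 64 ≥ 44), leaving 37 instruction bits once the extra parity bit is added. The sketch below uses the textbook Hamming bit placement, which is an assumption; the actual 1ESS layout may differ.

DATA_BITS = 37

def hamming_encode(data: int) -> int:
    """Pack 37 data bits into positions 1..43 (power-of-two positions
    hold check bits), then prepend one overall parity bit: 44 bits."""
    assert 0 <= data < (1 << DATA_BITS)
    word, d = {}, 0
    for pos in range(1, 44):
        if pos & (pos - 1):                 # not a power of two: data bit
            word[pos] = (data >> d) & 1
            d += 1
    for p in (1, 2, 4, 8, 16, 32):          # the six Hamming check bits
        word[p] = sum(bit for pos, bit in word.items()
                      if pos & p and pos & (pos - 1)) & 1
    overall = sum(word.values()) & 1        # the seventh, overall parity
    bits = [overall] + [word[pos] for pos in range(1, 44)]
    return sum(b << i for i, b in enumerate(bits))

print(f"{hamming_encode(0b1011):044b}")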

Call Stores were the system's read/write memory, containing the data for calls in progress and other temporary data. They had a 24-bit word, of which one bit was for parity check. They operated similarly to magnetic core memory, except that the ferrite was in sheets with a hole for each bit, and the coincident current address and readout wires passed through that hole. The first Call Stores held 8 kilowords, in a frame approximately a meter wide and two meters tall.

The separate program memory and data memory were operated in antiphase, with the addressing phase of Program Store coinciding with the data fetch phase of Call Store and vice versa. This resulted in further overlapping, thus higher program execution speed than might be expected from the slow clock rate.

Programs were mostly written in machine code. Bugs that had previously gone unnoticed became prominent when 1ESS was brought to big cities with heavy telephone traffic, and they delayed full adoption of the system for a few years. Temporary fixes included the Service Link Network (SLN), which did approximately the job of the Incoming Register Link and Ringing Selection Switch of the 5XB switch, thus diminishing CPU load and decreasing response times for incoming calls, and a Signal Processor (SP), a peripheral computer of only one bay, which handled simple but time-consuming tasks such as the timing and counting of dial pulses. 1AESS eliminated the need for the SLN and SP.

The half-inch tape drive was write-only, being used only for Automatic Message Accounting. Program updates were performed by shipping a load of Program Store cards with the new code written on them.

The Basic Generic program included constant "audits" to correct errors in the call registers and other data. When a critical hardware failure occurred in the processor or peripheral units, such as both controllers of a line switch frame failing and becoming unable to receive orders, the machine would stop connecting calls and go into a "phase of memory regeneration", "phase of reinitialization", or "Phase" for short. The Phases were known as Phase 1, 2, 4, or 5. Lesser phases took less time, clearing only the call registers of calls in an unstable state, that is, not yet connected.

During a Phase, the system, normally roaring with the sound of relays operating and releasing, would go quiet as no relays were getting orders. The Teletype Model 35 would ring its bell and print a series of P's while the phase lasted. For central office staff this could be a scary time, as seconds and then perhaps minutes passed while they knew subscribers who picked up their phones would get dead silence until the phase was over and the processor regained "sanity" and resumed connecting calls. Greater phases took longer, clearing all call registers, thus disconnecting all calls and treating any off-hook line as a request for dial tone. If the automated phases failed to restore system sanity, there were manual procedures to identify and isolate bad hardware or buses.[8]

1AESS

Head on view of 1AESS Master Control Center

Most of the thousands of 1ESS and 1AESS offices in the USA were replaced in the 1990s by DMS-100, 5ESS Switch and other digital switches, and since 2010 also by packet switches. As of late 2014, just over 20 1AESS installations remained in the North American network, which were located mostly in AT&T's legacy BellSouth and AT&T's legacy Southwestern Bell states, especially in the Atlanta GA metro area, the Saint Louis MO metro area, and in the Dallas/Fort Worth TX metro area. In 2015, AT&T did not renew a support contract with Alcatel-Lucent (now Nokia) for the 1AESS systems still in operation and notified Alcatel-Lucent of its intent to remove them all from service by 2017. As a result, Alcatel-Lucent dismantled the last 1AESS lab at the Naperville Bell Labs location in 2015, and announced the discontinuation of support for the 1AESS.[9][10] In 2017, AT&T completed the removal of remaining 1AESS systems by moving customers to other newer technology switches, typically with Genband switches with TDM trunking only.

The last known 1AESS switch was in Odessa, TX (Odessa Lincoln Federal wirecenter ODSSTXLI). It was disconnected from service around June 3, 2017 and cut over to a Genband G5/G6 packet switch.[11][12][13][14]

Other electronic switching systems


The No. 1 Electronic Switching System Arranged with Data Features (No. 1 ESS ADF) was an adaptation of the Number One Electronic Switching System to create a store and forward message switching system. It used both single and multi-station lines for transmitting teletypewriter and data messages. It was created to respond to a growing need for rapid and economical delivery of data and printed copy.[15]

Features


The No. 1 ESS ADF had a large number of features, including:[16]

  • Mnemonic addresses: Alphanumeric codes used to address stations
  • Group code addresses: Mnemonic codes used to address a specific combination of stations
  • Precedence: Message delivery according to four levels of precedence
  • Date and time services: Optional date and time of message origination and delivery
  • Multiline hunting groups: Distribution of messages to the next available station in a group
  • Alternate delivery: Optional routing of all messages addressed to one station to another station

from Grokipedia
The Number One Electronic Switching System (No. 1 ESS), also known as 1ESS, was the first large-scale, stored-program-controlled electronic telephone switching system designed and developed by Bell Telephone Laboratories for the Bell System. It was capable of handling local, toll, and tandem switching applications with capacities ranging from 4,000 to 65,000 lines and up to 100,000 calls per busy hour. Introduced as a general-purpose switching machine, it marked a pivotal shift from electromechanical systems to electronic control, emphasizing flexibility for existing and future telephone services while ensuring high reliability through duplicated central processors and automated fault detection mechanisms. The system was first placed into commercial service on May 30, 1965, in Succasunna, New Jersey, following laboratory testing earlier that year.

Development of the No. 1 ESS began with foundational research at Bell Laboratories in 1945 and built upon the experimental electronic switching trial installed in Morris, Illinois, in 1959–1960, which validated key concepts like stored-program operation and maintenance diagnostics. As the largest engineering project in Bell Laboratories' history at the time, it involved interdisciplinary teams spanning systems engineering, device development, and manufacturing, culminating in a design finalized by early 1964 that incorporated novel hardware such as ferreed crosspoint switches in an eight-stage space-division network and low-level logic circuits using silicon transistors and diodes. The system's central processor featured a 24-bit word length, a 5.5-microsecond cycle time, and memory technologies including twistor and ferrite-sheet stores with error-correcting Hamming codes, enabling efficient call processing and scalability for growing telephone networks.

Key objectives of the No. 1 ESS included economic competitiveness with prior electromechanical switches, minimal service interruptions (targeting no more than two hours of downtime over 40 years), and adaptability to new services such as call forwarding and multi-line hunting through programmable software rather than hardwired logic. Innovations such as dual duplicated controls, an emergency-action sequencer for rapid fault recovery, and the PROCESS III compiler-assembler for managing over 100,000 instructions facilitated its role as a foundational technology in electronic switching. By the early 1970s, over 250 No. 1 ESS offices were in operation across the United States, and the platform dominated voice switching until the widespread adoption of digital systems in the 1980s.

History and Development

Origins and Design Objectives

The development of the Number One Electronic Switching System (1ESS) was initiated in the late 1950s by Bell Telephone Laboratories as part of a broader effort to modernize the Bell System's network. The project built upon foundational research in electronic switching at Bell Laboratories dating to 1945, including a pivotal experimental trial in Morris, Illinois, starting in 1960, which demonstrated the feasibility of stored-program control using cold-cathode gas tubes for serving approximately 400 subscribers. The primary motivation was to overcome the limitations of existing electromechanical systems, such as Step-by-Step and Crossbar switches, which were inefficient in space, power consumption, and adaptability to the rapidly growing demand for telephone services and new features.

Key design objectives centered on achieving exceptional reliability to support the Bell System's nationwide operations. The system targeted 99.999% uptime, limiting downtime to no more than two hours over 40 years through extensive hardware duplication and automated diagnostics. It was engineered to handle up to 65,000 lines and approximately 100,000 calls per busy hour, with configurations supporting 4:1 concentration ratios for up to 16,384 trunks. Additionally, 1ESS was designed for versatile two-wire or four-wire switching to accommodate local, toll, and tandem applications, while incorporating robust fault-detection mechanisms—such as parity checks, Hamming codes, and continuous self-diagnostics—to ensure uninterrupted operation.

The design emphasized a deliberate shift from traditional electromechanical relays to electronic control, utilizing reed relays for switching and twistor and ferrite-sheet memory for program and call data storage, which enabled stored-program control for greater flexibility. Modularity was a core principle, allowing scalable growth through standardized frames and circuit packs without major rewiring, facilitating future upgrades and adaptations to evolving service needs. The project, recognized as Bell Labs' largest development effort to date, was led by key engineers including J. R. Harris, who played a central role in the central processor design and overall system planning.

Initial Deployment and Milestones

The first commercial deployment of the Number One Electronic Switching System (1ESS) took place on May 30, 1965, in Succasunna, New Jersey, where New Jersey Bell installed the system to serve an initial 4,000 customers with plans for expansion to full capacity. This installation marked the transition of the Bell System to large-scale electronic stored-program control switching, fulfilling design objectives for high-capacity handling of up to 80,000 calls per hour in configured systems. The system's organization and objectives had been outlined earlier in a series of articles published in the Bell System Technical Journal in September 1964, providing the foundational documentation for its architecture and goals.

Following the Succasunna cutover, 1ESS saw rapid adoption within AT&T's network, with subsequent installations enabling nationwide service integration and replacing electromechanical systems in key locations. By the mid-1970s, thousands of 1ESS machines and variants were deployed across the United States, handling a significant share of traffic and demonstrating the system's suitability for urban and suburban offices. Early reliability testing highlighted its robustness, with duplicated controls and diagnostics contributing to extended operational uptime during initial field trials.

A major milestone came in 1976 with the introduction of the 1AESS upgrade, which offered a plug-compatible enhancement to the original 1ESS through a faster central processor and reduced equipment volume, facilitating smoother evolution in existing offices. The system also supported adaptations for specialized applications, including the No. 101 ESS variant for remote PBX service using time-division switching units connected to central office controls. In the 1980s, many 1ESS installations underwent retrofits, such as the addition of digital interfaces and time slot assigners, to accommodate emerging digital signaling requirements in the evolving network.

System Architecture

Switching Fabric

The switching fabric of the Number One Electronic Switching System (1ESS) consists of an eight-stage space-division network employing ferreed crosspoints, which combine reed relays with ferrite cores for magnetic latching and reliable metallic transmission paths. These crosspoints are organized into basic building blocks such as 8x8, 8x4, 4x4, and 16x8 matrices, providing low-blocking connectivity across line link and trunk link networks. The structure supports path hunting by the central control, which selects idle paths through sequential stage activation using wire-spring relays for selection logic, ensuring efficient call routing without mechanical crossbars. Each stage incorporates ferreed crosspoints in frame-based assemblies, with approximately 1,000 reed relays per stage to handle the matrix configurations and provide scalability for office sizes up to 65,000 lines.

Error detection in the switching fabric is integrated through specialized circuits that monitor crosspoint states and path integrity, including group check circuits that verify single paths and test verticals equipped with current-sensing ferrod detectors to identify false crosses or grounds. These mechanisms perform parity-like checks on crosspoint activations during call setup and supervision, triggering false cross and ground (FCG) tests if anomalies are sensed. Upon detection, the system initiates automatic rerouting via alternate paths in the multi-stage design, while diagnostic buses and maintenance programs isolate faulty crosspoints by sequentially testing stages, quarantining defective elements, and switching to duplicated controllers for continued operation without service interruption.

The fabric's capacity supports 4:1 concentration in trunk networks, allowing efficient use of junctors and trunks under varying traffic loads, with a total switching capacity of up to 100,000 calls per busy hour in a typical configuration. This design achieves minimal blocking probability, less than 0.5% under normal operating conditions (e.g., approximately 0.04% at 0.15 erlang per line), through multiple access paths and flexible junctor redistribution. The switching fabric integrates with line interfaces that accept incoming signals at the initial stages, providing entry into the network for call processing.

Line and Trunk Interfaces

The line networks (LN) in the Number One Electronic Switching System (1ESS) serve as the primary interfaces for connecting subscriber lines to the switching fabric, supporting up to 65,000 lines organized into 512-line groups for efficient management and scalability. These networks employ line link frames equipped with ferreed switches in 4x4 or 8x4 configurations and utilize a 2:1 concentration ratio to optimize resource allocation, where concentrators—such as eight units each handling 32 lines in 2:1 setups—route calls from lines to the core network. Key functions include supervision via ferrod sensors that detect off-hook conditions through line current flow, with scanners cyclically monitoring 1,024-point modules every 100 milliseconds to identify service requests; ringing is provided by dedicated circuits delivering 20-cycle voltage from 110A generators, adjustable by subscriber class and ceasing within 0.25 seconds upon answer; and dial detection occurs through scanner interrupts or customer dial receivers interrogating pulsing relays up to 100 times per second to accurately count pulses and correct distortions.

Trunk networks (TN) interface with external toll and tandem trunks, accommodating up to 10,000 trunks per office through trunk link networks that connect to the switching fabric for path completion. These networks feature a four-stage switch design with 4:1 concentration in certain configurations, using trunk switching frames that support 256 trunks each and junctor switching frames providing up to 4,096 access paths per trunk group, while enabling four-wire transmission to minimize signal loss. Signaling protocols such as single frequency (SF), multifrequency, and dial pulse are handled by trunk circuits that detect incoming signals and generate outgoing ones under central control, with supervision via Type 1C and 1D ferrods monitoring trunk states.

Both line and trunk interfaces incorporate robust protection mechanisms, including gas tube arrestors in protector frames to safeguard against lightning surges and overvoltages, with each module handling up to 6,000 outside plant pairs. Scanner interfaces, distributed across network frames, use 64-row matrices of 16 points each to monitor line and trunk status, feeding data to the central processor via peripheral buses with 17-lead answer and 48-pair monitor lines at intervals as short as 5.5 microseconds. Distributor outputs, employing signal distributors with 768 points per frame and duplicated controllers, actuate relays for ringing, cutoff, and trunk operations using bipolar pulses from central pulse distributors. These components ensure reliable signal conditioning and endpoint connectivity while integrating with the overall switching network for call handling.

Control and Processing

Central Processor

The central processor of the Number One Electronic Switching System (1ESS), known as the central control, features a dual redundant CPU architecture in a Harvard configuration, with separate program and call stores to enable parallel instruction fetching and data access. Each CPU utilizes high-speed logic and operates with a cycle time of 5.5 microseconds (approximately 182 kHz), executing 44-bit instruction words that include 37 instruction bits and 7 check bits. This design supports efficient processing of switching tasks, with the two CPUs running in synchrony and continuously cross-comparing outputs for fault detection.

The program store employs twistor memory, a read-only medium providing 131,072 words of 44 bits each, for storing the system's programs and fixed data. To ensure reliability, it incorporates Hamming codes for single-error correction, using 7 check bits per word, consistent with the rule of approximately $\lceil \log_2 n \rceil + 1$ parity bits for $n$ data bits (here applied to the 37 data bits of each 44-bit word). In contrast, the call store consists of ferrite-sheet memory (RAM) with capacities ranging from 8,192 words (basic unit) up to approximately 300,000 words of 24 bits each, depending on configuration, dedicated to temporary data for active calls and system state.

Processing occurs via stored-program execution, where the central control handles call setup and teardown by performing logical operations, such as path hunting and connection establishment, in response to inputs from peripheral scanners. Fault tolerance is achieved through full duplication of the processors, program stores, and call stores, with switchover to the standby unit in less than 10 milliseconds upon fault detection, minimizing service disruption. The processors and associated stores consume several kilowatts of power, drawn from dedicated battery plants to maintain operation during outages.

Scan and Distribute Mechanisms

The scan and distribute mechanisms in the Number One Electronic Switching System (1ESS) serve as essential peripheral units that interface between the central control and the external environment, enabling efficient monitoring of subscriber lines and trunks while executing control commands with high reliability. These subsystems employ ferromagnetic rod (ferrod) sensors and relay distributors to handle the demands of large-scale switching, supporting up to 65,000 lines and trunks in a typical installation. By periodically polling for events such as off-hook conditions or dialed digits and distributing orders to networks, they ensure real-time responsiveness without overburdening the central processor.

The scan subsystem utilizes ferrod magnetic scanners to detect service requests and supervise ongoing calls by sampling the states of lines, trunks, and diagnostic points at discrete intervals. These scanners, configured in 1024-point matrices (such as 64 by 16 arrays), employ specialized ferrod sensors (types 1B, 1C, 1D, and 1E) to sense current changes indicative of events like off-hook detection or digit reception. Scanning operates on a 10-millisecond cycle for directed tasks, interrogating all office lines approximately 10 times per second, while cyclic scans for administrative purposes occur every 100 milliseconds; dial detection may use faster 5- to 10-millisecond intervals triggered by interrupts or a 5-millisecond clock. The resulting scan data is serialized and recorded in temporary locations or call store tables, such as the scanner appearance (SCA) or line link areas, for subsequent processing by the central control. To distribute the workload evenly, scans are balanced across 5-millisecond intervals, with half the signal receivers polled per period.

The distribute subsystem, comprising signal distributors and the central pulse distributor (CPD), activates drivers in response to central control commands broadcast over communication buses, thereby controlling peripheral devices such as trunk relays and solenoids. Signal distributors manage low-speed operations through a relay-tree structure, supporting up to 1 million points via magnetic latching relays, mercury relays, or wirespring types, with drivers capable of 1.35-amp pulses over twisted-pair cables. The CPD, an all-electronic unit, handles high-speed actions by connecting specific peripheral units to the buses and generating bipolar or unipolar pulses—such as 0.5-microsecond high-speed pulses or 300-microsecond nominal pulses at 2.5 to 9 amperes—with precise timing for operations like network enabling. Commands are buffered in peripheral order buffers (POBs) and executed at a rate of up to 100,000 per hour, with verification through scan points or current-sensing circuits to confirm states.

Reliability in both subsystems is achieved through redundancy and self-checking features, including duplicated controllers for scan matrices—since a controller fault could affect an entire group, while individual ferrod failures impact only one line or trunk—and fault-detection programs that monitor error counters and parity checks. The ferrod matrices themselves are non-duplicated for cost efficiency, but the overall design targets failure rates below 10 per 10^9 device-hours, with bit error rates exceeding 1 in 10^7 triggering module rewrites and single-bit errors corrected via program store checks; this contributes to a predicted one component failure per month per central control and less than 0.02% incorrect calls. Hardware safeguards, such as group check circuits and emergency-action sequencers, further ensure fault isolation and system continuity.
Parameter            | Value               | Description
Maximum lines/trunks | 65,000              | Total supported by scan matrices
Relay points         | 1,000,000           | Controlled by distribute relay trees
Scan cycle time      | 10 ms per frame     | For directed line and digit scans
Commands per hour    | 100,000             | Distributed orders to peripherals
Device failure rate  | <10 per 10^9 hours  | Overall reliability metric
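The 5 ms load balancing described above (half the signal receivers polled per period) is easy to sketch; the receiver count and names are illustrative assumptions.

receivers = list(range(16))                 # e.g. 16 digit receivers

def receivers_due(period: int):
    # alternate halves of the list on successive 5 ms periods,
    # so every receiver is polled once per 10 ms cycle
    half = len(receivers) // 2
    start = (period % 2) * half
    return receivers[start:start + half]

for t in range(4):                          # four 5 ms periods = 20 ms
    print(f"{t * 5:>2} ms: scan {receivers_due(t)}")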

Variants and Upgrades

1AESS Improvements

The 1AESS represented a major upgrade to the original 1ESS, introduced in the mid-1970s to enhance performance, capacity, and reliability while maintaining compatibility with existing peripheral equipment. Key improvements centered on the new 1A processor, which operated at an effective clock speed of approximately 1 MHz—about five times faster than the original 1ESS processor running at around 200 kHz—enabling more than double the call-processing capacity through faster instruction execution (700 ns cycles) and faster access to call and program stores. This processor upgrade supported up to 240,000 peak busy-hour calls and over 100,000 line terminations, significantly expanding the system's scalability for larger urban deployments.

Hardware enhancements included the replacement of ferreed switches with remreed switches in the space-division network, which provided denser matrices and reduced backplane wiring complexity for improved reliability and maintainability. The switching fabric retained a staged organization similar to the original 1ESS but benefited from these reed upgrades for higher density. Additionally, the introduction of magnetic disk storage in the file store—up to 64 megabits across four disks with a 2.4 MHz data rate—allowed for larger program and data capacity, enabling remote software loading and updates without physical intervention. Error correction was bolstered with double parity bits per 26-bit word in core memory, interleaved parity schemes, and cyclic redundancy checks for disks, alongside software audits using hash sums to detect and recover from faults.

The first 1AESS was placed into service in 1976, marking the beginning of widespread adoption as the standard for local switching. By the 1990s, thousands of 1AESS offices had been deployed across the United States, serving as the backbone for voice services. The last remaining U.S. 1AESS was decommissioned on June 3, 2017, in Odessa, Texas, at the Lincoln wirecenter, transitioning customers to modern packet-based switches.

Specialized Adaptations

The No. 1 Electronic Switching System Arranged with Data Features (1ESS ADF) represented a key specialized adaptation of the base 1ESS for non-telephony applications, particularly store-and-forward message switching in administrative and data networks operated by AT&T's Long Lines Department. Building on the core stored-program control architecture of the 1ESS, the ADF incorporated new peripheral units for data assembly, bulk storage via disk and magnetic tape subsystems, and message queuing, enabling efficient handling of teletypewriter and data traffic such as payroll reports, circuit order layouts, and plant service records. This configuration supported polling of remote stations, reception of variable-length messages up to several thousand characters, and delivery to multiple destinations with assured reliability comparable to telephone switching systems.

Key features of the 1ESS ADF included mnemonic addressing for user-friendly station identification, precedence levels to prioritize urgent messages, and comprehensive error logging that maintained audit trails for accountability and compliance. The system utilized a 60-megabit fixed-head disk for rapid message storage and retrieval, alongside magnetic tape for archival filing of current and historical messages, allowing it to manage at least 1,000 lines and exceed the throughput of contemporary electronic message switchers. Deployed as part of the nationwide Administrative Data Network (ADNet), the 1ESS ADF emphasized 24-hour operation with fault-tolerant design, including redundant processors and automatic recovery from station or loop failures.

Beyond data applications, the 1ESS was adapted for toll switching with enhanced four-wire transmission support, enabling tandem and long-distance connectivity in configurations distinct from standard two-wire local service. These toll variants integrated ferreed switches and interface modules optimized for voice-frequency signaling over metallic trunks, providing scalable handling for inter-office traffic without the full digital overhaul seen in later systems. Such adaptations were integral to early electronic toll networks, supporting both local and wide-area applications with the flexibility of the underlying 1ESS processor.

Features and Capabilities

Core Call Handling

The core call handling in the Number One Electronic Switching System (1ESS) commences with dialing detection facilitated by the system's scan mechanisms, which periodically monitor subscriber lines and trunks for service requests such as off-hook conditions. Scanners operate at intervals of 5–10 milliseconds to detect dialed digits via dial pulses, multifrequency tones, or Touch-Tone signals, collecting them in registers or hoppers for processing by the central processor. This initiates the call flow, where the stored program analyzes the digits and prepares for path establishment.

Path setup proceeds through fabric hunting, employing sequential link selection that rotates through up to 16 available positions and overflows to alternate paths in the switching network to secure an idle connection between the originating and terminating points. The switching fabric plays a critical role in this phase by providing the physical paths through its space-division stages, including line link networks and junctors. Upon successful path completion, the conversation phase begins, with ongoing supervision via scanners checking line and trunk status at nominal 100-millisecond intervals to detect any anomalies and maintain call integrity.

Specific handling features include support for multiline hunting groups comprising up to 100 lines, enabling sequential distribution of incoming calls to the next idle line within the group to optimize availability. If the called party is busy, the system generates a busy tone through peripheral signal distributors employing precision oscillators for accurate signaling. Call teardown occurs via disconnect supervision, where scanners detect on-hook transitions, followed by a timed release protocol—ranging from 300 milliseconds to 10–12 seconds—to confirm disconnection and release network resources, preventing premature or delayed clearing.

In terms of capacity, the 1ESS is engineered to process up to 100,000 calls per hour under peak load, with blocking probability assessed using the Erlang B formula to model trunk occupancy and concentration:

$$B = \frac{E^N / N!}{\sum_{k=0}^{N} E^k / k!}$$

where $E$ represents the offered traffic load in erlangs and $N$ the number of trunks or servers, ensuring acceptable loss rates (typically under 1–2%) when applied to trunk network (TN) concentration ratios. Traffic engineering assumes an average call duration of 3 minutes for local and tandem connections, aligning with contemporary telephony standards. The system further accommodates tandem switching, routing calls between offices via dedicated trunk-to-trunk junctors and outpulsing mechanisms for seamless interconnects.
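The formula is easier to evaluate with the standard Erlang B recurrence than with the factorials directly; a short sketch with illustrative traffic values:

def erlang_b(erlangs: float, trunks: int) -> float:
    # stable recurrence: B(0) = 1, B(k) = E*B(k-1) / (k + E*B(k-1))
    b = 1.0
    for k in range(1, trunks + 1):
        b = erlangs * b / (k + erlangs * b)
    return b

# 100,000 calls per busy hour at 3 minutes each = 5,000 erlangs offered
# office-wide; a single trunk group is much smaller, for example:
print(f"{erlang_b(100, 110):.4%}")   # blocking for 100 erlangs on 110 trunks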

Advanced Services

The stored-program control architecture of the Number One Electronic Switching System (1ESS) enabled a range of advanced services beyond basic call handling, leveraging its central processor and memory systems to implement programmable enhancements. These services were demonstrated during the experimental trial in Morris, Illinois, in 1960, where features like automatic call transfer—now known as call forwarding—and abbreviated dialing, or speed dialing, were shown to be feasible and economical. Advanced services expanded over time; for example, call waiting and three-way calling became available as part of custom calling features in the late 1960s. Call waiting provided an audible alert to a user engaged in a call upon receiving an incoming one, while call forwarding allowed redirection of incoming calls to an alternate number, both activated through software instructions without hardware alterations. Speed dialing permitted users to employ abbreviated codes for frequently called numbers, streamlining access in both residential and business environments.

These features were stored in the call store, a high-speed, random-access memory using ferrite-sheet or twistor technologies, which held call-associated data in 24-bit words with parity checking for reliability. Activation occurred via central processor instructions, where the duplicated processors—operating at a 5.5-microsecond cycle time—executed tasks from the program store, enabling real-time processing of up to 100,000 busy-hour calls. Multiline services, such as executive override, allowed authorized users to break into busy lines for priority access, supporting complex business applications. The scan mechanisms detected service triggers, like off-hook states or dialed codes, to initiate these routines without disrupting core operations.

Time-of-day routing directed calls based on scheduled parameters, utilizing the system clock, which generated interrupts every 5 milliseconds to synchronize timing-sensitive functions, including date and time services for logging or restrictions. For private branch exchange (PBX) applications in later configurations like ESSX-1, uniform call distribution evenly allocated incoming calls across available agents, enhancing efficiency in high-volume settings like call centers. By the 1970s, the 1ESS supported a wide variety of custom features, including up to 56 classes of service, through its generic program design, allowing tailored configurations via software updates rather than physical rewiring.

Legacy and Impact

Technological Influence

The Number One Electronic Switching System (1ESS) pioneered the use of stored program control (SPC) in large-scale telephone exchanges, enabling programmable logic for call processing that replaced rigid electromechanical wiring with flexible software instructions held in electronic memory. This innovation facilitated rapid updates to switching functions without hardware modifications, directly influencing subsequent AT&T developments such as the No. 4 ESS toll switch introduced in 1976, which built upon SPC principles for higher-capacity digital toll switching, and the 5ESS digital system deployed in the 1980s, which extended electronic control to fully digital voice and data services. The SPC architecture also enabled early implementation of advanced features like automatic call forwarding and conference calling, which evolved into standardized services in later electronic and digital switches worldwide.

In terms of industry impact, the 1ESS demonstrated substantial reductions in maintenance costs compared to crossbar systems, primarily through automated fault detection and self-diagnostic routines that minimized manual interventions and downtime. These efficiencies, achieved via electronic scanning and centralized processing, set a benchmark for operational savings in electronic switching, inspiring the global transition to SPC-based systems in the 1970s, including international designs that adopted similar modular control paradigms. Bell Labs' extensive patent filings related to 1ESS components, such as logic circuits and switching matrices, further shaped the intellectual-property landscape for electronic switching.

The 1ESS's reliability features, including duplicated central processors operating in synchronous lockstep mode for fault tolerance and Hamming code-based error correction in program stores, ensured high availability with projected downtime of just two hours over 40 years of service. These duplication and error-handling models provided a foundational legacy for fault-tolerant designs in telecommunications, influencing redundancy strategies in later systems. As AT&T's primary central office switch from its 1965 deployment until the early 1990s, the 1ESS served as the network workhorse, proving the scalability of electronic switching in production environments. It is cited in IEEE publications as the first fully scalable electronic switching system, validating SPC for widespread adoption.

Decommissioning and Modern Context

The decommissioning of the Number One Electronic Switching System (1ESS) and its primary upgrade, the 1AESS, progressed steadily as networks transitioned to digital and IP-based architectures. AT&T ended official support for remaining 1AESS systems in 2015 by not renewing its contract with Alcatel-Lucent (now Nokia), prompting accelerated retirements among the few surviving installations. By 2010, approximately 59 1AESS switches were still operational across North America, mostly serving rural U.S. communities, though this number dwindled rapidly in subsequent years.

The final 1AESS installation, located in Odessa, Texas (Odessa Lincoln Federal wirecenter), remained in service until June 3, 2017, when it was replaced by a Genband G5/G6 packet switch. This shutdown signified the complete phase-out of the 1AESS variant from the public switched telephone network (PSTN), concluding over five decades of operation for the 1ESS family and marking the end of the analog electronic switching era pioneered by Bell Labs.

In the modern context, no large-scale active deployments of 1ESS or 1AESS exist within commercial networks, as they have been supplanted by softswitches and IP multimedia subsystems for greater scalability and cost efficiency. Legacy support is now provided through third-party vendors specializing in end-of-life telecom equipment maintenance, ensuring minimal functionality for isolated holdouts if needed. The systems are studied in telecommunications history for their role in bridging electromechanical to fully digital switching, highlighting challenges such as Y2K compliance modifications in the late 1990s and migrations to platforms like Cisco PGW softswitches. Surviving components, including circuit packs, are preserved in institutions like the Smithsonian National Museum of American History.

