350 nm process
The 350 nanometer process (350 nm process) is a level of semiconductor process technology that was reached in the 1995–1996 timeframe by leading semiconductor companies like Intel and IBM.
Example processes
- SGS-Thomson 5LM[1]
Products featuring 350 nm manufacturing process
- MTI VR4300i (1995), used in the Nintendo 64 game console.
- Intel Pentium (P54CS, 1995), Pentium Pro (1995) and initial Pentium II CPUs (Klamath, 1997).
- AMD K5 (1996) and original AMD K6 (Model 6, 1997) CPUs.
- MCST-R150 (МЦСТ-R150, 2001).
- Parallax Propeller (2006), 8-core microcontroller.[2]
- Atmel ATmega328, used in the Arduino UNO.[3][4]
- Nvidia RIVA 128 (1997) GPU.[1]
References
- ^ a b Logan, Andrew (1997-09-10). "RIVA 128 gains support as preferred Direct3D developer platform press release". NVIDIA. Archived from the original on 1998-06-13. Retrieved 2023-12-03.
- ^ "Propeller I semiconductor process technology? Is it 350nm or 180nm?". Parallax Forums. Retrieved 2015-09-13.
- ^ Petryk, Dmytro; Dyka, Zoya (2018). "Optical Fault Injections: a Setup Comparison". S2CID 198917285.
- ^ Guillen, Oscar; Gruber, Michael; De Santis, Fabrizio (2017). "Low-Cost Setup for Localized Semi-invasive Optical Fault Injection Attacks: How Low Can We Go?". Constructive Side-Channel Analysis and Secure Design. Lecture Notes in Computer Science. Vol. 10348. pp. 207–222. doi:10.1007/978-3-319-64647-3_13. ISBN 978-3-319-64646-6.
| Preceded by 600 nm | CMOS manufacturing processes | Succeeded by 250 nm |
History and Development
Timeline of Introduction
The development of the 350 nm semiconductor process began with initial research and prototyping efforts in the early 1990s, building on the preceding 600 nm node that had entered production around 1993–1994.[6][1] These activities focused on scaling down feature sizes to improve transistor density and performance, guided by emerging industry roadmaps that established a roughly two-year cycle for node advancements.[1] An early milestone was Sony's 16 Mbit SRAM chip implemented on the 0.35 μm process in 1994. By late 1995, the process achieved commercial viability, with the first integrated-circuit manufacturing starting that year, marking a significant step in high-volume semiconductor production.[7] A key commercial milestone was Intel's launch of its CMOS-6 process in the first quarter of 1995, which used a 0.35-micron gate length and enabled broader compatibility with existing designs.[8] Parallel developments at IBM during 1995–1996 included advancements in 0.35-micron CMOS technology, such as sample availability for high-speed processors, contributing to the node's maturation.[9] Widespread adoption occurred between 1996 and 1997, as major foundries like TSMC scaled up production, with TSMC entering volume manufacturing of 0.35-micron SRAM in June 1996, ahead of schedule.[10] The process began phasing out in 1998–1999, supplanted by the emerging 250 nm node as part of the continued roadmap progression.[7][1]

Key Companies and Innovations
Intel led the development of the 350 nm process through its CMOS-6 technology, introduced commercially in 1995, which featured an advanced salicide process using titanium disilicide (TiSi₂) to significantly reduce source/drain and gate resistances, enabling higher transistor performance and lower power consumption.[11] This innovation addressed key challenges in scaling CMOS devices by minimizing parasitic resistances while maintaining reliability in sub-micron features.[11]

IBM contributed through extensive collaborative efforts, including joint ventures such as MiCRUS with Cirrus Logic, established in 1994 and operational from 1995, which focused on CMOS scaling to 350 nm and achieved early yield improvements through shared fabrication expertise and process optimizations.[12] These partnerships facilitated broader adoption of 350 nm CMOS by integrating bipolar and CMOS technologies, enhancing overall manufacturing efficiency and device density.[12]

TSMC played a pivotal role as a pure-play foundry, launching its 0.35 μm CMOS process in volume production in 1996, which opened access to advanced 350 nm technology for fabless and other non-IDM (non-integrated device manufacturer) customers by offering flexible, high-volume manufacturing without in-house design constraints.[10] This model spurred innovation across the industry by providing cost-effective scaling options and rapid prototyping for diverse applications.[10]

Japanese firms, particularly NEC, advanced graphics-specific optimizations at the 350 nm node, leveraging their 0.35 μm process for high-performance embedded graphics processors, such as the Reality Co-Processor in the Nintendo 64, which integrated vector units and texture mapping with reduced latency through customized interconnects and transistor layouts.[13] Other Japanese companies like Hitachi contributed similar refinements, emphasizing low-power graphics ICs suitable for consumer electronics.[7] Key patents and innovations further standardized the 350 nm process.
Industry collaborations through SEMI developed design rules for 350 nm, promoting interoperability in lithography and interconnect standards to accelerate adoption across global fabs.[1]

Technical Aspects
Lithography and Feature Sizes
The 350 nm process relied on i-line photolithography, using ultraviolet light at a wavelength of 365 nm emitted by mercury arc lamps as the primary illumination source for patterning features onto photoresist-coated wafers.[14] This wavelength, the i-line of the mercury emission spectrum, enabled projection systems such as steppers to achieve the necessary resolution for sub-micron features through reduction optics, typically with a 5:1 or 4:1 mask-to-wafer reduction ratio.[15] The process involved coating wafers with photoresist, exposing them through a photomask aligned via the stepper, and developing the resist to transfer the pattern for subsequent etching or deposition steps.[16] Key dimensional parameters defined the node's capabilities: the drawn gate length (Ldrawn) was specified at 0.35 µm, representing the nominal transistor channel length as laid out in the design.[17] However, due to effects such as lateral dopant diffusion during implantation and thermal processing, the effective channel length (Leff) was typically reduced to around 0.25 µm, affecting short-channel behavior and device performance.[8] Minimum feature sizes included metal pitches of 0.7 to 1.0 µm, allowing interconnect routing while maintaining manufacturability, and contact/via dimensions of approximately 0.4 µm to ensure reliable electrical connections without excessive resistance.[17] These dimensions were patterned using steppers with numerical apertures (NA) of 0.5 to 0.6, pushing the limits of optical imaging for the era.[18] The fundamental resolution limit followed the Rayleigh criterion, expressed as

CD = k1 · λ / NA

where CD is the critical dimension (the minimum resolvable feature), λ = 365 nm, NA ≈ 0.5–0.6, and k1 (a process factor accounting for mask quality, resist response, and illumination) ranged from 0.6 to 0.8.[19] This equation shows how the 350 nm node balanced wavelength and optics to achieve CDs near 0.35 µm, though practical yields required optimizations such as off-axis illumination to improve contrast.[20]
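To make the trade-off concrete, the Rayleigh criterion can be evaluated directly for the parameter ranges quoted above (λ = 365 nm, NA = 0.5–0.6, k1 = 0.6–0.8). The helper function below is an illustrative sketch of this arithmetic, not part of any lithography toolchain.

```python
# Rayleigh criterion: CD = k1 * wavelength / NA.
# Parameter ranges are those quoted in the text for the 350 nm node.

I_LINE_NM = 365.0  # mercury i-line wavelength, in nanometres


def critical_dimension_nm(k1: float, na: float, wavelength_nm: float = I_LINE_NM) -> float:
    """Minimum printable feature size (critical dimension) in nanometres."""
    return k1 * wavelength_nm / na


# Best case: aggressive process factor combined with the highest NA of the era.
best_nm = critical_dimension_nm(k1=0.6, na=0.6)   # ~365 nm
# Worst case: relaxed process factor with the lowest NA.
worst_nm = critical_dimension_nm(k1=0.8, na=0.5)  # ~584 nm

print(f"CD range: {best_nm:.0f}-{worst_nm:.0f} nm")
```

With these ranges the printable CD lands between roughly 365 and 584 nm, which is why 0.35 µm features sat at the diffraction limit of i-line optics and required resolution-enhancement techniques such as off-axis illumination.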
Significant challenges arose from optical proximity effects (OPE), where diffraction from adjacent features caused linewidth variations and reduced image fidelity, particularly in dense layouts.[21] To mitigate these, optical proximity correction (OPC) techniques adjusted mask patterns to compensate for distortions, while in high-density regions, phase-shift masks (PSMs)—such as attenuated or rim-type designs—were introduced to create constructive interference and enhance resolution by up to 20–30% for features like 0.35 µm contacts.[22] These advancements, often applied selectively to critical layers, enabled reliable patterning despite the diffraction-limited nature of i-line systems.[23]
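Rule-based OPC of this era often amounted to table-driven mask biasing: lines in dense surroundings print narrower on the wafer, so their drawn widths are biased upward on the mask. The sketch below illustrates that idea only; the spacing thresholds and bias values are hypothetical, invented for the example rather than taken from any real design-rule deck.

```python
# Illustrative rule-based OPC width biasing (hypothetical thresholds/biases).
# Dense features suffer stronger proximity effects and receive a larger
# positive width bias on the mask; isolated features need little correction.

def biased_mask_width_um(drawn_width_um: float, spacing_um: float) -> float:
    """Apply a table-driven width bias based on spacing to the nearest feature."""
    if spacing_um < 0.5:     # dense layout: strongest narrowing on wafer
        bias_um = 0.04
    elif spacing_um < 1.0:   # semi-dense layout
        bias_um = 0.02
    else:                    # isolated feature
        bias_um = 0.0
    return drawn_width_um + bias_um


# A 0.35 um line in a dense array vs. an isolated 0.35 um line:
print(biased_mask_width_um(0.35, 0.40))  # dense: biased up
print(biased_mask_width_um(0.35, 2.00))  # isolated: unchanged
```

Production OPC flows used measured proximity curves rather than a three-entry table, but the principle of compensating the mask pattern for predictable on-wafer distortion is the same.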