| Unix | |
|---|---|
| UNIX System III running on a PDP-11 simulator | |
| Developer | Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna at Bell Labs |
| Written in | C and assembly language |
| OS family | Unix |
| Source model | Historically proprietary software, while some Unix projects (including BSD family and Illumos) are open-source and historical Unix source code is archived. |
| Initial release | Development started in 1969; first manual published internally in November 1971;[1] announced outside Bell Labs in October 1973[2] |
| Available in | English |
| Kernel type | Varies; monolithic, microkernel, hybrid |
| Influenced by | CTSS,[3] Multics |
| Default user interface | Command-line interface and Graphical (Wayland and X Window System; Android SurfaceFlinger; macOS Quartz) |
| License | Varies; some versions are proprietary, others are free/libre or open-source software |
| Official website | opengroup |
Unix (/ˈjuːnɪks/, YOO-niks; trademarked as UNIX) is a family of multitasking, multi-user computer operating systems that derive from the original AT&T Unix, whose development started in 1969[1] at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.[4] Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), Sun Microsystems (SunOS/Solaris), HP/HPE (HP-UX), and IBM (AIX).
The early versions of Unix—which are retrospectively referred to as "Research Unix"—ran on computers such as the PDP-11 and VAX; Unix was commonly used on minicomputers and mainframes from the 1970s onwards.[5] It distinguished itself from its predecessors as the first portable operating system: by 1973 almost the entire operating system had been rewritten in the C programming language, which allows Unix to operate on numerous platforms.[6] Unix systems are characterized by a modular design that is sometimes called the "Unix philosophy". According to this philosophy, the operating system should provide a set of simple tools, each of which performs a limited, well-defined function.[7] A unified and inode-based filesystem and an inter-process communication mechanism known as "pipes" serve as the main means of communication,[4] and a shell scripting and command language (the Unix shell) is used to combine the tools to perform complex workflows.
Version 7 in 1979 was the final widely released Research Unix, after which AT&T sold UNIX System III, based on Version 7, commercially in 1982; to avoid confusion between the Unix variants, AT&T combined various versions developed by others and released it as UNIX System V in 1983. However, as these were closed-source, the University of California, Berkeley continued developing BSD as an alternative. Other vendors that were beginning to create commercialized versions of Unix would base their version on either System V (like Silicon Graphics's IRIX) or BSD (like SunOS). Amid the "Unix wars" of standardization, AT&T, alongside Sun, merged System V, BSD, SunOS and Xenix, consolidating their features into one package as UNIX System V Release 4 (SVR4) in 1989, and it was commercialized by Unix System Laboratories, an AT&T spinoff.[8][9] A rival Unix from other vendors was released as OSF/1; however, most commercial Unix vendors eventually changed their distributions to be based on SVR4 with BSD features added on top.
AT&T sold Unix to Novell in 1992, which later sold the UNIX trademark to a new industry consortium called The Open Group, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS).[8] Since the 1990s, Unix systems have appeared on home computers: BSD/OS was the first to be commercialized for i386 computers, and since then free Unix-like clones of existing systems have been developed, such as FreeBSD and the combination of Linux and GNU, the latter of which has since eclipsed Unix in popularity. Unix was, until 2005, the most widely used server operating system.[10] However, Unix distributions like IBM AIX, Oracle Solaris and OpenServer continue to be widely used in certain fields.[11][12]
Overview
Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers.[13][14][15] The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues.[16]
At first, Unix was not designed to support multi-tasking[17] or to be portable.[6] Later, Unix gradually gained multi-tasking and multi-user capabilities in a time-sharing configuration, as well as portability. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command-line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".[18]
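To make the pipe-and-tools model concrete, the following C sketch shows roughly how a shell could wire two standard utilities together, the way a pipeline such as ls | grep txt does. It is a minimal illustration rather than actual shell source; the choice of ls, grep, and the pattern "txt" is arbitrary.

```c
/* Minimal sketch of how a shell might run "ls | grep txt":
 * one pipe, two children, each exec'ing a standard utility. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t writer = fork();
    if (writer == 0) {              /* first child: ls, stdout -> pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp ls"); _exit(127);
    }

    pid_t reader = fork();
    if (reader == 0) {              /* second child: grep, stdin <- pipe */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]); close(fd[1]);
        execlp("grep", "grep", "txt", (char *)NULL);
        perror("execlp grep"); _exit(127);
    }

    close(fd[0]); close(fd[1]);     /* parent keeps no pipe ends open */
    waitpid(writer, NULL, 0);
    waitpid(reader, NULL, 0);
    return 0;
}
```

Closing the unused pipe ends in each process matters: the reading child only sees end-of-file once every descriptor referring to the write end has been closed.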
By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes.[19][20] The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers.
Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to them both being ported to a wider variety of machine families than any other operating system.
The Unix operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the distinction of kernel space from user space, the latter being a lower priority realm where most application programs operate.
History
The origins of Unix date back to the mid-1960s when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE 645 mainframe computer.[21] Multics featured several innovations, but also presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna,[17] who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name.
The new operating system was a single-tasking system.[17] In 1970, the group coined the name Unics for Uniplexed Information and Computing Service as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix.[22] Dennis Ritchie,[17] Doug McIlroy,[1] and Peter G. Neumann[23] also credit Kernighan.
The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Ken Thompson faced multiple challenges attempting the kernel port due to the evolving state of C, which lacked key features like structures at the time.[17][24] Version 4 Unix, however, still had much PDP-11 specific code, and was not suitable for porting. The first port to another platform was a port of Version 6, made four years later (1977) at the University of Wollongong for the Interdata 7/32,[25] followed by a Bell Labs port of Version 7 to the Interdata 8/32 during 1977 and 1978.[26]
Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to Donald B. Gillies at the University of Illinois Urbana–Champaign (UIUC) Department of Computer Science.[27]
During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (BSD and System V) by commercial startups, which in turn led to Unix fragmenting into multiple similar, but often slightly incompatible, systems including DYNIX, HP-UX, SunOS/Solaris, AIX, and Xenix. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors.
In the 1990s, Unix and Unix-like systems grew in popularity and became the operating system of choice for over 90% of the world's top 500 fastest supercomputers,[28] as BSD and Linux distributions were developed through collaboration by a worldwide network of programmers. In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, later renamed macOS.[29]
Unix-like operating systems are widely used in modern servers, workstations, and mobile devices.[30]
Standards
In the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s, a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification (SUS) administered by The Open Group. In 1998, The Open Group and IEEE formed the Austin Group to provide a common definition of POSIX and the Single UNIX Specification, which, by 2008, had become the Open Group Base Specification.
In 1999, in an effort towards compatibility, several Unix system vendors agreed on SVR4's Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among different Unix systems operating on the same CPU architecture.
The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems; it has mainly been used in Linux.
Components
The Unix system is composed of several components that were originally packaged together. By including the development environment, libraries, documents and the portable, modifiable source code for all of these components, in addition to the kernel of an operating system, Unix was a self-contained software system. This was one of the key reasons it emerged as an important teaching and learning tool and has had a broad influence. See § Impact, below.
The inclusion of these components did not make the system large – the original V7 UNIX distribution, consisting of copies of all of the compiled binaries plus all of the source code and documentation occupied less than 10 MB and arrived on a single nine-track magnetic tape, earning its reputation as a portable system.[31] The printed documentation, typeset from the online sources, was contained in two volumes.
The names and filesystem locations of the Unix components have changed substantially across the history of the system. Nonetheless, the V7 implementation has the canonical early structure:
- Kernel – source code in /usr/sys, composed of several sub-components:
- conf – configuration and machine-dependent parts, including boot code
- dev – device drivers for control of hardware (and some pseudo-hardware)
- sys – operating system "kernel", handling memory management, process scheduling, system calls, etc.
- h – header files, defining key structures within the system and important system-specific invariables
- Development environment – early versions of Unix contained a development environment sufficient to recreate the entire system from source code:
- ed – text editor, for creating source code files
- cc – C language compiler (first appeared in V3 Unix)
- as – machine-language assembler for the machine
- ld – linker, for combining object files
- lib – object-code libraries (installed in /lib or /usr/lib). libc, the system library with C run-time support, was the primary library, but there have always been additional libraries for things such as mathematical functions (libm) or database access. V7 Unix introduced the first version of the modern "Standard I/O" library stdio as part of the system library. Later implementations increased the number of libraries significantly.
- make – build manager (introduced in PWB/UNIX), for effectively automating the build process
- include – header files for software development, defining standard interfaces and system invariants
- Other languages – V7 Unix contained a Fortran-77 compiler, a programmable arbitrary-precision calculator (bc, dc), and the awk scripting language; later versions and implementations contain many other language compilers and toolsets. Early BSD releases included Pascal tools, and many modern Unix systems also include the GNU Compiler Collection as well as or instead of a proprietary compiler system.
- Other tools – including an object-code archive manager (ar), symbol-table lister (nm), compiler-development tools (e.g., lex & yacc), and debugging tools.
- Commands – Unix makes little distinction between commands (user-level programs) for system operation and maintenance (e.g., cron), commands of general utility (e.g., grep), and more general-purpose applications such as the text formatting and typesetting package. Nonetheless, some major categories are:
- sh – the "shell" programmable command-line interpreter, the primary user interface on Unix before window systems appeared, and even afterward (within a "command window").
- Utilities – the core toolkit of the Unix command set, including cp, ls, grep, find and many others. Subcategories include:
- Document formatting – Unix systems were used from the outset for document preparation and typesetting systems, and included many related programs such as nroff, troff, tbl, eqn, refer, and pic. Some modern Unix systems also include packages such as TeX and Ghostscript.
- Graphics – the plot subsystem provided facilities for producing simple vector plots in a device-independent format, with device-specific interpreters to display such files. Modern Unix systems also generally include X11 as a standard windowing system and GUI, and many support OpenGL.
- Communications – early Unix systems contained no inter-system communication, but did include the inter-user communication programs mail and write. V7 introduced the early inter-system communication system UUCP, and systems beginning with BSD release 4.1c included TCP/IP utilities.
- Documentation – Unix was one of the first operating systems to include all of its documentation online in machine-readable form.[32] The documentation included:
- man – manual pages for each command, library component, system call, header file, etc.
- doc – longer documents detailing major subsystems, such as the C language and troff
Impact

The Unix system had a significant impact on other operating systems. It achieved its reputation by its interactivity, by providing the software at a nominal fee for educational use, by running on inexpensive hardware, and by being easy to adapt and move to different machines. Unix was originally written in assembly language, but was soon rewritten in C, a high-level programming language.[33] Although this followed the lead of CTSS, Multics and Burroughs MCP, it was Unix that popularized the idea.
Unix had a drastically simplified file model compared to many contemporary operating systems: treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple "stream of bytes" model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms.
Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC's RSX-11M's "group, user" hierarchy evolved into OpenVMS directories, CP/M's volumes evolved into MS-DOS 2.0+ subdirectories, and HP's MPE group.account hierarchy and IBM's SSP and OS/400 library systems were folded into broader POSIX file systems.
Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM's JCL). Since the shell and OS commands were "just another program", the user could choose (or even write) their own shell. New commands could be added without changing the shell itself. Unix's innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell.
A fundamental simplifying assumption of Unix was its focus on newline-delimited text for nearly all file formats. There were no "binary" editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike "record-based" file systems. The focus on text for representing nearly everything made Unix pipes especially useful and encouraged the development of simple, general tools that could easily be combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP.
Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above). The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming.
Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy.
The TCP/IP networking protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide, real-time connectivity and formed the basis for implementations on many other platforms.
The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the launch of the free software movement in 1983.
Free Unix and Unix-like variants
In 1983, Richard Stallman announced the GNU (short for "GNU's Not Unix") project, an ambitious effort to create a free software Unix-like system—"free" in the sense that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project's own kernel development project, GNU Hurd, had not yet produced a working kernel, but in 1991 Linus Torvalds released the Linux kernel as free software under the GNU General Public License. In addition to their use in the GNU operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU Core Utilities – have gone on to play central roles in other free Unix systems as well.
Linux distributions, consisting of the Linux kernel and large collections of compatible software, have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian, Ubuntu, Linux Mint, Slackware Linux, Arch Linux and Gentoo.[34]
A free derivative of BSD Unix, 386BSD, was released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi) by Unix System Laboratories, it was clarified that Berkeley had the right to distribute BSD Unix for free if it so desired. Since then, BSD Unix has been developed in several different product branches, including OpenBSD and DragonFly BSD.
Because of the modular design of the Unix model, sharing components is relatively common: most or all Unix and Unix-like systems include at least some BSD code, while some include GNU utilities in their distributions. Linux and BSD Unix are increasingly filling market needs traditionally served by proprietary Unix operating systems, expanding into new markets such as the consumer desktop, mobile devices and embedded devices.
In a 1999 interview, Dennis Ritchie voiced his opinion that Linux and BSD Unix operating systems are a continuation of the basis of the Unix design and are derivatives of Unix:[35]
I think the Linux phenomenon is quite delightful, because it draws so strongly on the basis that Unix provided. Linux seems to be among the healthiest of the direct Unix derivatives, though there are also the various BSD systems as well as the more official offerings from the workstation and mainframe manufacturers.
In the same interview, he states that he views both Unix and Linux as "the continuation of ideas that were started by Ken and me and many others, many years ago".[35]
OpenSolaris was the free software counterpart to Solaris developed by Sun Microsystems, which included a CDDL-licensed kernel and a primarily GNU userland. However, Oracle discontinued the project upon their acquisition of Sun, which prompted a group of former Sun employees and members of the OpenSolaris community to fork OpenSolaris into the illumos kernel. As of 2014, illumos remains the only active, open-source System V derivative.
ARPANET
In May 1975, RFC 681 described the development of Network Unix by the Center for Advanced Computation at the University of Illinois Urbana-Champaign.[36] The Unix system was said to "present several interesting capabilities as an ARPANET mini-host". At the time, Unix required a license from Bell Telephone Laboratories that cost US$20,000 for non-university institutions, while universities could obtain a license for a nominal fee of $150. It was noted that Bell was "open to suggestions" for an ARPANET-wide license.
The RFC specifically mentions that Unix "offers powerful local processing facilities in terms of user programs, several compilers, an editor based on QED, a versatile document preparation system, and an efficient file system featuring sophisticated access control, mountable and de-mountable volumes, and a unified treatment of peripherals as special files." The latter permitted the Network Control Program (NCP) to be integrated within the Unix file system, treating network connections as special files that could be accessed through standard Unix I/O calls, which included the added benefit of closing all connections on program exit, should the user neglect to do so. In order "to minimize the amount of code added to the basic Unix kernel", much of the NCP code ran in a swappable user process, running only when needed.[36]
Branding
AT&T originally did not allow licensees to use the Unix name; thus Microsoft called its variant Xenix, for example.[37] In October 1988, AT&T allowed licensees to use the UNIX trademark for systems based on System V Release 3.2, if certain conditions were met.[38] In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group),[39] and in 1995 sold the related business operations to Santa Cruz Operation (SCO).[40][41] Whether Novell also sold the copyrights to the actual software was the subject of a 2006 federal lawsuit, SCO v. Novell, in which Unix vendor SCO Group Inc. accused Novell of slander of title; Novell won. The case was appealed, but on August 30, 2011, the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions, closing the case.[42]
The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as "UNIX" (others are called "Unix-like").
By decree of The Open Group, the term "UNIX" refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group's Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system's vendor pays a substantial certification fee and annual trademark royalties to The Open Group.[43] Systems that have been licensed to use the UNIX trademark include AIX,[44] EulerOS,[45] HP-UX,[46] Inspur K-UX,[47] IRIX,[48] macOS,[49] Solaris,[50] Tru64 UNIX (formerly "Digital UNIX", or OSF/1),[51] and z/OS.[52] Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant.[53][54]
Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate all operating systems similar to Unix. This comes from the use of the asterisk (*) and the question mark characters as wildcard indicators in many utilities. This notation is also used to describe other Unix-like systems that have not met the requirements for UNIX branding from the Open Group.
The Open Group requests that UNIX always be used as an adjective followed by a generic term such as system to help avoid the creation of a genericized trademark.
Unix was the original formatting, but the usage of UNIX remains widespread because it was once typeset in small caps (Unix). According to Dennis Ritchie, when presenting the original Unix paper to the third Operating Systems Symposium of the American Association for Computing Machinery (ACM), "we had a new typesetter and troff had just been invented and we were intoxicated by being able to produce small caps".[55] Many of the operating system's predecessors and contemporaries used all-uppercase lettering, so many people wrote the name in upper case due to force of habit. It is not an acronym.[56]
Trademark names can be registered by different entities in different countries and trademark laws in some countries allow the same trademark name to be controlled by two different entities if each entity uses the trademark in easily distinguishable categories. The result is that Unix has been used as a brand name for various products including bookshelves, ink pens, bottled glue, diapers, hair driers and food containers.[57]
Several plural forms of Unix are used casually to refer to multiple brands of Unix and Unix-like systems. Most common is the conventional Unixes, but Unices, treating Unix as a Latin noun of the third declension, is also popular. The pseudo-Anglo-Saxon plural form Unixen is not common, although occasionally seen. Sun Microsystems, developer of the Solaris variant, has asserted that the term Unix is itself plural, referencing its many implementations.[58]
References
- ^ a b c McIlroy, M. D. (1987). A Research Unix reader: annotated excerpts from the Programmer's Manual, 1971–1986 (PDF) (Technical report). CSTR. Bell Labs. 139. Archived (PDF) from the original on 11 November 2017.
- ^ Ritchie, D. M.; Thompson, K. (1974). "The UNIX Time-Sharing System" (PDF). Communications of the ACM. 17 (7): 365–375. CiteSeerX 10.1.1.118.1214. doi:10.1145/361011.361061. S2CID 53235982.
- ^ Ritchie, Dennis M. (1977). The Unix Time-sharing System: A retrospective (PDF). Tenth Hawaii International Conference on the System Sciences. Retrieved October 23, 2025.
a good case can be made that [UNIX] is in essence a modern implementation of MIT's CTSS system
- ^ a b Ritchie, D.M.; Thompson, K. (July 1978). "The UNIX Time-Sharing System". Bell System Tech. J. 57 (6): 1905–1929. CiteSeerX 10.1.1.112.595. doi:10.1002/j.1538-7305.1978.tb02136.x. Retrieved December 9, 2012.
- ^ Schwartz, John (June 11, 2000). "Microsoft's Next Trials". The Washington Post. Archived from the original on December 13, 2024. Retrieved December 17, 2024.
- ^ a b Ritchie, Dennis M. (January 1993). "The Development of the C Language" (PDF). Retrieved 23 October 2025.
- ^ Raymond, Eric (19 September 2003). The Art of Unix Programming. Addison-Wesley. ISBN 978-0-13-142901-7. Archived from the original on 12 February 2009. Retrieved 9 February 2009.
- ^ a b Anthes, Gary (June 4, 2009). "Timeline: 40 Years Of Unix". Computerworld. Retrieved December 4, 2024.
- ^ Lewis, Peter H. (June 12, 1988). "Is Unix's Time Finally at hand?". The New York Times. Archived from the original on February 5, 2011. Retrieved December 17, 2024.
- ^ Bangeman, Eric (February 22, 2006). "Windows passes Unix in server sales". Ars Technica. Retrieved December 5, 2024.
- ^ Garvin, Skip (September 19, 2019). "Top IBM Power Systems myths: "IBM AIX is dead and Unix isn't relevant in today's market" (part 2)". IBM Blog. Retrieved December 5, 2024.
- ^ "Unix is dead. Long live Unix!".
- ^ Raymond, Eric Steven (2003). "The Elements of Operating-System Style". The Art of Unix Programming. Retrieved August 16, 2020.
- ^ Brand, Stewart (1984). Tandy/Radio Shack Book: Whole Earth Software Catalog. Quantum Press/Doubleday. ISBN 9780385191661.
UNIX was created by software developers for software developers, to give themselves an environment they could completely manipulate.
- ^ Spolsky, Joel (December 14, 2003). "Biculturalism". Joel on Software. Retrieved March 21, 2021.
When Unix was created and when it formed its cultural values, there were no end users.
- ^ Powers, Shelley; Peek, Jerry; O'Reilly, Tim; Loukides, Mike (2002). Unix Power Tools. "O'Reilly Media, Inc.". ISBN 978-0-596-00330-2.
- ^ a b c d e Ritchie, Dennis M. "The Evolution of the Unix Time-sharing System" (PDF). Retrieved 23 October 2025.
- ^ Kernighan, Brian W.; Pike, Rob (1984). The UNIX Programming Environment. p. viii.
- ^ Fiedler, Ryan (October 1983). "The Unix Tutorial / Part 3: Unix in the Microcomputer Marketplace". BYTE. p. 132. Retrieved January 30, 2015.
- ^ Brand, Stewart (1984). Tandy/Radio Shack Book: Whole Earth Software Catalog. Quantum Press/Doubleday. ISBN 9780385191661.
The best thing about UNIX is its portability. UNIX ports across a full range of hardware—from the single-user $5000 IBM PC to the $5 million Cray. For the first time, the point of stability becomes the software environment, not the hardware architecture; UNIX transcends changes in hardware technology, so programs written for the UNIX environment can move into the next generation of hardware.
- ^ Stuart, Brian L. (2010). Principles of operating systems: design & applications. Boston, Massachusetts: Thompson Learning. p. 23. ISBN 978-1-4188-3769-3.
- ^ Dolya, Aleksey (29 July 2003). "Interview with Brian Kernighan". Linux Journal. Archived from the original on 18 October 2017.
- ^ Rik Farrow. "An Interview with Peter G. Neumann" (PDF). ;login:. 42 (4): 38.
That then led to Unics (the castrated one-user Multics, so- called due to Brian Kernighan) later becoming UNIX (probably as a result of AT&T lawyers).
- ^ Georgiadis, Evangelos (2024). "Dismantling Scaffolding". Letters to the Editor. Communications of the ACM. 67. doi:10.1145/3654698. ISSN 0001-0782.
- ^ Reinfelds, Juris. "The First Port of UNIX" (PDF). Retrieved June 30, 2015.
- ^ "Portability of C Programs and the UNIX System". Bell Labs. Retrieved October 23, 2025.
- ^ Thompson, Ken (16 September 2014). "personal communication, Ken Thompson to Donald W. Gillies". UBC ECE website. Archived from the original on 22 March 2016.
- ^ "Operating system Family - Systems share". Top 500 project.
- ^ "Loading". Apple Developer. Archived from the original on 9 June 2012. Retrieved 22 August 2012.
- ^ "Unix's Revenge". asymco. 29 September 2010. Archived from the original on 9 November 2010. Retrieved 9 November 2010.
- ^ "Unix: the operating system setting new standards". IONOS Digitalguide. May 29, 2020. Retrieved May 10, 2022.
- ^ Shelley Powers; Jerry Peek; Tim O'Reilly; Michael Kosta Loukides; Mike Loukides (2003). Unix Power Tools. "O'Reilly Media, Inc.". p. 32. ISBN 978-0-596-00330-2. Retrieved August 8, 2022.
- ^ Ritchie, Dennis (1979). "The Evolution of the Unix Time-sharing System". Bell Laboratories. Retrieved 23 October 2025.
Perhaps the most important watershed occurred during 1973, when the operating system kernel was rewritten in C.
- ^ "Major Distributions". distrowatch.com.
- ^ a b Benet, Manuel (1999). "Interview With Dennis M. Ritchie". LinuxFocus.org. Archived from the original on 4 January 2018. Retrieved 16 August 2020.
- ^ a b Holmgren, Steve (May 1975). Network Unix. IETF. doi:10.17487/RFC0681. RFC 681. Retrieved April 22, 2021.
- ^ Libes, Sol (November 1982). "Bytelines". BYTE. pp. 540–547.
- ^ "AT&T Expands Unix Trademark Licensing Program" (Press release). October 31, 1988.
- ^ Chuck Karish (October 12, 1993). "The name UNIX is now the property of X/Open". Newsgroup: comp.std.unix. Usenet: 29hug3INN4qt@rodan.UU.NET. Retrieved February 21, 2020.
- ^ "Novell Completes Sale of UnixWare Business to The Santa Cruz Operation | Micro Focus". www.novell.com. Archived from the original on 20 December 2015. Retrieved 20 December 2015.
- ^ "HP, Novell and SCO To Deliver High-Volume UNIX OS With Advanced Network And Enterprise Services". Novell.com. September 20, 1995. Archived from the original on January 23, 2007. Retrieved November 9, 2010.
- ^ Jones, Pamela. "SCO Files Docketing Statement and We Find Out What Its Appeal Will Be About". Groklaw. Groklaw.net. Archived from the original on June 21, 2024. Retrieved April 12, 2011.
- ^ The Open Group. "The Open Brand Fee Schedule". Archived from the original on December 31, 2011. Retrieved December 26, 2011.
The right to use the UNIX Trademark requires the Licensee to pay to The Open Group an additional annual fee, calculated in accordance with the fee table set out below.
- ^ The Open Group. "AIX 6 Operating System V6.1.2 with SP1 or later certification". Archived from the original on April 8, 2016.
- ^ The Open Group (September 8, 2016). "Huawei EulerOS 2.0 certification".
- ^ The Open Group. "HP-UX 11i V3 Release B.11.31 or later certification". Archived from the original on April 8, 2016.
- ^ The Open Group. "Inspur K-UX 2.0 certification". Archived from the original on July 9, 2014.
- ^ The Open Group. "IRIX 6.5.28 with patches (4605 and 7029) certification". Archived from the original on March 4, 2016.
- ^ "macOS version 10.12 Sierra on Intel-based Mac computers". The Open Group. Archived from the original on October 2, 2016.
- ^ The Open Group. "Oracle Solaris 11 FCS and later certification". Archived from the original on September 24, 2015.
- ^ Bonnie Talerico. "Hewlett-Packard Company Conformance Statement". The Open Group. Archived from the original on December 10, 2015. Retrieved December 8, 2015.
- ^ Vivian W. Morabito. "IBM Corporation Conformance Statement". The Open Group. Retrieved January 21, 2018.
- ^ Peng Shen. "Huawei Conformance Statement". The Open Group. Retrieved January 22, 2020.
- ^ Peng Shen. "Huawei Conformance Statement: Commands and Utilities V4". The Open Group. Retrieved January 22, 2020.
- ^ Raymond, Eric S. (ed.). "Unix". The Jargon File. Archived from the original on June 4, 2011. Retrieved November 9, 2010.
- ^ Troy, Douglas (1990). UNIX Systems. Computing Fundamentals. Benjamin/Cumming Publishing Company. p. 4. ISBN 978-0-201-19827-0.
- ^ "Autres Unix, autres moeurs (OtherUnix)". Bell Laboratories. April 1, 2000. Retrieved October 23, 2025.
- ^ "History of Solaris" (PDF). Archived (PDF) from the original on March 18, 2017.
UNIX is plural. It is not one operating system but, many implementations of an idea that originated in 1965.
Further reading
- General
- Ritchie, D.M.; Thompson, K. (July–August 1978). "The UNIX Time-Sharing System". Bell System Technical Journal. 57 (6). Archived from the original on November 3, 2010.
- "UNIX History". www.levenez.com. Retrieved March 17, 2005.
- "AIX, FreeBSD, HP-UX, Linux, Solaris, Tru64". UNIXguide.net. Retrieved March 17, 2005.
- "Linux Weekly News, February 21, 2002". lwn.net. Retrieved April 7, 2006.
- Lions, John: Lions' "Commentary on the Sixth Edition UNIX Operating System". with Source Code, Peer-to-Peer Communications, 1996; ISBN 1-57398-013-7
- Books
- Salus, Peter H.: A Quarter Century of UNIX, Addison Wesley, June 1, 1994; ISBN 0-201-54777-5
- Television
- Computer Chronicles (1985). "UNIX".
- Computer Chronicles (1989). "Unix".
- Talks
- Ken Thompson (2019). "VCF East 2019 -- Brian Kernighan interviews Ken Thompson" (Interview).
- Marshall Kirk McKusick (2006). History of the Berkeley Software Distributions (three one-hour lectures).
External links
- The UNIX Standard, at The Open Group.
- The Evolution of the Unix Time-sharing System
- The Creation of the UNIX Operating System at the Wayback Machine (archived April 2, 2014)
- The Unix Tree: source code and manuals from historic releases
- Unix History Repository — a git repository representing a reconstructed version of the Unix history on GitHub
- The Unix 1st Edition Manual
- AT&T Tech Channel Archive: The UNIX Operating System: Making Computers More Productive (1982) on YouTube (film about Unix featuring Dennis Ritchie, Ken Thompson, Brian Kernighan, Alfred Aho, and more)
- AT&T Tech Channel Archive: The UNIX System: Making Computers Easier to Use (1982) on YouTube (complementary film to the preceding "Making Computers More Productive")
- audio bsdtalk170 - Marshall Kirk McKusick at DCBSDCon -- on history of tcp/ip (in BSD) -- abridgement of the three lectures on the history of BSD.
- A History of UNIX before Berkeley: UNIX Evolution: 1975-1984
- BYTE Magazine, September 1986: UNIX and the MC68000 – a software perspective on the MC68000 CPU architecture and UNIX compatibility
Introduction
Overview
Unix is a family of multitasking, multi-user operating systems originally developed in the 1970s at Bell Laboratories Incorporated.[10][11] It emerged as a simplified alternative to more complex contemporary systems, providing a streamlined environment for interactive computing on minicomputers such as the PDP-11.[12][13] At its core, Unix emphasizes portability, achieved through its implementation in the C programming language, which allowed the system to be adapted across diverse hardware platforms with minimal changes.[14] Key characteristics include modularity, where the system comprises small, independent programs that perform specific functions; a hierarchical file system that treats files, directories, devices, and processes uniformly; and a command-line interface mediated by a shell that interprets user commands.[12] Users interact with Unix primarily through text-based commands entered at a terminal, enabling efficient scripting and automation.[12]

A hallmark of Unix's design is the use of pipes, which facilitate data flow between processes, allowing complex operations to be composed from simple tools without custom programming.[12] This model supports multitasking by managing multiple asynchronous processes and multi-user access through per-user command environments.[12] Over time, Unix's principles have shaped the evolution of computing, serving as a foundation for numerous derivatives and influencing contemporary systems.[10]

Design Principles
Unix's design was guided by a set of philosophical principles emphasizing simplicity, modularity, and efficiency, which emerged from the need to create a compact yet powerful operating system on limited hardware. These principles rejected the complexity of earlier systems like Multics, instead favoring a lean approach that prioritized ease of use and development. Central to this was the idea of building small, focused programs that could be combined flexibly, allowing developers to solve complex problems through composition rather than monolithic structures.[15]

A core tenet is "do one thing well," which advocates for programs that perform a single, specific task efficiently without unnecessary features, promoting modularity and reusability. This is complemented by orthogonality, where tools operate independently but can be interconnected via mechanisms like pipes—streams that allow the output of one program to serve as input to another—enabling powerful pipelines for data processing. Another foundational concept is "everything is a file," providing a unified interface for handling files, devices, and inter-process communication, which simplifies programming by treating diverse system resources uniformly. These ideas were articulated by Douglas McIlroy in the foreword to a 1978 Bell System Technical Journal issue on Unix, where he outlined maxims like designing output for reuse and building tools that integrate seamlessly.[16]

Unix further emphasized text-based interfaces and small programs to facilitate interactivity and portability. By relying on plain text streams for communication between tools, the system ensured broad compatibility and ease of scripting, as text serves as a universal, machine-agnostic format. Programs were kept concise to minimize resource use and bugs, with source code written in high-level languages like C to enhance portability across hardware, a deliberate shift from assembly to enable recompilation on different machines without major rewrites. The "rule of least surprise" reinforces consistency, ensuring that interfaces and behaviors align with user expectations to reduce learning curves and errors across tools.

While influenced by Multics in areas like hierarchical file systems and process forking, Unix deliberately avoided its elaborate features to achieve greater simplicity and performance on modest hardware. This rejection of over-engineering fostered a self-sustaining ecosystem where the system's own tools could maintain and extend it. Portability was later formalized through standards like POSIX, allowing Unix-like systems to interoperate reliably.[15]

History
Origins at Bell Labs
In the late 1960s, Bell Labs withdrew from the collaborative Multics project, which aimed to create a sophisticated time-sharing operating system but had grown overly complex and resource-intensive. Motivated by the desire to recapture the interactive computing experience of Multics in a more lightweight and practical form, Ken Thompson began developing an operating system in 1969 using a DEC PDP-7 minicomputer at Bell Labs. This initial effort focused on creating a simple, efficient system for text processing and program development, initially lacking formal documentation but emphasizing rapid prototyping.[17]

Thompson's prototype, informally called "Unics" as a playful reference to Multics, introduced core concepts such as a hierarchical file system and process management using the fork() primitive, which allowed processes to spawn child processes efficiently. By 1971, with the arrival of a more powerful PDP-11 minicomputer, the system evolved into Version 1 of Unix, featuring innovations like a unified file system treating devices as files and basic tools such as the ed editor and roff formatter, primarily serving the patent department's text-processing needs. Dennis Ritchie soon joined Thompson as a key collaborator, contributing to the system's design and implementation.[3][17]

Other Bell Labs researchers played crucial roles in refining Unix during its early years. Doug McIlroy proposed the pipe mechanism in 1972, enabling modular command composition that became a hallmark of Unix's philosophy. Joe Ossanna focused on text processing enhancements, while Brian Kernighan suggested the name "Unix" in 1970, solidifying its identity. The system remained written in PDP-11 assembly language until 1973, when Ritchie developed the C programming language—evolving from his earlier B language—to rewrite the kernel, dramatically improving portability and maintainability across different hardware. This transition, completed in Version 4, allowed Unix to escape its machine-specific origins and facilitated broader experimentation.[17][3]

By 1975, Unix reached Version 6, which incorporated the full C rewrite and included a rich set of utilities, making it suitable for academic and research use. This version marked the system's first widespread distribution outside Bell Labs, with magnetic tapes provided at nominal cost to universities such as the University of California, Berkeley, and Princeton, fostering an ecosystem of modifications and ports that extended Unix's influence.[3][18]

Commercial Development and Dissemination
The commercialization of Unix began in earnest following the 1982 consent decree that broke up the Bell System monopoly, with the divestiture taking effect on January 1, 1984, which lifted restrictions on AT&T's ability to sell software products directly to the public.[19] Prior to this, AT&T had been limited to licensing Unix primarily for research and internal use due to antitrust regulations stemming from the 1956 Consent Decree. The first major step toward commercial viability was the release of System III in 1981, followed by System V in 1983, which marked AT&T's inaugural fully commercial version of Unix, incorporating enhancements like the Stream I/O mechanism and real-time extensions to appeal to business users.[20] Post-divestiture, AT&T aggressively marketed System V licenses to hardware vendors, transforming Unix from a niche research tool into a viable enterprise operating system.

Parallel to AT&T's efforts, the University of California, Berkeley, initiated the Berkeley Software Distribution (BSD) in 1977 as an add-on to the Sixth Edition Unix, providing additional utilities and drivers funded initially by DARPA for PDP-11 enhancements. This evolved through annual releases, culminating in 4.2BSD in 1983, which integrated the full TCP/IP protocol stack developed by Berkeley researchers, enabling robust networking capabilities that distinguished it from AT&T's offerings. BSD's open distribution model, available at low cost to academic and research institutions, fostered widespread experimentation and customization, contrasting with AT&T's proprietary licensing approach.

Key commercial vendors emerged in the early 1980s, adapting Unix to their hardware platforms and driving market expansion. Sun Microsystems released SunOS in 1982, initially based on AT&T's Version 7 Unix and targeted at engineering workstations, quickly gaining traction in technical computing environments.[21] Digital Equipment Corporation (DEC) introduced Ultrix in 1984, a BSD-derived system for VAX minicomputers, emphasizing compatibility with academic workloads. Hewlett-Packard launched HP-UX in 1982, rooted in System V with proprietary extensions for precision engineering applications on PA-RISC processors. IBM followed with AIX in 1986, blending System V and BSD elements for its RT PC and later RS/6000 systems, positioning it for enterprise data processing. These implementations proliferated Unix across diverse hardware, from workstations to mainframes, solidifying its role in professional computing.

Intense competition, dubbed the "Unix Wars," erupted in the late 1980s between AT&T's System V lineage and BSD derivatives, as vendors vied for market dominance amid incompatible variants. AT&T's System V Release 4 (SVR4), unveiled in 1988 through collaboration with Sun Microsystems—which shifted from BSD to SVR4 for better binary compatibility—aimed to unify the ecosystem with features like virtual memory and file system improvements.[22] However, BSD advocates, including DEC and academic users, resisted, leading to fragmented standards and licensing battles that delayed widespread interoperability until later efforts. This rivalry spurred innovation but also highlighted the need for consolidation.

By the late 1980s and 1990s, Unix achieved broad adoption in academia for computational research, government agencies for secure networked systems, and industry for software development and early client-server architectures. Its TCP/IP integration, particularly via BSD, underpinned the ARPANET's transition to the Internet, powering much of the initial infrastructure at universities and labs funded by NSF and DARPA.[23] In industry, Unix workstations from Sun and others became staples in engineering, finance, and telecommunications, with installations scaling to thousands in sectors like aerospace and defense, where its portability and reliability proved essential.[23]

Standards and Compatibility
POSIX Standard
The POSIX standard, formally known as IEEE Std 1003.1, emerged in 1988 as a collaborative effort by the IEEE to establish a portable operating system interface for Unix-like environments, promoting source-level compatibility among diverse implementations. Drawing from established Unix variants such as System V (including SVID Issue 2) and Berkeley Software Distribution (BSD) releases like 4.2BSD and 4.3BSD, it standardized core system calls, library functions, and behaviors to enable applications to operate consistently across compliant systems without major modifications. This baseline specification, also adopted as FIPS PUB 151-1 by the U.S. federal government, focused on essential services including process management (e.g., fork() and exec()), file and directory operations (e.g., open(), read(), mkdir()), signals, and input/output primitives, while aligning with the emerging ANSI C standard to minimize namespace conflicts through feature test macros like _POSIX_SOURCE.[24]
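As a rough illustration of the kind of interfaces the baseline standard covers, the following C fragment restricts itself to POSIX.1 calls (sysconf, mkdir, open, write) and requests the POSIX namespace with a feature test macro. It is a sketch, not part of the standard itself, and the directory and file names are invented for the example.

```c
/* Sketch using only POSIX.1 interfaces; intended to build on any conforming system. */
#define _POSIX_C_SOURCE 200809L   /* request the POSIX.1-2008 namespace */

#include <stdio.h>
#include <unistd.h>     /* sysconf, write, close */
#include <fcntl.h>      /* open, O_* flags */
#include <sys/stat.h>   /* mkdir, permission bits */

int main(void)
{
    /* Query an implementation limit at run time instead of hard-coding it. */
    long open_max = sysconf(_SC_OPEN_MAX);
    printf("this system allows %ld open files per process\n", open_max);

    /* Create a directory and a file inside it using POSIX calls only. */
    if (mkdir("demo_dir", 0755) == -1)
        perror("mkdir");            /* may already exist; not fatal here */

    int fd = open("demo_dir/hello.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    const char msg[] = "written through POSIX write()\n";
    if (write(fd, msg, sizeof msg - 1) == -1)
        perror("write");
    close(fd);
    return 0;
}
```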
POSIX encompasses several interrelated components to cover a broad range of system functionalities. The core IEEE Std 1003.1 defines system interfaces for fundamental operations, such as process control, file system access, and environment variables via functions like sysconf(). Complementing this, IEEE Std 1003.2 (POSIX.2) standardizes the shell command interpreter and common utilities, ensuring consistent syntax and semantics for tools like sh and data interchange formats (e.g., tar and cpio). Real-time extensions, introduced in IEEE Std 1003.1b-1993 (later integrated as part of broader POSIX updates), add support for priority scheduling, semaphores, timers, and reliable signal queuing to meet demands in time-sensitive applications. These elements collectively form a cohesive framework for building portable software, with optional facilities like job control indicated by constants such as _POSIX_JOB_CONTROL.[25][26]
Conformance to POSIX is managed through a certification process administered by The Open Group in partnership with IEEE, requiring implementations to pass rigorous test suites (e.g., the POSIX Conformance Test Suite) that verify mandatory interfaces and minimum resource limits. Levels of conformance, such as those outlined in POSIX.1-2008 (IEEE Std 1003.1-2008), distinguish between baseline POSIX compliance and extended profiles, including XSI (X/Open System Interfaces) for additional Unix features; certified systems must document implementation-defined behaviors to aid developers. This process ensures verifiable portability, with numerous products achieving certification historically, fostering interoperability in enterprise environments.[27][28]
The adoption of POSIX significantly mitigated the fragmentation of the "Unix Wars," where competing proprietary variants led to incompatible APIs and hindered software development; by defining a lowest common denominator, it enabled cross-vendor portability and reduced vendor lock-in, influencing the proliferation of Unix-derived systems in the 1990s. Over time, the standard evolved through periodic revisions, culminating in POSIX.1-2024 (IEEE Std 1003.1-2024), which incorporates technical corrigenda to prior versions including POSIX.1-2017, while enhancing support for threads (via integrated 1003.1c elements for pthread APIs), advanced file system semantics (e.g., improved directory traversal and locking), and security features (e.g., refined access controls and memory synchronization). These updates, harmonized with ISO/IEC 9945:2024 and The Open Group Base Specifications Issue 8 (2024 edition), maintain backward compatibility while addressing modern requirements for concurrent and secure applications.[26][29][6]
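A minimal sketch of the thread interfaces mentioned above (the pthread functions integrated from IEEE Std 1003.1c) might look like the following; the worker function and the thread count are illustrative assumptions rather than anything prescribed by the standard.

```c
/* Minimal POSIX threads sketch: create two workers and join them.
 * Compile with the platform's thread option, e.g. cc file.c -lpthread. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        if (pthread_create(&t[i], NULL, worker, &ids[i]) != 0)
            return 1;               /* creation failed */

    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);   /* wait for both workers to finish */

    return 0;
}
```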
Other Compliance Efforts
The Single UNIX Specification (SUS), developed by The Open Group from the early 1990s onward, provides a unified standard for Unix operating systems by defining common application programming interfaces (APIs), commands, utilities, and behaviors to ensure portability across diverse implementations.[30] It supersedes earlier X/Open standards, such as the X/Open Portability Guide, by integrating their requirements into a more comprehensive framework that promotes interoperability in heterogeneous environments, including support for networking, internationalization, and programming languages.[30] Key versions include SUS Version 1 (1994), which established the baseline; Version 2 (1997), which added real-time and threading support; Version 3 (2001); and Version 4 (first published in 2008, with later editions in 2013, 2018, and 2024 aligning with ISO/IEC 9945:2009 and 2024 for enhanced 64-bit and large-scale system compatibility).[30][31]
To enforce SUS compliance, The Open Group administers branding programs for certified systems, including the UNIX 03 mark for products conforming to SUS Version 3 and the UNIX V7 mark for those meeting Version 4 requirements.[32] These certifications verify adherence to the specified interfaces, enabling vendors to demonstrate that applications will port without modification; examples include IBM AIX (certified under both marks) and HP-UX (UNIX 03 and V7), with recent certifications such as Apple's macOS Sequoia (version 15) in 2024 under UNIX 03.[7][33] The programs evolved from earlier brands like UNIX 95 and UNIX 98, broadening eligibility to include 64-bit systems and real-time extensions while maintaining a vendor-neutral benchmark.[34]
The System V Interface Definition (SVID), issued by AT&T, outlines the core components of UNIX System V Release 4 (SVR4), including system calls, C libraries, and user interfaces, to facilitate compatibility among AT&T-derived systems and third-party ports.[35] Published in successive issues from the mid-1980s (Issue 2 appeared in 1986), it progressed to the Fourth Edition (1995), which detailed over 1,000 interfaces and emphasized SVR4's integration of Berkeley features like TCP/IP sockets, serving as a foundational reference for commercial Unix development.[36] SVID compliance helped standardize behaviors in environments like SunOS and SCO Unix, reducing porting effort for enterprise applications.[37]
Additional initiatives, such as SPEC 1170 (early 1990s), advanced Unix standardization by defining 1,170 interfaces, including real-time processing, threads, and architecture-neutral APIs, to support portable applications across vendor platforms.[38] This effort, led by a consortium including Sun, IBM, and HP, was incorporated into the initial SUS in 1994, enhancing support for time-critical systems in embedded and industrial contexts.[39]
For Linux variants, the Linux Standard Base (LSB), begun in 1998 and later managed by the Linux Foundation, bridges Unix compliance by specifying APIs, file formats, and packaging aligned with SUS and POSIX, with versions like LSB 5.0 (2015) enabling certification across architectures such as x86-64 and PowerPC.[40] LSB promotes interoperability for high-volume applications, though adoption has waned in favor of de facto standards in modern distributions.[40]
Challenges in achieving full compliance persist due to proprietary extensions and variant-specific optimizations in Unix-like systems, often resulting in partial adherence that complicates software portability.[41] To address this, conformance test suites, such as The Open Group's VSX series for SUS Versions 3 and 4, provide automated verification of APIs, utilities, and extensions like real-time and threading, acting as indicators rather than absolute proofs of compliance.[41] These tools, including the VSRT suite for real-time extensions, help developers identify gaps early, though incomplete implementations in open-source variants continue to necessitate custom portability layers.[42]
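As a concrete illustration of how an application can probe the standards a platform claims to implement, the minimal sketch below, assuming only a C compiler and a POSIX-style <unistd.h>, prints the POSIX and X/Open (SUS) revision macros and their run-time counterparts reported by sysconf; the values printed (for example, 700 for SUSv4-era X/Open interfaces) depend entirely on the host system.

```c
/* Sketch: query which POSIX/SUS revisions the host system advertises.
 * Compile-time macros describe the headers; sysconf() reports the
 * running system's values, which may differ on older platforms. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("_POSIX_VERSION (compile time): %ld\n", (long)_POSIX_VERSION);
    printf("_SC_VERSION (run time):        %ld\n", sysconf(_SC_VERSION));
#ifdef _XOPEN_VERSION
    printf("_XOPEN_VERSION (compile time): %d\n", (int)_XOPEN_VERSION);
#endif
    printf("_SC_XOPEN_VERSION (run time):  %ld\n", sysconf(_SC_XOPEN_VERSION));
    return 0;
}
```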
System Components
Kernel Structure
The Unix kernel employs a monolithic architecture, in which the core operating system components, including device drivers, file systems, networking stacks, and process management, operate within a single address space for efficiency and simplicity.[12] This design, originating in the early implementations on the PDP-11, integrates all essential services directly into the kernel, minimizing overhead from inter-component communication but requiring careful management to avoid system-wide failures.[12] Some later Unix variants incorporate modular extensions, such as loadable kernel modules for the dynamic addition of file systems, networking protocols, and device drivers, blending monolithic efficiency with greater flexibility.
Central to the Unix process model is the fork-exec paradigm for creating and executing new processes: the fork system call duplicates an existing process to produce a child, and exec subsequently overlays the child's address space with a new program image.[12] This provides a simple, uniform mechanism for starting programs, while signals provide asynchronous inter-process communication for handling events like interrupts or terminations, allowing processes to respond to conditions such as user requests or hardware errors.[12]
Memory management in Unix relies on virtual memory techniques, partitioning each process's address space into distinct segments for text (code), data, and stack, with paging to support demand loading, in which pages are fetched from disk only upon reference.[12] The text segment is typically shared among processes running the same executable and protected against writes to conserve memory, while the kernel swaps entire processes to disk under memory pressure, ensuring isolation and efficient resource allocation across multiple users.[12]
The file system adopts an inode-based structure, introduced in early versions and refined by Version 7 Unix: each file is represented by an inode, an on-disk data structure storing metadata such as ownership, size, permissions, and pointers to data blocks, and directories are special files that map names to inode numbers, yielding a hierarchical organization.[12] This design treats devices and directories uniformly as files, with support for hard links via multiple name-to-inode mappings and for removable media through per-volume inode lists, promoting a consistent interface for all I/O operations.[12]
Security in the Unix kernel is enforced through user and group identifiers (UID and GID), assigned to each process and file, with nine-bit permission modes controlling read, write, and execute access for the owner, group, and others.[12] The setuid (set-user-ID) bit on executables allows a process to temporarily adopt the file owner's UID, enabling privileged operations like those required by system utilities while maintaining least-privilege principles for ordinary users.[12]
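A minimal sketch of the fork-exec paradigm described above, assuming a POSIX-like system with ls available on the PATH; the program and arguments chosen are purely illustrative.

```c
/* Sketch of fork-exec: the parent duplicates itself, the child replaces
 * its image with a new program, and the parent waits for the child. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                      /* child: overlay image with ls */
        char *argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);
        perror("execvp");                /* reached only if exec fails */
        _exit(127);
    }
    int status;                          /* parent: wait for the child */
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```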
User Interface and Tools
The Unix user interface is primarily command-line based, centered around the shell, which acts as a command interpreter and scripting environment that enables users to interact with the operating system by executing programs and managing files. The Bourne shell (sh), developed by Stephen Bourne at Bell Labs and first released in 1977, became the standard shell with Unix Version 7.[43] It introduced a scripting language with features like variables, control structures, and command substitution, allowing users to automate tasks through shell scripts. Subsequent shells enhanced interactivity and functionality: the C shell (csh), created by Bill Joy at the University of California, Berkeley in the late 1970s, added C-like syntax, history substitution, and job control for better interactive use.[44] The Korn shell (ksh), developed by David Korn at Bell Labs in the early 1980s and first announced in 1983, combined the scripting power of the Bourne shell with C shell conveniences like command-line editing, along with improved performance.[45] The Bourne-Again shell (Bash), authored by Brian Fox for the GNU Project and released in 1989, became widely adopted in open-source Unix-like systems due to its POSIX compliance, extensive customization options, and default status in many distributions.
A hallmark of the Unix user environment is its composability, where small, single-purpose utilities can be chained together to perform complex operations. Essential command-line tools include ls for listing directory contents, grep for searching text patterns using regular expressions (originally derived from the ed editor and introduced as a standalone utility in Unix Version 4 around 1973), awk for pattern scanning and data transformation (developed by Alfred Aho, Peter Weinberger, and Brian Kernighan in 1977), and sed for stream editing and text substitution (created by Lee McMahon in 1974). These utilities emphasize modularity, with text processing as a core strength. The pipe operator (|), proposed by Douglas McIlroy and implemented in Unix Version 3 in 1973, allows the output of one command to serve as input to another, enabling pipelines like ls | grep ".txt" | wc -l to list, filter, and count text files efficiently.[46] Redirection operators, such as > for output to files and < for input from files, further support this by rerouting data streams, as seen in commands like grep error log.txt > errors.log.
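The sketch below shows roughly what a shell does to realize a two-stage pipeline such as ls | wc -l, using the pipe, fork, dup2, and exec primitives discussed above; error handling is abbreviated and the commands chosen are only examples.

```c
/* Sketch: wiring a pipeline the way a shell does.
 * The first child's stdout is spliced onto the pipe's write end,
 * the second child's stdin onto its read end. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                   /* first child: ls */
        dup2(fds[1], STDOUT_FILENO);     /* stdout -> pipe write end */
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls"); _exit(127);
    }
    if (fork() == 0) {                   /* second child: wc -l */
        dup2(fds[0], STDIN_FILENO);      /* stdin <- pipe read end */
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc"); _exit(127);
    }
    close(fds[0]); close(fds[1]);        /* parent closes both ends */
    while (wait(NULL) > 0)               /* reap both children */
        ;
    return 0;
}
```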
Underpinning these interactions are the three standard I/O streams: stdin (standard input, file descriptor 0), stdout (standard output, file descriptor 1), and stderr (standard error, file descriptor 2), which were established in early Unix implementations to standardize how programs communicate with their environment. By default, stdin reads from the keyboard while stdout and stderr write to the terminal, but redirection and pipes allow flexible reassignment, promoting reusable programs. Shell scripting builds on this foundation, permitting users to write automation scripts in files executed via the shell (e.g., sh script.sh), often documented through man pages, the manual system originating in the early 1970s whose man command displays formatted documentation for commands, files, and system calls.[47]
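As a small illustration of the three standard streams, the following sketch implements a minimal cat-like filter that copies file descriptor 0 to descriptor 1 and reports failures on descriptor 2, so it composes naturally with pipes and redirection; the program name copy is hypothetical.

```c
/* Sketch: a minimal filter built only on the standard descriptors.
 * Usage examples (hypothetical binary name):
 *   ./copy < log.txt | grep error > errors.log */
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {   /* fd 0 */
        if (write(STDOUT_FILENO, buf, (size_t)n) != n) {      /* fd 1 */
            const char *msg = "copy: short write on stdout\n";
            write(STDERR_FILENO, msg, strlen(msg));           /* fd 2 */
            return 1;
        }
    }
    if (n < 0) {
        const char *msg = "copy: read error on stdin\n";
        write(STDERR_FILENO, msg, strlen(msg));
        return 1;
    }
    return 0;
}
```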
While Unix is fundamentally text-oriented, graphical extensions emerged to support visual interfaces. The X Window System, developed at MIT's Project Athena starting in 1984 and reaching version X11 in 1987, provides a network-transparent windowing protocol that was integrated into various Unix variants, such as SunOS and BSD, enabling bitmap displays, window management, and remote access without altering the core command-line tools. This separation allows users to layer graphical desktops atop the traditional shell environment, maintaining composability across interfaces.
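For illustration, a minimal Xlib client is sketched below; it assumes the Xlib development headers and a running X server, connects to whatever display the DISPLAY environment variable names (possibly on a remote machine, which is the network transparency described above), maps a window, and exits on a key press.

```c
/* Sketch of a minimal X11 client (build assumption: cc demo.c -lX11). */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);       /* NULL => use $DISPLAY */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 320, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    for (;;) {                               /* events arrive over the X protocol */
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress)
            break;
    }
    XCloseDisplay(dpy);
    return 0;
}
```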
Implementations
Proprietary Systems
Proprietary Unix systems, largely descended from AT&T's System V line, were commercialized by major vendors to deliver robust, standards-compliant operating environments for enterprise computing, workstations, and specialized hardware. These implementations prioritized features like advanced virtualization, security hardening, and hardware optimization, often earning certification from The Open Group to ensure interoperability and adherence to Unix specifications. By 2025, while still vital in select high-reliability sectors, their dominance has waned amid the rise of open-source alternatives.
Oracle Solaris, originally developed as SunOS by Sun Microsystems and maintained by Oracle since 2010, represents a key SVR4 lineage with support for SPARC and x86-64 architectures. It incorporates the ZFS file system for data integrity and Solaris Zones for lightweight virtualization, making it suitable for large-scale data centers. In October 2025, Oracle released Solaris 11.4 Support Repository Update (SRU) 86, addressing security vulnerabilities and providing ongoing patches under the sustaining support model, which focuses on maintenance without major new development. Solaris continues to run in critical environments such as banking and telecommunications.[48][49][50]
IBM AIX, optimized for IBM's Power ISA processors, is a System V derivative that excels in enterprise servers with capabilities like Logical Partitioning (LPAR) for resource isolation and Live Partition Mobility for workload migration. Its reliability stems from features such as the Journaled File System (JFS2) and robust clustering support. In 2025, AIX 7.3 Technology Level 3 Service Pack 1 continues active development, while support for AIX 7.1 has been extended through the end of 2027, affirming its role in mission-critical applications in finance, manufacturing, banking, and telecommunications.[51][52][53]
HP-UX, Hewlett-Packard's System V-based operating system for PA-RISC and Itanium processors, emphasizes security through mechanisms like the Process Execution Environment and Trusted Computing Base integration, and supports advanced storage management via its Logical Volume Manager. Following Intel's phase-out of Itanium, however, standard support for HP-UX 11i v3 concludes on December 31, 2025, with optional mature support available until 2028; HPE now recommends migration to Linux for future deployments.[54][55][56]
Historical proprietary Unix variants include IRIX from Silicon Graphics, a MIPS-based system celebrated for its OpenGL integration and real-time 3D rendering in creative industries; production of IRIX ended on December 29, 2006, with extended support ceasing in 2013.[57][58] Likewise, Tru64 UNIX, developed by Digital Equipment Corporation and later HP from the OSF/1 base for Alpha processors, offered 64-bit clustering via TruCluster; full engineering support for Tru64 ended in December 2012, rendering it obsolete for modern use.[59] Contemporary proprietary systems persist in niche markets.
macOS, built on the open-source, BSD-derived Darwin foundation, has been certified as UNIX since 2007 and conforms to the Single UNIX Specification version 3; macOS 26.0 Tahoe was registered under UNIX 03 on August 29, 2025, for Apple silicon hardware, and the system is used by millions of people.[60] Inspur K-UX, developed by the Chinese firm Inspur for x86-64 servers, is a proprietary Unix certified under UNIX 03 since 2016, featuring enterprise tools for high-performance computing in the Asia-Pacific region.[61][30]
As of 2026, proprietary Unix systems' market share has sharply declined, comprising only a small fraction of server deployments as open-source options like Linux offer comparable functionality at lower cost and greater flexibility, particularly in cloud and distributed environments. Their enduring appeal lies in proven enterprise reliability and long-term vendor commitments for legacy infrastructure.[62][63][64]
Open-Source and Unix-like Variants
The open-source and Unix-like variants of Unix emerged in the late 1980s and early 1990s as responses to the proprietary nature of commercial Unix systems, emphasizing free software licensing, community-driven development, and adherence to Unix principles such as modularity and portability. These implementations diverged from traditional Unix by prioritizing accessibility, customization, and innovation in areas like security and distributed computing, often under licenses like the BSD or GPL that allow broad redistribution and modification. Key projects include the BSD derivatives, the GNU initiative, and the Linux kernel, each contributing distinct components to form complete operating systems.
The BSD family, originating from the University of California's Berkeley Software Distribution, produced several influential open-source variants in the early 1990s; the 1994 settlement of the USL v. BSDi lawsuit removed the remaining restrictions on code redistribution. FreeBSD, first released in 1993 by a group of 386BSD developers including Nate Williams and Jordan Hubbard, focused on high-performance networking and multimedia support, evolving into a robust platform for servers and embedded systems. NetBSD, also launched in 1993 by Adam Glass and others from the 386BSD community, emphasized extreme portability across diverse hardware architectures, supporting over 50 platforms by the mid-1990s. OpenBSD, forked from NetBSD in 1995 by Theo de Raadt, prioritized security through proactive auditing and cryptographic features, becoming a foundation for secure appliances and firewalls. These BSD variants remain actively maintained and widely deployed in servers and embedded systems.
The GNU Project, initiated in 1983 by Richard Stallman, who founded the Free Software Foundation in 1985, aimed to create a complete free Unix-like operating system by developing essential utilities, compilers, and libraries such as the GNU C Compiler (GCC) and coreutils, which replaced proprietary Unix tools and became foundational for many variants. Although the project had produced most system components by the early 1990s, its kernel effort, the GNU Hurd, begun in 1990 as a microkernel-based replacement for the Unix kernel built on the Mach microkernel, remains in development; notable progress includes the release of Debian GNU/Hurd 2025 in August 2025, providing 64-bit support and broader package compatibility, though it remains far less widely adopted than other kernels, in part because of its microkernel architecture.[65]
Linux, developed starting in 1991 by Finnish student Linus Torvalds as a free monolithic kernel for x86 systems, drew inspiration from Minix and Unix to provide a POSIX-compliant foundation, quickly gaining adoption through its GPL license and its integration with GNU tools to form complete GNU/Linux systems. Major distributions include Ubuntu, launched in 2004 by Canonical for user-friendly desktop and server use with Debian roots, and Red Hat Enterprise Linux, introduced in 2003 by Red Hat for enterprise environments emphasizing stability and support.
Other notable Unix-like systems include MINIX, created in 1987 by Andrew S. Tanenbaum at Vrije Universiteit Amsterdam as an educational tool to illustrate operating system principles in his textbook, featuring a microkernel design for simplicity and reliability.
Plan 9, developed from 1989 at Bell Labs by Ken Thompson, Rob Pike, and Dave Presotto as a distributed successor to Unix, introduced resource naming via a unified file protocol (9P) to enable seamless integration of CPUs, storage, and displays across networks.[66] Additionally, illumos, forked from OpenSolaris in August 2010 by former Sun engineers including Garrett D'Amore in response to Oracle's closure of the project, continues development of the System V Release 4-derived code base with features like ZFS, maintaining Unix-like compatibility though not formally certified.
Impact and Legacy
Influence on Modern Operating Systems
Unix's design principles, particularly its emphasis on modularity, portability, and a hierarchical file system, have profoundly influenced modern operating systems, with Linux, a direct Unix-like implementation, emerging as a cornerstone of server environments. As of November 2025, Linux powers approximately 58% of all websites whose operating system is known and 54% of the top 1,000,000 websites, underscoring its dominance in high-performance computing and hosting infrastructure.[67] This prevalence stems from Unix's foundational concepts, such as process isolation and multi-user support, which Linux extends through its kernel, enabling scalable deployments in data centers worldwide. In mobile computing, Android, built on the Linux kernel, commands over 72% of the global smartphone operating system market in 2025, integrating Unix-derived tools for app development and system management.[68]
Apple's ecosystem further exemplifies Unix's legacy through Darwin, the open-source Unix-like core that underpins macOS and iOS. Darwin incorporates components from BSD Unix in its XNU kernel, providing POSIX compliance and a familiar command-line interface that facilitates developer productivity across Apple's platforms. As of 2026, macOS, a certified UNIX operating system, continues to be used by millions of people worldwide.[69][70] Proprietary Unix systems such as IBM AIX and Oracle Solaris likewise remain in use in critical enterprise environments such as banking and telecommunications,[71][72] and open-source BSD variants like FreeBSD and OpenBSD are actively maintained and deployed in servers, firewalls, and embedded systems.[8][9] Microsoft, for its part, introduced the Windows Subsystem for Linux (WSL) in 2016, allowing seamless execution of Unix tools and Linux distributions natively on Windows, thereby bringing Unix portability to enterprise Windows environments and supporting hybrid workflows.[73]
In embedded systems, Unix-like architectures continue to thrive thanks to their reliability and resource efficiency. Cisco's newer network operating systems, for instance, draw on Unix-like foundations: IOS XR was originally built on QNX, while IOS XE is a Linux-based platform for routers and network devices that handle critical infrastructure tasks.[74] IoT devices increasingly adopt Unix-like operating systems, with distributions like Ubuntu Core and Raspbian enabling secure, connected ecosystems in smart homes and industrial sensors.[75]
Cloud computing platforms amplify Unix's impact, as services like Amazon Web Services (AWS) and Google Cloud Platform rely on Linux-based foundations for their virtual machines and container orchestration. AWS's Amazon Linux, optimized for cloud workloads, draws on Unix's service-oriented model to support scalable applications.[76] The portability of Unix tools also persists in non-Unix environments through projects like Cygwin, which provides a Unix-like layer for command-line utilities on Windows, and through the many ports that adapt Unix software to diverse hosts.
Broader Technological and Cultural Effects
The availability of Unix source code, particularly through distributions like the Berkeley Software Distribution (BSD) licensed to academic institutions starting in the late 1970s, played a pivotal role in inspiring the free software and open-source movements.[77] This access enabled researchers and developers to study, modify, and redistribute the code, fostering a culture of collaborative improvement that directly influenced the formation of the Free Software Foundation (FSF) in 1985 by Richard Stallman, who sought to create a free Unix-compatible operating system via the GNU Project. The Open Source Initiative (OSI), established in 1998, built on this tradition by formalizing principles of source code sharing derived from early Unix practices, emphasizing pragmatic benefits for software development communities.
Unix's permissive licensing and source availability also nurtured hacker culture, particularly through Usenet, a distributed discussion system launched in 1979 by Duke University students using the Unix-to-Unix Copy Protocol (UUCP).[78] Usenet connected Unix users across academic and research networks, creating early online forums for sharing code, ideas, and memes, and laying the groundwork for internet-based hacker communities and the collaborative norms that persist in modern open-source ecosystems.[79]
In software engineering, Unix promoted readable, modular code, exemplified by the C programming language, developed by Dennis Ritchie at Bell Labs in 1972 specifically for rewriting Unix. C's design emphasized clarity and portability, allowing concise yet expressive code that could be maintained by multiple developers, as detailed in Kernighan and Ritchie's seminal 1978 book The C Programming Language, which became a standard text for teaching structured programming.[80] Unix also introduced precursors to modern version control with the Source Code Control System (SCCS), created by Marc Rochkind at Bell Labs in 1972 to track changes in program source files, enabling systematic management of code evolution in multi-developer environments.[81]
Unix's integration of networking capabilities significantly advanced internet infrastructure: the 4.2BSD release of August 1983 incorporated a robust TCP/IP implementation funded by DARPA.[82] Developed by the University of California, Berkeley team, it provided the first widely distributed Unix-based TCP/IP stack, helping drive adoption after the ARPANET's transition to TCP/IP on January 1, 1983, a date often cited as marking the birth of the modern internet, by standardizing packet-switched communication across diverse systems.[83] The open availability of the BSD TCP/IP code accelerated adoption in academic and research settings, forming the foundation of protocols still used today.[84]
In computer science education, Unix became integral to curricula from the 1970s onward because its comprehensive toolset and accessible source code let students explore operating system internals hands-on.[85] Tools like the vi editor, developed by Bill Joy in 1976 and distributed with early BSD releases, and the make utility, invented by Stuart Feldman at Bell Labs in 1976, standardized software development workflows by automating builds and enabling efficient text manipulation, influencing pedagogical approaches in programming courses worldwide.[86] These utilities, included in early Unix distributions, promoted practices such as modular design and automation that remain core to computer science training.[87]
Economically, Unix workstations spurred startup ecosystems in Silicon Valley during the 1980s by providing affordable, high-performance platforms for innovation in software and graphics. Companies like Sun Microsystems, founded in 1982 by Stanford alumni using BSD-derived Unix, and Silicon Graphics (SGI), established the same year for 3D visualization on IRIX (a Unix variant), enabled rapid prototyping and scaling for tech ventures, contributing to the region's venture capital boom and the creation of thousands of jobs in computing hardware and applications.[88] By the mid-1980s, Unix-based systems powered engineering workstations throughout Silicon Valley firms, fostering an environment in which startups could compete with established players through networked, open-standards computing.[89]
Branding and Legal Aspects
Trademark Usage
The UNIX trademark is owned and managed by The Open Group, a consortium focused on open standards. Novell, which had acquired the UNIX business, including the trademark, from AT&T's Unix System Laboratories (USL) in 1993, transferred the mark in 1994 to X/Open Company, one of the bodies that merged to form The Open Group in 1996.[90] Before then, AT&T had developed and controlled the trademark, which originated with the system's creation in the 1970s.[90] In a separate transaction, Novell sold the UNIX System V source code and the UnixWare product line to the Santa Cruz Operation (SCO) in 1995, but the core UNIX trademark remained with what became The Open Group; subsequent legal disputes in the early 2000s, including SCO's claims against IBM and others, were resolved without altering The Open Group's trademark ownership.[90]
To use the UNIX trademark in system naming and marketing, operating systems must pass The Open Group's conformance tests under the Single UNIX Specification, which builds on POSIX standards for portability and compatibility.[91] Licensees are required to display the mark as "UNIX" in all uppercase letters and to treat it as an adjective modifying a generic noun (e.g., "UNIX operating system") rather than as a standalone noun, verb, plural, or possessive form.[92] Prohibited practices include creating derivatives such as "Unix-like" or "UNIX-based" for certified products, abbreviating the term, or combining it into new words without prior approval; such uses could dilute the mark or imply unauthorized endorsement.[93] At the first mention and at significant subsequent mentions, the registered status must be noted (e.g., "UNIX®"), along with an attribution such as "UNIX® is a registered trademark of The Open Group."[92]
The term "Unix-like" serves as an informal descriptor for operating systems that emulate UNIX features but lack official certification, such as Linux distributions, which implement many POSIX interfaces without undergoing the full Single UNIX Specification tests.[91] As of 2025, The Open Group continues to enforce the trademark actively through its certification program, licensing the mark to compliant systems like IBM AIX, Oracle Solaris, and Apple macOS (with macOS 15 certified to UNIX 03 in September 2024); however, new certifications remain rare and are largely limited to proprietary implementations, as the dominant open-source alternatives opt for non-trademarked branding.[91][94]
Licensing Models
In the 1970s, AT&T licensed early versions of the Unix source code to academic and research institutions for a nominal fee, initially set at $150 as an administrative charge for royalty-free distribution, enabling widespread adoption in universities and fostering the development of Unix derivatives.[95] This academic licensing model, which began with Version 6 in 1975, restricted commercial use but allowed modification and internal redistribution within licensed entities, laying the groundwork for collaborative enhancements.[96] Following the 1984 breakup of the Bell System, AT&T shifted to commercial binary distribution licenses, which permitted vendors to sell pre-compiled Unix systems without source access, often at lower cost than source licenses to encourage market penetration.[97] These binary licenses, such as those for System V variants, generated revenue through royalties and upfront fees while protecting proprietary code from redistribution.[98]
The University of California, Berkeley introduced the permissive Berkeley Software Distribution (BSD) license in the early 1980s, allowing free modification, distribution, and commercial use of its Unix enhancements with minimal restrictions beyond attribution.[99] Unlike AT&T's restrictive terms, the original four-clause BSD license, first applied to 4.2BSD in 1983, permitted integration into proprietary products without requiring source disclosure, influencing workstation vendors like Sun Microsystems.[100] In contrast, the GNU General Public License (GPL), whose first version was released in 1989 by the Free Software Foundation, enforced copyleft principles for Unix-like systems, mandating that derivative works, including those incorporating GNU components into Linux, remain open source under the same terms to preserve user freedoms. This copyleft model, distinct from BSD's permissiveness, ensured that contributions to projects like GNU/Linux could not be made proprietary, promoting a collaborative ecosystem for free Unix alternatives.
AT&T's System V Release 4 (SVR4), unveiled in 1988, extended source code licensing to hardware and software vendors, enabling customized Unix implementations for commercial products like HP-UX and AIX through agreements that included royalties and non-disclosure clauses.[96] These SVR4 licenses, managed by Unix System Laboratories (a Novell subsidiary after 1993), allowed vendors to modify and redistribute binaries but retained AT&T's intellectual property rights over the core code.[98] To circumvent these proprietary constraints, Berkeley pursued clean-room reimplementations in the late 1980s and early 1990s, rewriting Unix components without direct access to AT&T source code, culminating in the 1994 release of 4.4BSD-Lite as a fully independent, redistributable base.[100] This effort resolved the lawsuit USL had filed in 1992, confirming that Berkeley's networking and utility code was original and freeing the BSD descendants from AT&T dependencies.[101]
In modern Unix variants, the Common Development and Distribution License (CDDL), a weak-copyleft, OSI-approved license introduced by Sun Microsystems in 2004 for OpenSolaris, governs illumos, the open-source continuation of Solaris released in 2010; it requires source availability for modifications but allows proprietary linking.[102] Similarly, Apple's Darwin, the open-source foundation of macOS first released in 2000 under the Apple Public Source License (APSL) 1.0, a modified BSD-style license with patent grants, moved to APSL 2.0 in 2003 to address OSI concerns, becoming more permissive while retaining some Apple-specific terms and enabling community contributions to core Unix-like components.[103] These licenses reflect a shift toward hybrid models that balance openness with commercial interests in contemporary Unix derivatives.[104]
Licensing disputes peaked with the 2003 SCO Group lawsuit against IBM, which alleged unauthorized insertion of proprietary Unix code from SVRx into the open-source Linux kernel, sought billions in damages, and threatened Linux's viability.[105] The protracted litigation, involving Novell and others, spanned over a decade, with rulings progressively invalidating SCO's claims, including a 2010 jury finding that Novell had retained the Unix copyrights and a 2021 settlement in which IBM paid $14.25 million to SCO's bankruptcy trustee, confirming no infringement and vindicating open-source practices while affirming Linux's independence from AT&T-derived code.[106]
References
- https://handwiki.org/wiki/Software:Tru64_UNIX
