Open system (computing)
Open systems are computer systems that provide some combination of interoperability, portability, and adherence to open software standards. (The term can also refer to specific installations configured to allow unrestricted access by people or other computers; this article does not discuss that meaning.)
The term was popularized in the early 1980s, mainly to describe systems based on Unix, especially in contrast to the more entrenched mainframes and minicomputers in use at that time. Unlike older legacy systems, the newer generation of Unix systems featured standardized programming interfaces and peripheral interconnects, and third-party development of hardware and software was encouraged. This was a significant departure from the norm of the time, when companies such as Amdahl and Hitachi went to court for the right to sell systems and peripherals compatible with IBM's mainframes.
The definition of "open system" can be said to have become more formalized in the 1990s with the emergence of independently administered software standards such as The Open Group's Single UNIX Specification.
Although computer users today are used to a high degree of both hardware and software interoperability, in the 20th century the open systems concept could be promoted by Unix vendors as a significant differentiator. IBM and other companies resisted the trend for decades, exemplified by a now-famous warning in 1991 by an IBM account executive that one should be "careful about getting locked into open systems".[1]
However, in the first part of the 21st century many of these same legacy system vendors, particularly IBM and Hewlett-Packard, began to adopt Linux as part of their overall sales strategy, with "open source" marketed as trumping "open system". Consequently, an IBM mainframe with Linux on IBM Z is marketed as being more of an open system than commodity computers using closed-source Microsoft Windows—or even those using Unix, despite its open systems heritage. In response, more companies are opening the source code to their products, with a notable example being Sun Microsystems and their creation of the OpenOffice.org and OpenSolaris projects, based on their formerly closed-source StarOffice and Solaris software products.
References
1. Ian Dickinson (1991-07-11). "Open Systems Strategy from IBM". Newsgroup: comp.unix.misc. Retrieved 2006-08-13.
Definition and Principles
Core Definition
In computing, an open system refers to a computer platform designed to emphasize interoperability, portability, and adherence to open software standards, enabling users to modify and extend it and to access freely available documentation for its components.[1] This approach promotes transparency and flexibility, allowing the system to integrate with diverse hardware and software without reliance on specific vendors.[1] Core attributes of open systems include non-proprietary interfaces that facilitate seamless communication between components, vendor-neutral architectures that avoid lock-in to single suppliers, and support for multi-vendor environments where elements from different providers can coexist.[7] These features ensure that the system remains adaptable and scalable.

Open systems should not be confused with open source software, which additionally provides access to source code under permissive licenses.[8] In contrast, closed systems in computing, such as proprietary platforms, restrict access to source code, interfaces, and documentation, limiting modification and integration to authorized parties only.[1] This restriction fosters dependency on the original vendor, unlike the openness that characterizes systems designed for broad collaboration and evolution.[1]

A seminal example of open system principles is the UNIX operating system, which emerged in the 1970s and became a foundational model through its documented interfaces and compatibility across hardware platforms, influencing subsequent standards for portability and interoperability.[9][1]

Key Principles
The principle of openness in computing systems emphasizes the use of publicly documented specifications and standards that are accessible to all developers and vendors, thereby preventing vendor lock-in and enabling seamless integration across diverse implementations. This approach ensures that systems can communicate and interoperate without reliance on proprietary technologies, fostering a competitive environment where users can select components based on merit rather than exclusivity. Open systems in computing generally comply with established standards for information exchange, allowing mutual cooperation among autonomous entities, as exemplified by frameworks like the OSI Reference Model.[10][8]

Modularity is a core tenet of open systems, where architectures are designed with interchangeable components from multiple vendors, promoting reusability and ease of maintenance. By dividing functionalities into distinct, self-contained modules that interact through well-defined interfaces, open systems minimize dependencies and facilitate upgrades or replacements without disrupting the overall structure. The OSI model exemplifies this through its layered architecture, in which each layer provides services to adjacent layers via standardized access points, ensuring that modifications in one module do not cascade to others.[10]

Scalability in open systems allows for the integration of emerging technologies or expansion of capabilities without necessitating a complete redesign, supporting growth from small networks to large-scale distributed environments. This is achieved through flexible protocols and interfaces that accommodate varying system sizes and performance needs, as outlined in frameworks like the OSI model's provisions for relaying and routing across heterogeneous subnetworks. The Reference Model of Open Systems Interconnection (OSI) serves as a foundational example of such layered architectures in computing, providing a conceptual blueprint that separates concerns and enables evolutionary development.[10][11]

Open systems prioritize community-driven evolution over proprietary control, relying on collaborative standards bodies to refine and update specifications in response to technological advancements. International organizations like ISO and ITU coordinate this process, ensuring that protocols evolve through consensus rather than unilateral decisions, which sustains long-term interoperability and innovation. This collective governance model underpins the development of open standards, including those in the OSI Reference Model.[10][11]
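To ground the modularity principle described above, the following C sketch shows one way a vendor-neutral interface can be expressed as a table of function pointers; the storage_ops interface and the demo_vendor implementation are hypothetical names invented for this illustration, not part of any published standard. Code written against the interface works with any conforming implementation, which is the property open systems seek at the scale of whole platforms.

```c
#include <stdio.h>

/* Hypothetical, vendor-neutral interface: any module that fills in these
   function pointers can be swapped in without changing the calling code. */
struct storage_ops {
    int  (*open_store)(const char *name);
    int  (*write_record)(int handle, const char *data);
    void (*close_store)(int handle);
};

/* One vendor's implementation; another vendor could ship a different
   struct storage_ops and the application below would not change. */
static int  demo_open(const char *name)         { printf("open %s\n", name); return 1; }
static int  demo_write(int h, const char *data) { printf("write[%d]: %s\n", h, data); return 0; }
static void demo_close(int h)                   { printf("close %d\n", h); }

static const struct storage_ops demo_vendor = { demo_open, demo_write, demo_close };

/* The application depends only on the published interface, not on the vendor. */
static void run(const struct storage_ops *ops)
{
    int h = ops->open_store("example");
    ops->write_record(h, "hello");
    ops->close_store(h);
}

int main(void)
{
    run(&demo_vendor);
    return 0;
}
```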
Historical Development
Origins in Computing
The concept of open systems in computing originated with the development of the Unix operating system, begun in 1969 at Bell Labs by Kenneth Thompson and Dennis Ritchie. Rewritten in the C programming language in the early 1970s, Unix offered a portable, modular design that facilitated implementation across diverse hardware platforms without vendor-specific dependencies.[12] This innovation addressed the limitations of earlier proprietary systems and laid the groundwork for vendor-independent software portability. By the late 1970s, growing frustration with incompatible proprietary environments from vendors like IBM and Digital Equipment Corporation highlighted the need for standardized interfaces to enable multi-vendor compatibility.[13]

Parallel to these operating system advancements, the ARPANET project in the late 1960s and 1970s, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), demonstrated the value of vendor-independent communication protocols for interconnecting heterogeneous systems. ARPANET connected its first nodes in 1969, with the Network Control Protocol finalized by 1970 to support resource sharing.[14] In 1972, Robert Kahn proposed an open-architecture framework for internetworking disparate packet-switched networks, including ARPANET, packet radio, and satellite systems, without mandating internal modifications or proprietary ties; this approach prioritized flexibility and interoperability, directly influencing the 1973–1974 design of TCP/IP by Kahn and Vinton Cerf.[14] The term "open system" in this context draws from general systems theory, which distinguishes systems that exchange information with their environment from isolated ones, but in computing it specifically emphasizes standards-based architectures for interoperability and extensibility.

By the 1980s, the U.S. Department of Defense (DoD) actively championed open architectures for military computing applications, motivated by the vulnerabilities of proprietary systems that created dependencies on single vendors and escalated costs through monopolistic practices. Drawing from experiences with fragmented defense information systems, the DoD sought standards-based approaches to enhance competition, maintainability, and technological agility in procurement and deployment.[15]

A landmark response to these concerns was the IEEE's development of the POSIX standard (IEEE Std 1003.1-1988), initiated in 1985 by the POSIX Working Group under the IEEE Computer Society's Technical Committee on Operating Systems to resolve the proliferation of incompatible UNIX variants (such as AT&T's System V and Berkeley's 4.xBSD, among others) that impeded software portability across vendor platforms. Involving over 450 participants from industry, academia, and government, including DoD representatives, the standard defined a core set of C-language APIs for system calls, libraries, and utilities, enabling source-code portability while minimizing disruptions to existing implementations; it was approved by the IEEE Standards Board in August 1988 and later adopted as Federal Information Processing Standard 151-1.[16]
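To make this source-code portability concrete, the following minimal C sketch uses only POSIX.1 interfaces (open, read, write, close) to copy a file; the file names input.txt and copy.txt are placeholders chosen for illustration. Because these calls are standardized, the same source compiles unchanged on any conforming system, which is the portability the POSIX effort aimed to secure.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Copies input.txt to copy.txt using only POSIX.1 system interfaces,
   so the same source builds on any conforming platform. */
int main(void)
{
    char buf[4096];
    ssize_t n;
    int in  = open("input.txt", O_RDONLY);
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            return 1;
        }
    }
    close(in);
    close(out);
    return 0;
}
```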
Evolution and Milestones
The evolution of open systems in computing accelerated in the 1990s through collaborative efforts to standardize Unix-like environments and promote interoperability. The Open Software Foundation (OSF), established in 1988 as a non-profit consortium by major vendors including IBM, Digital Equipment Corporation, and Hewlett-Packard, aimed to develop an open implementation of Unix to counter proprietary fragmentation.[17] Complementing this, the X/Open Company, formed in 1984 by European vendors like Bull and ICL, focused on defining portable application interfaces, growing to include 21 members by 1990. These initiatives culminated in the 1993 formation of the Common Open Software Environment (COSE), a coalition uniting OSF, X/Open, Unix International, and other Unix leaders to create unified standards for operating environments, including the Single UNIX Specification, which facilitated cross-vendor compatibility and reduced vendor lock-in.[18]

Parallel to these consortium efforts, the Linux kernel emerged as a pivotal open system platform. Released by Linus Torvalds in 1991 as an open-source alternative to proprietary Unix, the kernel rapidly evolved through global developer contributions, growing from a hobby project to a robust foundation for diverse applications by the mid-1990s. A key milestone came in 2003, when Linux achieved widespread enterprise server adoption, with major firms like IBM integrating it into high-performance computing and data centers, marking its transition from academic tool to commercial powerhouse.[19][20]

In the 2000s, the rise of the internet further propelled open systems via standardized web protocols. The World Wide Web Consortium (W3C), founded in 1994, played a central role by developing and promoting open standards such as HTML, CSS, and XML, which ensured seamless interoperability across platforms. These efforts built on protocols like HTTP, originally specified by the IETF in the 1990s but widely adopted in the 2000s for open web architectures, enabling decentralized development and reducing reliance on closed ecosystems.[21][22]

The 2010s witnessed the ascent of cloud computing as a cornerstone of open systems, driven by open APIs and platforms. OpenStack, launched in 2010 by Rackspace and NASA as an open-source cloud infrastructure toolkit, democratized cloud deployment and influenced proprietary providers to embrace interoperability. Major platforms like Amazon Web Services (AWS), which expanded significantly post-2006, and Microsoft Azure, debuting in 2010, increasingly adopted open standards, including RESTful APIs and support for open-source tools like Kubernetes, fostering hybrid and multi-cloud environments that prioritized portability over silos.[23][24]

Technical Components
Interoperability Standards
Interoperability standards form the cornerstone of open systems in computing, providing formalized protocols and interfaces that allow disparate hardware, software, and networks to communicate seamlessly. These standards ensure that components from different vendors or implementations can exchange data and services without proprietary barriers, fostering a modular and extensible ecosystem. By defining common rules for interaction, they reduce integration costs and promote widespread adoption across global computing environments.

The Open Systems Interconnection (OSI) model, developed by the International Organization for Standardization (ISO), represents a foundational framework for network interoperability. Published initially in 1984 as ISO 7498 and later revised as ISO/IEC 7498-1:1994, it conceptualizes network communication through a seven-layer architecture that separates functions into distinct, hierarchical modules. The layers, from bottom to top, are listed below; a small encapsulation sketch after the list illustrates how each layer wraps the data of the layer above:
- Physical layer: Handles the transmission of raw bits over physical media, such as cables or wireless signals.
- Data link layer: Manages node-to-node data transfer, error detection, and framing.
- Network layer: Routes packets across interconnected networks, addressing logical topologies.
- Transport layer: Ensures end-to-end data delivery, reliability, and flow control.
- Session layer: Establishes, maintains, and terminates communication sessions between applications.
- Presentation layer: Translates data formats, encryption, and compression for application compatibility.
- Application layer: Provides network services directly to end-user applications, such as file transfer or email.
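The toy C sketch below, referenced before the list, illustrates this layering by prepending simplified headers to application data; the header fields, addresses, and ports are invented placeholders and are deliberately not the real OSI or TCP/IP wire formats.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy headers only; field layouts are illustrative, not real protocol formats. */
struct transport_hdr { uint16_t src_port, dst_port; };  /* transport-layer info */
struct network_hdr  { uint32_t src_addr, dst_addr; };   /* network-layer info   */

/* Builds a frame in on-the-wire order: the network header wraps the transport
   header, which in turn wraps the application payload. Each layer only needs
   the well-defined interface of the layer directly above it. */
static size_t encapsulate(uint8_t *frame, size_t cap,
                          struct network_hdr n, struct transport_hdr t,
                          const char *payload)
{
    size_t need = sizeof n + sizeof t + strlen(payload);
    if (need > cap)
        return 0;                                       /* frame buffer too small */
    size_t off = 0;
    memcpy(frame + off, &n, sizeof n); off += sizeof n;
    memcpy(frame + off, &t, sizeof t); off += sizeof t;
    memcpy(frame + off, payload, strlen(payload)); off += strlen(payload);
    return off;
}

int main(void)
{
    uint8_t frame[256];
    struct network_hdr  n = { 0x0A000001u, 0x0A000002u };  /* placeholder addresses */
    struct transport_hdr t = { 49152, 80 };                /* placeholder ports     */
    size_t len = encapsulate(frame, sizeof frame, n, t, "application data");
    printf("encapsulated %zu bytes\n", len);
    return 0;
}
```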
Complementing these networking standards at the operating system level, the POSIX and Single UNIX Specification standards define portable programming interfaces and command-line utilities such as ls and grep, which promote compatibility in multi-vendor environments. The Open Group maintains an updated version in its Base Specifications Issue 7, incorporating POSIX.1-2008 for enhanced real-time and security features.
At the protocol level, the TCP/IP suite stands as the de facto backbone for global network interoperability, officially adopted by ARPANET on January 1, 1983, marking the birth of the modern Internet. Developed by the U.S. Department of Defense's DARPA and standardized through IETF RFCs, TCP/IP encompasses protocols like the Transmission Control Protocol (TCP) for reliable, connection-oriented communication and the Internet Protocol (IP) for addressing and routing. This suite's open specification has enabled billions of devices worldwide to interconnect, supporting applications from web browsing to cloud services.[25]
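As a brief illustration of how applications use this open suite, the C sketch below opens a TCP connection through the standard sockets interface; the host example.com and port 80 are placeholders, and error handling is kept minimal. Name resolution, IP routing, and reliable delivery are all handled by openly specified protocols beneath this small amount of application code.

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Opens a TCP connection to a placeholder host using the standard sockets
   interface: getaddrinfo resolves the name, socket/connect use TCP over IP. */
int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "name resolution failed\n");
        return 1;
    }
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("socket/connect");
        freeaddrinfo(res);
        return 1;
    }
    printf("connected to example.com:80 over TCP/IP\n");
    close(fd);
    freeaddrinfo(res);
    return 0;
}
```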
Additional standards extend interoperability to specialized domains. The Common Object Request Broker Architecture (CORBA), specified by the Object Management Group (OMG) since 1991 and detailed in version 3.3, provides a middleware platform for distributed object communication across heterogeneous systems, using an Object Request Broker (ORB) to invoke methods remotely regardless of programming language or platform.[26] Similarly, the Simple Network Management Protocol (SNMP), defined by the IETF in RFC 1157 (1990) and evolved through subsequent versions, standardizes the monitoring and configuration of network devices via a manager-agent model, allowing cross-vendor management of IP-based infrastructure.[27]
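As a hedged sketch of SNMP's manager-agent model in practice, the example below issues a single SNMPv2c GET for the standard sysDescr.0 object using the open-source net-snmp library; the agent address demo.example.org and the community string public are placeholders, and the exact headers and link flags depend on the local net-snmp installation.

```c
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <string.h>

/* Queries sysDescr.0 (.1.3.6.1.2.1.1.1.0) from a placeholder agent over SNMPv2c. */
int main(void)
{
    struct snmp_session session, *ss;
    struct snmp_pdu *pdu, *response = NULL;
    oid name[MAX_OID_LEN];
    size_t name_len = MAX_OID_LEN;

    init_snmp("open-system-example");                  /* library initialization     */
    snmp_sess_init(&session);                          /* defaults for a new session */
    session.peername = strdup("demo.example.org");     /* placeholder agent address  */
    session.version = SNMP_VERSION_2c;
    session.community = (u_char *)"public";            /* placeholder community      */
    session.community_len = strlen("public");

    ss = snmp_open(&session);
    if (ss == NULL)
        return 1;

    pdu = snmp_pdu_create(SNMP_MSG_GET);
    read_objid(".1.3.6.1.2.1.1.1.0", name, &name_len);
    snmp_add_null_var(pdu, name, name_len);

    if (snmp_synch_response(ss, pdu, &response) == STAT_SUCCESS &&
        response->errstat == SNMP_ERR_NOERROR) {
        print_variable(response->variables->name,
                       response->variables->name_length,
                       response->variables);           /* prints the returned value  */
    }
    if (response)
        snmp_free_pdu(response);
    snmp_close(ss);
    return 0;
}
```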
