OpenVMS
from Wikipedia

OpenVMS
OpenVMS V7.3-1 running the CDE-based DECwindows "New Desktop" GUI
Developer: VMS Software Inc (VSI)[1] (previously Digital Equipment Corporation, Compaq, Hewlett-Packard)
Written in: Primarily C, BLISS, VAX MACRO, DCL.[2] Other languages also used.[3]
Working state: Current
Source model: Closed-source with open-source components. Formerly source available[4][5]
Initial release: Announced October 25, 1977; V1.0 August 1978
Latest release: V9.2-3 / November 20, 2024
Marketing target: Servers (historically minicomputers, workstations)
Available in: English, Japanese.[6] Historical support for Chinese (both Traditional and Simplified characters), Korean, Thai.[7]
Update method: Concurrent upgrades, rolling upgrades
Package manager: PCSI and VMSINSTAL
Supported platforms: VAX, Alpha, Itanium, x86-64
Kernel type: Monolithic kernel with loadable modules
Influenced: VAXELN, MICA, Windows NT
Influenced by: RSX-11M
Default user interface: DCL CLI and DECwindows GUI
License: Proprietary
Official website: vmssoftware.com

OpenVMS, often referred to as just VMS,[8] is a multi-user, multiprocessing and virtual memory-based operating system. It is designed to support time-sharing, batch processing, transaction processing and workstation applications.[9] Customers using OpenVMS include banks and financial services, hospitals and healthcare, telecommunications operators, network information services, and industrial manufacturers.[10][11] During the 1990s and 2000s, there were approximately half a million VMS systems in operation worldwide.[12][13][14]

It was first announced by Digital Equipment Corporation (DEC) as VAX/VMS (Virtual Address eXtension/Virtual Memory System[15]) alongside the VAX-11/780 minicomputer in 1977.[16][17][18] OpenVMS has subsequently been ported to run on DEC Alpha systems, the Itanium-based HPE Integrity Servers,[19] and select x86-64 hardware and hypervisors.[20] Since 2014, OpenVMS has been developed and supported by VMS Software Inc. (VSI).[21][22] OpenVMS offers high availability through clustering—the ability to distribute the system over multiple physical machines.[23] This allows clustered applications and data to remain continuously available while operating system software and hardware maintenance and upgrades are performed,[24] or if part of the cluster is destroyed.[25] VMS cluster uptimes of 17 years have been reported.[26]

History


Origin and name changes

Stylized "VAX/VMS" used by Digital

In April 1975, Digital Equipment Corporation (DEC) embarked on a project to design a 32-bit extension to its PDP-11 computer line. The hardware component was code named Star; the operating system was code named Starlet. Roger Gourd was the project lead for VMS. Software engineers Dave Cutler, Dick Hustvedt, and Peter Lipman acted as technical project leaders.[27] To avoid repeating the proliferation of incompatible operating systems that had occurred on the PDP-11, the new operating system was designed to be capable of real-time, time-sharing, and transaction processing use.[28] The Star and Starlet projects culminated in the VAX-11/780 computer and the VAX/VMS operating system. The Starlet project's code name survives in VMS in the name of several of the system libraries, including STARLET.OLB and STARLET.MLB.[29] VMS was mostly written in VAX MACRO with some components written in BLISS.[8]

One of the original goals for VMS was backward compatibility with DEC's existing RSX-11M operating system.[8] Prior to the V4.0 release, VAX/VMS included a compatibility layer named the RSX Application Migration Executive (RSX AME), which allowed user-mode RSX-11M software to be run unmodified on top of VMS.[30] The RSX AME played an important role on early versions of VAX/VMS, which used certain RSX-11M user-mode utilities before native VAX versions had been developed.[8] By the V3.0 release, all compatibility-mode utilities were replaced with native implementations.[31] In VAX/VMS V4.0, RSX AME was removed from the base system, and replaced with an optional layered product named VAX-11 RSX.[32]

"Albert the Cheshire Cat" mascot for VAX/VMS, used by the DECUS VAX SIG[33][34]

By the early 1980s VAX/VMS was very successful in the market. Although Ingres had been created on Unix on DEC systems, its developers ported it to VMS in the belief that doing so was necessary for commercial success. Demand for the VMS version was so much greater that the company neglected the Unix version.[35] A number of distributions of VAX/VMS were created:

  • MicroVMS was a distribution of VAX/VMS designed for MicroVAX and VAXstation hardware, which had less memory and disk space than larger VAX systems of the time.[36] MicroVMS split up VAX/VMS into multiple kits, which a customer could use to install a subset of VAX/VMS tailored to their specific requirements.[37] MicroVMS releases were produced for each of the V4.x releases of VAX/VMS, and the distribution was discontinued when VAX/VMS V5.0 was released.[38][39]
  • Desktop-VMS was a short-lived distribution of VAX/VMS sold with VAXstation systems. It consisted of a single CD-ROM containing a bundle of VMS, DECwindows, DECnet, VAXcluster support, and a setup process designed for non-technical users.[40][41] Desktop-VMS could either be run directly from the CD or could be installed onto a hard drive.[42] Desktop-VMS had its own versioning scheme beginning with V1.0, which corresponded to the V5.x releases of VMS.[43]
  • An unofficial derivative of VAX/VMS named MOS VP (Russian: Многофункциональная операционная система с виртуальной памятью, МОС ВП, lit.'Multifunctional Operating System with Virtual Memory')[44] was created in the Soviet Union during the 1980s for the SM 1700 line of VAX clone hardware.[45][46] MOS VP added support for the Cyrillic script and translated parts of the user interface into Russian.[47] Similar derivatives of MicroVMS known as MicroMOS VP (Russian: МикроМОС ВП) or MOS-32M (Russian: МОС-32М) were also created.

With the V5.0 release in April 1988, DEC began to refer to VAX/VMS as simply VMS in its documentation.[48] In July 1992,[49] DEC renamed VAX/VMS to OpenVMS as an indication of its support of open systems industry standards such as POSIX and Unix compatibility,[50] and to drop the VAX connection since a migration to a different architecture was underway. The OpenVMS name was first used with the OpenVMS AXP V1.0 release in November 1992. DEC began using the OpenVMS VAX name with the V6.0 release in June 1993.[51]

Port to Alpha

"Vernon the Shark" logo for OpenVMS[52]

During the 1980s, DEC planned to replace the VAX platform and the VMS operating system with the PRISM architecture and the MICA operating system.[53] When these projects were cancelled in 1988, a team was set up to design new VAX/VMS systems of comparable performance to RISC-based Unix systems.[54] After a number of failed attempts to design a faster VAX-compatible processor, the group demonstrated the feasibility of porting VMS and its applications to a RISC architecture based on PRISM.[55] This led to the creation of the Alpha architecture.[56] The project to port VMS to Alpha began in 1989, and the operating system first booted on a prototype EV3-based Alpha Demonstration Unit in early 1991.[55][57]

The main challenge in porting VMS to a new architecture was that VMS and the VAX were designed together, meaning that VMS was dependent on certain details of the VAX architecture.[58] Furthermore, a significant amount of the VMS kernel, layered products, and customer-developed applications were implemented in VAX MACRO assembly code.[8] Some of the changes needed to decouple VMS from the VAX architecture included the creation of the MACRO-32 compiler, which treated VAX MACRO as a high-level language, and compiled it to Alpha object code,[59] and the emulation of certain low-level details of the VAX architecture in PALcode, such as interrupt handling and atomic queue instructions.

The VMS port to Alpha resulted in the creation of two separate codebases: one for VAX, and another for Alpha.[4] The Alpha code library was based on a snapshot of the VAX/VMS code base circa V5.4-2.[60] 1992 saw the release of the first version of OpenVMS for Alpha AXP systems, designated OpenVMS AXP V1.0. In 1994, with the release of OpenVMS V6.1, feature (and version number) parity between the VAX and Alpha variants was achieved; this was the so-called Functional Equivalence release.[60] The decision to use the 1.x version numbering stream for the pre-production quality releases of OpenVMS AXP confused some customers, and was not repeated in the subsequent ports of OpenVMS to new platforms.[58]

When VMS was ported to Alpha, it was initially left as a 32-bit only operating system.[59] This was done to ensure backwards compatibility with software written for the 32-bit VAX. 64-bit addressing was first added for Alpha in the V7.0 release.[61] In order to allow 64-bit code to interoperate with older 32-bit code, OpenVMS does not create a distinction between 32-bit and 64-bit executables, but instead allows for both 32-bit and 64-bit pointers to be used within the same code.[62] This is known as mixed pointer support. The 64-bit OpenVMS Alpha releases support a maximum virtual address space size of 8 TiB (a 43-bit address space), which is the maximum supported by the Alpha 21064 and Alpha 21164.[63]
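A minimal, hedged C sketch of mixed pointer support follows, assuming the VSI/HP C compiler's pointer_size pragmas and the C run-time library's _malloc64 routine (the pragmas only take effect when the module is compiled with a /POINTER_SIZE qualifier); it is an illustration, not a definitive recipe.

    /* Hedged sketch: 32-bit and 64-bit pointers coexisting in one routine.
     * Assumes compilation with the VSI/HP C compiler and /POINTER_SIZE=32. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
    #pragma pointer_size save
    #pragma pointer_size 32
        char *p32 = malloc(64);      /* 32-bit pointer into low address space */
    #pragma pointer_size 64
        char *p64 = _malloc64(64);   /* 64-bit pointer, may address P2 space  */
    #pragma pointer_size restore

        /* Both pointers are usable by the same code path; 32-bit pointer
         * values are sign-extended where 64-bit pointers are expected. */
        p32[0] = 'a';                /* dereference through the 32-bit pointer */
        p64[0] = 'b';                /* dereference through the 64-bit pointer */
        printf("%c %c\n", p32[0], p64[0]);
        return 0;                    /* memory reclaimed at image exit */
    }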

One of the more noteworthy Alpha-only features of OpenVMS was OpenVMS Galaxy, which allowed the partitioning of a single SMP server to run multiple instances of OpenVMS. Galaxy supported dynamic resource allocation to running partitions, and the ability to share memory between partitions.[64][65]

Port to Intel Itanium

"Swoosh" logo used by HP for OpenVMS

In 2001, prior to its acquisition by Hewlett-Packard, Compaq announced the port of OpenVMS to the Intel Itanium architecture.[66] The Itanium port was the result of Compaq's decision to discontinue future development of the Alpha architecture in favour of adopting the then-new Itanium architecture.[67] The porting began in late 2001, and the first boot took place on January 31, 2003.[68] The first boot consisted of booting a minimal system configuration on an HP i2000 workstation, logging in as the SYSTEM user, and running the DIRECTORY command. The Itanium port of OpenVMS supports specific models and configurations of HPE Integrity Servers.[9] The Itanium releases were originally named HP OpenVMS Industry Standard 64 for Integrity Servers, although the names OpenVMS I64 or OpenVMS for Integrity Servers are more commonly used.[69]

The Itanium port was accomplished using source code maintained in common within the OpenVMS Alpha source code library, with the addition of conditional code and additional modules where changes specific to Itanium were required.[58] This required certain architectural dependencies of OpenVMS to be replaced, or emulated in software. Some of the changes included using the Extensible Firmware Interface (EFI) to boot the operating system,[70] reimplementing the functionality previously provided by Alpha PALcode inside the kernel,[71] using new executable file formats (Executable and Linkable Format and DWARF),[72] and adopting IEEE 754 as the default floating point format.[73]

As with the VAX to Alpha port, a binary translator for Alpha to Itanium was made available, allowing user-mode OpenVMS Alpha software to be ported to Itanium in situations where it was not possible to recompile the source code. This translator is known as the Alpha Environment Software Translator (AEST), and it also supported translating VAX executables which had already been translated with VEST.[74]

Two pre-production releases, OpenVMS I64 V8.0 and V8.1, were released on June 30, 2003, and December 18, 2003, respectively. These releases were intended for HP organizations and third-party vendors involved with porting software packages to OpenVMS I64. The first production release, V8.2, followed in February 2005. V8.2 was also released for Alpha; subsequent V8.x releases of OpenVMS have maintained feature parity between the Alpha and Itanium architectures.[75]

Port to x86-64


When VMS Software Inc. (VSI) announced that they had secured the rights to develop the OpenVMS operating system from HP, they also announced their intention to port OpenVMS to the x86-64 architecture.[76] The porting effort ran concurrently with the establishment of the company, as well as the development of VSI's own Itanium and Alpha releases of OpenVMS V8.4-x.

The x86-64 port is targeted for specific servers from HPE and Dell, as well as certain virtual machine hypervisors.[77] Initial support was targeted for KVM and VirtualBox. Support for VMware was announced in 2020, and Hyper-V is being explored as a future target.[78] In 2021, the x86-64 port was demonstrated running on an Intel Atom-based single-board computer.[79]

As with the Alpha and Itanium ports, the x86-64 port made some changes to simplify porting and supporting OpenVMS on the new platform, including: replacing the proprietary GEM compiler backend used by the VMS compilers with LLVM,[80] changing the boot process so that OpenVMS is booted from a memory disk,[81] and simulating the four privilege levels of OpenVMS in software since only two of x86-64's privilege levels are usable by OpenVMS.[71]

The first boot was announced on May 14, 2019. This involved booting OpenVMS on VirtualBox, and successfully running the DIRECTORY command.[82] In May 2020, the V9.0 Early Adopter's Kit release was made available to a small number of customers. This consisted of the OpenVMS operating system running in a VirtualBox VM with certain limitations; most significantly, few layered products were available, and code could only be compiled for x86-64 using cross compilers which ran on Itanium-based OpenVMS systems.[20] Following the V9.0 release, VSI released a series of updates on a monthly or bimonthly basis which added additional functionality and hypervisor support. These were designated V9.0-A through V9.0-H.[83] In June 2021, VSI released the V9.1 Field Test, making it available to VSI's customers and partners.[84] V9.1 shipped as an ISO image which can be installed onto a variety of hypervisors, and onto HPE ProLiant DL380 servers starting with the V9.1-A release.[85]

Influence


During the 1980s, the MICA operating system for the PRISM architecture was intended to be the eventual successor to VMS. MICA was designed to maintain backwards compatibility with VMS applications while also supporting Ultrix applications on top of the same kernel.[86] MICA was ultimately cancelled along with the rest of the PRISM platform, leading Dave Cutler to leave DEC for Microsoft. At Microsoft, Cutler led the creation of the Windows NT operating system, which was heavily inspired by the architecture of MICA.[87] As a result, VMS is considered an ancestor of Windows NT, together with RSX-11, VAXELN and MICA, and many similarities exist between VMS and NT.[88]

A now-defunct project named FreeVMS attempted to develop an open-source operating system following VMS conventions.[89][90] FreeVMS was built on top of the L4 microkernel and supported the x86-64 architecture. Prior work investigating the implementation of VMS using a microkernel-based architecture had previously been undertaken as a prototyping exercise by DEC employees with assistance from Carnegie Mellon University using the Mach 3.0 microkernel ported to VAXstation 3100 hardware, adopting a multiserver architectural model.[91]

Architecture

The architecture of the OpenVMS operating system, demonstrating the layers of the system, and the access modes in which they typically run

The OpenVMS operating system has a layered architecture, consisting of a privileged Executive, an intermediately privileged Command Language Interpreter, and unprivileged utilities and run-time libraries (RTLs).[92] Unprivileged code typically invokes the functionality of the Executive through system services (equivalent to system calls in other operating systems).
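For illustration, the short C sketch below (hedged, with minimal error handling) calls two documented system services, SYS$GETTIM and SYS$ASCTIM, to fetch and format the current system time; this is the typical pattern by which unprivileged code requests Executive functionality.

    /* Hedged sketch: calling OpenVMS system services from C.
     * SYS$GETTIM returns the 64-bit system time; SYS$ASCTIM formats it. */
    #include <starlet.h>   /* sys$gettim, sys$asctim prototypes */
    #include <descrip.h>   /* string descriptor layout */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long systime;              /* quadword binary time */
        char buffer[32];
        unsigned short len = 0;
        struct dsc$descriptor_s timbuf =
            { sizeof buffer, DSC$K_DTYPE_T, DSC$K_CLASS_S, buffer };

        int status = sys$gettim((void *)&systime);
        if (status & 1)                          /* odd condition value = success */
            status = sys$asctim(&len, &timbuf, (void *)&systime, 0);
        if (status & 1)
            printf("Current time: %.*s\n", len, buffer);
        return status;
    }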

OpenVMS' layers and mechanisms are built around certain features of the VAX architecture, including its four processor access modes (kernel, executive, supervisor and user), its division of the virtual address space into process and system regions, and its interrupt, change-mode and asynchronous system trap (AST) mechanisms.[92][93]

These VAX architecture mechanisms are implemented on Alpha, Itanium and x86-64 by either mapping to corresponding hardware mechanisms on those architectures, or through emulation (via PALcode on Alpha, or in software on Itanium and x86-64).[71]

Executive and Kernel


The OpenVMS Executive comprises the privileged code and data structures which reside in the system space. The Executive is further subdivided between the Kernel, which consists of the code which runs at the kernel access mode, and the less-privileged code outside of the Kernel which runs at the executive access mode.[92]

The components of the Executive which run at executive access mode include the Record Management Services, and certain system services such as image activation. The main distinction between the kernel and executive access modes is that most of the operating system's core data structures can be read from executive mode, but require kernel mode to be written to.[93] Code running at executive mode can switch to kernel mode at will, meaning that the barrier between the kernel and executive modes is intended as a safeguard against accidental corruption as opposed to a security mechanism.[94]

The Kernel comprises the operating system's core data structures (e.g. page tables, the I/O database and scheduling data), and the routines which operate on these structures. The Kernel is typically described as having three major subsystems: I/O, Process and Time Management, Memory Management.[92][93] In addition, other functionality such as logical name management, synchronization and system service dispatch are implemented inside the Kernel.

OpenVMS allows user-mode code with suitable privileges to switch to executive or kernel mode using the $CMEXEC and $CMKRNL system services, respectively.[95] This allows code outside of system space to have direct access to the Executive's routines and system services. In addition to allowing third-party extensions to the operating system, Privileged Images are used by core operating system utilities to manipulate operating system data structures through undocumented interfaces.[96]
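A hedged sketch of this mechanism is shown below: a process holding the CMKRNL privilege uses SYS$CMKRNL to execute a trivial routine in kernel mode. The routine and its return value are illustrative assumptions, not code from the operating system.

    /* Hedged sketch: change-mode-to-kernel dispatch with SYS$CMKRNL.
     * Requires the CMKRNL privilege; a real kernel-mode routine must be
     * written with great care, since faults there can crash the system. */
    #include <starlet.h>   /* sys$cmkrnl */
    #include <ssdef.h>     /* SS$_NORMAL */
    #include <stdio.h>

    /* Executed at kernel access mode; returns a VMS condition value. */
    static int kernel_routine(void)
    {
        return SS$_NORMAL;           /* do nothing, report success */
    }

    int main(void)
    {
        /* Second argument is an optional argument list; none is passed here. */
        int status = sys$cmkrnl((int (*)())kernel_routine, 0);
        printf("SYS$CMKRNL returned %d\n", status);
        return status;
    }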

File system


The typical user and application interface into the file system is the Record Management Services (RMS), although applications can interface directly with the underlying file system through the QIO system services.[97] The file systems supported by VMS are referred to as the Files-11 On-Disk Structures (ODS), the most significant of which are ODS-2 and ODS-5.[98] VMS is also capable of accessing files on ISO 9660 CD-ROMs and magnetic tape with ANSI tape labels.[99]
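The sketch below illustrates the RMS interface from C, opening an existing sequential file and reading its records through a FAB/RAB pair; the file name LOGIN.COM is only an example and error handling is reduced to the bare minimum.

    /* Hedged sketch of direct RMS access: open a file with a FAB,
     * connect a RAB record stream, and read records sequentially. */
    #include <rms.h>       /* FAB/RAB structures, cc$rms_* initializers */
    #include <starlet.h>   /* sys$open, sys$connect, sys$get, sys$close */
    #include <stdio.h>

    int main(void)
    {
        static char name[] = "LOGIN.COM";        /* example file name */
        char record[512];
        struct FAB fab = cc$rms_fab;             /* prototype File Access Block   */
        struct RAB rab = cc$rms_rab;             /* prototype Record Access Block */

        fab.fab$l_fna = name;                    /* file name and its length */
        fab.fab$b_fns = sizeof name - 1;
        fab.fab$b_fac = FAB$M_GET;               /* read access */
        if (!(sys$open(&fab) & 1)) return fab.fab$l_sts;

        rab.rab$l_fab = &fab;                    /* bind the record stream to the file */
        rab.rab$l_ubf = record;                  /* user buffer for records */
        rab.rab$w_usz = sizeof record;
        if (!(sys$connect(&rab) & 1)) return rab.rab$l_sts;

        while (sys$get(&rab) & 1)                /* loop until end-of-file or error */
            printf("%.*s\n", rab.rab$w_rsz, rab.rab$l_rbf);

        sys$close(&fab);
        return 1;                                /* odd value = success by convention */
    }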

Files-11 is limited to 2 TiB volumes.[98] DEC attempted to replace it with a log-structured file system named Spiralog, first released in 1995.[100] However, Spiralog was discontinued due to a variety of problems, including issues with handling full volumes.[100] Instead, there has been discussion of porting the open-source GFS2 file system to OpenVMS.[101]

Command Language Interpreter


An OpenVMS Command Language Interpreter (CLI) implements a command-line interface for OpenVMS, responsible for executing individual commands and command procedures (equivalent to shell scripts or batch files).[102] The standard CLI for OpenVMS is the DIGITAL Command Language, although other options are available.

Unlike Unix shells, which typically run in their own isolated process and behave like any other user-mode program, OpenVMS CLIs are an optional component of a process, which exist alongside any executable image which that process may run.[103] Whereas a Unix shell will typically run executables by creating a separate process using fork-exec, an OpenVMS CLI will typically load the executable image into the same process, transfer control to the image, and ensure that control is transferred back to CLI once the image has exited and that the process is returned to its original state.[92]

Because the CLI is loaded into the same address space as user code, and the CLI is responsible for invoking image activation and image rundown, the CLI is mapped into the process address space at supervisor access mode, a higher level of privilege than most user code. This is in order to prevent accidental or malicious manipulation of the CLI's code and data structures by user-mode code.[92][103]

Features

VAXstation 4000 model 96 running OpenVMS V6.1, DECwindows Motif and the NCSA Mosaic browser

Clustering


OpenVMS supports clustering (first called VAXcluster and later VMScluster), where multiple computers run their own instance of the operating system. Clustered computers (nodes) may be fully independent from each other, or they may share devices like disk drives and printers. Communication across nodes provides a single system image abstraction.[104] Nodes may be connected to each other via a proprietary hardware connection called Cluster Interconnect or via a standard Ethernet LAN.

OpenVMS supports up to 96 nodes in a single cluster. It also allows mixed-architecture clusters.[23] OpenVMS clusters allow applications to function during planned or unplanned outages.[105] Planned outages include hardware and software upgrades.[24]

Networking


The DECnet protocol suite is tightly integrated into VMS, allowing remote logins, as well as transparent access to files, printers and other resources on VMS systems over a network.[106] VAX/VMS V1.0 featured support for DECnet Phase II,[107] and modern versions of VMS support both the traditional Phase IV DECnet protocol, as well as the OSI-compatible Phase V (also known as DECnet-Plus).[108] Support for TCP/IP is provided by the optional TCP/IP Services for OpenVMS layered product (originally known as the VMS/ULTRIX Connection, then as the ULTRIX Communications Extensions or UCX).[109][110] TCP/IP Services is based on a port of the BSD network stack to OpenVMS,[111] along with support for common protocols such as SSH, DHCP, FTP and SMTP.
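Because TCP/IP Services exposes the familiar BSD-style sockets API to applications, ordinary sockets code compiles largely unchanged on OpenVMS; the short client sketch below is one hedged illustration (the host name and port are placeholders, not values from the text).

    /* Hedged sketch of a TCP client using the BSD-style sockets API
     * provided by TCP/IP Services for OpenVMS. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    int main(void)
    {
        struct hostent *host = gethostbyname("example.com");   /* placeholder host */
        if (host == NULL) { perror("gethostbyname"); return 1; }

        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);                              /* placeholder port */
        memcpy(&addr.sin_addr, host->h_addr, host->h_length);

        if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        printf("Connected.\n");
        close(sock);
        return 0;
    }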

DEC sold a software package named PATHWORKS (originally known as the Personal Computer Systems Architecture or PCSA) which allowed personal computers running MS-DOS, Microsoft Windows or OS/2, or the Apple Macintosh to serve as a terminal for VMS systems, or to use VMS systems as a file or print server.[112] PATHWORKS was later renamed to Advanced Server for OpenVMS, and was eventually replaced with a VMS port of Samba at the time of the Itanium port.[113]

DEC provided the Local Area Transport (LAT) protocol which allowed remote terminals and printers to be attached to a VMS system through a terminal server such as one of the DECserver family.[114]

Programming


DEC (and its successor companies) provided a wide variety of programming languages for VMS. Officially supported languages on VMS, either current or historical, include Ada, BASIC, BLISS, C, C++, COBOL, Fortran, MACRO, Pascal and PL/I, among others.[115][116][117]

Among OpenVMS's notable features is the Common Language Environment, a strictly defined standard that specifies calling conventions for functions and routines, including use of stacks, registers, etc., independent of programming language.[118] Because of this, it is possible to call a routine written in one language (for example, Fortran) from another (for example, COBOL), without needing to know the implementation details of the target language. OpenVMS itself is implemented in a variety of different languages and the common language environment and calling standard supports freely mixing these languages.[119] DEC created a tool named the Structure Definition Language (SDL), which allowed data type definitions to be generated for different languages from a common definition.[120]
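A small C example of the calling standard in practice is sketched below: the string is passed by descriptor, the argument-passing convention shared by all conforming languages, to the run-time library routine LIB$PUT_OUTPUT. It is illustrative only.

    /* Hedged sketch of the OpenVMS calling standard from C: the string is
     * passed by descriptor, so the same LIB$PUT_OUTPUT call could equally
     * be made from Fortran, COBOL or BASIC. */
    #include <descrip.h>        /* $DESCRIPTOR macro and descriptor layout */
    #include <lib$routines.h>   /* lib$put_output prototype */

    int main(void)
    {
        $DESCRIPTOR(msg, "Hello from the Common Language Environment");
        int status = lib$put_output(&msg);   /* write the string to SYS$OUTPUT */
        return (status & 1) ? 0 : status;    /* odd condition value = success */
    }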

The set of languages available directly with the operating system is restricted to C, Fortran, Pascal, BASIC, C++, BLISS and COBOL. Freely available open source languages include Lua, PHP, Python, Scala and Java.[121]

Development tools

The "Grey Wall" of VAX/VMS documentation, at Living Computers: Museum + Labs

DEC provided a collection of software development tools in a layered product named DECset (originally named VAXset).[115] This consisted of tools such as the Language-Sensitive Editor (LSE), the Source Code Analyzer (SCA), the Code Management System (CMS), the Module Management System (MMS), the Performance and Coverage Analyzer (PCA), and the DEC Test Manager (DTM).[122]

The OpenVMS Debugger supports all DEC compilers and many third-party languages. It allows breakpoints, watchpoints and interactive runtime program debugging using either a command line or graphical user interface.[124] A pair of lower-level debuggers, named DELTA and XDELTA, can be used to debug privileged code in addition to normal application code.[125]

In 2019, VSI released an officially supported Integrated Development Environment for VMS based on Visual Studio Code.[77] This allows VMS applications to be developed and debugged remotely from a Microsoft Windows, macOS or Linux workstation.[126]

Database management


DEC created a number of optional database products for VMS, some of which were marketed as the VAX Information Architecture family.[127] These products included:

  • Rdb – A relational database system which originally used the proprietary Relational Data Operator (RDO) query interface, but later gained SQL support.[128]
  • DBMS – A database management system which uses the CODASYL network model and Data Manipulation Language (DML).
  • Digital Standard MUMPS (DSM) – an integrated programming language and key-value database.[115]
  • Common Data Dictionary (CDD) – a central database schema repository, which allowed schemas to be shared between different applications, and data definitions to be generated for different programming languages.
  • DATATRIEVE – a query and reporting tool which could access data from RMS files as well as Rdb and DBMS databases.
  • Application Control Management System (ACMS) – A transaction processing monitor, which allows applications to be created using a high-level Task Description Language (TDL). Individual steps of a transaction can be implemented using DCL commands, or Common Language Environment procedures. User interfaces can be implemented using TDMS, DECforms or Digital's ALL-IN-1 office automation product.[129]
  • RALLY, DECadmire – Fourth-generation programming languages (4GLs) for generating database-backed applications.[130] DECadmire featured integration with ACMS, and later provided support for generating Visual Basic client-server applications for Windows PCs.[131]

In 1994, DEC sold Rdb, DBMS and CDD to Oracle, where they remain under active development.[132] In 1995, DEC sold DSM to InterSystems, who renamed it Open M, and eventually replaced it with their Caché product.[133]

Examples of third-party database management systems for OpenVMS include MariaDB,[134] Mimer SQL[135] (Itanium and x86-64[136]), and System 1032.[137]

User interfaces

OpenVMS Alpha V8.4-2L1, showing the DCL CLI in a terminal session

VMS was originally designed to be used and managed interactively using DEC's text-based video terminals such as the VT100, or hardcopy terminals such as the DECwriter series. Since the introduction of the VAXstation line in 1984, VMS has optionally supported graphical user interfaces for use with workstations or X terminals such as the VT1000 series.

Text-based user interfaces


The DIGITAL Command Language (DCL) has served as the primary command language interpreter (CLI) of OpenVMS since the first release.[138][30][9] Other official CLIs available for VMS include the RSX-11 Monitor Console Routine (MCR) (VAX only), and various Unix shells.[115] DEC provided tools for creating text-based user interface applications – the Form Management System (FMS) and Terminal Data Management System (TDMS), later succeeded by DECforms.[139][140][141] A lower level interface named Screen Management Services (SMG$), comparable to Unix curses, also exists.[142]

Graphical user interfaces

VWS 4.5 running on top of VAX/VMS V5.5-2
DECwindows XUI window manager running on top of VAX/VMS V5.5-2

Over the years, VMS has gone through a number of different GUI toolkits and interfaces:

  • The original graphical user interface for VMS was a proprietary windowing system known as the VMS Workstation Software (VWS), which was first released for the VAXstation I in 1984.[143] It exposed an API called the User Interface Services (UIS).[144] It ran on a limited selection of VAX hardware.[145]
  • In 1989, DEC replaced VWS with a new X11-based windowing system named DECwindows.[146] It was first included in VAX/VMS V5.1.[147] Early versions of DECwindows featured an interface built on top of a proprietary toolkit named the X User Interface (XUI). A layered product named UISX was provided to allow VWS/UIS applications to run on top of DECwindows.[148] Parts of XUI were subsequently used by the Open Software Foundation as the foundation of the Motif toolkit.[149]
  • In 1991, DEC replaced XUI with the Motif toolkit, creating DECwindows Motif.[150][151] As a result, the Motif Window Manager became the default DECwindows interface in OpenVMS V6.0,[147] although the XUI window manager remained as an option.
  • In 1996, as part of OpenVMS V7.1,[147] DEC released the New Desktop interface for DECwindows Motif, based on the Common Desktop Environment (CDE).[152] On Alpha and Itanium systems, it is still possible to select the older MWM-based UI (referred to as the "DECwindows Desktop") at login time. The New Desktop was never ported to the VAX releases of OpenVMS.

Versions of VMS running on DEC Alpha workstations in the 1990s supported OpenGL[153] and Accelerated Graphics Port (AGP) graphics adapters. VMS also provides support for older graphics standards such as GKS and PHIGS.[154][155] Modern versions of DECwindows are based on X.Org Server.[9]

Security


OpenVMS provides various security features and mechanisms, including security identifiers, resource identifiers, subsystem identifiers, ACLs, intrusion detection, and detailed security auditing and alarms.[156] Specific versions have been evaluated at Trusted Computer System Evaluation Criteria Class C2 and, with the SEVMS security-enhanced release, at Class B1.[157] OpenVMS also holds an ITSEC E3 rating (see NCSC and Common Criteria).[158] Passwords are hashed using the Purdy Polynomial.

Vulnerabilities

  • Early versions of VMS included a number of privileged user accounts (including SYSTEM, FIELD, SYSTEST and DECNET) with default passwords which were often left unchanged by system managers.[159][160] A number of computer worms for VMS including the WANK worm and the Father Christmas worm exploited these default passwords to gain access to nodes on DECnet networks.[161] This issue was also described by Clifford Stoll in The Cuckoo's Egg as a means by which Markus Hess gained unauthorized access to VAX/VMS systems.[162] In V5.0, the default passwords were removed, and it became mandatory to provide passwords for these accounts during system setup.[39]
  • A 33-year-old vulnerability in VMS on VAX and Alpha was discovered in 2017 and assigned the CVE ID CVE-2017-17482. On the affected platforms, this vulnerability allowed an attacker with access to the DCL command line to carry out a privilege escalation attack. The vulnerability relies on exploiting a buffer overflow bug in the DCL command processing code, the ability for a user to interrupt a running image (program executable) with CTRL/Y and return to the DCL prompt, and the fact that DCL retains the privileges of the interrupted image.[163] The buffer overflow bug allowed shellcode to be executed with the privileges of an interrupted image. This could be used in conjunction with an image installed with higher privileges than the attacker's account to bypass system security.[164]

POSIX compatibility


Various official Unix and POSIX compatibility layers were created for VMS. The first of these was DEC/Shell, which was a layered product consisting of ports of the Bourne shell from Version 7 Unix and several other Unix utilities to VAX/VMS.[115] In 1992, DEC released the POSIX for OpenVMS layered product, which included a shell based on the KornShell.[165] POSIX for OpenVMS was later replaced by the open-source GNV (GNU's not VMS) project, which was first included in OpenVMS media in 2002.[166] Amongst other GNU tools, GNV includes a port of the Bash shell to VMS.[167] Examples of third-party Unix compatibility layers for VMS include Eunice.[168]

Hobbyist programs


In 1997, OpenVMS and a number of layered products were made available free of charge for hobbyist, non-commercial use as part of the OpenVMS Hobbyist Program.[169] Since then, several companies producing OpenVMS software have made their products available under the same terms, such as Process Software.[170] Prior to the x86-64 port, the age and cost of hardware capable of running OpenVMS made emulators such as SIMH a common choice for hobbyist installations.[171]

In March 2020, HPE announced the end of the OpenVMS Hobbyist Program.[172] This was followed by VSI's announcement of the Community License Program (CLP) in April 2020, which was intended as a replacement for the HPE Hobbyist Program.[173] The CLP was launched in July 2020, and provides licenses for VSI OpenVMS releases on Alpha, Integrity and x86-64 systems.[174] OpenVMS for VAX is not covered by the CLP, since there are no VSI releases of OpenVMS VAX, and the old versions are still owned by HPE.[175]

Release history

  1. ^ X0.5 was also known as "Base Level 5".[182]
  2. ^ While an exact release date is unknown, the V1.01 change log dates in the release notes for V1.5 suggest it was released some time after November 1978.[183]
  3. ^ For some of the early VAX/VMS releases where an official release date is not known, the date of the Release Notes has been used as an approximation.
  4. ^ The existence of releases V2.0 through V2.5 is documented in the V3.0 release notes.[185]
  5. ^ While the versioning scheme reset to V1.0 for the first AXP (Alpha) releases, these releases were contemporaneous with the V5.x releases and had a similar feature set.

from Grokipedia
OpenVMS is a multi-user, multitasking, virtual memory-based operating system originally developed by Digital Equipment Corporation (DEC) in 1977 for its VAX minicomputers, renowned for its high reliability, security, and support for mission-critical applications in industries such as finance, defense, healthcare, and manufacturing. Originally released as VMS (Virtual Memory System) in 1978, the operating system was renamed OpenVMS in 1991 to emphasize its conformance to POSIX standards and openness to third-party integrations. After DEC's acquisition by Compaq in 1998 and Compaq's acquisition by HP in 2002, it was managed by HP until a 2014 agreement transferred development stewardship to VMS Software Inc. (VSI), with VSI assuming management in 2015 following HP's split into HP Inc. and Hewlett-Packard Enterprise (HPE).

Over its evolution, OpenVMS transitioned from the 32-bit VAX architecture to 64-bit platforms including Alpha and Itanium (Integrity servers), with VSI advancing a native port to the x86-64 architecture, culminating in the production release of OpenVMS V9.2 in 2022 and its update V9.2-3 in November 2024. Key features include robust clustering supporting up to 96 nodes for high availability, load balancing, and continuous operation with uptimes spanning decades; integrated networking via TCP/IP and DECnet; advanced security mechanisms like access control lists and auditing; and comprehensive support for time-sharing, batch, transaction processing, and real-time applications.

The system provides a rich development environment with compilers for languages such as C, C++, Fortran, COBOL, and Java, alongside tools like DECset for software engineering and integration with open-source utilities including Git and Python, ensuring compatibility with modern workflows while maintaining backward compatibility for legacy VAX-era applications. As of 2025, OpenVMS runs on Alpha and Integrity hardware, with x86-64 support available natively and through virtualization on platforms like VMware, KVM, and Oracle VirtualBox, positioning it as a resilient choice for enterprise environments requiring uninterrupted service and data integrity.

History

Origins and Early Development

The development of VMS originated in 1975 at Digital Equipment Corporation (DEC), driven by the limitations of the 16-bit PDP-11 and the need for a more advanced operating system to accompany the forthcoming 32-bit VAX minicomputer line. This effort built upon DEC's prior real-time and time-sharing systems, including RSX-11M for multiprogramming on PDP-11s and RSTS/E for multiuser environments, adapting their concepts to support virtual memory and larger-scale applications. The project, approved by DEC's engineering manager Gordon Bell in April 1975, was led by software architect Dave Cutler, who drew from his experience on RSX-11M to design a robust, multiuser system emphasizing reliability and extensibility. VMS Version 1.0 was announced alongside the VAX-11/780 on October 25, 1977, and shipped in late 1978, marking DEC's first 32-bit operating system with integrated hardware-software optimization. It supported up to 8 MB of memory, symmetric configurations, demand-paged addressing up to 4 GB per process, and time-sharing for multiple interactive users, positioning it as a commercial alternative to Unix on minicomputers. Core to its design was the Record Management Services (RMS), a record-oriented file access facility enabling structured data access for business applications, and the initial DIGITAL Command Language (DCL) interpreter, providing a powerful, scriptable command-line interface for system administration and user tasks. Basic utilities for volume archiving were included from this version, facilitating data protection in enterprise settings.

Subsequent releases through 1985 refined VMS for broader hardware support and enhanced functionality. Version 2.0 (April 1980) added compatibility with the VAX-11/750 and improved DECnet networking for Phase III connectivity. Version 3.0 (April 1982) extended support to the VAX-11/730, introducing advanced lock management for concurrent access and support for larger disk drives like the RA81. By Version 4.0 (September 1984), VMS incorporated foundational clustering via VAXclusters, allowing multiple VAX systems to share resources like disks through the distributed lock manager and QIO system services, while also adding security enhancements and MicroVMS for smaller configurations. Version 4.2 (October 1985) further advanced reliability with volume shadowing for disk redundancy and RMS journaling to protect against data loss during failures. In 1991, DEC renamed the operating system to OpenVMS to signify its growing adherence to open standards like POSIX and compatibility with third-party hardware, though it remained proprietary.

Architectural Ports and Transitions

The porting of OpenVMS to the Alpha AXP architecture marked a significant transition from the 32-bit VAX CISC design to a 64-bit RISC platform, with development beginning in October 1989 and the initial release of OpenVMS AXP Version 1.0 announced in November 1992. This effort involved recompiling the operating system's extensive codebase using tools such as the GEM compiler backend and Alpha AXP cross-compilers for languages like MACRO-32 and BLISS-32, alongside mechanisms like the VEST translator to convert VAX executables into native Alpha images for compatibility. The architecture shift introduced a load/store model, 64-bit registers, and a three-level page table structure, with initial implementations supporting 44-bit virtual addressing and 54-bit physical addressing to enable scalability beyond VAX limitations. Early versions of OpenVMS AXP maintained a 32-bit address space to support existing VAX applications, but full 64-bit virtual addressing—expanding the virtual address space to up to 8 terabytes—was introduced in OpenVMS Alpha Version 7.0, released in December 1995. This upgrade included kernel threads for enhanced concurrency and required developers to update privileged code for 64-bit pointer handling, while ensuring hybrid 32/64-bit interoperability through conditional compilation and the Alpha User-mode Debugging Environment (AUD). Key technical hurdles during the Alpha port included adapting synchronization primitives, memory management, and I/O subsystems to the RISC model, as well as optimizing dispatch code for efficient argument passing without VAX-specific CALLG instructions. Clustering compatibility was preserved, allowing mixed VAX-Alpha configurations via the CI interconnect and shared storage access, ensuring seamless operation across architectures without major disruptions.

The transition to the Itanium (IA-64) architecture followed in 2003 with the release of OpenVMS Version 8.0 for Industry Standard 64 (I64), the first version targeting the Explicitly Parallel Instruction Computing (EPIC) design. Adaptations for Itanium involved replacing Alpha-specific PALcode with OS-managed equivalents for VAX queue instructions and registers, adopting the Itanium calling standard with VMS extensions, and developing a new object language and image format to handle register translations and instruction bundling. Backward compatibility was achieved primarily through recompilation and relinking of application code using cross-compilation tools, rather than exact emulation, with support for analyzing crash dumps on Alpha systems to aid migration. The port emphasized a common source base with Alpha, minimizing hardware-dependent changes, though challenges arose in booting (first successful boot on an HP i2000 in 2003), interrupt handling, and TLB management due to the absence of traditional console mechanisms. OpenVMS I64 maintained hybrid 32/64-bit support similar to Alpha, with three additional VAX floating-point types (F-, D-, and G-floating) preserved for legacy compatibility, while favoring IEEE formats for new development. Clustering across Alpha and Itanium nodes was enabled through compatible interconnects like CI or Ethernet, allowing shared storage in mixed environments. Endianness handling aligned with the little-endian convention shared across VAX, Alpha, and Itanium, avoiding major byte-order issues. Although the end-of-life for Itanium was announced in 2017, VSI extended support for OpenVMS I64, with ongoing patches and compatibility through at least 2028 to facilitate gradual migrations.

In 2020, VMS Software Inc. (VSI) advanced the port to x86-64 with the release of the OpenVMS V9.0 Early Adopter Kit (EAK) for select partners, marking the initial alpha-stage availability following earlier planning. This was followed by V9.1 field test releases in 2021 for broader customer access, focusing on native execution on x86-64 hardware and hypervisors like KVM and VirtualBox. The production version, OpenVMS V9.2, arrived in 2022, providing a fully supported system with enhancements for virtualization and cloud integration. For legacy binaries, the x86-64 port incorporates an Alpha-to-x86 dynamic binary translator to run unmodified Alpha images, though privileged code requires native recompilation, and VAX compatibility relies on translation layers. Technical challenges in the port included ensuring clustering interoperability with existing Alpha and Itanium nodes, particularly for multi-architecture boot sequences and shared storage protocols. Byte-order consistency was maintained via adherence to the AMD64 ABI, as x86-64 is little-endian like prior platforms. Hybrid 32/64-bit support was implemented using compatibility stubs for 32-bit addressing, allowing gradual upgrades while prioritizing 64-bit operations for modern workloads.

Ownership Changes and Modern Evolution

Digital Equipment Corporation (DEC), the original developer of VMS (later rebranded as OpenVMS), faced financial challenges in the late 1990s, culminating in its acquisition by Compaq Computer Corporation in 1998. This merger integrated OpenVMS into Compaq's portfolio, but development priorities shifted amid broader industry transitions. In 2002, Hewlett-Packard (HP) acquired Compaq, bringing OpenVMS under HP's stewardship, where investment in the operating system began to wane as resources were redirected toward emerging platforms like Intel's Itanium architecture. This focus on Itanium led to ports of OpenVMS to HP Integrity servers, but it also signaled a period of stagnation for broader innovation, with HP announcing the end of OpenVMS development for VAX and Alpha in 2013. In 2015, HP split into HP Inc. and Hewlett Packard Enterprise (HPE), with OpenVMS assigned to HPE's enterprise server division. HPE continued providing support and maintenance for existing OpenVMS installations on Alpha and Integrity hardware, but committed to no new architectures or major enhancements, leaving the platform's long-term viability in question. By 2017, as part of HPE's strategic realignment, OpenVMS support contracts were increasingly handled externally, paving the way for a transition to independent stewardship.

The formation of VMS Software Inc. (VSI) in 2014 by former HP engineers marked a pivotal shift, as the company secured an exclusive license from HP to develop and enhance OpenVMS. VSI's mandate included porting OpenVMS to new architectures and fostering community-driven development to sustain the ecosystem. In 2019, VSI further solidified its role by acquiring all OpenVMS support business from HPE, ensuring continuity for customers while enabling independent innovation.

Under VSI, key milestones have revitalized OpenVMS. In 2014, VSI announced its commitment to porting OpenVMS to x86-64, targeting compatibility with standard hypervisors and environments to extend the platform's relevance. This effort culminated in the 2024 release of OpenVMS V9.2-3, which enhanced cloud readiness by supporting deployment on platforms such as AWS, allowing cloud-based operation without emulation. In October 2024, VSI enabled deployment of OpenVMS x86 on Amazon EC2, facilitating cloud-based operations without emulation. VSI's modern roadmap emphasizes annual releases starting post-2023, driven by customer needs and industry trends such as integration with AI and modern development frameworks to support advanced workloads. At the 2025 OpenVMS Bootcamp, VSI announced tools like VMS/XDE, a native development environment for OpenVMS hosted on Linux, facilitating cross-platform coding and CI/CD pipelines without emulation. These developments underscore VSI's strategy to position OpenVMS as a robust, future-proof option for mission-critical applications in hybrid settings.

Influence and Legacy

OpenVMS has significantly influenced operating system design through its early adoption of key architectural concepts. The system introduced symmetric multiprocessing (SMP) support in VMS version 5.2, enabling efficient utilization of multiple processors in a single system image, which became a model for parallel processing in subsequent commercial operating systems. Similarly, OpenVMS pioneered fault-tolerant clustering with the VAXcluster technology in 1983, allowing up to 96 nodes to operate as a unified high-availability environment with shared resources and automatic failover, a design that informed clustering paradigms in later systems.

These innovations extended to broader industry impacts, particularly in mission-critical sectors. OpenVMS powered systems in banking, telecommunications, and defense for decades, providing the reliability needed for high-volume transaction processing; for instance, it supported core operations and billing systems until migrations in the 1990s and 2000s shifted toward more distributed architectures. As of 2025, it continues to underpin legacy systems in healthcare for patient records and in manufacturing for control systems, where its proven uptime—often exceeding 99.999% availability—remains essential for uninterrupted operations. The reliability model of OpenVMS, emphasizing proactive fault detection and seamless recovery, has shaped high-availability systems beyond its native ecosystem, influencing designs in fault-tolerant platforms used in transaction-heavy environments and contributing to modern architectures that prioritize redundancy and minimal downtime.

Specific contributions include its compliance with POSIX standards starting in the early 1990s, which facilitated application portability across systems and supported broader adoption of standardized interfaces in enterprise computing. Additionally, DECnet protocols, integral to OpenVMS networking, represented one of the earliest implementations of internetworking in 1974, inspiring foundational concepts in distributed communication that predated widespread TCP/IP deployment. OpenVMS's enduring legacy is evident in its ongoing deployment, with over 3,000 organizations maintaining active systems worldwide as of 2023, many preserved through emulation solutions like Stromasys Charon, which allow binary-compatible migration to x86 and cloud platforms without codebase alterations. This approach ensures the system's design—optimized for reliability and availability—continues to support vital applications amid hardware obsolescence.

Architecture

Kernel and Executive Structure

OpenVMS features a monolithic design that integrates essential operating system functions, including process management, memory management, and I/O handling, into a single kernel for efficiency and low overhead, particularly in uniprocessor configurations while supporting symmetric multiprocessing (SMP). The kernel is complemented by a layered executive structure, which organizes privileged code and data into hierarchical components residing in system space (S0, S1, S2 regions), providing a framework for managing system services, interrupts, and resources. This executive handles image activation through mechanisms like the executive loader and system services such as SYS$CREATE_REGION_64 and SYS$ASCEFC, which load executable images, map global sections for shared code (often via the INSTALL command), and initialize process resources, with rundown procedures ensuring cleanup upon process termination.

Process scheduling in OpenVMS is priority-based and supports kernel threads (up to 256 per process), utilizing class schedulers and symmetric dispatching across multiple CPUs with options for explicit CPU affinity to optimize performance on non-uniform memory access (NUMA) systems. Memory management employs a paged virtual memory system with 64-bit addressing, enabling large address spaces of 8 TB total (4 TB process-private and 4 TB system space) on Alpha using 43 significant address bits; up to 16 TB total on Itanium (I64) with 44 bits (8 TB process-private); and full 64-bit addressing on x86-64 (effective 16 TB or more). Page sizes vary by platform—512 bytes on VAX, 8192 bytes on Alpha, Itanium, and x86-64—with support for pagelets and features like memory-resident global sections in very large memory (VLM) setups. The Swap Executive oversees paging and swapping operations, including working set management, process swapping, page trimming, and writing modified pages to backing section files.

Key executive components include the Asynchronous System Trap (AST) mechanism, which delivers interrupts and asynchronous events to kernel threads in user or more privileged access modes, facilitating responsive handling of conditions like I/O completion or timer expirations. SMP scalability reaches up to 32 CPUs in OpenVMS Version 9 and later, employing lock-free algorithms, spinlocks, and NUMA-aware affinity to ensure high concurrency without traditional locking bottlenecks. The system services interface exposes numerous SYS$ calls—such as SYS$QIO for I/O and SYS$ENQ for synchronization—allowing applications to interact with kernel functions while supporting explicit CPU affinity for performance tuning. Architectural differences across versions underscore the evolution to 64-bit executives starting with the Alpha port, which introduced extended addressing and page sizes, while the x86-64 adaptation in Version 9 (first released in 2020) incorporates platform-specific calling conventions that support SIMD registers (e.g., XMM, YMM, AVX) for floating-point and vector operations in procedure calls and context switching, with 8 KB pages and full 64-bit virtual address space. These adaptations maintain compatibility with prior 64-bit implementations on Alpha and Itanium, ensuring scalable performance on modern hardware without altering core executive layering.
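To make the SYS$ call style above concrete, the following hedged C sketch assigns a channel to SYS$OUTPUT and performs one synchronous $QIOW virtual-block write; the device name and message are illustrative only.

    /* Hedged sketch of the QIO interface: assign a channel and issue a
     * synchronous write with SYS$QIOW, then deassign the channel. */
    #include <starlet.h>   /* sys$assign, sys$qiow, sys$dassgn */
    #include <descrip.h>
    #include <iodef.h>     /* IO$_WRITEVBLK function code */
    #include <string.h>

    int main(void)
    {
        $DESCRIPTOR(dev, "SYS$OUTPUT");        /* process's output device */
        unsigned short chan;
        unsigned short iosb[4];                /* I/O status block */
        static char msg[] = "Hello from SYS$QIOW\r\n";

        int status = sys$assign(&dev, &chan, 0, 0);
        if (!(status & 1)) return status;      /* odd condition value = success */

        status = sys$qiow(0, chan, IO$_WRITEVBLK, (void *)iosb, 0, 0,
                          msg, strlen(msg), 0, 0, 0, 0);
        if (status & 1)
            status = iosb[0];                  /* device-level completion status */

        sys$dassgn(chan);
        return status;
    }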

File System Design

The Files-11 On-Disk Structure (ODS) serves as the primary file system for OpenVMS, organizing data in a hierarchical manner across devices, directories, subdirectories, and files. ODS-2, the default structure, employs a tree-like directory hierarchy where files are identified by names up to 39 characters (with extensions up to 39 characters) and version numbers, supporting up to 255 subdirectory levels on Alpha and Itanium systems. Indexed files within this structure allow for efficient key-based access, storing records in buckets with primary and optional alternate keys for rapid retrieval. Variable-length records are supported across file organizations, with a maximum size of 32,767 bytes for most formats (up to 65,535 bytes in stream format), enabling flexible storage without fixed padding beyond the record content.

Record Management Services (RMS) provides the core interface for file access in OpenVMS, supporting sequential, relative, and indexed access methods to handle diverse application needs. Sequential access processes records in the order they were written or sorted by key, ideal for linear data streams. Relative access uses fixed-length cells addressed by numeric position (up to 2^31-1), facilitating random inserts and deletions without reorganizing the file. Indexed access enables direct lookups via primary keys (1-255 bytes) or alternate keys, with support for exact, partial, or generic searches, making it suitable for database-like operations. To enhance performance, RMS incorporates multibuffered I/O, allowing up to 255 buffers per record access block and global buffers shared across processes (up to 32,767), alongside multiblock I/O transfers of up to 127 blocks per operation, reducing overhead in high-throughput scenarios.

Volume management in OpenVMS emphasizes reliability and resource control, with volume shadowing providing redundancy by mirroring data across multiple disks in real time for cluster-wide consistency. Shadow sets can include up to 500 disks on standalone or clustered systems, automatically handling failures by switching to surviving members without application interruption. Disk quotas enforce storage limits per user, tied to the User Identification Code (UIC), which uniquely identifies processes and owners; quotas track blocks used and set soft/hard limits, preventing over-allocation on shared volumes via the MOUNT/QUOTA command. Special files extend the file system's utility for system operations, including mailboxes as pseudo-devices (e.g., MBA0:) for interprocess communication (IPC), where processes exchange fixed or variable-length messages asynchronously or with notification via AST routines. Container files act as logical wrappers for layered volumes, particularly in clustered environments, enabling the binding of multiple physical disks into a unified virtual volume for shared access across nodes.

The file system evolved with ODS-5, introduced in OpenVMS V7.3 (2001) as a superset of ODS-2, adding case preservation for file names (maintaining mixed-case as created), support for extended character sets (ISO Latin-1 and Unicode, up to 238 bytes per name), and extended attributes like revision dates and access control lists for enhanced interoperability with non-VMS systems. On x86-64 ports (Version 9.2, released 2022), OpenVMS supports larger volumes up to 256 TB through bound volume sets (BVS), comprising up to 256 component volumes, leveraging 64-bit addressing to exceed prior limits on single volumes.

Command-Line Interpreter

The Digital Command Language (DCL) serves as the primary command-line interpreter for OpenVMS, providing an English-like, procedure-based interface for interactive system administration, scripting, and automation of routine tasks. As a high-level scripting language, DCL enables users to define symbols (variables) for data storage and manipulation, invoke lexical functions for dynamic operations such as F$TIME to retrieve the current date and time or F$SEARCH to locate files, and execute core command verbs including SET for configuring system parameters, SHOW for querying process or system status, and RUN for launching executable images. For instance, the command $ RUN MYPROG initiates a program, while $ SHOW TIME displays the current timestamp. DCL organizes symbols and logical names into hierarchical tables—local, job, group, and system scopes—facilitating lexical replacement where symbol values are automatically substituted during command parsing to create flexible, parameterized scripts. Qualifiers further refine command execution, such as /OUTPUT=filespec to redirect results, while error handling relies on ON directives to trap conditions like severe errors (e.g., $ ON SEVERE_ERROR THEN CONTINUE) and WAIT to suspend processing until a specified interval or event. This structure supports robust scripting by allowing procedures to respond to runtime issues without abrupt termination.

In interactive mode, users enter commands directly at the prompt, but DCL also excels in batch processing via the SUBMIT verb, which queues command procedures (.COM files) to batch job queues for unattended execution. Batch scripts incorporate control flow, including conditional logic with IF-THEN-ELSE statements (e.g., $ IF COUNT .LT. 8 THEN WRITE SYS$OUTPUT "Low") and loops implemented through GOTO labels to iterate over symbol values or files. An example loop might read lines from a file until end-of-file using $ LOOP: READ/END_OF_FILE=ENDIT IN NAME followed by $ GOTO LOOP. DCL extends its capabilities with low-level integrations, such as MACRO-32 routines for performance-critical operations callable from procedures, and utility lexical functions like F$EXTRACT for substring extraction (e.g., $ X = F$EXTRACT(0,3,"OpenVMS") yields "Ope"). These features make DCL suitable for complex administrative tasks, from file operations to system monitoring. OpenVMS version 9 and subsequent releases enhance DCL with Unicode support via the ODS-5 file system, enabling handling of international characters in symbols and output, alongside improved scripting interoperability with POSIX-compliant shells through extended parse styles and pipe commands like PIPE for UNIX-style data streaming.

Core Features

Clustering Capabilities

OpenVMS Cluster implements a shared-everything architecture, enabling up to 96 nodes to operate as a single virtual system by sharing processing power, mass storage, and other resources under unified management. Nodes connect via interconnects such as LANs, IP networks, MEMORY CHANNEL, SCSI, Fibre Channel, or SAS, with shared storage accessed transparently across the cluster. The Distributed Lock Manager (DLM) serves as the core mechanism for resource arbitration, synchronizing access to shared data and ensuring consistency by managing locks, with capacity for up to 16,776,959 locks per process and built-in deadlock detection.

Failover mechanisms in OpenVMS Cluster prioritize availability through quorum voting, which determines cluster viability using the formula quorum = (EXPECTED_VOTES + 2)/2 to prevent partitioned-cluster ("split-brain") scenarios during network partitions or node failures. Because continued operation requires a quorum of votes, a partitioned cluster can reform safely without conflicting updates. Rolling upgrades further enhance availability by permitting sequential node reboots for software updates or patches, avoiding full cluster downtime.

The shared-everything model relies on MSCP servers for disk access and TMSCP servers for tape access, enabling efficient distribution of I/O load across nodes. This design accommodates heterogeneous architectures, including VAX, Alpha, and Integrity servers, with post-Alpha configurations maintaining compatibility through separate system disks and boot protocols like MOP or PXE. Performance features include cache coherency to maintain data consistency during concurrent access, QIO interfaces for low-latency I/O operations, and balanced I/O via static and dynamic load balancing on MSCP servers, generic queues, and tunable parameters such as NISCS_MAX_PKTSZ. Since the production release of OpenVMS V9.2 for x86-64 in July 2022 and its update V9.2-3 in November 2024, clustering extends to x86-64 platforms, supporting configurations with shared storage integrated with HPE storage solutions.
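A short DCL sketch (values are illustrative) shows how quorum-related settings are typically inspected and adjusted:

$ SHOW CLUSTER                        ! summary of members, votes, and quorum
$ MCR SYSGEN SHOW EXPECTED_VOTES      ! static voting parameters on this node
$ MCR SYSGEN SHOW VOTES
$! With EXPECTED_VOTES = 5, quorum = (5 + 2) / 2 = 3 (integer division),
$! so the cluster keeps running as long as surviving members hold 3 votes.
$ SET CLUSTER/EXPECTED_VOTES=5        ! recompute quorum after adding or removing voters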

Networking Support

OpenVMS provides robust networking capabilities through its integrated TCP/IP stack, first offered as a layered product in the late 1980s; the modern TCP/IP Services for OpenVMS stack (V5.0 and later) replaced the earlier UCX layered product and added built-in support for core protocols such as IP, TCP, and UDP. The stack includes utilities equivalent to those in UCX and third-party solutions like MultiNet, accessible via the TCPIP$ prefix for configuration and management tasks. IPv6 support was introduced in TCP/IP Services V5.1 alongside OpenVMS V7.3 in 2001, enabling dual-stack operation for both IPv4 and IPv6 addressing, routing, and socket programming.

For legacy compatibility, OpenVMS maintains DECnet Phase IV and Phase V protocols, with Phase IV providing traditional routing using DECnet (DNA) addresses in environments requiring interoperability with older DEC hardware and software. Phase V, part of DECnet-Plus, extends this with OSI integration and is the preferred modern implementation, while Phase IV is emulated in newer releases to support transitional networks without full replacement.

File and print sharing in OpenVMS leverages multiple protocols for cross-platform interoperability. The NFS client and server support versions 3 and 4, allowing seamless mounting of remote file systems and serving OpenVMS files to NFS clients, with proxy-based authentication for security. SMB/CIFS support is provided through the OpenVMS CIFS product, a ported Samba implementation, enabling Windows clients to access OpenVMS shares and printers via standard domain integration. Additionally, DECnet-over-IP encapsulates DECnet traffic within TCP/IP packets, facilitating hybrid environments where legacy DECnet applications communicate over IP infrastructures.

Network management tools in OpenVMS include LAT (Local Area Transport) for terminal services, which connects asynchronous terminals and terminal servers to the host for legacy terminal access over Ethernet. SNMP (Simple Network Management Protocol) is integrated into TCP/IP Services, allowing remote monitoring of system metrics, interface statistics, and network events via MIBs compatible with standard management stations. As of October 2025, OpenVMS incorporates OpenSSH version 9.9-2 for secure remote access, supporting the SSH-2 protocol for encrypted logins and file transfers via SFTP/SCP, with native integration into the TCP/IP stack for both client and server operations. In recent x86-64 versions, including the V9.2-3 update of November 2024, enhanced virtualization integration enables dynamic virtual network configurations and compatibility with cloud platforms such as AWS for scalable distributed deployments. These advancements allow OpenVMS clusters to extend over IP-based networks for resource sharing, as described in the clustering section above.
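A few representative DCL commands, assuming TCP/IP Services is installed (the host name is hypothetical):

$ TCPIP SHOW VERSION                 ! stack version and build
$ TCPIP SHOW INTERFACE               ! configured interfaces and addresses
$ TCPIP SHOW SERVICE                 ! enabled services (FTP, NFS, SSH, ...)
$ TCPIP PING ALPHA1.EXAMPLE.COM      ! basic reachability test
$ @SYS$MANAGER:TCPIP$CONFIG          ! menu-driven configuration procedure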

Security Mechanisms

OpenVMS employs a robust privilege model to enforce least-privilege principles, featuring over 35 distinct privileges such as CMKRNL for kernel-mode execution and SYSPRV for system-wide resource access. These privileges are assigned to user accounts in the System User Authorization File (SYSUAF) and can be dynamically enabled or disabled for processes using commands like SET PROCESS/PRIVILEGES, with auditing triggered via system services such as $CHECK_PRIVILEGE to monitor usage and prevent escalation. The model groups privileges into categories ranging from normal user to group and system level, ensuring that operations like logical I/O (LOG_IO) or protection bypass (BYPASS) are restricted to authorized contexts, thereby minimizing unauthorized system modifications.

Access control in OpenVMS relies on a combination of User Identification Codes (UIC) for group-based protections, rights lists for capability-like authorizations, and access control lists (ACLs) for granular object permissions. UICs, formatted as [group,member], define ownership and protection categories such as system, owner, group, and world, allowing processes with SYSPRV or matching group privileges to modify protections on files and other resources. Rights lists, stored in RIGHTSLIST.DAT, grant identifier-based access that bypasses traditional protections and are synchronized across clusters for consistent enforcement. ACLs, embedded in object metadata such as file headers, grant identifier-based, fine-grained rights (e.g., read, write, execute) via Access Control Entries (ACEs), enhancing protections for critical files such as SYS$SYSTEM:LOGINOUT.EXE. Since OpenVMS V7.0, mandatory integrity labels have been integrated to enforce mandatory access controls, where access requires matching or superior integrity levels.

The auditing subsystem, centered on the audit server, logs security-relevant events in real time to the SECURITY.AUDIT$JOURNAL file or operator consoles, capturing activities such as login failures, privilege use, file accesses, and authorization changes. Enabled via SET AUDIT commands, it supports customizable event classes (e.g., ACL modifications, break-ins, login failures) and integrates with the AUDIT_SERVER process for centralized processing, while the ANALYZE/AUDIT utility analyzes logs for anomalies such as repeated intrusion attempts. Real-time alerts can be configured for high-risk events like privilege escalations, aiding proactive threat detection.

Encryption capabilities in OpenVMS include built-in support for DES and 3DES (available since V7.3-2), with AES added in V8.3 supporting 128-, 192-, and 256-bit keys in modes such as AES-CBC. These features are invoked via BACKUP/ENCRYPT or DCL commands like ENCRYPT, eliminating the need for a separate layered product since V8.3. Kerberos integration, available since OpenVMS V7.3 with V7.3-1 enhancements in 2002, enables secure network authentication through the ACME agent and supports site-specific algorithms for client-server interactions.

Despite its strong design, OpenVMS has faced historical vulnerabilities, including a 2003 page-management flaw (CERT VU#10031) allowing unauthorized access in pre-1993 versions and a 2010 auditing bypass issue (CVE-2010-2612) affecting V7.3-2 through V8.3. In 2017, unpatched systems running legacy services were indirectly impacted by WannaCry through connected Windows environments, though native OpenVMS components remained unaffected due to incompatible protocols.
More recently, VSI issued patches in 2025 for OpenVMS V9.2-2 addressing CVEs in layered products such as TCP/IP Services, maintaining ongoing security updates for Alpha, Integrity, and x86-64 platforms, including the V9.2-3 release of November 2024.
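A minimal DCL sketch of routine security administration, using hypothetical identifier and file names:

$ SHOW PROCESS/PRIVILEGES                               ! privileges held by the current process
$ SET PROCESS/PRIVILEGES=(SYSPRV)                       ! enable an authorized privilege temporarily
$ SET SECURITY/ACL=(IDENTIFIER=PAYROLL_RW,ACCESS=READ+WRITE) PAYROLL.DAT
$ SHOW SECURITY PAYROLL.DAT                             ! UIC-based protection plus the ACL
$ SET AUDIT/AUDIT/ENABLE=(BREAKIN=ALL,LOGFAILURE=ALL)   ! turn on auditing for these event classes
$ ANALYZE/AUDIT/SINCE=YESTERDAY SYS$MANAGER:SECURITY.AUDIT$JOURNAL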

Development and Programming

Programming Languages and Tools

OpenVMS supports a range of native and third-party programming languages, enabling developers to create everything from kernel components to enterprise applications. Native languages emphasize systems-level efficiency and compatibility with the operating system's architecture. BLISS-32 and BLISS-64 are high-level, block-structured languages designed for systems programming, particularly for developing the OpenVMS kernel and executive; the BLISS compiler generates optimized code for Alpha, Itanium, and x86-64 platforms. VAX MACRO-32 and MACRO-64 provide assembly-level access for low-level tasks, such as device drivers and performance-critical routines, with direct support for the VAX and Alpha instruction sets. For higher-level development, the VSI C compiler suite includes the DECC compiler for ANSI/ISO C and VSI C++ for object-oriented programming, both optimized for OpenVMS on VAX, Alpha, Itanium, and x86-64 systems, with features like extended run-time libraries for POSIX compliance and thread support.

Third-party and ported language support extends OpenVMS's versatility for modern and legacy workloads. Java development is facilitated by VSI Java 17.0-13C, which maintains compatibility with prior Java versions on OpenVMS and supports applications on Integrity servers, with a release for x86-64 planned for December 2025. Python is available through a native port of version 3.10, including wheels for package management, enabling scripting and data processing on OpenVMS systems; the GNV (GNU for VMS) project further integrates Python with GNU utilities for enhanced open-source compatibility. Legacy applications rely on compilers for COBOL, Fortran, BASIC, and Pascal, which preserve compatibility for mission-critical software in the finance and engineering sectors.

Development tools streamline the build, debug, and integration processes on OpenVMS. The Module Management System (MMS) and its enhanced counterpart MMK function as makefile-style utilities, automating compilation, linking, and dependency resolution using description files to build complex projects efficiently. The OpenVMS Debugger provides comprehensive runtime analysis, supporting breakpoints, watchpoints, and symbol-table inspection across VSI compilers and other supported languages. The linker creates executables and shareable images, allowing psect (program section) attributes such as SHR (shareable), OVR (overlaid), or PIC (position independent) to be defined via options files for modular, reusable code libraries.

Integrated development environments (IDEs) bridge OpenVMS with contemporary workflows. In 2025, VSI introduced VMS/XDE, a lightweight cross-development environment for building and testing OpenVMS applications on non-VMS hosts without requiring direct access to an OpenVMS system. Complementing this, the VMS IDE extension for Visual Studio Code adds OpenVMS-specific features such as DCL integration, facilitating remote editing and debugging. The overall build process leverages MMS or MMK to orchestrate compilation with language-specific compilers, followed by linking to produce shareable images optimized for OpenVMS's runtime environment and clustering features.
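A minimal sketch of a typical build-and-debug cycle in DCL (file names are hypothetical):

$ CC/LIST/OBJECT=HELLO.OBJ HELLO.C        ! compile with the VSI C compiler
$ LINK/EXECUTABLE=HELLO.EXE HELLO.OBJ     ! produce an executable image
$ RUN HELLO
$!
$! Larger projects are normally driven by MMS (or MMK) with a description file,
$! rebuilding only out-of-date targets.
$ MMS/DESCRIPTION=DESCRIP.MMS
$!
$! Compile and link with /DEBUG so RUN drops into the OpenVMS Debugger.
$ CC/DEBUG/NOOPTIMIZE HELLO.C
$ LINK/DEBUG HELLO
$ RUN HELLO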

Database Management Systems

OpenVMS provides robust support for database management systems through its native Record Management Services (RMS), which include journaling capabilities for ensuring data recoverability during file operations. RMS journaling records changes to files, allowing recovery from failures by replaying or rolling back transactions, thereby maintaining data integrity in the event of system crashes or media failures. This feature has been integral to OpenVMS since its early versions and forms the foundation for higher-level database operations.

Oracle Rdb, originally developed by Digital and released in 1984 as part of the VMS ecosystem, extends this journaling model to support full relational database management. Rdb leverages RMS for underlying file storage and recovery, enabling after-image journaling (AIJ) for databases to facilitate recovery and automatic roll-forward operations after failures. Acquired by Oracle in 1994, Rdb continues to be optimized for OpenVMS environments, supporting large-scale production applications with features like multi-file databases and SQL interfaces. As of 2025, the latest release is Oracle Rdb 7.4.1.4, compatible with OpenVMS Alpha and Integrity servers; efforts are underway to extend support to the x86-64 architecture in OpenVMS V9.2, though this is not yet available.

The standard Oracle Database has also been available on OpenVMS since the early 1980s, providing relational database capabilities on VAX systems. Support for single-instance deployments continued through 11g Release 2 (11.2.0.4), the terminal version for OpenVMS as of 2025, with integration into OpenVMS clustering for high availability via shared storage. Oracle RAC (Real Application Clusters) is not supported on OpenVMS, which relies instead on native OpenVMS clustering mechanisms for scalability.

Other database management systems supported on OpenVMS include InterSystems Caché, a multidimensional database optimized for high-performance applications in sectors such as healthcare and finance. Caché supports object-oriented and SQL access and runs on OpenVMS clusters for distributed processing; the last major release for the platform, 2017.1, remains compatible with VSI OpenVMS 8.4-1H1 and later. PostgreSQL is accessible via VSI's ported client API (libpq), enabling OpenVMS applications to connect to remote PostgreSQL servers, though a full server port is not officially available. For modernization on x86-64, alternatives such as Mimer SQL support migration from legacy systems such as Rdb. For in-memory operations, OpenVMS's shareable images and global sections facilitate code and data sharing across processes, allowing efficient caching and reducing I/O overhead in systems like Rdb or custom applications.

Transaction processing in OpenVMS databases is enhanced by DECdtm services, which implement a two-phase commit protocol to ensure atomicity across distributed resources. DECdtm coordinates resource managers such as RMS Journaling or Rdb, guaranteeing that either all changes in a transaction are committed or none are applied, even in clustered environments. For interoperability with external systems, the DECdtm XA Gateway provides X/Open XA compliance, allowing XA-capable transaction managers to integrate with OpenVMS resource managers for heterogeneous transactions. High availability for database queries is achieved through integration with OpenVMS Volume Shadowing, which mirrors database volumes across disks or nodes in a cluster, enabling transparent failover and continuous access during hardware failures.
In OpenVMS V9.2 for x86-64, database performance benefits from architecture-specific optimizations that improve I/O throughput, supporting faster query execution in virtualized environments such as VMware or KVM. These features collectively enable OpenVMS to handle demanding transaction workloads with minimal downtime.
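A rough DCL sketch, using hypothetical device, node, and file names and assuming DECdtm and RMS Journaling are licensed and installed, of how a transaction log and recovery-unit journaling might be set up:

$ DEFINE/SYSTEM/EXECUTIVE_MODE SYS$JOURNAL DKA100:[JOURNALS]  ! where DECdtm transaction logs live
$ MCR LMCP                                                    ! Log Manager Control Program
CREATE LOGFILE SYS$JOURNAL:SYSTEM$NODE1.LM$JOURNAL
SHOW LOGFILE SYS$JOURNAL:SYSTEM$NODE1.LM$JOURNAL
EXIT
$! Mark an RMS indexed file for recovery-unit journaling so incomplete
$! transactions can be rolled back after a failure.
$ SET FILE/RU_JOURNAL ACCOUNTS.IDX
$ DIRECTORY/FULL ACCOUNTS.IDX                                 ! journaling attributes appear in the listing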

User Interfaces

OpenVMS provides a range of user interfaces that have evolved to support both traditional terminal-based interaction and modern graphical and web-based access, catering to administrators, developers, and end-users in enterprise environments. Initially rooted in the character-cell terminals common to systems of the 1970s and 1980s, the operating system shifted toward graphical user interfaces (GUIs) in line with broader industry trends toward visual computing. This transition began with the introduction of DECwindows in the late 1980s and early 1990s, an X11-based windowing system that allowed users to interact with OpenVMS applications through point-and-click interfaces rather than solely command-line input; later support for emulated environments extended interaction to virtualized and modern hardware setups.

Text-based user interfaces remain a cornerstone of OpenVMS, particularly for system administration and text manipulation. The DECterm emulator, integrated within the DECwindows environment, serves as a VT-series-compatible terminal emulator, allowing users to run character-based applications in a windowed session while maintaining compatibility with legacy terminal protocols. For text editing, the EDT editor provides both a line-oriented mode and a keypad-driven screen mode for creating and modifying files interactively or from batch jobs, while the TPU-based EVE editor offers an extensible full-screen alternative. Additionally, the MAIL utility functions as a messaging tool for sending, receiving, and managing electronic mail within the OpenVMS environment, supporting features like message extraction to files and integration with user directories.

Graphical user interfaces in OpenVMS are primarily delivered through DECwindows Motif, an X11-based system available since the early 1990s that provides a Motif-compliant window manager, desktop environment, and session manager for running GUI-enabled software. The interface follows the X client-server model, in which the server handles display management while client applications execute on OpenVMS, enabling remote access and multi-window operation. VSI plans enhancements to graphics support for the x86-64 port, including an update to the Graphical Kernel System (GKS), planned as V9.1 in February 2026, for improved rendering in virtualized environments and compatibility with modern display hardware. Touch support is available in emulated setups, such as those hosted on hypervisors or AWS, allowing gesture-based interaction through layered emulation.

Web-based interfaces have expanded OpenVMS accessibility, with the VSI Web Server providing a platform for hosting browser-accessible applications and administrative tools. Based on the Apache HTTP Server 2.4, this server integrates with OpenVMS and supports secure configurations for serving dynamic content and enabling browser-based system management. The VMS WebUI, a modern browser-based management interface, allows users to perform tasks such as process monitoring via HTML5-compliant interfaces, particularly in cloud-deployed versions on platforms like AWS. For accessibility, historical integration with DECtalk supported screen readers for visually impaired users on character-based terminals, while contemporary cloud versions rely on web standards for compatibility with assistive technologies such as JAWS or NVDA.
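A small DCL sketch, with hypothetical node and user names, of common interface tasks: directing DECwindows output to a remote X display, editing a file, and sending mail.

$ SET DISPLAY/CREATE/NODE=WKSTN1.EXAMPLE.COM/TRANSPORT=TCPIP
$ SHOW DISPLAY                                    ! confirm where X output will appear
$ RUN SYS$SYSTEM:DECW$CLOCK                       ! a DECwindows client appears on the remote display
$ EDIT/EDT REPORT.TXT                             ! character-cell editing with EDT
$ MAIL/SUBJECT="Status report" REPORT.TXT USER1   ! send the file with the MAIL utility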

Compatibility and Extensions

POSIX Compliance

OpenVMS achieved POSIX compliance with the release of version 6.0 in 1993, earning certification under FIPS PUB 151-1, which implements the IEEE P1003.1-1990 standard for core system interfaces. This covered hosted implementations supporting key features such as a mountable POSIX file system and appropriate privileges, tested on VAX hardware with the VAX C compiler. Subsequent versions, including V7.0 in 1996, maintained and expanded this foundation by integrating POSIX components directly into the operating system rather than relying solely on the discontinued POSIX layered product.

Key POSIX implementations in OpenVMS include native support for the socket API and signals through the C Run-Time Library (CRTL), while process-creation mechanisms such as fork and exec are emulated via layered products like GNV (GNU for VMS), which provides a POSIX-like environment for porting Unix applications. GNV enables execution of GNU tools and libraries, bridging gaps in process management by mapping VMS DCL commands and image activation to Unix semantics. Extensions covering POSIX.2 shell and utility behavior were introduced in V8.2, enhancing command-line compatibility through CRTL functions for common utilities. Threading support arrived with POSIX threads (pthreads) in V8.3, implementing the IEEE 1003.1c-1995 standard via the POSIX Threads Library, which includes real-time extensions for priority and scheduling control. This library allows multithreaded programming with routines like pthread_create and pthread_mutex_lock, integrated with OpenVMS kernel threads for efficient concurrency.

Despite these advances, OpenVMS does not fully replicate Unix file-system semantics, such as byte-stream atomicity or native hierarchical permissions; instead, Unix-style paths are mapped onto the ODS-5 volume structure, which supports Unix-style filenames, hard links, and deeper directory nesting of up to 255 levels. ODS-5 permits mixed VMS and Unix naming conventions, facilitating application portability without complete semantic equivalence. The CRTL V10 ECO kit, released in August 2025 for x86-64, Alpha, and IA-64 systems, provides bug fixes and updates to the C Run-Time Library while preserving backward compatibility for legacy applications.
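A short DCL sketch, using a hypothetical disk device and assuming GNV is installed, of working with Unix-friendly names on an ODS-5 volume:

$ INITIALIZE/STRUCTURE=5 DKA200: SCRATCH      ! create an ODS-5 volume
$ MOUNT/SYSTEM DKA200: SCRATCH
$ SET PROCESS/PARSE_STYLE=EXTENDED            ! allow extended, case-preserved file names in DCL
$ SET DEFAULT DKA200:[000000]
$ COPY NL: MixedCaseReport.txt                ! empty file; mixed case is preserved on ODS-5
$ DIRECTORY
$ BASH                                        ! enter the GNV bash shell, if installed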

Virtualization and Cloud Integration

OpenVMS provides native support for virtualization on the x86-64 architecture starting with version 9.2, enabling deployment on industry-standard hypervisors such as VMware ESXi 6.7 and later, KVM (tested on platforms including CentOS 7.9 and openSUSE), and VirtualBox. This allows OpenVMS to run as a guest operating system in virtualized environments, facilitating integration with modern infrastructure while maintaining compatibility with existing applications. For legacy Alpha and Itanium systems, emulation solutions such as Stromasys Charon provide virtualized replicas of the original hardware, supporting OpenVMS workloads on contemporary x86 servers without requiring source code modifications.

In cloud environments, OpenVMS is available on Amazon Web Services (AWS) via EC2 instances, with deployment guides emphasizing configuration of the virtualized x86-64 versions provided by VMS Software, Inc. (VSI). This integration, available since at least early 2024, supports migration of legacy applications to scalable cloud infrastructure. For other cloud platforms, OpenVMS can be deployed using emulation tools such as Charon, which is cloud-agnostic and compatible with major providers including AWS, Azure, and Google Cloud.

Containerization support in OpenVMS leverages its POSIX compliance layers for partial interoperability, though full native integration remains limited; as of 2025, native Docker or Podman support is not available. VSI released the C RTL V10 update for x86-64, Alpha, and Itanium in September 2025, and the V9.2-3 Update V2, released in July 2025, adds KVM PCI passthrough for improved data-disk support in virtual environments.

Migration to virtualized and cloud setups is aided by tools such as the Stromasys emulators, which enable transitions from legacy VAX and Alpha hardware to x86-based virtual machines. Additionally, LegacyMap, introduced in 2025, automates documentation of OpenVMS applications by generating call graphs, procedural maps, and SQL access details for languages including COBOL, Fortran, BASIC, C++, and Pascal, aiding analysis and modernization efforts. VSI's 2025 roadmap outlines further cloud-native enhancements, including potential container and orchestration integrations, though specific details remain in the planning phase.

Hobbyist and Community Programs

The OpenVMS Hobbyist Program originated in 1997 under Digital Equipment Corporation and continued under Compaq Computer Corporation, providing free access to the operating system and certain layered products for personal, non-commercial educational use on VAX hardware. The initiative aimed to foster learning and experimentation among enthusiasts following the decline in direct vendor support for legacy systems. The program persisted through Hewlett-Packard's acquisition of Compaq in 2002 and HPE's stewardship until its termination in March 2020. In April 2020, VMS Software, Inc. (VSI) launched the Community License Program as a successor, extending free licenses to hobbyists, students, and non-commercial users for OpenVMS on Alpha, Integrity (Itanium), and x86-64 platforms to support ongoing education and development. VSI, founded in 2014 to steward OpenVMS after HPE's divestiture, further expanded access in 2023 by including pre-configured x86-64 virtual machine images in the program. However, in March 2024, VSI announced significant restrictions: new licenses for Alpha and Integrity were discontinued, with existing Alpha licenses renewable only until March 2025 and Integrity licenses until December 2025. As of November 2025, Alpha licenses are no longer renewable, while Integrity licenses remain renewable until December 2025, shifting the program's focus exclusively to x86-64 for future hobbyist use.

Eligibility for the VSI Community License requires applicants to affirm non-commercial intent, such as personal learning, open-source contributions, or knowledge sharing, with annual renewal mandatory and strict prohibitions against production or revenue-generating use. The provided x86-64 images are pre-installed virtual machines configured with 2 virtual CPUs and 12 GB of RAM, suitable for use on hypervisors such as VMware, though users may adjust host resources within license terms. Applications are submitted via VSI's online form, granting access to download kits after approval.

The OpenVMS community thrives through dedicated resources, including the official VSI OpenVMS Forum, where users discuss installation, troubleshooting, and best practices across topics like virtualization and programming. Hobbyists contribute to repositories hosting ports of open-source software, such as cryptographic and networking libraries adapted for secure communications on OpenVMS, enabling integration with modern tools. Open-source emulator projects allow emulation of VAX and Alpha hardware on contemporary platforms like Linux or Windows, facilitating access to historical OpenVMS versions without proprietary equipment. Community engagement is bolstered by events such as the 2025 OpenVMS Bootcamp, held October 22–24, which featured demonstrations of tools such as VMS/XDE, a cross-development environment for building OpenVMS applications on non-VMS hosts without emulation overhead. These gatherings promote hands-on learning, networking among developers, and showcases of community-driven innovations. Overall, the program preserves OpenVMS expertise, supports educational initiatives, and encourages ongoing contributions to its ecosystem amid evolving hardware landscapes.

Release History

OpenVMS has undergone numerous releases since its inception as VMS in 1978, transitioning across architectures from VAX to Alpha, Itanium (Integrity), and now x86-64. The following table summarizes major versions, focusing on production releases and significant updates. Minor patch releases are omitted for brevity.
Version | Release Date | Primary Architecture(s) | Key Notes
V1.0 | August 1978 | VAX | Initial production release for VAX-11/780 minicomputers.
V2.0 | April 1980 | VAX | Support for VAX-11/750; enhanced system management.
V3.0 | April 1982 | VAX | Introduction of VAX-11/730 compatibility; improved performance.
V4.0 | September 1984 | VAX, MicroVAX | VAXcluster support; advanced security features.
V5.0 | May 1988 | VAX | Symmetric multiprocessing (SMP); internationalization support.
V1.0 (AXP) | November 1992 | Alpha | First port to 64-bit Alpha architecture, based on VAX V5.4-2.
V6.0 | June 1993 | VAX | Extended virtual addressing; support for VAX 7000/10000.
V7.0 | December 1995 | VAX, Alpha | 64-bit addressing on Alpha; kernel threads introduced.
V7.3 | 2001 | VAX, Alpha | Enhanced clustering and TCP/IP integration.
V8.0 | June 2003 | Alpha, Integrity | Evaluation release for Itanium (Integrity) servers.
V8.2 | February 2005 | Alpha, Integrity | First production release for Itanium; improved 64-bit support.
V8.3 | July 2008 | Alpha, Integrity | IPv6 support; enhanced security and auditing.
V8.4 | March 2010 | Alpha, Integrity | Support for newer hardware; OpenSSL integration.
V8.4-2H1 | February 2016 | Integrity | VSI's first release; support for Itanium 9500 processors.
V8.4-2L1 | August 2016 | Alpha, Integrity | OpenSSL update to v1.0.2; binary compatibility maintained.
V9.1 | June 2021 | x86-64 | Field test release for x86-64 on supported hypervisors.
V9.2 | July 2022 | x86-64 | First production release for x86-64; native support on VMware, KVM, VirtualBox.
V9.2-1 | March 2023 | x86-64 | Stability updates; AMD CPU support added.
V9.2-2 | July 2024 | x86-64 | RTL and networking enhancements.
V9.2-3 | November 2024 | x86-64 | Virtualization improvements (e.g., VMware vMotion); TCP/IP and OpenSSH updates. As of November 2025, the latest release.
Support for VAX and Alpha has been end-of-life since 2013 and 2018, respectively, though hobbyist programs allow limited continued use. Integrity support continues under VSI, with x86-64 as the focus for future development.
