VM (operating system)
| z/VM | |
|---|---|
| Developer | IBM |
| OS family | VM family |
| Working state | Current |
| Source model | 1972–1986 Open source, 1977–present Closed source |
| Initial release | 1972 |
| Latest release | IBM z/VM V7.4 / September 20, 2024[1] |
| Marketing target | IBM mainframe computers |
| Available in | English |
| Supported platforms | System/370, System/390, IBM Z |
| License | 1972–1981 Public domain, 1976–present Proprietary |
| Official website | www |

VM, often written VM/CMS, is a family of virtual machine operating systems used on IBM mainframes including the System/370, System/390, IBM Z and compatible systems. It replaced the older CP-67 that formed the basis of the CP/CMS operating system. It was first released as the free Virtual Machine Facility/370 for the S/370 in 1972, followed by chargeable upgrades[a] and versions that added support for new hardware.[d]
VM creates virtual machines into which a conventional operating system may be loaded to allow user programs to run. Originally, that operating system was CMS, a simple single-user system similar to DOS. VM can also be used with a number of other IBM operating systems, including large systems like MVS or VSE, which are often run on their own without VM. In other cases, VM is used with a more specialized operating system or even programs that provided many OS features. These include RSCS[e] and MUMPS, among others.
Design
The heart of the VM architecture is the Control Program or hypervisor, abbreviated CP, VM-CP, or, sometimes ambiguously, VM. It runs on the physical hardware and creates the virtual machine environment. VM-CP provides full virtualization of the physical machine – including all I/O and other privileged operations. It performs the system's resource-sharing, including device management, dispatching, virtual storage management, and other traditional operating system tasks. Each VM user is provided with a separate virtual machine having its own address space, virtual devices, etc., capable of running any software that could be run on a stand-alone ("bare-metal") machine. A given VM mainframe typically runs hundreds or thousands of virtual machine instances. VM-CP began life as CP-370, a reimplementation of CP-67, itself a reimplementation of CP-40.
Running within each virtual machine is another operating system, a guest operating system. This might be:
- CMS (Conversational Monitor System, renamed from the Cambridge Monitor System of CP/CMS). Most virtual machines run CMS, a lightweight, single-user operating system. Its interactive environment is comparable to that of a single-user PC, including a file system, programming services, device access, and command-line processing. (While an earlier version of CMS was uncharitably described as "CP/M on a mainframe", the comparison is an anachronism; the author of CP/M, Gary Kildall, was an experienced CMS user.)
- GCS (Group Control System), which provides a limited simulation of the MVS API. IBM originally provided GCS in order to run VTAM without a service OS/VS1 virtual machine and VTAM Communications Network Application (VCNA). RSCS V2 also ran under GCS.
- A mainstream operating system. IBM's mainstream operating systems (e.g., the MVS and DOS/VSE families, OS/VS1, TSS/370, or another layer of VM/370 itself (see below)) can be loaded and run without modification. The VM hypervisor treats guest operating systems as application programs with exceptional privileges – it prevents them from directly using privileged instructions (those which would let applications take over the whole system or significant parts of it), but simulates privileged instructions on their behalf. Most mainframe operating systems terminate a normal application which tries to usurp the operating system's privileges. The VM hypervisor can simulate several types of console terminals for the guest operating system, such as the hardcopy line-mode 3215, the graphical 3270 family, and the integrated console on newer System/390 and IBM Z machines. Other users can then access running virtual machines using the DIAL command at the logon screen, which will connect their terminal to the first available emulated 3270 device, or the first available 2703 device if the user is DIALing from a typewriter terminal.
- Another copy of VM. A second level instance of VM can be fully virtualized inside a virtual machine. This is how VM development and testing is done (a second-level VM can potentially implement a different virtualization of the hardware). This technique was used to develop S/370 software before S/370 hardware was available, and it has continued to play a role in new hardware development at IBM. The literature cites practical examples of virtualization five levels deep.[2] Levels of VM below the top are also treated as applications but with exceptional privileges.
- A copy of the mainframe version of AIX or Linux. In the mainframe environment, these operating systems often run under VM, and are handled like other guest operating systems. (They can also run as 'native' operating systems on the bare hardware.) There was also the short-lived IX/370, as well as S/370 and S/390 versions of AIX (AIX/370 and AIX/ESA).
- A specialized VM subsystem. Several non-CMS systems run within VM-CP virtual machines, providing services to CMS users such as spooling, interprocess communications, specialized device support, and networking. They operate behind the scenes, extending the services available to CMS without adding to the VM-CP control program. By running in separate virtual machines, they receive the same security and reliability protections as other VM users. Examples include:
- RSCS (Remote Spooling and Communication Subsystem, aka VNET) – communication and information transfer facilities between virtual machines and other systems[3]
- RACF (Resource Access Control Facility) — a security system
- Shared File System (SFS), which organizes shared files in a directory tree (the servers are commonly named "VMSERVx")
- VTAM (Virtual Telecommunications Access Method) – a facility that provides support for a Systems Network Architecture network
- PVM (VM/Pass-Through Facility) – a facility that provides remote access to other VM systems
- TCPIP, SMTP, FTPSERVE, PORTMAP, VMNFS – a set of service machines that provide TCP/IP networking to VM/CMS
- Db2 Server for VM – a SQL database system, the servers are often named similarly to "SQLMACH" and "SQLMSTR"
- DIRMAINT – A simplified user directory management system (the directory is a listing of every account on the system, including virtual hardware configuration, user passwords, and minidisks).
- MUMPS/VM — an implementation of the MUMPS database and programming language which could run as guest on VM/370.[4] MUMPS/VM was introduced in 1987 and discontinued in 1991.[5]
- A user-written or modified operating system, such as National CSS's CSS or Boston University's VPS/VM.
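The trap-and-emulate scheme described above (a guest's unprivileged instructions run directly on the hardware, while privileged instructions cause a fault that the hypervisor intercepts and simulates on the guest's behalf) can be sketched in a few lines of Python. The instruction names and state layout below are invented for illustration and bear no relation to CP's actual dispatch logic.

```python
# Toy trap-and-emulate loop: the "hardware" raises a fault whenever a
# guest running in problem state issues a privileged instruction, and
# the hypervisor simulates that instruction on the guest's behalf.

PRIVILEGED = {"LPSW", "SSM", "SIO"}  # illustrative subset of privileged ops

class PrivilegedOpFault(Exception):
    def __init__(self, op):
        self.op = op

def execute(op, state):
    """Run one instruction; problem-state guests trap on privileged ops."""
    if op in PRIVILEGED and state["mode"] == "problem":
        raise PrivilegedOpFault(op)
    state["trace"].append(("direct", op))

def hypervisor_run(program):
    """Dispatch a guest, intercepting and simulating privileged instructions."""
    state = {"mode": "problem", "trace": []}
    for op in program:
        try:
            execute(op, state)
        except PrivilegedOpFault as fault:
            # The hypervisor simulates the instruction against the guest's
            # *virtual* machine state instead of the real machine.
            state["trace"].append(("simulated", fault.op))
    return state["trace"]

trace = hypervisor_run(["AR", "LPSW", "L"])
print(trace)  # [('direct', 'AR'), ('simulated', 'LPSW'), ('direct', 'L')]
```

The key property this models is that a well-behaved guest cannot tell whether a privileged instruction was executed by the hardware or simulated by CP.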
Versions
The following versions are known:
- Virtual Machine Facility/370
- VM/370, released in 1972, is a System/370 reimplementation of the earlier CP/CMS operating system.
- VM/370 Basic System Extensions Program Product
- VM/BSE (BSEPP) is an enhancement to VM/370 that adds support for more devices (such as 3370-type fixed-block-architecture DASD drives), improvements to the CMS environment (such as an improved editor), and some stability enhancements to CP.
- VM/370 System Extensions Program Product
- VM/SE (SEPP) is an enhancement to VM/370 that includes the facilities of VM/BSE, as well as a few additional fixes and features.
- Virtual Machine/System Product
- VM/SP, a milestone version, replaces VM/370, VM/BSE and VM/SE. Release 1 added EXEC2 and XEDIT System Product Editor; Release 3 added REXX; Release 6 added the shared filesystem.[6]
- Virtual Machine/System Product High Performance Option
- VM/SP HPO adds additional device support and functionality to VM/SP, and allows certain S/370 machines that can utilize more than 16 MB of real storage to do so, up to 64 MB. This version was intended for users that would be running multiple S/370 guests at once.[7][8]
- Virtual Machine/Extended Architecture Migration Aid
- VM/XA MA is intended to ease the migration from MVS/370 to MVS/XA by allowing both to run concurrently on the same processor complex.[9]
- Virtual Machine/Extended Architecture System Facility
- VM/XA SF is an upgraded VM/XA MA with improved functionality and performance.[10]
- Virtual Machine/Extended Architecture System Product
- VM/XA SP is an upgraded VM/XA SF with improved functionality and performance, offered as a replacement for VM/SP HPO on machines supporting S/370-XA. It includes a version of CMS that can run in either S/370 or S/370-XA mode.[11]
- Virtual Machine/Enterprise Systems Architecture
- VM/ESA provides the facilities of VM/SP, VM/SP HPO and VM/XA SP. VM/ESA version 1 can run in S/370, ESA/370 or ESA/390 mode; it does not support S/370 XA mode. Version 2 only runs in ESA/390 mode. The S/370-capable versions of VM/ESA were actually their own separate version from the ESA/390 versions of VM/ESA, as the S/370 versions are based on the older VM/SP HPO codebase, and the ESA/390 versions are based on the newer VM/XA codebase.[12]
- z/VM
- z/VM, the latest version, is still widely used as one of the main full virtualization solutions for the mainframe market.[citation needed] z/VM 4.4 was the last version that could run in ESA/390 mode; subsequent versions only run in z/Architecture mode.[13]
The CMS in the name refers to the Conversational Monitor System, a component of the product that is a single-user operating system that runs in a virtual machine and provides conversational time-sharing in VM.
Hypervisor interface
IBM coined the term hypervisor for the 360/65[14] and later used it for the DIAG handler of CP-67.
The Diagnose instruction ('83'x—no mnemonic) is a privileged instruction originally intended by IBM to perform "built-in diagnostic functions, or other model-dependent functions."[15] IBM repurposed DIAG for "communication between a virtual machine and CP."[16][17] The instruction contains two four-bit register numbers, called Rx and Ry, which can "contain operand storage addresses or return codes passed to the DIAGNOSE interface," and a two-byte code "that CP uses to determine what DIAGNOSE function to perform."[16] The available diagnose functions include:
| Hexadecimal code | Function |
|---|---|
| 0000 | Store Extended-Identification Code |
| 0004 | Examine Real Storage |
| 0008 | Virtual Console Function—Execute a CP command |
| 0018 | Standard DASD I/O |
| 0020 | General I/O—Execute any valid CCW chain on a tape or disk device |
| 003C | Update the VM/370 directory |
| 0058 | 3270 Virtual Console Interface—perform full-screen I/O on an IBM 3270 terminal |
| 0060 | Determine Virtual Machine Storage Size |
| 0068 | Virtual Machine Communication Facility (VMCF) |
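As a concrete illustration of the instruction layout described above (one '83'x opcode byte, a byte packing the four-bit Rx and Ry register numbers, and a two-byte code), the following Python sketch decodes a 4-byte DIAGNOSE image. The sample operand values are invented for the example.

```python
# Decode a 4-byte DIAGNOSE ('83'x) instruction image into its parts:
# byte 0 is the opcode, byte 1 packs the four-bit Rx and Ry register
# numbers, and bytes 2-3 hold the code CP uses to select the function.

def decode_diagnose(insn: bytes):
    if len(insn) != 4 or insn[0] != 0x83:
        raise ValueError("not a DIAGNOSE instruction")
    rx = insn[1] >> 4                        # first four-bit register number
    ry = insn[1] & 0x0F                      # second four-bit register number
    code = int.from_bytes(insn[2:4], "big")  # two-byte DIAGNOSE code
    return rx, ry, code

# Invented example: registers 0 and 2, code X'0008'
# (Virtual Console Function, i.e. execute a CP command).
rx, ry, code = decode_diagnose(bytes([0x83, 0x02, 0x00, 0x08]))
print(rx, ry, format(code, "04X"))  # 0 2 0008
```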
At one time, CMS was capable of running on a bare machine, as a true operating system (though such a configuration would be unusual). It now runs only as a guest OS under VM. This is because CMS relies on a hypervisor interface to VM-CP, to perform file system operations and request other VM services. This paravirtualization interface:
- Provides a fast path to VM-CP, to avoid the overhead of full simulation.
- Was first developed as a performance improvement for CP/CMS release 2.1, an important early milestone in CP's efficiency.
- Uses a non-virtualized, model-dependent machine instruction as a signal between CMS and CP: DIAG (diagnose).
Minidisks

CMS and other operating systems often have DASD requirements much smaller than the sizes of actual volumes. For this reason CP allows an installation to define virtual disks of any size up to the capacity of the device. For CKD volumes, a minidisk must be defined in full cylinders. A minidisk has the same attributes as the underlying real disk, except that it is usually smaller and the beginning of each minidisk is mapped to cylinder or block 0. The minidisk may be[f] accessed using the same channel programs as the real disk.
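The cylinder relocation just described (the guest sees cylinder 0 where the minidisk actually begins on the real volume) can be illustrated with a short Python sketch. The class and field names are invented for illustration and do not reflect CP's internal control blocks.

```python
# Sketch of minidisk address translation for a CKD device: a minidisk
# defined to start at real cylinder `real_start` with `size` full
# cylinders presents itself to the guest as a disk whose cylinder 0 is
# that starting real cylinder.

from dataclasses import dataclass

@dataclass
class Minidisk:
    real_start: int   # first real cylinder occupied by the minidisk
    size: int         # extent, in full cylinders

    def to_real(self, virtual_cyl: int) -> int:
        """Translate a guest cylinder number to a real cylinder number."""
        if not 0 <= virtual_cyl < self.size:
            raise ValueError("cylinder outside minidisk extent")
        return self.real_start + virtual_cyl

mdisk = Minidisk(real_start=100, size=50)  # real cylinders 100-149
print(mdisk.to_real(0))    # 100 -- the guest's cylinder 0
print(mdisk.to_real(49))   # 149 -- last cylinder of the extent
```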
A minidisk that has been initialized with a CMS file system is referred to as a CMS minidisk, although CMS is not the only system that can use them.
It is common practice to define full volume minidisks for use by such guest operating systems as z/OS instead of using DEDICATE to assign the volume to a specific virtual machine. In addition, "full-pack links" are often defined for every DASD on the system, and are owned by the MAINT userid. These are used for backing up the system using the DASD Dump/Restore program, where the entire contents of a DASD are written to tape (or another DASD) exactly.
Shared File System
With modern VM versions, most of the system can be installed to SFS, with the few remaining minidisks being the ones absolutely necessary for the system to start up, and the ones being owned by the filepool server machines.

VM/SP Release 6 introduced the Shared File System,[18] which vastly improved CMS file storage capabilities. The CMS minidisk file system does not support directories (folders) at all; SFS does. SFS also introduces more granular security: with CMS minidisks, the system can be configured to allow or deny users read-only or read-write access to a disk, but individual files cannot be protected separately. SFS removes this limitation and vastly improves performance.
The SFS is provided by service virtual machines. On a modern VM system, there are usually three that are required: VMSERVR, the "recovery machine" that does not actually serve any files; VMSERVS, the server for the VMSYS filepool; and VMSERVU, the server for the VMSYSU (user) filepool.[19] The file pool server machines own several minidisks, usually including a CMS A-disk (virtual device address 191, containing the file pool configuration files), a control disk, a log disk, and any number of data disks that actually store user files.
If a user account is configured to only use SFS (and does not own any minidisks), the user's A-disk will be FILEPOOL:USERID. and any subsequent directories that the user creates will be FILEPOOL:USERID.DIR1.DIR2.DIR3 where the equivalent UNIX file path is /dir1/dir2/dir3. SFS directories can have much more granular access controls when compared to minidisks (which, as mentioned above, can often only have a read password, a write password, and a multi-write password). SFS directories also solve the issues that may arise when two users write to the same CMS minidisk at the same time, which may cause disk corruption (as the CMS VM performing the writes may be unaware that another CMS instance is also writing to the minidisk).
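The correspondence between SFS directory names and UNIX-style paths described above can be sketched in Python. The pool and user names below are examples, and the case folding is a simplifying assumption for illustration, not SFS's exact behavior.

```python
# Illustrative mapping between an SFS directory name such as
# POOL:USERID.DIR1.DIR2 and the equivalent UNIX-style path /dir1/dir2.

def sfs_to_unix(sfs_dir: str) -> str:
    """Convert POOL:USERID.D1.D2... to /d1/d2/... (assumed case folding)."""
    _pool, rest = sfs_dir.split(":", 1)
    parts = rest.split(".")
    dirs = [p.lower() for p in parts[1:] if p]  # drop the USERID component
    return "/" + "/".join(dirs)

def unix_to_sfs(pool: str, userid: str, path: str) -> str:
    """Convert /d1/d2/... back to POOL:USERID.D1.D2..."""
    dirs = [p.upper() for p in path.strip("/").split("/") if p]
    return f"{pool}:{userid}." + ".".join(dirs)

print(sfs_to_unix("VMSYSU:MAINT.DIR1.DIR2.DIR3"))   # /dir1/dir2/dir3
print(unix_to_sfs("VMSYSU", "MAINT", "/dir1/dir2"))  # VMSYSU:MAINT.DIR1.DIR2
```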
The file pool server machines also serve a closely related filesystem: the Byte File System. BFS is used to store files on a UNIX-style filesystem. Its primary use is for the VM OpenExtensions POSIX environment for CMS. The CMS user virtual machines themselves communicate with the SFS server virtual machines through the IUCV mechanism.[20]
History

The early history of VM is described in the articles CP/CMS and History of CP/CMS. VM/370 is a reimplementation of CP/CMS, and was made available in 1972 as part of IBM's System/370 Advanced Function announcement (which added virtual memory hardware and operating systems to the System/370 series). Early releases of VM through VM/370 Release 6 continued in open source through 1981, and today are considered to be in the public domain. This policy ended in 1977 with the chargeable VM/SE and VM/BSE upgrades and in 1980 with VM/System Product (VM/SP). However, IBM continued providing updates in source form for existing code for many years, although the upgrades to all but the free base required a license. As with CP-67, privileged instructions in a virtual machine cause a program interrupt, and CP simulated the behavior of the privileged instruction.
VM remained an important platform within IBM, used for operating system development and time-sharing use; but for customers it remained IBM's "other operating system". The OS and DOS families remained IBM's strategic products, and customers were not encouraged to run VM. Those that did formed close working relationships, continuing the community-support model of early CP/CMS users. In the meantime, the system struggled with political infighting within IBM over what resources should be available to the project, as compared with other IBM efforts. A basic problem with the system was seen at IBM's field sales level: VM/CMS demonstrably reduced the amount of hardware needed to support a given number of time-sharing users. IBM was, after all, in the business of selling computer systems.
Melinda Varian provides this fascinating quote, illustrating VM's unexpected success:[21]
The marketing forecasts for VM/370 predicted that no more than one 168 would ever run VM during the entire life of the product. In fact, the first 168 delivered to a customer ran only CP and CMS. Ten years later, ten percent of the large processors being shipped from Poughkeepsie would be destined to run VM, as would a very substantial portion of the mid-range machines that were built in Endicott. Before fifteen years had passed, there would be more VM licenses than MVS licenses.
A PC DOS version that runs CMS on the XT/370 (and later on the AT/370) is called VM/PC. VM/PC 1.1 was based on VM/SP release 3. When IBM introduced the P/370 and P/390 processor cards, a PC could run full VM systems, including VM/370, VM/SP, VM/XA, and VM/ESA (these cards were fully compatible with S/370 and S/390 mainframes and could run operating systems from the 31-bit era, e.g., MVS/ESA, VSE/ESA).
In addition to the base VM/SP releases, IBM also introduced VM/SP HPO (High Performance Option). This add-on (which is installed over the base VM/SP release) improved several key system facilities, including allowing the usage of more than 16 MB of storage (RAM) on supported models (such as the IBM 4381). With VM/SP HPO installed, the new limit was 64 MB; however, a single user (or virtual machine) could not use more than 16 MB. The functions of the spool filesystem were also improved, allowing 9900 spool files to be created per user, rather than 9900 for the whole system. The architecture of the spool filesystem was also enhanced, each spool file now had a unique user ID associated with it, and reader file control blocks were now held in virtual storage. The system could also be configured to deny certain users access to the vector facility (by means of user directory entries).[7]
Releases of VM since VM/SP Release 1 supported multiprocessor systems. System/370 versions of VM (such as VM/SP and VM/SP HPO) supported a maximum of two processors, with the system operating in either UP (uniprocessor) mode, MP (multiprocessor) mode, or AP (attached processor) mode.[22] AP mode is the same as MP mode, except the second processor lacks I/O capability. System/370-XA releases of VM (such as VM/XA) supported more. System/390 releases (such as VM/ESA) almost removed the limit entirely, and some modern z/VM systems can have as many as 80 processors.[23] The per-VM limit for defined processors is 64.
When IBM introduced the System/370 Extended Architecture on the 3081, customers were faced with the need to run a production MVS/370 system while testing MVS/XA on the same machine. IBM's solution was VM/XA Migration Aid, which used the new Start Interpretive Execution (SIE) instruction to run the virtual machine. SIE automatically handled some privileged instructions and returned to CP for cases that it couldn't handle. The Processor Resource/System Manager (PR/SM) of the later 3090 also used SIE. There were several VM/XA products before it was eventually supplanted by VM/ESA and z/VM.
In addition to RSCS networking, IBM also provided users with VTAM networking. ACF/VTAM for VM was fully compatible with ACF/VTAM on MVS and VSE.[24] Like RSCS, VTAM on VM ran under the specialized GCS operating system. However, VM also supported TCP/IP networking. In the late 1980s, IBM produced a TCP/IP stack for VM/SP and VM/XA.[25] The stack supported IPv4 networks, and a variety of network interface systems (such as inter-mainframe channel-to-channel links, or a specialized IBM RT PC that would relay traffic out to a Token Ring or Ethernet network). The stack provided support for Telnet connections, from either simple line-mode terminal emulators or VT100-compatible emulators, or proper IBM 3270 terminal emulators. The stack also provided an FTP server. IBM also produced an optional NFS server for VM; early versions were rather primitive, but modern versions are much more advanced.[26]
There was also a fourth networking option, known as VM/Pass-Through Facility (or more commonly called, PVM). PVM, like VTAM, allowed for connections to remote VM/CMS systems, as well as other IBM systems.[27] If two VM/CMS nodes were linked together over a channel-to-channel link or bisync link (possibly using a dialup modem or leased line), a user could remotely connect to either system by entering "DIAL PVM" on the VM login screen, then entering the system node name (or choosing it from a list of available nodes). Alternatively, a user running CMS could use the PASSTHRU program that was installed alongside PVM, allowing for quick access to remote systems without having to log out of the user's session. PVM also supported accessing non-VM systems, by utilizing a 3x74 emulation technique. Later releases of PVM also featured a component that could accept connections from a SNA network.
VM was also the cornerstone operating system of BITNET, as the RSCS system available for VM provided a simple network that was easy to implement and somewhat reliable. VM sites were interlinked by an RSCS virtual machine on each system communicating with its peers, and users could send and receive messages, files, and batch jobs through RSCS. The "NOTE" command used XEDIT to present a dialog for composing an email, which the user could then send. If the user specified an address in the form of user at node, the email file was delivered to RSCS, which then delivered it to the target user on the target system. If the site had TCP/IP installed, RSCS could work with the SMTP service machine to deliver notes (emails) to remote systems, as well as receive them. If the user specified user at some.host.name, the NOTE program delivered the email to the SMTP service machine, which then routed it out to the destination site on the Internet.
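The delivery decision described above can be sketched as follows. The dotted-host-name heuristic is an assumption made for illustration, not the actual NOTE implementation.

```python
# Hedged sketch of NOTE's delivery choice: an address of the form
# "user at node" goes to RSCS for BITNET-style delivery, while a dotted
# Internet host name is handed to the SMTP service machine instead.

def route_note(address: str) -> str:
    user, _, host = address.partition(" at ")
    if not host:
        return "LOCAL"                      # no node given: deliver locally
    return "SMTP" if "." in host else "RSCS"

print(route_note("OPERATOR at NODE1"))        # RSCS
print(route_note("alice at vm.example.edu"))  # SMTP
```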
VM's role changed within IBM when hardware evolution led to significant changes in processor architecture. Backward compatibility remained a cornerstone of the IBM mainframe family, which still uses the basic instruction set introduced with the original System/360; but the need for efficient use of the 64-bit zSeries made the VM approach much more attractive. VM was also utilized in data centers converting from DOS/VSE to MVS and is useful when running mainframe AIX and Linux, platforms that were to become increasingly important. The current z/VM platform has finally achieved the recognition within IBM that VM users long felt it deserved. Some z/VM sites run thousands of simultaneous virtual machine users on a single system. z/VM was first released in October 2000[28] and remains in active use and development.
IBM and third parties have offered many applications and tools that run under VM. Examples include RAMIS, FOCUS, SPSS, NOMAD, DB2, REXX, RACF, and OfficeVision. Current VM offerings run the gamut of mainframe applications, including HTTP servers, database managers, analysis tools, engineering packages, and financial systems.
CP commands
As of Release 6, the VM/370 Control Program has a number of commands for general users, concerned with defining and controlling the user's virtual machine. The lower-case portions of each command name are optional.[29]
| Command | Description |
|---|---|
| #CP | Allows the user to issue a CP command from a command environment, or any other virtual machine after pressing the break key (defaults to PA1) |
| ADSTOP | Sets an address stop to halt the virtual machine at a specific instruction |
| ATTN | Causes an attention interruption allowing CP to take control in a command environment |
| Begin | Continue or resume execution of the user's virtual machine, optionally at a specified address |
| CHange | Alter attributes of a spool file or files. For example, the output class or the name of the file can be changed, or printer-specific attributes set |
| Close | Closes an open printer, punch, reader, or console file and releases it to the spooling system |
| COUPLE | Connect a virtual channel-to-channel adapter (CTCA) to another. Also used to connect simulated QDIO Ethernet cards to a virtual switch. |
| CP | Execute a CP command in a CMS environment |
| DEFine | Alter the current virtual machine configuration. Add virtual devices or change available storage size |
| DETach | Remove a virtual device or channel from the current configuration |
| DIAL | Connect your terminal at the logon screen to a logged-on multi-access virtual machine's simulated 3270 or typewriter terminals |
| DISConn | Disconnect your terminal while allowing your virtual machine to continue running |
| Display | Display virtual machine storage or (virtual) hardware registers |
| DUMP | Print a snapshot dump of the current virtual machine on the virtual spooled printer |
| ECHO | Set the virtual machine to echo typed lines |
| EXTernal | Cause an external interrupt to the virtual machine |
| INDicate | Display current system load or your resource usage |
| Ipl | IPL (boot) an operating system on your virtual machine |
| LINK | Attach a device from another virtual machine, if that machine's definition allows sharing |
| LOADVFCB | Specify a forms control buffer (FCB) for a virtual printer |
| LOGoff LOGout | Terminate execution of the current virtual machine and disconnect from the system |
| Logon Login | Sign on to the system |
| Message MSG | Send a one-line message to the system operator or another user |
| NOTReady | Cause a virtual device to appear not ready |
| ORDer | Reorder closed spool files by ID or class |
| PURge | Delete closed spool files for a device by class, ID, or ALL |
| Query | Display status information for your virtual machine, or the message of the day, or number or names of logged-in users |
| READY | Cause a device end interruption for a device |
| REQuest | Cause an interrupt on your virtual console |
| RESET | Clear all pending interrupts for a device |
| REWind | Rewind a real (non-virtual) magnetic tape unit |
| SET | Set various attributes for your virtual machine, including messaging or terminal function keys |
| SLeep | Place your virtual machine in a dormant state indefinitely or for a specified period of time |
| SMsg | Send a one-line special message to another virtual machine (usually used to control the operation of the virtual machine; commonly used with RSCS) |
| SPool | Set options for a spooled virtual device (printer, reader, or punch) |
| STore | Alter the contents of registers or storage of your virtual machine |
| SYStem | Reset or restart your virtual machine or clear storage |
| TAg | Set a tag associated with a spooled device or file. The tag is usually used by VM's Remote Spooling Communications Subsystem (RSCS) to identify the destination of a file |
| TERMinal | Set characteristics of your terminal |
| TRace | Start or stop tracing of specified virtual machine activities |
| TRANsfer | Transfer a spool file to or from another user |
| VMDUMP | Dump your virtual machine in a format readable by the Interactive Problem Control System (IPCS) program product |
OpenEdition extensions
Starting with VM/ESA Version 2, IBM introduced the chargeable optional feature OpenEdition for VM/ESA Shell and Utilities Feature,[30] which provides POSIX compatibility for CMS. The stand-out feature was a UNIX shell for CMS. The C compiler for this UNIX environment is provided by either C/370 or C for VM/ESA. Neither the CMS filesystem nor the standard VM Shared File System has any support for UNIX-style files and paths; instead, the Byte File System is used. Once a BFS extent is created in an SFS file pool, the user can mount it with OPENVM MOUNT /../VMBFS:fileservername:filepoolname /path/to/mount/point. The user must also mount the root filesystem, with OPENVM MOUNT /../VMBFS:VMSYS:ROOT/ /; a shell can then be started with OPENVM SHELL. Unlike the normal SFS, access to BFS filesystems is controlled by POSIX permissions (with chmod and chown).
Starting with z/VM Version 3, IBM integrated OpenEdition into z/VM[13] and renamed it OpenExtensions. OpenEdition and OpenExtensions provide POSIX.2 compliance for CMS.[31] Programs compiled to run under the OpenExtensions shell are stored in the same format as standard CMS executable modules. Full-screen visual editors such as vi are unavailable, as the block-mode 3270 terminal architecture does not support them; users can use ed or XEDIT instead.
Marketing
In the early 1980s, the VM group within SHARE (the IBM user group) sought a mascot or logo for the community to adopt. This was in part a response to IBM's MVS users selecting the turkey as a mascot (chosen, according to legend, by the MVS Performance Group in the early days of MVS, when its performance was a sore topic). In 1983, the teddy bear became VM's de facto mascot at SHARE 60, when teddy bear stickers were attached to the nametags of "cuddlier oldtimers" to flag them for newcomers as "friendly if approached". The bears were a hit and soon appeared widely.[32] Bears were awarded to inductees of the "Order of the Knights of VM", individuals who made "useful contributions" to the community.[33][34]
Notes
- ^ Virtual Machine Facility/370 Basic System Extensions Program Product (VM/BSE, BSEPP) and Virtual Machine Facility/370 System Extensions Program Product (VM/SE, SEPP), installed on top of VM/370
- ^ Including ESA-capable hardware in XA mode.
- ^ Including z-capable hardware in ESA/390 mode.
- ^ For processor architecture, the requirements are
- ^ Introduced in VM/370 Release 2.
- ^ CMS can use DIAG for I/O on CMS file systems.
References
- ^ "Introducing IBM z/VM 7.4". August 6, 2024.
- ^ Varian, Melinda (April 1991). "VM AND THE VM COMMUNITY: Past, Present, and Future" (PDF). p. 55. Archived (PDF) from the original on August 23, 2022. Retrieved June 9, 2022.
- ^ Creasy, op. cit., p. 483 — role of RSCS.
- ^ "Two versions of MUMPS out". Computerworld. Vol. XXI, no. 48. November 30, 1987. Archived from the original on March 6, 2023. Retrieved July 9, 2022.
- ^ "Licensed Products Migration Matrix for z/VM" (PDF). IBM. December 2, 2009. Archived (PDF) from the original on August 10, 2022. Retrieved July 9, 2022.
- ^ Elliott, Jim (August 17, 2004). The Evolution of IBM Mainframes and VM (PDF). SHARE August 2004. Linux for S/390 Linux for Big Iron. SHARE. Session 9140. Archived (PDF) from the original on October 13, 2006. Retrieved October 21, 2007.
- ^ a b Virtual Machine/System Product High Performance Option Release 5 Guide (PDF). IBM. July 1987. SC23-0189-3. Archived (PDF) from the original on June 17, 2022. Retrieved August 19, 2021.
- ^ VM/SYSTEM PRODUCT HIGH PERFORMANCE OPTION ANNOUNCED. Announcement Letters. IBM. October 21, 1981. ZP81-0805. Retrieved August 20, 2025.
- ^ VIRTUAL MACHINE/EXTENDED ARCHITECTURE MIGRATION AID. Announcement Letters. IBM. October 21, 1981. ZP81-0811. Retrieved August 20, 2025.
- ^ VIRTUAL MACHINE/EXTENDED ARCHITECTURE (VM/XA) SYSTEMS FACILITY. Announcement Letters. IBM. February 12, 1985. 285-044. Retrieved August 20, 2025.
- ^ VIRTUAL MACHINE/EXTENDED ARCHITECTURE SYSTEM PRODUCT (VM/XA SP) RELEASE 1. Announcement Letters. IBM. June 11, 1987. 287-239. Retrieved August 20, 2025.
- ^ VIRTUAL MACHINE/ENTERPRISE SYSTEMS ARCHITECTURE VERSION 1 RELEASE 1.0 AND VERSION 1 RELEASE 1.1. Announcement Letters. IBM. September 5, 1990. 290-499. Retrieved August 20, 2025.
- ^ a b z/VM V3R1 Enabled for 64-bit Architecture. Announcement Letters. IBM. October 3, 2000. 200-358. Retrieved August 14, 2025.
- ^ Gary R. Allred (May 1971). System/370 integrated emulation under OS and DOS (PDF). 1971 Spring Joint Computer Conference. Vol. 38. AFIPS Press. p. 164. doi:10.1109/AFIPS.1971.58. Archived (PDF) from the original on July 25, 2018. Retrieved June 12, 2022.
The Hypervisor concept was relatively simple. It consisted of an addendum to the emulator program and a hardware modification on a Model 65 having a compatibility feature. The hardware modification divided the Model 65 into partitions, each addressable from 0-n. The program addendum, having overlaid the system Program Status Words (PSW) with its own, became the interrupt handler for the entire system. After determining which partition had initiated the event causing the interrupt, control was transferred accordingly. The Hypervisor required dedicated I/O devices for each partition and, because of this, the I/O configurations were usually quite large, and, therefore, prohibitive to the majority of users.
- ^ IBM System/370 Principles of Operation (PDF). IBM. 1987. pp. 10–5. Archived (PDF) from the original on September 29, 2019. Retrieved August 17, 2019.
- ^ a b "DIAGNOSE Instruction in a Virtual Machine" (PDF). IBM Virtual Machine Facility/370: System Programmer's Guide (PDF) (Eighth ed.). IBM. March 1979. GC20-1807-7. Archived (PDF) from the original on April 2, 2020. Retrieved August 17, 2019.
- ^ "Chapter 1. The DIAGNOSE Instruction in a Virtual Machine" (PDF). z/VM Version 7 Release 2 CP Programming Services (PDF). IBM. August 12, 2020. p. 3. SC24-6272-04. Archived (PDF) from the original on April 30, 2021. Retrieved May 9, 2021.
In a real processor, the DIAGNOSE instruction performs processor-dependent diagnostic functions. In a virtual machine, you use the DIAGNOSE interface to request that CP perform services for your virtual machine. When your virtual machine attempts to execute a DIAGNOSE instruction, control is returned to CP. CP uses information provided in the code portion of the instruction to determine what service it should perform. Once this service is provided, control returns to the virtual machine.
- ^ Virtual Machine/System Product CMS User's Guide Release 6 (PDF). IBM. July 1988. Chapter 4 (Using the Shared File System). SC19-6210-05. Archived (PDF) from the original on June 17, 2022. Retrieved August 19, 2021.
- ^ "File Pool Server Machines" (PDF). CMS File Pool Planning, Administration, and Operation (PDF). z/VM 7.2. IBM. November 12, 2021. pp. 18–23. SC24-6261-02. Archived (PDF) from the original on October 6, 2022. Retrieved June 10, 2022.
- ^ "IUCV Overview". www.ibm.com. Archived from the original on July 31, 2022. Retrieved July 31, 2022.
- ^ Varian, "VM and the VM Community", p. 30 – extent of VM use; more VM licenses than MVS licenses
- ^ Virtual Machine/System Product Installation Guide Release 5 (PDF). IBM. December 1986. SC24-5237-3. Archived (PDF) from the original on June 17, 2022. Retrieved August 19, 2021.
- ^ "VM66265: z/VM Support for 80 Logical Processors". IBM. August 27, 2020. Archived from the original on August 19, 2021. Retrieved August 19, 2021.
- ^ VTAM Reference Summary Version 3 Release 3 for MVS, VM, and VSE/ESA (PDF). IBM. September 1990. LY43-0047-1. Archived (PDF) from the original on August 19, 2021. Retrieved August 19, 2021.
- ^ IBM 9370 LAN Volume 2 - IEEE 802.3 Support (PDF). IBM. April 1988. GG24-3227-0. Archived (PDF) from the original on August 19, 2021. Retrieved August 19, 2021.
- ^ "VM TCP/IP NFS Server Support". IBM. May 29, 2001. Archived from the original on April 26, 2021. Retrieved August 19, 2021.
- ^ VM/Pass-Through Facility Administration and Operation Version 2 (PDF). IBM. June 1993. SC24-5557-01. Archived from the original (PDF) on August 19, 2021. Retrieved August 19, 2021.
- ^ "IBM: About the z/VM Operating System". IBM z/VM virtualization technology. Vm.ibm.com. Archived from the original on July 3, 2015. Retrieved July 2, 2015.
- ^ IBM Virtual Machine Facility/370: CP Command Reference for General Users (PDF). IBM. August 1, 1979. Archived (PDF) from the original on April 2, 2020. Retrieved August 15, 2019.
- ^ "Availability: VM/ESA Version 2 Release 1.0 with OpenEdition for VM/ESA". Announcement Letters. IBM. June 12, 1995. 295-240. Retrieved August 20, 2025.
- ^ "IBM z/VM: OpenExtensions POSIX Conformance Document (GC24-6298-01)". www.ibm.com. August 21, 2020. Archived from the original on February 28, 2024. Retrieved July 31, 2022.
- ^ "Gallery of VM web GIFs". IBM z/VM site. September 13, 2001. Archived from the original on October 18, 2006.
- ^ Varian, "VM and the VM Community", p. 2 – the teddy bear story
- ^ "Explain "official VM teddy"". Mr. Alan J. Flavell. Alanflavell.org.uk. Archived from the original on March 4, 2016. Retrieved July 2, 2015.
Further reading
- Primary CP/CMS sources
- R. J. Creasy, "The origin of the VM/370 time-sharing system", IBM Journal of Research & Development, Vol. 25, No. 5 (September 1981), pp. 483–90, PDF
― perspective on CP/CMS and VM history by the CP-40 project lead, also a CTSS author
- E.W. Pugh, L.R. Johnson, and John H. Palmer, IBM's 360 and early 370 systems, MIT Press, Cambridge MA and London, ISBN 0-262-16123-0
― extensive (819 pp.) treatment of IBM's offerings during this period; the limited coverage of CP/CMS in such a definitive work is telling
- Melinda Varian, VM and the VM community: past, present, and future, SHARE 89 Sessions 9059–61, 1997
― an outstanding source for CP/CMS and VM history
- Bitsavers, Index of /pdf/ibm/360/cp67
- Additional CP/CMS sources
- R. J. Adair, R. U. Bayles, L. W. Comeau and R. J. Creasy, A Virtual Machine System for the 360/40, IBM Corporation, Cambridge Scientific Center Report No. 320‐2007 (May 1966)
― a seminal paper describing implementation of the virtual machine concept, with descriptions of the customized CSC S/360-40 and the CP-40 design
- International Business Machines Corporation, CP-67/CMS, Program 360D-05.2.005, IBM Program Information Department (June 1969)
― IBM's reference manual
- R. A. Meyer and L. H. Seawright, "A virtual machine time-sharing system," IBM Systems Journal, Vol. 9, No. 3, pp. 199–218 (September 1970)
― describes the CP-67/CMS system, outlining features and applications
- R. P. Parmelee, T. I. Peterson, C. C. Tillman, and D. J. Hatfield, "Virtual storage and virtual machine concepts," IBM Systems Journal, Vol. 11, No. 2 (June 1972)
- Background CP/CMS sources
- F. J. Corbató, et al., The Compatible Time-Sharing System, A Programmer’s Guide, M.I.T. Press, 1963
- F. J. Corbató, M. Merwin-Daggett, and R. C. Daley, "An Experimental Time-sharing System," Proc. Spring Joint Computer Conference (AFIPS) 21, pp. 335–44 (1962) — description of CTSS
- F. J. Corbató and V. A. Vyssotsky, "Introduction and Overview of the MULTICS System", Proc. Fall Joint Computer Conference (AFIPS) 27, pp. 185–96 (1965)
- P. J. Denning, "Virtual Memory", Computing Surveys Vol. 2, pp. 153–89 (1970)
- J. B. Dennis, "Segmentation and the Design of Multi-Programmed Computer Systems," JACM Vol. 12, pp. 589–602 (1965)
― virtual memory requirements for Project MAC, destined for GE 645
- C. A. R. Hoare and R. H. Perrott, Eds., Operating Systems Techniques, Academic Press, Inc., New York (1972)
- T. Kilburn, D. B. G. Edwards, M. J. Lanigan, and F. H. Sumner, "One-Level Storage System", IRE Trans. Electron. Computers EC-11, pp. 223–35 (1962)
― Manchester/Ferranti Atlas
- R. A. Nelson, "Mapping Devices and the M44 Data Processing System," Research Report RC 1303, IBM Thomas J. Watson Research Center (1964)
― about the IBM M44/44X
- R. P. Parmelee, T. I. Peterson, C. C. Tillman, and D. J. Hatfield, "Virtual Storage and Virtual Machine Concepts", IBM Systems Journal, Vol. 11, pp. 99–130 (1972)
- Additional on-line CP/CMS resources
- febcm.club.fr — Information Technology Timeline Archived October 7, 2006, at the Wayback Machine, 1964–74
- www.multicians.org — Tom Van Vleck's short essay The IBM 360/67 and CP/CMS
- www.cap-lore.com — Norman Hardy's Short history of IBM's virtual machines
- www.cap-lore.com — Norman Hardy's short description of the "Blaauw Box"
External links
- Bob DuCharme, Operating Systems Handbook, Part 5: VM/CMS: a fairly detailed user's guide to VM/CMS
- E. C. Hendricks and T. C. Hartmann, "Evolution of a Virtual Machine Subsystem", IBM Systems Journal Vol. 18, pp. 111–142 (1979): RSCS design and implementation
- IBM Corporation, IBM Virtual Machine Facility/370 Introduction, GC20-1800, (1972): the original manual
- IBM Redbooks Publication – z/VM textbook
- IBM: z/VM portal
- IBM: z/VM manuals
- VM/PC documentation on bitsavers
Family tree legend: → derivation, >> strong influence, > some influence/precedence

CTSS
> IBM M44/44X
>> CP-40/CMS → CP[-67]/CMS → VM/370 → VM/SE versions → VM/SP versions → VM/XA versions → VM/ESA → z/VM
              → VP/CSS
> TSS/360
> TSO for MVT → for OS/VS2 → for MVS → ... → for z/OS
>> MULTICS and most other time-sharing platforms
History
Origins and Early Development
The development of the VM operating system traces its roots to the mid-1960s at IBM's Cambridge Scientific Center, where researchers sought innovative ways to maximize the utility of expensive mainframe hardware. In late 1964, project leader Robert Creasy, along with Les Comeau and other team members including Bob Adair, Dick Bayles, and John Harmon, initiated work on CP-40, an experimental time-sharing system designed for the IBM System/360 Model 40.[9][10] This system was motivated by the need to enable multiple users to concurrently access a single computer, addressing the limitations of batch processing on early System/360 machines and drawing inspiration from MIT's Compatible Time-Sharing System (CTSS). By providing each user with the illusion of a dedicated machine, CP-40 aimed to improve resource utilization and reduce operational costs for research environments, where a single Model 40—limited to 256 KB of memory—served a growing team of scientists and engineers.[11][12][13] CP-40 introduced foundational virtualization concepts, creating up to 14 virtual machines that emulated the full System/360 environment, each capable of running independent operating systems or applications. To overcome the Model 40's hardware constraints, the team modified the machine with an associative memory device for dynamic address translation, allowing efficient multiplexing of CPU and memory resources among virtual machines. This experimental setup not only facilitated time-sharing for interactive computing but also gathered critical data on virtualization performance, influencing subsequent IBM designs. 
By late 1966, CP-40 became operational alongside the Conversational Monitor System (CMS), a lightweight interactive environment that provided users with a simple command-line interface for program development and execution.[14][15][13] Building on CP-40's success, the project evolved into CP-67 in 1967, adapted for the newly introduced System/360 Model 67, which included built-in virtual memory support via dynamic address translation hardware. CP-67 extended the virtualization model to support more sophisticated guest operating systems, such as IBM's Time Sharing System (TSS/360), while maintaining compatibility with other System/360 OSes like OS/360 and DOS/360. This version enhanced scalability, allowing dozens of virtual machines to run concurrently on a single mainframe, further demonstrating the potential for cost-effective resource sharing in multi-user and multi-OS environments. The motivations remained centered on boosting mainframe efficiency, enabling parallel testing of software and operating systems to accelerate development cycles without requiring additional hardware investments.[12][16]

These research efforts culminated in the commercial release of VM/370 on August 2, 1972, as IBM's first production virtualization system for the System/370 family, which standardized virtual memory across its models. VM/370 integrated the mature CP hypervisor with CMS, offering a robust platform for time-sharing and virtual machine hosting that directly addressed enterprise needs for improved hardware utilization and operational flexibility. This announcement marked the transition from experimental prototypes to a widely deployable product, setting the stage for VM's enduring role in mainframe computing.[17][12][13]

Evolution and Major Releases
The evolution of VM from its experimental roots in the 1960s and 1970s transitioned into commercial maturation with the release of VM/SP on February 11, 1980, as a replacement for VM/370, incorporating spooling capabilities for unit record devices and various performance enhancements to support broader usability on System/370 hardware.[18][17] These improvements included better resource management and extensions for running legacy operating systems like DOS/VS and VSE/SP, enabling VM/SP to serve as a multi-purpose virtualization platform for mid-range servers throughout the 1980s. In 1985, IBM introduced VM/XA to leverage the System/370 Extended Architecture (System/370-XA), providing extended 31-bit addressing for virtual machines and facilitating migration from 24-bit environments.[19] This release addressed limitations in memory addressing, allowing VM to support larger virtual storage configurations and improved I/O handling on XA-compatible hardware.[12] A key milestone came in 1989 with VM/SP HPO Release 6, the High Performance Option, which optimized VM for running MVS as a guest operating system through enhanced single-system image support, faster paging, and resource allocation tuned for large-scale processors.[20] This integration improved overall system throughput and efficiency, particularly for environments combining VM with MVS workloads on high-end System/370 systems.[21] VM/ESA followed in 1990, aligning with the announcement of the ESA/390 architecture and introducing support for PR/SM logical partitioning, which enabled dynamic resource allocation across multiple virtual machines within partitioned mainframes.[17] This convergence of prior VM variants—VM/SP, VM/XA, and VM/HPO—into a single product enhanced scalability and compatibility with Enterprise Systems Architecture features like access registers and expanded storage.[22] By 2000, with the introduction of z/Architecture for 64-bit addressing, VM/ESA was renamed z/VM to reflect compatibility 
with the new eServer zSeries hardware, marking a shift toward supporting advanced workloads including 64-bit guests.[22][23] This rebranding, effective October 3, 2000, positioned z/VM as the flagship hypervisor for IBM's evolving mainframe ecosystem.[12]

Following the rebranding, z/VM continued to evolve with key enhancements for modern computing. Version 3.1 (2001) introduced full 64-bit support, enabling larger memory addressing and better performance for 64-bit guest operating systems like z/OS and Linux. Subsequent releases, such as z/VM 4.4 (2003) with Virtual Switch support and z/VM 6.2 (2011) introducing Single System Image clustering for high availability, adapted to IBM's zSeries, System z, and later IBM Z hardware. As of 2025, z/VM 7.4 incorporates continuous delivery for features like enhanced security, container support, and hybrid cloud integration, maintaining its role in enterprise virtualization.[12][1]

Architecture and Design
Core Components
The IBM VM operating system, now known as z/VM in its modern iterations, is built around two fundamental components: the Control Program (CP) and the Conversational Monitor System (CMS). These elements form the core of its hypervisor-based architecture, enabling the emulation of complete hardware environments for multiple virtual machines on a single physical mainframe. CP serves as the foundational virtualization layer, acting as a hypervisor that manages real hardware resources and allocates them to guest operating systems, including CMS itself and others such as z/OS or Linux distributions.[24][25] CP provides each virtual machine with the illusion of dedicated access to a full system, including virtual CPUs, memory, and I/O devices, through techniques like time-sharing and demand paging. It operates in a supervisor state to intercept and simulate hardware operations, ensuring isolation between guests while optimizing resource utilization across the physical machine. For instance, CP dynamically maps virtual storage to real memory and handles interruptions, allowing multiple operating systems to run concurrently without interference. This structure supports scalability, with modern z/VM instances capable of hosting thousands of virtual machines on zSystems hardware.[25][26] CMS, in contrast, functions as a lightweight, single-user operating system that executes within its own virtual machine provided by CP, serving as the primary interactive environment for users and administrators. It handles user-level tasks such as file management, program execution, and command processing, relying on CP for underlying hardware abstraction. The interaction between CP and CMS is symbiotic: CP delivers the virtualized infrastructure, including virtual devices and resource scheduling, while CMS manages applications and data within its isolated space, often using specialized interfaces like DIAGNOSE instructions to request services from CP. 
This division enables efficient personal computing and system administration, with CMS providing a conversational interface that has been a hallmark of VM since its origins.[24][26]

Virtualization Principles
The virtualization approach in VM, originally introduced with VM/370 and evolved in z/VM, is built on a type-1 hypervisor model where the Control Program (CP) executes directly on the bare hardware without an underlying host operating system, enabling the creation and management of multiple virtual machines.[27][28] This bare-metal execution allows CP to partition the physical system's resources into isolated virtual environments, providing each with a complete and transparent simulation of the underlying hardware architecture.[29] At its core, VM employs full virtualization, granting each virtual machine a self-contained emulation of the entire hardware stack, including the CPU, memory, and peripheral devices such as I/O channels and storage.[27] This simulation operates through a trap-and-emulate mechanism, where privileged instructions from guest operating systems are intercepted by CP and emulated to maintain the illusion of dedicated hardware, ensuring no modifications are required for the guests to run unmodified.[28] Resource isolation is paramount, preventing interference between virtual machines while allowing secure multitasking; for instance, each VM maintains its own address space and device state, fostering fault tolerance and ease of debugging.[27] VM's design emphasizes time-sharing through dynamic resource allocation, where CP schedules CPU cycles and manages memory paging to support concurrent execution across multiple VMs, optimizing utilization on high-cost mainframe hardware.[28] This enables scalability to thousands of virtual machines on modern System z servers, far exceeding the dozens supported in early implementations, by leveraging overcommitment techniques like virtual processor allocation and shared paging.[29] Key principles include broad compatibility with diverse guest operating systems, such as z/OS for batch processing, Linux distributions for open-source workloads, and the Conversational Monitor System (CMS) for interactive use, 
all without architectural alterations.[27][29] In contrast to many contemporary hypervisors on x86 architectures, VM's virtualization is natively integrated with IBM mainframe hardware features, such as the Start Interpretive Execution (SIE) instruction for efficient trapping and specialized I/O protocols like FICON, eliminating the need for binary translation or paravirtualization layers.[29] CP implements these principles by handling resource dispatching and virtualization services, directly supporting the hypervisor's foundational goals of efficiency and isolation in enterprise-scale environments.[27]

Control Program (CP)
Primary Functions
The Control Program (CP) in IBM z/VM serves as the foundational hypervisor responsible for creating, managing, and terminating virtual machines (VMs) to simulate independent computing environments for guest operating systems. VM creation occurs through entries in the z/VM user directory, which define each VM's configuration, including allocated resources and access privileges; users initiate logon via commands like LOGON, establishing the VM, while suspension and termination are handled via commands such as SUSPEND and LOGOFF, respectively, ensuring orderly resource reclamation upon logout.[30][26] CP handles resource scheduling to enable efficient sharing of physical hardware among multiple VMs, including CPU time-sharing where it dispatches virtual CPUs—up to 64 per VM—across real processors using a priority-based scheduler that supports dedicated or shared allocation modes. Memory management involves paging virtual storage to real memory, with dynamic reconfiguration allowing adjustments to main storage sizes and the use of shared segments to optimize memory utilization across VMs. For I/O device assignment, CP allocates virtual devices such as minidisks (logical partitions of DASD volumes) and spool files, mediating access to shared physical I/O resources while supporting advanced features like Parallel Access Volumes (PAV) for improved throughput.[30][26][31] System operator functions under CP include comprehensive console management, where each VM receives a virtual console (typically 3215-type) for issuing commands, and global resource control through privileged operator commands like QUERY to monitor and adjust system-wide elements such as storage and processor availability. 
These capabilities allow operators to oversee the entire z/VM environment, including dynamic reconfiguration of resources without halting operations.[30][26]

Security features in CP enforce user authentication via directory-stored passwords required at logon, supporting encrypted protocols like SSL for remote access and integration with external systems such as RACF for enhanced auditing. VM isolation is maintained through hardware-assisted mechanisms like the Interpretive Execution Facility (SIE instruction) and two-level address translation, preventing interference between VMs or unauthorized access to shared resources, with privilege classes restricting command usage to authorized users.[32][30]

Performance monitoring and tuning in CP are facilitated by built-in tools such as the CP Monitor, which collects data on CPU utilization, paging rates, and I/O activity, accessible via commands like QUERY VIRTUAL and integrated with the Performance Toolkit for z/VM to generate reports and identify bottlenecks. Tuning options include adjusting scheduler parameters for CPU shares and enabling features like collaborative memory management to optimize resource allocation dynamically.[33][31]

Hypervisor Interface
The hypervisor interface in the IBM z/VM Control Program (CP) enables guest operating systems to access underlying hardware resources and CP services through a combination of hardware-assisted virtualization facilities and software-defined calls. Central to this interface is the DIAGNOSE instruction, a privileged System/370 instruction (opcode X'83') that virtual machines use to invoke CP hypervisor services, such as resource allocation, I/O operations, and timing functions. When issued by a guest, the DIAGNOSE instruction is intercepted by CP, which interprets the code and subcode to perform the requested action, ensuring isolation and controlled access to real machine resources. This mechanism allows guests to operate as if they have direct hardware access while CP mediates all privileged operations.[34] Key hypervisor services are accessed via specific DIAGNOSE codes. For storage management, DIAGNOSE X'10' allows a guest to release pages of second-level storage, effectively altering the virtual machine's paging configuration by notifying CP of unused pages for reclamation. In I/O operations, DIAGNOSE X'24' provides guests with identifying information and status about virtual devices, including type and features, facilitating device-specific handling without direct hardware probing. I/O interception occurs transparently through CP's monitoring of guest-issued I/O instructions (e.g., SSCH for start subchannel), where CP simulates or redirects them to real devices as needed. Timer management is supported via DIAGNOSE codes like X'258' for accessing the time-of-day (TOD) clock and interval timer, enabling guests to query or set virtual timers while CP ensures accurate real-time synchronization across virtual machines. 
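The dispatch pattern described above can be sketched in miniature. The following Python model is purely illustrative (it is not z/VM code, and the handlers, class names, and return values are invented for the example): a guest's DIAGNOSE traps to the hypervisor, which routes on the function code and checks the guest's privilege class before performing the service.

```python
# Illustrative sketch of DIAGNOSE-style hypercall dispatch.
# All names and service behaviors here are hypothetical, not z/VM's.

class PrivilegeError(Exception):
    """Raised when a guest lacks the privilege class for a service."""

class Hypervisor:
    def __init__(self):
        # function code -> (allowed privilege classes, handler)
        self.services = {}

    def register(self, code, classes, handler):
        self.services[code] = (set(classes), handler)

    def diagnose(self, guest, code, *args):
        """Entry point reached when a guest's DIAGNOSE traps to CP."""
        if code not in self.services:
            raise ValueError(f"unknown DIAGNOSE code {code:#x}")
        classes, handler = self.services[code]
        if guest.priv_class not in classes:
            raise PrivilegeError(
                f"guest {guest.name} (class {guest.priv_class}) "
                f"not authorized for DIAGNOSE {code:#x}")
        return handler(guest, *args)   # CP performs the service

class Guest:
    def __init__(self, name, priv_class):
        self.name, self.priv_class = name, priv_class
        self.pages = {0: "in-use", 1: "unused"}   # toy page table

hv = Hypervisor()
# Toy services keyed by the codes mentioned in the text:
hv.register(0x10, {"G", "A"},            # release unused pages
            lambda g, page: g.pages.pop(page, None))
hv.register(0x24, {"G", "A"},            # query virtual device info
            lambda g, dev: {"device": dev, "type": "3215-console"})

user = Guest("MAINT", "G")
info = hv.diagnose(user, 0x24, 0x0009)   # service performed by CP
hv.diagnose(user, 0x10, 1)               # page 1 reclaimed by CP
```

A guest in a class outside the registered set (say class F in this toy model) would get a `PrivilegeError` instead of the service, mirroring the program checks or intercepts mentioned above.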
These calls exemplify how the interface balances guest autonomy with hypervisor oversight.[35][36][34] Access to hypervisor instructions is governed by privilege classes defined in the z/VM user directory, which categorize CP commands and DIAGNOSE codes into eight predefined classes (A through G, plus "Any"). For instance, system operator tasks (class A) may permit broad DIAGNOSE usage for resource control, while general users (class G) are restricted to virtual machine-specific calls, preventing unauthorized access to shared resources. Installations can customize classes I-Z and 1-6 for DIAGNOSE codes, but core hypervisor services require explicit authorization via the directory's PRIVCLASS statement to maintain security. Unauthorized attempts result in program checks or intercepts, enforcing the principle of least privilege in the virtualized environment.[37][38]

The interface has evolved significantly since its inception in VM/370. Introduced in 1972, VM/370 intercepted and simulated guests' privileged instructions, including I/O and storage access, entirely in software; the Start Interpretive Execution (SIE) instruction, a hardware assist that arrived with 370-XA processors, later moved much of this simulation into hardware. Subsequent releases extended this: VM/SP (1979) added support for virtual storage extensions, while VM/XA (1985) incorporated 370-XA architecture for expanded addressing. By VM/ESA (1990), the interface supported ESA/390 mode with enhanced interpretive execution for 31-bit addressing. Modern z/VM (version 7.2 and later) integrates z/Architecture extensions, introduced in 2000, providing 64-bit addressing, dynamic reconfiguration of virtual processors, and improved SIE performance for high-scale virtualization, allowing up to 64 logical processors per guest. These advancements maintain backward compatibility while optimizing for contemporary workloads.[39] Compatibility modes ensure seamless support for diverse guests.
In z/Architecture mode, guests benefit from full 64-bit capabilities, including extended addressing and cryptographic accelerations. Legacy modes, such as ESA/390, provide compatibility for 24/31-bit guests like older MVS or VSE systems, with CP emulating missing facilities via intercepts. Specialized XC modes (ESA/XC and z/XC) extend the interface for cross-memory services and data spaces, used in advanced applications like cross-system coupling. Guests can switch modes dynamically if supported, but the hypervisor enforces mode-specific restrictions to preserve integrity.[40]

Conversational Monitor System (CMS)
Core Features
The Conversational Monitor System (CMS) is designed as a lightweight, interactive operating system that operates within a single virtual address space per user, providing a streamlined environment for memory, file, and resource management without the complexity of multiple address spaces. This single-address-space approach enables direct access to virtual storage, facilitating efficient program execution and data handling for individual users.[41] CMS employs a flat file system structure, where files are identified by a three-part name consisting of a filename (up to 8 characters), filetype (up to 8 characters), and filemode (e.g., A1 or B), allowing simple organization and access without hierarchical directories.[41] The system's command-line interface supports interactive input in both uppercase and lowercase, enabling users to execute commands directly for tasks such as file operations and system queries, which promotes simplicity and rapid response in a terminal-based environment.[41] CMS supports multitasking within its single-user context through pipelines and EXEC scripts, allowing users to chain commands for background processing and automate workflows. The PIPE command facilitates pipelines by connecting multiple program stages, where output from one stage serves as input to the next, enabling efficient data processing without manual intervention; for instance, a user can pipe file listings to sorting utilities for concurrent execution. 
EXEC scripts, which can be written in the EXEC language or the more advanced REXX programming language, permit the creation of reusable command sequences that run as background jobs, supporting task automation such as batch file manipulations or repetitive queries.[42] This capability extends CMS's interactivity by allowing limited concurrency within the user's virtual machine, though it remains constrained to sequential or pipelined operations rather than full parallelism.[42] A key utility in CMS is the XEDIT editor, a full-screen, line-oriented tool that serves as the primary interface for creating, editing, and managing text files. XEDIT supports modes for input, command execution, and browsing, with features like prefix commands for deleting (D), changing (C), or locating (LOCATE) lines, as well as scrolling commands such as TOP and BOTTOM for navigation.[41] Built-in utilities complement XEDIT by providing essential file manipulation tools, including FILELIST for listing files, COPYFILE and ERASE for copying or deleting, and GET/PUT for merging content, as well as UDIFF and UPATCH (introduced in z/VM 7.4) for file differencing and patching, all accessible via the command line to maintain CMS's emphasis on simplicity.[41][43] These tools operate on the flat file system, ensuring straightforward data handling without advanced structuring. 
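The stage-chaining idea behind PIPE can be illustrated with a rough analogue in Python (this is not CMS Pipelines syntax; the stage names loosely echo the real `literal`, `locate`, and `sort` stages): each stage consumes the record stream produced by the stage before it.

```python
# Rough Python analogue of PIPE's stage chaining; names are only
# loosely modeled on CMS Pipelines stages, not its actual syntax.

def literal(*records):                 # source stage: emit fixed records
    yield from records

def locate(substring, records):        # filter stage: keep matching records
    return (r for r in records if substring in r)

def sort_stage(records):               # buffer stage: sort all records
    return iter(sorted(records))

def run_pipeline(source, *stages):
    """Connect stages left to right, like PIPE's '|' separators."""
    stream = source
    for stage in stages:
        stream = stage(stream)
    return list(stream)

files = literal("PROFILE EXEC A1", "NOTES SCRIPT A1", "TEST EXEC A1")
result = run_pipeline(files,
                      lambda s: locate("EXEC", s),
                      sort_stage)
# result holds the EXEC files, sorted
```

The point of the analogy is that no intermediate files are written: each stage hands records directly to the next, which is what makes pipelined CMS workflows efficient.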
CMS integrates closely with the Control Program (CP) by leveraging its virtualization services, such as loading CMS via the IPL CMS command to initialize the user environment and accessing virtual devices like disks and terminals through CP-managed resources.[30] This integration allows seamless interaction, where users can issue CP commands (e.g., QUERY) from within CMS or switch modes to manage virtual hardware, enabling shared access to system resources while CMS handles application-level tasks.[44] However, CMS's design imposes limitations as a single-user system, focusing on one interactive session per virtual machine without native support for multiprocessing, which is instead managed at the CP level across multiple virtual machines. This single-user orientation prioritizes low overhead and direct control but restricts scalability for multi-user or parallel workloads within a single CMS instance.

User Environment
Users log on to CMS by issuing the IPL CMS command from the CP READ prompt, which loads the Conversational Monitor System into the virtual machine and initiates an interactive session. Upon loading, CMS automatically searches for and executes the PROFILE EXEC file on the accessed A-disk or in the Shared File System (SFS), if present, to personalize the startup environment. This EXEC contains user-specified CP and CMS commands executed at the beginning of every terminal session, such as accessing specific minidisks or SFS directories, configuring terminal characteristics like programmable function (PF) keys, and loading macro libraries or exec procedures. For example, a typical PROFILE EXEC might include commands like ACCESS 191 D to link a user minidisk and SET PF12 RETRIEVE to enable command recall via a function key. Users can suppress PROFILE EXEC execution during IPL by specifying the NOPROF option or run it manually with the PROFILE command.[45]
Interactive sessions in CMS emphasize two-way communication between the user and the system via terminal input/output (I/O), typically using 3270-compatible terminals or emulations for full-screen interaction. Command entry occurs at the CMS READY prompt, where users type commands followed by parameters, with support for line editing features such as insert, delete, and overtype during input. Command recall is provided through PF key assignments, often set in the PROFILE EXEC, allowing users to retrieve and edit previous commands without retyping, which streamlines repetitive tasks. The built-in HELP facility offers online assistance directly from the terminal, displaying hierarchical information on commands, macros, messages, and usage examples; for instance, typing HELP followed by a command name retrieves its syntax, parameters, and notes. Core tools like XEDIT further support interactive editing within sessions.[46][47][48][49]
Customization in the CMS user environment allows fine-grained control over resources and behaviors to suit individual workflows. Access control lists (ACLs) manage permissions for files and directories in the SFS, enabling users to grant or revoke read, write, or execute access to specific other users or groups without relying on passwords, thus facilitating secure sharing in multi-user setups. User-defined macros, implemented as EXEC procedures in REXX or older EXEC languages, permit automation of complex command sequences; these can be created, stored as files (e.g., MYMACRO EXEC), and invoked like built-in commands, often loaded globally via the PROFILE EXEC for session-wide availability. Such macros enhance productivity by encapsulating repetitive operations, like file backups or directory listings with custom filters.[50][45]
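As an illustration, the hypothetical MYMACRO EXEC mentioned above could wrap a filtered file listing in a few lines of REXX:

```rexx
/* MYMACRO EXEC -- hypothetical user macro: list files of one type  */
/* Usage: MYMACRO ftype  (defaults to all filetypes on the A-disk)  */
arg ftype .
if ftype = '' then ftype = '*'
'LISTFILE * 'ftype' A (DATE'      /* CMS LISTFILE with date details */
if rc <> 0 then say 'No' ftype 'files found on A (rc='rc')'
```

Stored as MYMACRO EXEC on an accessed disk, it is invoked like any built-in command.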
Daily workflows in CMS often combine interactive and non-interactive elements for efficient task handling in a virtual machine setting. For batch processing, users submit jobs to the CMS batch facility via virtual card readers, using CMS commands and EXECs in input files to sequence compilations, assemblies, or data processing without tying up the interactive terminal; the facility runs on a dedicated virtual machine, processing jobs sequentially and directing output to virtual printers or readers.[51] Integration with other guest operating systems occurs through shared minidisks or the Inter-User Communication Vehicle (IUCV), allowing CMS users to offload compute-intensive tasks to batch while maintaining oversight via interactive queries. A representative workflow might involve an operator editing code interactively with XEDIT, testing via a macro, then submitting a batch job for large-scale compilation, all within the time-shared environment.
The ergonomics of the CMS user environment are tailored for mainframe operators and developers operating in time-sharing scenarios, prioritizing rapid, conversational interaction to minimize latency in multi-user systems. Designed as a lightweight, single-user OS within each virtual machine, CMS supports quick file management, program execution, and debugging directly from the terminal, reducing the need for batch submissions common in non-interactive mainframe environments. This focus on immediacy and simplicity enables efficient resource sharing among hundreds of concurrent users, with features like PF key customization and online help promoting intuitive operation for routine maintenance and development tasks.[52][46][48]
Storage Management
Minidisks
Minidisks in z/VM are emulated direct-access storage devices (DASD) that function as virtual disk drives attached to virtual machines, providing dedicated storage space for user data and files.[53] They simulate physical DASD volumes, allowing virtual machines to interact with them as if they were real hardware devices, and can be formatted by the Conversational Monitor System (CMS) or other guest operating systems to support file storage needs.[54] This emulation enables efficient resource sharing on the underlying mainframe hardware while maintaining isolation for individual virtual machines. Minidisks are created using Control Program (CP) commands, primarily the DEFINE MDISK or DEFINE DISK commands, which allocate space from real DASD volumes or system paging space.[53] For example, an administrator might issue DEFINE MDISK 0200 100 100 pack01 to define a minidisk with virtual device address 0200 using 100 cylinders starting at extent 100 on volume pack01.[53] Temporary minidisks, known as T-disks, can be defined dynamically during a session with commands like DEFINE T3380 AS vdev CYL ncyl, drawing from available paging space for short-term use without permanent allocation.[53]
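Defining and formatting a T-disk from a CMS session can be sketched as follows (the device address, size, and label are illustrative; CMS FORMAT prompts for confirmation and a volume label, answered here through the program stack):

```rexx
/* Sketch: allocate and format a 10-cylinder temporary disk (T-disk) */
'CP DEFINE T3380 AS 0299 CYL 10'   /* carve T-disk from paging space */
queue '1'                          /* reply: confirm erasure         */
queue 'TMP299'                     /* reply: volume label            */
'FORMAT 0299 Z'                    /* format and access as mode Z    */
```

The T-disk and its contents disappear when it is detached or the user logs off.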
Access to minidisks supports various modes to control sharing and permissions, including read-only (R, RR, SR), read/write (W, WR, MW), and multi-access options with locking mechanisms such as working allegiance or the SHARED parameter.[53] Multi-access allows multiple virtual machines to link to the same minidisk, but write operations are serialized through allegiance to prevent conflicts, ensuring data integrity during shared use.[53] In CMS environments, users access minidisks via file mode letters (e.g., A for the default A-disk) and commands like ACCESS or LINK, integrating them into the CMS file system search order.[54]
Sizing for minidisks is flexible, with up to 1,182,006 cylinders (~812 GB) for ECKD minidisks emulating 3390-type volumes with Extended Address Volume (EAV) support, though older configurations or specific uses (e.g., the CMS file system without enhancements) may be limited to 65,520 cylinders (~45 GB), and cache-eligible minidisks to 32,767 cylinders. Emulated FBA minidisks support up to 2,147,483,640 512-byte blocks (~1 TB) for SCSI/NVMe-backed devices, though CMS file system use is practically limited to 381 GB; certain VFB-512 emulations are capped at 4,194,296 blocks (~2 GB), with blocks aligned on 4K boundaries for paging.[53][55] Management involves CP utilities such as the MDISK command for definition and querying, CPACCESS for linking, and QUERY MDISK for status checks, along with CMS tools like RELEASE and QUERY DISK to monitor space and detach devices.[53] These utilities facilitate maintenance, including space reclamation and error handling, to optimize performance.
One key advantage of minidisks is their portability, as they can be easily detached, moved across systems, or reattached without disrupting the virtual machine environment, making them suitable for development workflows.[53] Additionally, they support snapshot capabilities through IBM's FlashCopy technology, enabling point-in-time copies for safe testing and backup without affecting the original data.[53]
Shared File System
The Shared File System (SFS) in VM, introduced with VM/ESA, provides a hierarchical, multi-user file management system that enables controlled sharing of files and directories across virtual machines, contrasting with the more personal storage of minidisks. SFS addresses the need for centralized data access in multi-user environments by organizing files in a tree-like structure of directories and subdirectories, stored in file pools managed by dedicated SFS server virtual machines. These servers handle dynamic allocation of storage from underlying minidisks, ensuring efficient use of DASD resources while maintaining data integrity through implicit and explicit locking mechanisms that prevent concurrent writes.[56][57]
The structure of SFS revolves around file pools, each comprising a collection of minidisks assigned to an SFS server VM, which acts as a repository for user file spaces. Users are enrolled in a file pool by the system administrator, granting them a root directory within that pool; from there, they can create subdirectories up to eight levels deep, with each directory name limited to 16 characters and supporting underscores for readability. Files within SFS are record-oriented, managed in 4KB blocks, and can be shared across different z/VM systems if the file pools are accessible. SFS file pools support a large number of directories, scalable to enterprise needs, with no architected hard limit documented beyond overall system resources.[56][57][58]
Access controls in SFS emphasize security and resource management through owner-defined permissions and quotas. Directory and file access follows a model similar to Unix, with permissions categorized for owner, group, and world (public) levels, including read (for listing or viewing contents), write (for creating, modifying, or deleting), and control (for granting or revoking access).
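Creating and populating an SFS subdirectory can be sketched as below (the file pool name POOL1 and user ID JSMITH are hypothetical):

```rexx
/* Sketch: create an SFS subdirectory, access it, and copy a file in */
'CREATE DIRECTORY POOL1:JSMITH.PROJECTS.SRC'  /* new subdirectory    */
'ACCESS POOL1:JSMITH.PROJECTS.SRC B'          /* assign file mode B  */
'COPYFILE TEST DATA A = = B'                  /* copy TEST DATA A in */
```

Once accessed with a file mode, the SFS directory behaves like a minidisk in the CMS search order.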
Owners use the GRANT and REVOKE commands to assign these authorities explicitly to individual users, groups, or all users, while file pool administrators hold overarching control. Directory quotas limit storage usage per user or directory, typically set in blocks during enrollment (e.g., 10,000 4KB blocks), preventing overuse and ensuring fair allocation across the shared pool.[59][57][60]
Operations on SFS files and directories are performed via specialized FS commands integrated into CMS, facilitating common tasks across user boundaries. For instance, the FS LISTFILE command displays file details in a specified directory, while FS COPY transfers files between SFS locations or from accessed minidisks, and FS LINK creates symbolic links for shared access without duplication. Standard CMS utilities like COPY, ERASE, and RENAME also apply when a directory is accessed (e.g., via ACCESS or VMLINK with an SFS target), supporting seamless integration. These operations commit changes at the end of a logical unit of work, ensuring consistency in multi-user scenarios.[61][57][62]
The primary benefits of SFS include centralized storage that reduces the proliferation of personal minidisks, simplifies administration by consolidating DASD usage, and enhances collaboration through secure, file-level sharing. By leveraging SFS servers for dynamic allocation and locking, VM environments achieve higher efficiency and reliability, particularly in large-scale deployments where up to thousands of users can access shared resources without performance degradation from excessive minidisk linking. This design has made SFS a cornerstone for data management in enterprise mainframe operations since its VM/ESA debut.[56][57][63]
Commands and Extensions
CP Commands
CP commands form the primary interface for users and operators to interact with the Control Program (CP) in z/VM, enabling management of virtual machines, devices, and system resources. These commands are entered in the CP environment, accessed by pressing the attention key or entering #CP from the Conversational Monitor System (CMS), and are case-insensitive with operands limited to 8 characters within columns 5-72 of the input line.[53] Command syntax typically follows the structure command [operand] [options], where operands are separated by spaces unless specified otherwise, and options can often be arranged in any order. Two-letter abbreviations are supported for efficiency, such as IP for IPL, QU for QUERY, DE for DEFINE, and TR for TRACE. Comments within command lines begin with /*. Privilege levels determine access, categorized into classes: A (highest system control), B (resource management), C (programming), D (spooling), E (analyst/service), F (IBM service), G (general user), and installation-defined classes H-Z or 1-6; some commands require <ANY> (no restriction) or specific authorization.[53]
Basic commands handle essential virtual machine operations. The IPL command initiates a program or system load from a specified device, such as IPL 0100 to load from virtual device 0100 or IPL CMS to start the CMS operating system with optional parameters like PARMREGS=0-15; it requires privilege class G or higher and simulates an initial program load, potentially including a CLEAR option to reset storage. QUERY provides status information, for example QUERY USERS to list active virtual machines or QUERY STORAGE to display virtual storage allocation; privilege varies by operand, with G for user-level queries and A for system-wide details like QUERY LPARS. SUSPEND pauses the current or a specified virtual machine with SUSPEND or SUSPEND userid, halting execution until resumed, and requires classes G, A, B, or C. The complementary RESUME command restarts a suspended machine using RESUME or RESUME userid, with the same privilege requirements.[53]
Resource management commands configure and allocate devices and storage. DEFINE sets up virtual devices or resources, such as DEFINE VDEV 0200 for a virtual device or DEFINE STORAGE 16G to allocate 16 gigabytes of virtual storage; it typically needs class B or higher, and may cause a system reset if defining certain resources. DETACH removes a device from a virtual machine, as in DETACH 0200 or DETACH NIC 0500 for a network interface card, requiring class G for user devices or B for system ones. LINK connects a user to a minidisk or device for sharing, using syntax like LINK userid 191 191 MR to link maintenance user ID's minidisk 191 in multi-read mode; it demands authorization and class G, with modes including R (read), W (write), or RR (read with password). These commands support scripting in console sessions, such as chaining IPL followed by LINK in a startup sequence.[53]
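A console script chaining such commands with return-code checks might be sketched as follows (the user ID PRODID and device addresses are illustrative):

```rexx
/* Sketch: link another user's minidisk read-only, then access it */
'CP LINK PRODID 191 0391 RR'    /* link PRODID's 191 as 0391      */
if rc <> 0 then do
   say 'LINK failed, rc='rc     /* e.g. not authorized or in use  */
   exit rc
end
'ACCESS 0391 E'                 /* make it file mode E in CMS     */
```

Checking the return code after each CP command lets the script stop cleanly when a link is refused rather than continuing with a missing disk.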
Operator commands facilitate communication and diagnostics. MSG broadcasts or targets messages, for instance MSG ALL System maintenance at 02:00 to notify all users or MSG OPERATOR Report issue; it operates under class ANY, so any logged-on user can send messages. TRACE enables tracing of virtual machine activity for debugging, for example TRACE ON for general tracing or TRACE I/O 0100-010F for I/O on specific devices; it requires classes A, B, C, E, or G depending on scope, with options like EVENT or SETS for detailed logging, often used in scripts to capture activity during VM sessions.[53]
| Command | Abbreviation | Privilege Classes | Example Usage |
|---|---|---|---|
| IPL | IP | G, A, B | IPL 191 (loads from device 191) |
| QUERY | QU | Varies (G to A) | QUERY DASD 191 (checks minidisk status) |
| SUSPEND | - | G, A, B, C | SUSPEND USER1 (pauses VM USER1) |
| RESUME | - | G, A, B, C | RESUME (restarts current VM) |
| DEFINE | DE | B, A, C, E | DEFINE VDEV 0300 (defines virtual device) |
| DETACH | - | G, B | DETACH 0300 (removes device 0300) |
| LINK | - | G | LINK MAINT 191 618 R (links read-only minidisk) |
| MSG | - | Any | MSG USER2 Check logs (sends to USER2) |
| TRACE | TR | A, B, C, E, G | TRACE EVENT ON (traces events) |
