VM (operating system)
from Wikipedia
z/VM
Developer: IBM
OS family: VM family
Working state: Current
Source model: 1972–1986 open source; 1977–present closed source
Initial release: 1972
Latest release: IBM z/VM V7.4 / September 20, 2024[1]
Marketing target: IBM mainframe computers
Available in: English
Supported platforms: System/370, System/390, IBM Z
License: 1972–1981 public domain; 1976–present proprietary
Official website: www.vm.ibm.com
The default login screen on VM/370 Release 6

VM, often written VM/CMS, is a family of virtual machine operating systems used on IBM mainframes including the System/370, System/390, IBM Z and compatible systems. It replaced the older CP-67 that formed the basis of the CP/CMS operating system. It was first released as the free Virtual Machine Facility/370 for the S/370 in 1972, followed by chargeable upgrades[a] and versions that added support for new hardware.[d]

VM creates virtual machines into which a conventional operating system may be loaded to allow user programs to run. Originally, that operating system was CMS, a simple single-user system similar to DOS. VM can also be used with a number of other IBM operating systems, including large systems like MVS or VSE, which are often run on their own without VM. In other cases, VM is used with a more specialized operating system or even programs that provide many OS features. These include RSCS[e] and MUMPS, among others.

Design

The heart of the VM architecture is the Control Program, or hypervisor, abbreviated CP, VM-CP, or sometimes, ambiguously, VM. It runs on the physical hardware and creates the virtual machine environment. VM-CP provides full virtualization of the physical machine – including all I/O and other privileged operations. It performs the system's resource-sharing, including device management, dispatching, virtual storage management, and other traditional operating system tasks. Each VM user is provided with a separate virtual machine having its own address space, virtual devices, etc., and which is capable of running any software that could be run on a stand-alone ("bare-metal") machine. A given VM mainframe typically runs hundreds or thousands of virtual machine instances. VM-CP began life as CP-370, a reimplementation of CP-67, itself a reimplementation of CP-40.

Running within each virtual machine is another operating system, a guest operating system. This might be:

  • CMS (Conversational Monitor System, renamed from the Cambridge Monitor System of CP/CMS). Most virtual machines run CMS, a lightweight, single-user operating system. Its interactive environment is comparable to that of a single-user PC, including a file system, programming services, device access, and command-line processing. (While an earlier version of CMS was uncharitably described as "CP/M on a mainframe", the comparison is an anachronism; the author of CP/M, Gary Kildall, was an experienced CMS user.)
  • GCS (Group Control System), which provides a limited simulation of the MVS API. IBM originally provided GCS in order to run VTAM without a service OS/VS1 virtual machine and VTAM Communications Network Application (VCNA). RSCS V2 also ran under GCS.
  • A mainstream operating system. IBM's mainstream operating systems (e.g., the MVS and DOS/VSE families, OS/VS1, TSS/370, or another layer of VM/370 itself (see below)) can be loaded and run without modification. The VM hypervisor treats guest operating systems as application programs with exceptional privileges – it prevents them from directly using privileged instructions (those which would let applications take over the whole system or significant parts of it), but simulates privileged instructions on their behalf. Most mainframe operating systems terminate a normal application which tries to usurp the operating system's privileges. The VM hypervisor can simulate several types of console terminals for the guest operating system, such as the hardcopy line-mode 3215, the graphical 3270 family, and the integrated console on newer System/390 and IBM Z machines. Other users can then access running virtual machines using the DIAL command at the logon screen, which will connect their terminal to the first available emulated 3270 device, or the first available 2703 device if the user is DIALing from a typewriter terminal.
  • Another copy of VM. A second level instance of VM can be fully virtualized inside a virtual machine. This is how VM development and testing is done (a second-level VM can potentially implement a different virtualization of the hardware). This technique was used to develop S/370 software before S/370 hardware was available, and it has continued to play a role in new hardware development at IBM. The literature cites practical examples of virtualization five levels deep.[2] Levels of VM below the top are also treated as applications but with exceptional privileges.
  • A copy of the mainframe version of AIX or Linux. In the mainframe environment, these operating systems often run under VM, and are handled like other guest operating systems. (They can also run as 'native' operating systems on the bare hardware.) There was also the short-lived IX/370, as well as S/370 and S/390 versions of AIX (AIX/370 and AIX/ESA).
  • A specialized VM subsystem. Several non-CMS systems run within VM-CP virtual machines, providing services to CMS users such as spooling, interprocess communications, specialized device support, and networking. They operate behind the scenes, extending the services available to CMS without adding to the VM-CP control program. By running in separate virtual machines, they receive the same security and reliability protections as other VM users. Examples include:
    • RSCS (Remote Spooling and Communication Subsystem, aka VNET) – communication and information transfer facilities between virtual machines and other systems[3]
    • RACF (Resource Access Control Facility) — a security system
    • Shared File System (SFS), which organizes shared files in a directory tree (the servers are commonly named "VMSERVx")
    • VTAM (Virtual Telecommunications Access Method) – a facility that provides support for a Systems Network Architecture network
    • PVM (VM/Pass-Through Facility) – a facility that provides remote access to other VM systems
    • TCPIP, SMTP, FTPSERVE, PORTMAP, VMNFS – a set of service machines that provide TCP/IP networking to VM/CMS
    • Db2 Server for VM – a SQL database system, the servers are often named similarly to "SQLMACH" and "SQLMSTR"
    • DIRMAINT – A simplified user directory management system (the directory is a listing of every account on the system, including virtual hardware configuration, user passwords, and minidisks).
    • MUMPS/VM — an implementation of the MUMPS database and programming language which could run as guest on VM/370.[4] MUMPS/VM was introduced in 1987 and discontinued in 1991.[5]
  • A user-written or modified operating system, such as National CSS's CSS or Boston University's VPS/VM.

Versions

The following versions are known:

Virtual Machine Facility/370
VM/370, released in 1972, is a System/370 reimplementation of the earlier CP/CMS operating system.
VM/370 Basic System Extensions Program Product
VM/BSE (BSEPP) is an enhancement to VM/370 that adds support for more devices (such as 3370-type fixed-block-architecture DASD drives), improvements to the CMS environment (such as an improved editor), and some stability enhancements to CP.
VM/370 System Extensions Program Product
VM/SE (SEPP) is an enhancement to VM/370 that includes the facilities of VM/BSE, as well as a few additional fixes and features.
Virtual Machine/System Product
VM/SP, a milestone version, replaces VM/370, VM/BSE and VM/SE. Release 1 added EXEC2 and the XEDIT System Product Editor; Release 3 added REXX; Release 6 added the Shared File System.[6]
Virtual Machine/System Product High Performance Option
VM/SP HPO adds additional device support and functionality to VM/SP, and allows certain S/370 machines that can utilize more than 16 MB of real storage to do so, up to 64 MB. This version was intended for users that would be running multiple S/370 guests at once.[7][8]
Virtual Machine/Extended Architecture Migration Aid
VM/XA MA is intended to ease the migration from MVS/370 to MVS/XA by allowing both to run concurrently on the same processor complex.[9]
Virtual Machine/Extended Architecture System Facility
VM/XA SF is an upgraded VM/XA MA with improved functionality and performance.[10]
Virtual Machine/Extended Architecture System Product
VM/XA SP is an upgraded VM/XA SF with improved functionality and performance, offered as a replacement for VM/SP HPO on machines supporting S/370-XA. It includes a version of CMS that can run in either S/370 or S/370-XA mode.[11]
Virtual Machine/Enterprise Systems Architecture
VM/ESA provides the facilities of VM/SP, VM/SP HPO and VM/XA SP. VM/ESA version 1 can run in S/370, ESA/370 or ESA/390 mode; it does not support S/370 XA mode. Version 2 only runs in ESA/390 mode. The S/370-capable versions of VM/ESA were actually their own separate version from the ESA/390 versions of VM/ESA, as the S/370 versions are based on the older VM/SP HPO codebase, and the ESA/390 versions are based on the newer VM/XA codebase.[12]
z/VM
z/VM, the current version, is still widely used as one of the main full virtualization solutions for the mainframe market.[citation needed] z/VM 4.4 was the last version that could run in ESA/390 mode; subsequent versions only run in z/Architecture mode.[13]

The CMS in the name refers to the Conversational Monitor System, a component of the product that is a single-user operating system that runs in a virtual machine and provides conversational time-sharing in VM.

Hypervisor interface

IBM coined the term hypervisor for the 360/65[14] and later used it for the DIAG handler of CP-67.

The Diagnose instruction ('83'x—no mnemonic) is a privileged instruction originally intended by IBM to perform "built-in diagnostic functions, or other model-dependent functions."[15] IBM repurposed DIAG for "communication between a virtual machine and CP."[16][17] The instruction contains two four-bit register numbers, called Rx and Ry, which can "contain operand storage addresses or return codes passed to the DIAGNOSE interface," and a two-byte code "that CP uses to determine what DIAGNOSE function to perform."[16] The available diagnose functions include:

Hexadecimal code Function
0000 Store Extended-Identification Code
0004 Examine Real Storage
0008 Virtual Console Function—Execute a CP command
0018 Standard DASD I/O
0020 General I/O—Execute any valid CCW chain on a tape or disk device
003C Update the VM/370 directory
0058 3270 Virtual Console Interface—perform full-screen I/O on an IBM 3270 terminal
0060 Determine Virtual Machine Storage Size
0068 Virtual Machine Communication Facility (VMCF)

At one time, CMS was capable of running on a bare machine, as a true operating system (though such a configuration would be unusual). It now runs only as a guest OS under VM. This is because CMS relies on a hypervisor interface to VM-CP, to perform file system operations and request other VM services. This paravirtualization interface:

  • Provides a fast path to VM-CP, to avoid the overhead of full simulation.
  • Was first developed as a performance improvement for CP/CMS release 2.1, an important early milestone in CP's efficiency.
  • Uses a non-virtualized, model-dependent machine instruction as a signal between CMS and CP: DIAG (diagnose).
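For CMS users, the most direct way to see this interface is from REXX, which exposes Diagnose through the DIAG built-in function. The following sketch (hypothetical EXEC name; assumes an ordinary CMS REXX environment) uses Diagnose X'08', the virtual console function from the table above, to run a CP command and capture its response:

  /* QTIME EXEC - minimal sketch: call CP through Diagnose X'08' */
  /* DIAG(8,cmd) sends cmd to CP; the reply comes back as one    */
  /* string with response lines delimited by X'15' characters.   */
  reply = Diag(8, 'QUERY TIME')
  Do While reply <> ''
    Parse Var reply line '15'x reply   /* split off one response line */
    Say line
  End
  exit

The same call can issue any CP command that the virtual machine's privilege class allows.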

Minidisks

CMS starting up after the user MAINT (system administrator) has logged in
The CMS editor on VM/370, editing a COBOL program source file

CMS and other operating systems often have DASD requirements much smaller than the sizes of actual volumes. For this reason CP allows an installation to define virtual disks of any size up to the capacity of the device. For CKD volumes, a minidisk must be defined in full cylinders. A minidisk has the same attributes as the underlying real disk, except that it is usually smaller and the beginning of each minidisk is mapped to cylinder or block 0. The minidisk may be[f] accessed using the same channel programs as the real disk.

A minidisk that has been initialized with a CMS file system is referred to as a CMS minidisk, although CMS is not the only system that can use them.

It is common practice to define full volume minidisks for use by such guest operating systems as z/OS instead of using DEDICATE to assign the volume to a specific virtual machine. In addition, "full-pack links" are often defined for every DASD on the system, and are owned by the MAINT userid. These are used for backing up the system using the DASD Dump/Restore program, where the entire contents of a DASD are written to tape (or another DASD) exactly.
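As an illustration, minidisks are declared with MDISK statements in a user's directory entry. The sketch below is hypothetical (user name, passwords, and volume labels invented) and follows the general VM/370–z/VM directory syntax:

  * Sketch of a user directory entry (names and passwords are examples)
  USER JDOE SECRET 16M 64M G
  *  191: a 10-cylinder CMS A-disk on volume VMPK01
  MDISK 0191 3390 0101 010 VMPK01 MR RREAD WWRITE
  *  200: a full-volume minidisk covering all of VMPK02,
  *  usable by a guest such as z/OS in place of DEDICATE
  MDISK 0200 3390 0000 END VMPK02 MR

The MR mode requests write access but accepts read-only if another user already holds a write link; the full-volume definition is the "full-pack" style described above.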

Shared File System

Invoking the System/360 COBOL compiler on VM/370 CMS, then loading and running the program

With modern VM versions, most of the system can be installed into SFS; the few remaining minidisks are those absolutely necessary for system startup and those owned by the file pool server machines.

An example of a non-CMS guest operating system running under VM/370: DOS/VS Release 34. The DOS/VS system is now prompting the operator to enter a supervisor name to continue loading

VM/SP Release 6 introduced the Shared File System,[18] which vastly improved CMS file storage capabilities. The CMS minidisk file system does not support directories (folders) at all; SFS does. SFS also introduces more granular security: with CMS minidisks, the system can be configured to allow or deny users read-only or read-write access to an entire disk, but individual files cannot be protected separately. SFS removes these limitations and also improves performance.

The SFS is provided by service virtual machines. On a modern VM system, there are usually three that are required: VMSERVR, the "recovery machine" that does not actually serve any files; VMSERVS, the server for the VMSYS filepool; and VMSERVU, the server for the VMSYSU (user) filepool.[19] The file pool server machines own several minidisks, usually including a CMS A-disk (virtual device address 191, containing the file pool configuration files), a control disk, a log disk, and any number of data disks that actually store user files.

If a user account is configured to only use SFS (and does not own any minidisks), the user's A-disk will be FILEPOOL:USERID. and any subsequent directories that the user creates will be FILEPOOL:USERID.DIR1.DIR2.DIR3 where the equivalent UNIX file path is /dir1/dir2/dir3. SFS directories can have much more granular access controls when compared to minidisks (which, as mentioned above, can often only have a read password, a write password, and a multi-write password). SFS directories also solve the issues that may arise when two users write to the same CMS minidisk at the same time, which may cause disk corruption (as the CMS VM performing the writes may be unaware that another CMS instance is also writing to the minidisk).
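To make this concrete, the following CMS commands (file pool, directory, and user names hypothetical) create a subdirectory in a user's SFS space, grant a colleague read access, and access it as file mode B:

  CREATE DIRECTORY VMSYSU:JDOE.PROJECTS
  GRANT AUTHORITY VMSYSU:JDOE.PROJECTS TO ALICE (READ NEWREAD
  ACCESS VMSYSU:JDOE.PROJECTS B

The READ and NEWREAD operands authorize ALICE to read both existing and subsequently created files in the directory, a per-user granularity that minidisk passwords cannot express.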

The file pool server machines also serve a closely related filesystem: the Byte File System. BFS is used to store files on a UNIX-style filesystem. Its primary use is for the VM OpenExtensions POSIX environment for CMS. The CMS user virtual machines themselves communicate with the SFS server virtual machines through the IUCV mechanism.[20]

History

OS/VS1 starting under VM/370
Using DASD Dump/Restore (DDR) to back up a VM/370 system

The early history of VM is described in the articles CP/CMS and History of CP/CMS. VM/370 is a reimplementation of CP/CMS, and was made available in 1972 as part of IBM's System/370 Advanced Function announcement (which added virtual memory hardware and operating systems to the System/370 series). Early releases of VM through VM/370 Release 6 continued in open source through 1981, and today are considered to be in the public domain. This policy ended in 1977 with the chargeable VM/SE and VM/BSE upgrades and in 1980 with VM/System Product (VM/SP). However, IBM continued providing updates in source form for existing code for many years, although the upgrades to all but the free base required a license. As with CP-67, privileged instructions in a virtual machine caused a program interrupt, and CP simulated the behavior of the privileged instruction.

VM remained an important platform within IBM, used for operating system development and time-sharing use; but for customers it remained IBM's "other operating system". The OS and DOS families remained IBM's strategic products, and customers were not encouraged to run VM. Those that did formed close working relationships, continuing the community-support model of early CP/CMS users. In the meantime, the system struggled with political infighting within IBM over what resources should be available to the project, as compared with other IBM efforts. A basic problem with the system was seen at IBM's field sales level: VM/CMS demonstrably reduced the amount of hardware needed to support a given number of time-sharing users. IBM was, after all, in the business of selling computer systems.

Melinda Varian provides this fascinating quote, illustrating VM's unexpected success:[21]

The marketing forecasts for VM/370 predicted that no more than one 168 would ever run VM during the entire life of the product. In fact, the first 168 delivered to a customer ran only CP and CMS. Ten years later, ten percent of the large processors being shipped from Poughkeepsie would be destined to run VM, as would a very substantial portion of the mid-range machines that were built in Endicott. Before fifteen years had passed, there would be more VM licenses than MVS licenses.

A PC DOS version that runs CMS on the XT/370 (and later on the AT/370) is called VM/PC. VM/PC 1.1 was based on VM/SP release 3. When IBM introduced the P/370 and P/390 processor cards, a PC could now run full VM systems, including VM/370, VM/SP, VM/XA, and VM/ESA (these cards were fully compatible with S/370 and S/390 mainframes, and could run any S/370 or S/390 operating system from the 31-bit era, e.g., MVS/ESA, VSE/ESA).

In addition to the base VM/SP releases, IBM also introduced VM/SP HPO (High Performance Option). This add-on (which is installed over the base VM/SP release) improved several key system facilities, including allowing the use of more than 16 MB of storage (RAM) on supported models (such as the IBM 4381). With VM/SP HPO installed, the new limit was 64 MB; however, a single user (or virtual machine) could not use more than 16 MB. The functions of the spool filesystem were also improved, allowing 9900 spool files to be created per user, rather than 9900 for the whole system. The architecture of the spool filesystem was also enhanced: each spool file now had a unique user ID associated with it, and reader file control blocks were now held in virtual storage. The system could also be configured to deny certain users access to the vector facility (by means of user directory entries).[7]

Releases of VM since VM/SP Release 1 supported multiprocessor systems. System/370 versions of VM (such as VM/SP and VM/SP HPO) supported a maximum of two processors, with the system operating in either UP (uniprocessor) mode, MP (multiprocessor) mode, or AP (attached processor) mode.[22] AP mode is the same as MP mode, except the second processor lacks I/O capability. System/370-XA releases of VM (such as VM/XA) supported more. System/390 releases (such as VM/ESA) almost removed the limit entirely, and some modern z/VM systems can have as many as 80 processors.[23] The per-VM limit for defined processors is 64.

When IBM introduced the System/370 Extended Architecture on the 3081, customers were faced with the need to run a production MVS/370 system while testing MVS/XA on the same machine. IBM's solution was VM/XA Migration Aid, which used the new Start Interpretive Execution (SIE) instruction to run the virtual machine. SIE automatically handled some privileged instructions and returned to CP for cases that it couldn't handle. The Processor Resource/System Manager (PR/SM) of the later 3090 also used SIE. There were several VM/XA products before it was eventually supplanted by VM/ESA and z/VM.

In addition to RSCS networking, IBM also provided users with VTAM networking. ACF/VTAM for VM was fully compatible with ACF/VTAM on MVS and VSE.[24] Like RSCS, VTAM on VM ran under the specialized GCS operating system. However, VM also supported TCP/IP networking. In the late 1980s, IBM produced a TCP/IP stack for VM/SP and VM/XA.[25] The stack supported IPv4 networks, and a variety of network interface systems (such as inter-mainframe channel-to-channel links, or a specialized IBM RT PC that would relay traffic out to a Token Ring or Ethernet network). The stack provided support for Telnet connections, from either simple line-mode terminal emulators or VT100-compatible emulators, or proper IBM 3270 terminal emulators. The stack also provided an FTP server. IBM also produced an optional NFS server for VM; early versions were rather primitive, but modern versions are much more advanced.[26]

There was also a fourth networking option, known as VM/Pass-Through Facility (more commonly called PVM). PVM, like VTAM, allowed for connections to remote VM/CMS systems, as well as other IBM systems.[27] If two VM/CMS nodes were linked together over a channel-to-channel link or bisync link (possibly using a dialup modem or leased line), a user could remotely connect to either system by entering "DIAL PVM" on the VM login screen, then entering the system node name (or choosing it from a list of available nodes). Alternatively, a user running CMS could use the PASSTHRU program that was installed alongside PVM, allowing for quick access to remote systems without having to log out of the user's session. PVM also supported accessing non-VM systems, by utilizing a 3x74 emulation technique. Later releases of PVM also featured a component that could accept connections from a SNA network.

VM was also the cornerstone operating system of BITNET, as the RSCS system available for VM provided a simple network that was easy to implement, and somewhat reliable. VM sites were interlinked by means of an RSCS VM on each VM system communicating with one another, and users could send and receive messages, files, and batch jobs through RSCS. The NOTE command used XEDIT to present a form for composing an email, which the user could then send. If the user specified an address in the form user at node, the email file would be delivered to RSCS, which would then deliver it to the target user on the target system. If the site had TCP/IP installed, RSCS could work with the SMTP service machine to deliver notes (emails) to remote systems, as well as receive them. If the user specified user at some.host.name, the NOTE program would deliver the email to the SMTP service machine, which would then route it out to the destination site on the Internet.

VM's role changed within IBM when hardware evolution led to significant changes in processor architecture. Backward compatibility remained a cornerstone of the IBM mainframe family, which still uses the basic instruction set introduced with the original System/360; but the need for efficient use of the 64-bit zSeries made the VM approach much more attractive. VM was also utilized in data centers converting from DOS/VSE to MVS and is useful when running mainframe AIX and Linux, platforms that were to become increasingly important. The current z/VM platform has finally achieved the recognition within IBM that VM users long felt it deserved. Some z/VM sites run thousands of simultaneous virtual machine users on a single system. z/VM was first released in October 2000[28] and remains in active use and development.

IBM and third parties have offered many applications and tools that run under VM. Examples include RAMIS, FOCUS, SPSS, NOMAD, DB2, REXX, RACF, and OfficeVision. Current VM offerings run the gamut of mainframe applications, including HTTP servers, database managers, analysis tools, engineering packages, and financial systems.

CP commands

As of Release 6, the VM/370 Control Program has a number of commands for General Users, concerned with defining and controlling the user's virtual machine. Lower-case portions of the commands are optional.[29]

Command Description
#CP Allows the user to issue a CP command from a command environment, or any other virtual machine after pressing the break key (defaults to PA1)
ADSTOP Sets an address stop to halt the virtual machine at a specific instruction
ATTN Causes an attention interruption allowing CP to take control in a command environment
Begin Continue or resume execution of the user's virtual machine, optionally at a specified address
CHange Alter attributes of a spool file or files. For example, the output class or the name of the file can be changed, or printer-specific attributes set
Close Closes an open printer, punch, reader, or console file and releases it to the spooling system
COUPLE Connect a virtual channel-to-channel adapter (CTCA) to another. Also used to connect simulated QDIO Ethernet cards to a virtual switch.
CP Execute a CP command in a CMS environment
DEFine Alter the current virtual machine configuration. Add virtual devices or change available storage size
DETach Remove a virtual device or channel from the current configuration
DIAL Connect your terminal at the logon screen to a logged-on multi-access virtual machine's simulated 3270 or typewriter terminals
DISConn Disconnect your terminal while allowing your virtual machine to continue running
Display Display virtual machine storage or (virtual) hardware registers
DUMP Print a snapshot dump of the current virtual machine on the virtual spooled printer
ECHO Set the virtual machine to echo typed lines
EXTernal Cause an external interrupt to the virtual machine
INDicate Display current system load or your resource usage
Ipl IPL (boot) an operating system on your virtual machine
LINK Attach a device from another virtual machine, if that machine's definition allows sharing
LOADVFCB Specify a forms control buffer (FCB) for a virtual printer
LOGoff, LOGout Terminate execution of the current virtual machine and disconnect from the system
Logon, Login Sign on to the system
Message, MSG Send a one-line message to the system operator or another user
NOTReady Cause a virtual device to appear not ready
ORDer Reorder closed spool files by ID or class
PURge Delete closed spool files for a device by class, ID, or ALL
Query Display status information for your virtual machine, the message of the day, or the number or names of logged-in users
READY Cause a device end interruption for a device
REQuest Cause an interrupt on your virtual console
RESET Clear all pending interrupts for a device
REWind Rewind a real (non-virtual) magnetic tape unit
SET Set various attributes for your virtual machine, including messaging or terminal function keys
SLeep Place your virtual machine in a dormant state indefinitely or for a specified period of time
SMsg Send a one-line special message to another virtual machine (usually used to control the operation of the virtual machine; commonly used with RSCS)
SPool Set options for a spooled virtual device (printer, reader, or punch)
STore Alter the contents of registers or storage of your virtual machine
SYStem Reset or restart your virtual machine or clear storage
TAg Set a tag associated with a spooled device or file. The tag is usually used by VM's Remote Spooling Communications Subsystem (RSCS) to identify the destination of a file
TERMinal Set characteristics of your terminal
TRace Start or stop tracing of specified virtual machine activities
TRANsfer Transfer a spool file to or from another user
VMDUMP Dump your virtual machine in a format readable by the Interactive Problem Control System (IPCS) program product
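These commands can be typed at the terminal or issued programmatically; a REXX EXEC under CMS simply passes them through to CP. The sketch below (hypothetical EXEC name; standard CMS environment assumed) strings together several of the commands from the table:

  /* VMINFO EXEC - sketch: inspect this virtual machine via CP commands */
  Address Command
  'CP QUERY VIRTUAL STORAGE'   /* size of this VM's virtual storage   */
  'CP QUERY VIRTUAL DASD'      /* minidisks currently attached        */
  'CP INDICATE USER'           /* this VM's resource consumption      */
  'CP SPOOL PRINTER CLASS A'   /* SPool: set the virtual printer class */
  exit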

OpenEdition extensions

Starting with VM/ESA Version 2, IBM introduced the chargeable optional feature OpenEdition for VM/ESA Shell and Utilities Feature,[30] which provides POSIX compatibility for CMS. The stand-out feature was a UNIX shell for CMS. The C compiler for this UNIX environment is provided by either C/370 or C for VM/ESA. Neither the CMS file system nor the standard VM Shared File System has any support for UNIX-style files and paths; instead, the Byte File System is used. Once a BFS extent is created in an SFS file pool, the user can mount it with OPENVM MOUNT /../VMBFS:fileservername:filepoolname /path/to/mount/point. The user must also mount the root file system with OPENVM MOUNT /../VMBFS:VMSYS:ROOT/ /; a shell can then be started with OPENVM SHELL. Unlike the normal SFS, access to BFS file systems is controlled by POSIX permissions (with chmod and chown).
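A typical session might therefore look like the following sketch (the user file pool and mount-point names are hypothetical; VMSYS:ROOT. is the conventional root extent):

  OPENVM MOUNT /../VMBFS:VMSYS:ROOT/ /
  OPENVM MOUNT /../VMBFS:VMSYSU:JDOE /home/jdoe
  OPENVM SHELL

Once the shell starts, ordinary POSIX tools operate on the mounted BFS trees, with chmod and chown manipulating the POSIX permission bits mentioned above.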

Starting with z/VM Version 3, IBM integrated OpenEdition into z/VM[13] and renamed it OpenExtensions. OpenEdition and OpenExtensions provide POSIX.2 compliance for CMS.[31] Programs compiled to run under the OpenExtensions shell are stored in the same format as standard CMS executable modules. Visual editors such as vi are unavailable because 3270 block-mode terminals cannot support them; users can use ed or XEDIT instead.

Marketing

In the early 1980s, the VM group within SHARE (the IBM user group) sought a mascot or logo for the community to adopt. This was in part a response to IBM's MVS users selecting the turkey as a mascot (chosen, according to legend, by the MVS Performance Group in the early days of MVS, when its performance was a sore topic). In 1983, the teddy bear became VM's de facto mascot at SHARE 60, when teddy bear stickers were attached to the nametags of "cuddlier oldtimers" to flag them for newcomers as "friendly if approached". The bears were a hit and soon appeared widely.[32] Bears were awarded to inductees of the "Order of the Knights of VM", individuals who made "useful contributions" to the community.[33][34]

from Grokipedia
IBM z/VM is a virtualization operating system and hypervisor developed by IBM for its Z mainframe servers, enabling the efficient sharing of hardware resources to host multiple guest operating systems such as Linux, z/OS, z/VSE, and z/TPF on a single physical machine. It provides a secure, scalable environment for running hundreds to thousands of virtual machines, supporting rapid deployment, system consolidation, and hybrid cloud strategies with high elasticity and resource efficiency. As a control program, z/VM creates virtualized computing environments that emulate physical systems, allowing isolated execution of workloads while optimizing power, space, and operational costs.

The history of z/VM dates back to August 2, 1972, when IBM announced its first official virtualization product, VM/370, for the System/370 mainframe as a pioneering technology. Over the decades, it evolved through successive releases, including VM/SP in the late 1970s, VM/XA for extended architecture support in the 1980s, VM/ESA in the 1990s, and adaptations for 64-bit processing, to become the modern z/VM, with the latest versions like 7.4 (released in 2024) incorporating continuous-delivery models for ongoing enhancements. This progression has positioned z/VM as a foundational element of IBM's enterprise ecosystem, adapting to advancements in hardware like IBM Z and LinuxONE servers.

At its core, z/VM comprises two primary components: the Control Program (CP), which manages real hardware resources to simulate multiple virtual machines and facilitates inter-virtual-machine communication, and the Conversational Monitor System (CMS), an interactive, single-user operating system designed for development, testing, and file management within a virtual machine environment. CP serves as the hypervisor layer, providing features like dynamic resource allocation and high availability through clustering, while CMS offers a lightweight, command-driven interface for users and supports scripting and application development. Additional licensed programs, such as IBM Wave for multi-system administration, extend z/VM's capabilities for enterprise-scale management.

z/VM's key strengths include robust security features such as workload isolation for sensitive workloads and integration with external security managers, making it well suited to mission-critical applications in finance, healthcare, and government. It also supports sustainability goals by reducing the physical footprint of data centers through server consolidation, with recent updates adding support for NVMe storage and enhanced cluster configurations for improved performance and resiliency. Overall, z/VM remains a cornerstone of mainframe virtualization, delivering scalability and reliability for modern hybrid infrastructures.

History

Origins and Early Development

The development of the VM operating system traces its roots to the mid-1960s at IBM's Cambridge Scientific Center, where researchers sought innovative ways to maximize the utility of expensive mainframe hardware. In late 1964, project leader Robert Creasy, along with Les Comeau and other team members including Bob Adair, Dick Bayles, and John Harmon, initiated work on CP-40, an experimental virtual machine system designed for the System/360 Model 40. This system was motivated by the need to enable multiple users to concurrently access a single computer, addressing the limitations of batch processing on early System/360 machines and drawing inspiration from MIT's Compatible Time-Sharing System (CTSS). By providing each user with the illusion of a dedicated machine, CP-40 aimed to improve resource utilization and reduce operational costs in research environments, where a single Model 40, limited to 256 KB of memory, served a growing team of scientists and engineers.

CP-40 introduced foundational virtualization concepts, creating up to 14 virtual machines that emulated the full System/360 environment, each capable of running independent operating systems or applications. To overcome the Model 40's hardware constraints, the team modified the machine with an associative memory device for dynamic address translation, allowing efficient multiplexing of CPU and memory resources among virtual machines. This experimental setup not only provided time-sharing for interactive computing but also gathered critical data on performance, influencing subsequent IBM designs. By late 1966, CP-40 became operational alongside the Conversational Monitor System (CMS), a lightweight interactive environment that provided users with a simple command interface for program development and execution.

Building on CP-40's success, the project evolved into CP-67 in 1967, adapted for the newly introduced System/360 Model 67, which included built-in dynamic address translation hardware. CP-67 extended the virtualization model to support more sophisticated guest operating systems, such as IBM's Time Sharing System (TSS/360), while maintaining compatibility with other System/360 OSes like OS/360 and DOS/360. This version enhanced scalability, allowing dozens of virtual machines to run concurrently on a single mainframe, further demonstrating the potential for cost-effective resource sharing in multi-user and multi-OS environments. The motivations remained centered on boosting mainframe efficiency, enabling parallel testing of software and operating systems to accelerate development cycles without requiring additional hardware investments.

These research efforts culminated in the commercial release of VM/370 on August 2, 1972, as IBM's first production virtualization system for the System/370 family, which standardized virtual memory across its models. VM/370 integrated the mature CP hypervisor with CMS, offering a robust platform for time-sharing and guest operating system hosting that directly addressed enterprise needs for improved hardware utilization and operational flexibility. This announcement marked the transition from experimental prototypes to a widely deployable product, setting the stage for VM's enduring role in mainframe computing.

Evolution and Major Releases

VM's transition from its experimental roots in the 1960s and 1970s into commercial maturity came with the release of VM/SP in 1980 as a replacement for VM/370, incorporating improved spooling for unit record devices and various performance enhancements to support broader usability on System/370 hardware. These improvements included extensions for running legacy operating systems like DOS/VS and VSE/SP, enabling VM/SP to serve as a multi-purpose platform for mid-range servers throughout the 1980s.

In 1985, IBM introduced VM/XA to leverage the System/370 Extended Architecture (System/370-XA), providing extended 31-bit addressing for virtual machines and facilitating migration from 24-bit environments. This release addressed limitations in memory addressing, allowing VM to support larger virtual storage configurations and improved I/O handling on XA-compatible hardware. A further milestone came in 1989 with VM/SP HPO Release 6, the High Performance Option, which optimized VM for running MVS as a guest operating system through enhanced single-system image support, faster paging, and resource allocation tuned for large-scale processors. This integration improved overall system throughput and efficiency, particularly for environments combining VM with MVS workloads on high-end System/370 systems.

VM/ESA followed in 1990, aligning with the announcement of the ESA/390 architecture and introducing support for PR/SM logical partitioning, which enabled dynamic resource allocation across multiple virtual machines within partitioned mainframes. This convergence of prior VM variants (VM/SP, VM/XA, and VM/HPO) into a single product enhanced scalability and compatibility with Enterprise Systems Architecture features like access registers and expanded storage.

By 2000, with the introduction of z/Architecture for 64-bit addressing, VM/ESA was renamed z/VM to reflect compatibility with the new eServer zSeries hardware, marking a shift toward supporting advanced workloads including 64-bit guests. This rebranding, effective October 3, 2000, positioned z/VM as the flagship hypervisor for IBM's evolving mainframe ecosystem.

Following the rebranding, z/VM continued to evolve with key enhancements for modern computing. Version 3.1 (2001) introduced full 64-bit support, enabling larger memory addressing and better performance for 64-bit guest operating systems such as z/OS and Linux. Subsequent releases, such as z/VM 4.4 (2003) with Virtual Switch support and z/VM 6.2 (2011) introducing Single System Image clustering for live guest relocation, adapted to IBM's zSeries, System z, and later hardware. As of 2025, z/VM 7.4 incorporates enhancements for security, container support, and hybrid cloud integration, maintaining its role in enterprise virtualization.

Architecture and Design

Core Components

The IBM VM operating system, known as z/VM in its modern iterations, is built around two fundamental components: the Control Program (CP) and the Conversational Monitor System (CMS). These elements form the core of its hypervisor-based design, enabling the emulation of complete hardware environments for multiple virtual machines on a single physical mainframe. CP serves as the foundational layer, acting as a hypervisor that manages real hardware resources and allocates them to guest operating systems, including CMS itself and others such as z/OS or Linux distributions.

CP provides each virtual machine with the illusion of dedicated access to a full machine, including virtual CPUs, storage, and I/O devices, through techniques such as demand paging. It operates in a supervisor state to intercept and simulate hardware operations, ensuring isolation between guests while optimizing resource utilization across the physical machine. For instance, CP dynamically maps virtual storage to real storage and handles interruptions, allowing multiple operating systems to run concurrently without interference. This structure supports large-scale consolidation, with modern instances capable of hosting thousands of virtual machines on IBM zSystems hardware.

CMS, in contrast, functions as a lightweight, single-user operating system that executes within its own virtual machine provided by CP, serving as the primary interactive environment for users and administrators. It handles user-level tasks such as file management, program execution, and command processing, relying on CP for the underlying hardware resources. The interaction between CP and CMS is symbiotic: CP delivers the virtualized infrastructure, including virtual devices and resource scheduling, while CMS manages applications and data within its isolated space, often using specialized interfaces like DIAGNOSE instructions to request services from CP. This division enables efficient personal computing and system administration, with CMS providing a conversational interface that has been a hallmark of VM since its origins.

Virtualization Principles

The virtualization approach in VM, originally introduced with VM/370 and evolved in z/VM, is built on a type-1 model where the Control Program (CP) executes directly on the bare hardware without an underlying host operating system, enabling the creation and management of multiple virtual machines. This bare-metal execution allows CP to partition the physical system's resources into isolated virtual environments, providing each with a complete and transparent simulation of the underlying hardware architecture.

At its core, VM employs full virtualization, granting each virtual machine a self-contained emulation of the entire hardware stack, including the CPU, memory, and peripheral devices such as I/O channels and storage. This operates through a trap-and-emulate mechanism, where privileged instructions from guest operating systems are intercepted by CP and emulated to maintain the illusion of dedicated hardware, so that guests run unmodified. Resource isolation is paramount, preventing interference between virtual machines while allowing secure multitasking; for instance, each VM maintains its own address space and device state, fostering reliability and ease of administration.

VM's design emphasizes efficiency through dynamic resource allocation, where CP schedules CPU cycles and manages storage to support concurrent execution across multiple VMs, optimizing utilization on high-cost mainframe hardware. This enables scalability to thousands of virtual machines on modern IBM Z servers, far exceeding the dozens supported in early implementations, by leveraging overcommitment techniques like virtual processor allocation and shared paging. Key principles include broad compatibility with diverse guest operating systems, such as z/OS for production work, Linux distributions for open-source workloads, and the Conversational Monitor System (CMS) for interactive use, all without architectural alterations.

In contrast to many contemporary hypervisors on x86 architectures, VM's virtualization is natively integrated with hardware features, such as the Start Interpretive Execution (SIE) instruction for efficient trapping and specialized I/O protocols like FICON, eliminating the need for binary translation or paravirtualization layers. CP implements these principles by handling resource dispatching and virtualization services, directly supporting the hypervisor's foundational goals of efficiency and isolation in enterprise-scale environments.

Control Program (CP)

Primary Functions

The Control Program (CP) in IBM z/VM serves as the foundational hypervisor responsible for creating, managing, and terminating virtual machines (VMs) to simulate independent computing environments for guest operating systems. VM creation occurs through entries in the z/VM user directory, which define each VM's configuration, including allocated resources and access privileges; users initiate logon via commands like LOGON, establishing the VM, while suspension and termination are handled via commands such as SUSPEND and LOGOFF, respectively, ensuring orderly resource reclamation upon logout.

CP handles resource scheduling to enable efficient sharing of physical hardware among multiple VMs, including CPU time-sharing where it dispatches virtual CPUs (up to 64 per VM) across real processors using a priority-based scheduler that supports dedicated or shared allocation modes. Storage management involves paging virtual storage to real memory, with dynamic reconfiguration allowing adjustments to main storage sizes and the use of shared segments to optimize utilization across VMs. For I/O device assignment, CP allocates virtual devices such as minidisks (logical partitions of DASD volumes) and spool files, mediating access to shared physical I/O resources while supporting advanced features like Parallel Access Volumes (PAV) for improved throughput.

System operator functions under CP include comprehensive console management, where each VM receives a virtual console (typically 3215-type) for issuing commands, and global resource control through privileged operator commands like QUERY to monitor and adjust system-wide elements such as storage and processor availability. These capabilities allow operators to oversee the entire environment, including dynamic reconfiguration of resources without halting operations.

Security features in CP enforce user authentication via directory-stored passwords required at logon, supporting encrypted protocols like SSL for remote access and integration with external systems such as RACF for enhanced auditing. VM isolation is maintained through hardware-assisted mechanisms like the Interpretive Execution Facility (SIE instruction) and two-level address translation, preventing interference between VMs or unauthorized access to shared resources, with privilege classes restricting command usage to authorized users.

Performance monitoring and tuning in CP are facilitated by built-in tools such as the CP Monitor, which collects data on CPU utilization, paging rates, and I/O activity, accessible via commands like QUERY VIRTUAL and integrated with the Performance Toolkit for VM to generate reports and identify bottlenecks. Tuning options include adjusting scheduler parameters for CPU shares and enabling features like collaborative memory management to optimize storage use dynamically.

Hypervisor Interface

The hypervisor interface in the IBM z/VM Control Program (CP) enables guest operating systems to access underlying hardware resources and CP services through a combination of hardware-assisted virtualization facilities and software-defined calls. Central to this interface is the DIAGNOSE instruction, a privileged System/370 instruction (opcode X'83') that virtual machines use to invoke CP hypervisor services, such as resource allocation, I/O operations, and timing functions. When issued by a guest, the DIAGNOSE instruction is intercepted by CP, which interprets the code and subcode to perform the requested action, ensuring isolation and controlled access to real machine resources. This mechanism allows guests to operate as if they have direct hardware access while CP mediates all privileged operations.

Key services are accessed via specific DIAGNOSE codes. For storage management, DIAGNOSE X'10' allows a guest to release pages of second-level storage, effectively altering the virtual machine's paging configuration by notifying CP of unused pages for reclamation. In I/O operations, DIAGNOSE X'24' provides guests with identifying information and status about virtual devices, including type and features, facilitating device-specific handling without direct hardware probing. I/O interception occurs transparently through CP's monitoring of guest-issued I/O instructions (e.g., SSCH for start subchannel), where CP simulates or redirects them to real devices as needed. Timing services are supported via DIAGNOSE codes such as X'258', for accessing the time-of-day (TOD) clock and interval timer, enabling guests to query or set virtual timers while CP ensures accurate real-time synchronization across virtual machines. These calls exemplify how the interface balances guest autonomy with hypervisor oversight.

Access to hypervisor instructions is governed by privilege classes defined in the z/VM user directory, which categorize CP commands and DIAGNOSE codes into eight predefined classes (A through G, plus "Any"). For instance, system operator tasks (class A) may permit broad DIAGNOSE usage for resource control, while general users (class G) are restricted to virtual machine-specific calls, preventing unauthorized access to shared resources. Installations can customize classes I-Z and 1-6 for DIAGNOSE codes, but core hypervisor services require explicit authorization via the directory's PRIVCLASS statement to maintain security. Unauthorized attempts result in program checks or intercepts, enforcing the principle of least privilege in the virtualized environment.

The interface has evolved significantly since its inception in VM/370 in 1972, which relied on trap-and-simulate handling of privileged guest instructions. Subsequent releases extended the interface: VM/SP (1980) added support for virtual storage extensions, while VM/XA (1985) incorporated the 370-XA architecture for expanded addressing and exploited the Start Interpretive Execution (SIE) instruction, a hardware facility enabling efficient interpretive execution of guest instructions, including interception of privileged operations like I/O and storage access. By VM/ESA (1990), the interface supported ESA/390 mode with enhanced interpretive execution for 31-bit addressing. Modern z/VM (version 7.2 and later) integrates z/Architecture extensions, introduced in 2000, providing 64-bit addressing, dynamic reconfiguration of virtual processors, and improved SIE performance for high-scale virtualization, allowing up to 64 virtual processors per guest. These advancements maintain backward compatibility while optimizing for contemporary workloads.

Compatibility modes ensure seamless support for diverse guests. In z/Architecture mode, guests benefit from full 64-bit capabilities, including extended addressing and cryptographic accelerations. Legacy modes, such as ESA/390, provide compatibility for 24/31-bit guests like older z/OS or VSE systems, with CP emulating missing facilities via intercepts. Specialized XC modes (ESA/XC and z/XC) extend the interface for cross-memory services and data spaces, used in advanced applications like cross-system coupling. Guests can switch modes dynamically if supported, but the hypervisor enforces mode-specific restrictions to preserve integrity.

Conversational Monitor System (CMS)

Core Features

The Conversational Monitor System (CMS) is designed as a lightweight, interactive operating system that operates within a single virtual machine per user, providing a streamlined environment for program development, file management, and interactive work without the complexity of multiple address spaces. This single-address-space approach enables direct access to virtual storage, facilitating efficient program execution and data handling for individual users. CMS employs a flat file system structure, where files are identified by a three-part name consisting of a filename (up to 8 characters), filetype (up to 8 characters), and filemode (e.g., A1 or B), allowing simple organization and access without hierarchical directories. The system's command interface supports interactive input in both uppercase and lowercase, enabling users to execute commands directly for tasks such as file operations and system queries, which promotes simplicity and rapid response in a terminal-based environment.

CMS supports multitasking within its single-user context through pipelines and EXEC scripts, allowing users to chain commands for background processing and automate workflows, as illustrated in the sketch at the end of this section. The PIPE command facilitates pipelines by connecting multiple program stages, where output from one stage serves as input to the next, enabling efficient data processing without manual intervention; for instance, a user can pipe file listings to sorting utilities. EXEC scripts, which can be written in the EXEC language or the more advanced REXX programming language, permit the creation of reusable command sequences that run as background jobs, supporting task automation such as file manipulations or repetitive queries. This capability extends CMS's interactivity by allowing limited concurrency within the user's virtual machine, though it remains constrained to sequential or pipelined operations rather than full parallelism.

A key utility in CMS is the XEDIT editor, a full-screen, line-oriented tool that serves as the primary interface for creating, editing, and managing text files. XEDIT supports modes for input, command execution, and browsing, with features like prefix commands for deleting (D), changing (C), or locating (LOCATE) lines, as well as scrolling commands such as TOP and BOTTOM for navigation. Built-in utilities complement XEDIT by providing essential file manipulation tools, including FILELIST for listing files, COPYFILE and ERASE for copying or deleting, and GET/PUT for merging content, as well as UDIFF and UPATCH (introduced in z/VM 7.4) for file differencing and patching, all accessible via the command line to maintain CMS's emphasis on simplicity. These tools operate on the flat file system, ensuring straightforward data handling without advanced structuring.

CMS integrates closely with the Control Program (CP) by leveraging its virtualization services, such as loading CMS via the IPL CMS command to initialize the user environment and accessing virtual devices like disks and terminals through CP-managed resources. This integration allows seamless interaction, where users can issue CP commands (e.g., QUERY) from within CMS or switch modes to manage virtual hardware, enabling shared access to system resources while CMS handles application-level tasks. However, CMS's design imposes limitations as a single-user system, focusing on one interactive session per virtual machine without native support for multiprocessing, which is instead managed at the CP level across multiple virtual machines. This single-user orientation prioritizes low overhead and direct control but restricts scalability for multi-user or parallel workloads within a single CMS instance.
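The following sketch shows a small CMS Pipelines command of the kind described above (EXEC and file names hypothetical); each stage's output feeds the next stage:

  /* DEDUPE EXEC - sketch: sort a file and drop duplicate records */
  /* stages: < reads a CMS file, sort orders it, unique removes   */
  /* adjacent duplicates, > writes the result back to disk        */
  'PIPE < INPUT DATA A | sort | unique | > OUTPUT DATA A'
  exit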

User Environment

Users log on to CMS by issuing the IPL CMS command from the CP READ prompt, which loads the Conversational Monitor System into the virtual machine and initiates an interactive session. Upon loading, CMS automatically searches for and executes the PROFILE EXEC file on the accessed A-disk or in the Shared File System (SFS), if present, to personalize the startup environment. This EXEC contains user-specified CP and CMS commands executed at the beginning of every terminal session, such as accessing specific minidisks or SFS directories, configuring terminal characteristics like programmable function (PF) keys, and loading macro libraries or exec procedures; a sketch of such a profile appears at the end of this section. For example, a typical PROFILE EXEC might include commands like ACCESS 191 D to link a user minidisk and SET PF12 RETRIEVE to enable command recall via a function key. Users can suppress PROFILE EXEC execution during IPL by specifying the NPROF option or run it manually with the PROFILE command.

Interactive sessions in CMS emphasize two-way communication between the user and the system via terminal input/output (I/O), typically using 3270-compatible terminals or emulators for full-screen interaction. Command entry occurs at the CMS READY prompt, where users type commands followed by parameters, with support for line editing features such as insert, delete, and overtype during input. Command recall is provided through PF key assignments, often set in the PROFILE EXEC, allowing users to retrieve and edit previous commands without retyping, which streamlines repetitive tasks. The built-in HELP facility offers online assistance directly from the terminal, displaying hierarchical information on commands, macros, messages, and usage examples; for instance, typing HELP followed by a command name retrieves its syntax, parameters, and notes. Core tools like XEDIT further support interactive editing within sessions.

Customization in the CMS user environment allows fine-grained control over resources and behaviors to suit individual workflows. Access control lists (ACLs) manage permissions for files and directories in the SFS, enabling users to grant or revoke read, write, or execute access to specific other users or groups without relying on passwords, thus facilitating secure sharing in multi-user setups. User-defined macros, implemented as EXEC procedures in REXX or the older EXEC languages, permit automation of complex command sequences; these can be created, stored as files (e.g., MYMACRO EXEC), and invoked like built-in commands, often loaded globally via the PROFILE EXEC for session-wide availability. Such macros enhance productivity by encapsulating repetitive operations, like file backups or directory listings with custom filters.

Daily workflows in CMS often combine interactive and non-interactive elements for efficient task handling in a virtual machine setting. For batch processing, users submit jobs to the CMS batch facility via virtual card readers, using CMS commands and EXECs in input files to sequence compilations, assemblies, or data processing without tying up the interactive terminal; the facility runs on a dedicated virtual machine, processing jobs sequentially and directing output to virtual printers or readers. Integration with other guest operating systems occurs through shared minidisks or inter-user communication vehicles, allowing CMS users to offload compute-intensive tasks to batch while maintaining oversight via interactive queries.

A representative workflow might involve an operator editing code interactively with XEDIT, testing it via a REXX macro, then submitting a batch job for large-scale compilation, all within the time-shared environment. The ergonomics of the CMS user environment are tailored for mainframe operators and developers operating in time-sharing scenarios, prioritizing rapid, conversational interaction to minimize latency in multi-user systems. Designed as a lightweight, single-user OS within each virtual machine, CMS supports quick file management, program execution, and debugging directly from the terminal, reducing the need for batch submissions common in non-interactive mainframe environments. This focus on immediacy and simplicity enables efficient resource sharing among hundreds of concurrent users, with features like PF key customization and online help promoting intuitive operation for routine maintenance and development tasks.

Storage Management

Minidisks

Minidisks in VM are emulated direct-access storage devices (DASD) that function as virtual disk drives attached to virtual machines, providing dedicated storage space for user data and files. They simulate physical DASD volumes, allowing virtual machines to interact with them as if they were real hardware devices, and can be formatted by the Conversational Monitor System (CMS) or other guest operating systems to support file storage needs. This emulation enables efficient resource sharing on the underlying mainframe hardware while maintaining isolation between virtual machines.

Minidisks are created using Control Program (CP) commands, primarily the DEFINE MDISK or DEFINE DISK commands, which allocate space from real DASD volumes or system paging space. For example, an administrator might issue DEFINE MDISK 0200 100 100 PACK01 to define a minidisk with virtual device address 0200 using 100 cylinders starting at cylinder 100 on volume PACK01. Temporary minidisks, known as T-disks, can be defined dynamically during a session with commands like DEFINE T3380 AS vdev CYL ncyl, drawing from available paging space for short-term use without permanent allocation. Access to minidisks supports various modes to control sharing and permissions, including read-only (R, RR, SR), read/write (W, WR, MW), and multi-access options with locking mechanisms such as virtual reserve/release or the SHARED parameter. Multi-access allows multiple virtual machines to link to the same minidisk, but write operations are serialized through these locking mechanisms to prevent conflicts, ensuring data integrity during shared use. In CMS environments, users access minidisks via file mode letters (e.g., A for the default A-disk) and commands like ACCESS or LINK, integrating them into the CMS search order.

Sizing for minidisks is flexible: ECKD minidisks emulating 3390-type volumes can span up to 1,182,006 cylinders (~812 GB) with Extended Address Volume (EAV) support, though older configurations or specific uses (e.g., the CMS file system without enhancements) may be limited to 65,520 cylinders (~45 GB), and cache eligibility to 32,767 cylinders. Emulated FBA minidisks support up to 2,147,483,640 512-byte blocks (~1 TB) for SCSI/NVMe-backed devices, though CMS file system use is practically limited to 381 GB; certain VFB-512 emulations are capped at 4,194,296 blocks (~2 GB), with blocks aligned on 4K boundaries for paging.

Management involves CP facilities such as the MDISK directory statement for definition, CPACCESS for making CMS-formatted minidisks available to CP, and QUERY MDISK for status checks, along with CMS tools like RELEASE and QUERY DISK to monitor space and detach devices. These utilities facilitate maintenance, including space reclamation and error handling, to optimize performance. One key advantage of minidisks is their portability: they can be detached, moved across systems, or reattached without disrupting the virtual machine environment, making them well suited to development workflows. Additionally, they support snapshot capabilities through IBM's FlashCopy technology, enabling point-in-time copies for safe testing and backup without affecting the original data.
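As a sketch of these commands in use, a CMS user might create, format, and discard a temporary minidisk as follows; the device address 0299 and the copied file are illustrative assumptions:

  CP DEFINE T3390 AS 0299 CYL 10       Allocate a 10-cylinder T-disk
  FORMAT 0299 B                        CMS-format it as the B-disk
  COPYFILE PROFILE EXEC A = = B        Place a working copy on it
  RELEASE B                            Release the file mode when done
  CP DETACH 0299                       Return the space to the system

Because the T-disk is drawn from paging space, its contents vanish at DETACH or logoff, which makes it convenient for scratch work without permanent allocation.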

Shared File System

The Shared File System (SFS) in VM, introduced with VM/SP Release 6 in 1988, provides a hierarchical, multi-user file system that enables controlled sharing of files and directories across virtual machines, contrasting with the more personal storage of minidisks. SFS addresses the need for centralized data access in multi-user environments by organizing files in a tree-like structure of directories and subdirectories, stored in file pools managed by dedicated SFS server virtual machines. These servers handle dynamic allocation of storage from underlying minidisks, ensuring efficient use of DASD resources while maintaining data integrity through implicit and explicit locking mechanisms that prevent conflicting concurrent writes.

The structure of SFS revolves around file pools, each comprising a collection of minidisks assigned to an SFS server virtual machine, which acts as a repository for user file spaces. Users are enrolled in a file pool by an administrator, granting them a file space within that pool; from there, they can create subdirectories up to eight levels deep, with each directory name limited to 16 characters and supporting underscores for readability. Files within SFS are record-oriented, managed in 4KB blocks, and can be shared across different systems if the file pools are accessible. SFS file pools support a large number of directories, scalable to enterprise needs, with no architected hard limit documented beyond overall system resources.

Access controls in SFS emphasize security and flexibility through owner-defined permissions and quotas. Directory and file access follows a model broadly similar to Unix, with authorities including read (for listing or viewing contents), write (for creating, modifying, or deleting), and control (for granting or revoking access). Owners use the GRANT AUTHORITY and REVOKE AUTHORITY commands to assign these authorities explicitly to individual users or to all users (PUBLIC), while file pool administrators hold overarching control. File space quotas limit storage usage per user, typically set in blocks during enrollment (e.g., 10,000 4KB blocks), preventing overuse and ensuring fair allocation across the shared pool.

Operations on SFS files and directories are performed through CMS commands that work across user boundaries. For instance, DIRLIST displays the directories a user can access, LISTFILE displays file details in a specified directory, COPYFILE transfers files between SFS directories or between SFS and accessed minidisks, and CREATE ALIAS provides shared access to a single base file without duplication. Standard CMS utilities like COPYFILE, ERASE, and RENAME apply when a directory is accessed (e.g., via ACCESS or VMLINK with an SFS target), supporting seamless integration. These operations commit changes at the end of a logical unit of work, ensuring consistency in multi-user scenarios.

The primary benefits of SFS include centralized storage that reduces the proliferation of personal minidisks, simplified administration through consolidated DASD usage, and enhanced collaboration through secure, file-level sharing. By leveraging SFS servers for dynamic allocation and locking, VM environments achieve higher storage utilization and reliability, particularly in large-scale deployments where thousands of users can access shared resources without the performance degradation of excessive minidisk linking. This design has made SFS a cornerstone of data sharing in enterprise mainframe operations since its debut.
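A brief sketch of these operations, assuming the default user file pool name VMSYSU and hypothetical users ALICE and BOB:

  CREATE DIRECTORY VMSYSU:ALICE.PROJECTS
  GRANT AUTHORITY VMSYSU:ALICE.PROJECTS TO BOB (READ NEWREAD
  ACCESS VMSYSU:ALICE.PROJECTS B
  QUERY LIMITS

The first two commands, issued by ALICE, create a subdirectory and give BOB read authority over its current (READ) and future (NEWREAD) files; the last two, issued by BOB, access the shared directory as his B-disk and display his own file space quota.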

Commands and Extensions

CP Commands

CP commands form the primary interface for users and operators to interact with the Control Program (CP) in z/VM, enabling management of virtual machines, devices, and system resources. These commands are entered in the CP environment, accessed by pressing the attention key or entering #CP from the Conversational Monitor System (CMS). Commands are case-insensitive, and syntax typically follows the structure command [operand] [options], where operands are separated by spaces unless specified otherwise and options can often be arranged in any order. Abbreviations are supported for efficiency, such as IP for IPL, QU for QUERY, DE for DEFINE, and TR for TRACE. Comments within command lines begin with /*. Privilege levels determine access, categorized into classes: A (primary system operator), B (system resource operation), C (system programming), D (spooling operation), E (system analysis), F (IBM service), G (general user), and installation-defined classes H-Z or 1-6; some commands carry no class restriction, while others require specific authorization.

Basic commands handle essential virtual machine operations. The IPL command initiates a program or system load from a specified device, such as IPL 0100 to load from virtual device 0100 or IPL CMS to start the CMS operating system with optional parameters like PARMREGS=0-15; it requires privilege class G or higher and simulates an initial program load, optionally with a CLEAR option to reset storage. QUERY provides status information, for example QUERY USERS to list active virtual machines or QUERY STORAGE to display virtual storage allocation; the required privilege varies by operand, with G for user-level queries and A for system-wide details like QUERY LPARS. SUSPEND pauses the current or a specified virtual machine with SUSPEND or SUSPEND userid, halting execution until resumed, and requires class G, A, B, or C. The complementary RESUME command restarts a suspended machine using RESUME or RESUME userid, with the same privilege requirements.

Resource management commands configure and allocate devices and storage. DEFINE sets up virtual devices or resources, such as DEFINE VDEV 0200 for a virtual device or DEFINE STORAGE 16G to allocate 16 gigabytes of virtual storage; defining certain resources, such as storage, causes a system reset of the virtual machine. DETACH removes a device from a virtual machine, as in DETACH 0200 or DETACH NIC 0500 for a network interface card, requiring class G for a user's own devices or B for system ones. LINK connects a user to another user's minidisk for sharing, using syntax like LINK MAINT 191 191 MR to link the MAINT user ID's minidisk 191 in multiple-read mode; it demands prior authorization and class G, with modes including R (read), W (write), and RR (read regardless of existing links). These commands support scripting in console sessions, such as chaining IPL and LINK in a startup sequence.

Operator commands facilitate communication and diagnostics. MSG broadcasts or targets messages, for instance MSG ALL System maintenance at 02:00 to notify all users or MSG OPERATOR Report issue; broadcasts require operator classes while user-to-user messages need only class G. TRACE enables debugging by monitoring events, such as TRACE ON for general tracing or TRACE I/O 0100-010F for I/O on specific devices; it requires class A, B, C, E, or G depending on scope, with facilities such as trace events and trace sets for detailed logging, often used in scripts to capture activity during VM sessions.
Command   Abbrev.  Privilege classes   Example usage
IPL       IP       G, A, B             IPL 191 (loads from device 191)
QUERY     QU       Varies (G to A)     QUERY DASD 191 (checks minidisk status)
SUSPEND   -        G, A, B, C          SUSPEND USER1 (pauses VM USER1)
RESUME    -        G, A, B, C          RESUME (restarts current VM)
DEFINE    DE       B, A, C, E          DEFINE VDEV 0300 (defines virtual device)
DETACH    -        G, B                DETACH 0300 (removes device 0300)
LINK      -        G                   LINK MAINT 191 618 R (links read-only minidisk)
MSG       -        A, G                MSG USER2 Check logs (sends to USER2)
TRACE     TR       A, B, C, E, G       TRACE EVENT ON (traces events)
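A short sketch of how these commands combine in a console session; the user ID MAINT is standard, but the device numbers and file mode are illustrative:

  #CP QUERY USERS                  List the logged-on virtual machines
  #CP LINK MAINT 193 0193 RR       Link MAINT's 193 disk read-only
  ACCESS 0193 E                    (CMS) Access the linked disk as E
  #CP DETACH 0193                  Drop the link when finished

The #CP prefix routes each command to the Control Program without leaving CMS, while the ACCESS command runs in CMS itself to bring the linked minidisk into the file mode search order.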

OpenEdition and Networking

OpenExtensions (originally introduced as OpenEdition), an optional feature announced with VM/ESA Version 2 Release 1 on September 13, 1994, extends the Conversational Monitor System (CMS) with POSIX-compliant interfaces and a Unix-like environment to facilitate modern networking and application portability in VM. This extension provides a hierarchical, byte-oriented file system known as the Byte File System (BFS), which supports case-sensitive filenames, symbolic links, and permissions akin to Unix standards, enabling seamless integration of Unix tools within CMS without disrupting traditional minidisk operations. OpenEdition was renamed OpenExtensions starting with z/VM Version 3 Release 1.0 in 2001.

The POSIX extensions in OpenExtensions include compliance with IEEE 1003.1 standards for system interfaces, encompassing process management functions like fork(), exec(), and signals, as well as a Korn shell (ksh) and utilities such as ls, grep, awk, sed, and gzip, creating a Unix-like shell environment directly in CMS. These features allow developers to port Unix applications to VM with minimal changes, supporting multitasking and threading via APIs like pthread_create(), while maintaining compatibility with CMS pipelines and REXX scripting.

Networking in OpenExtensions is powered by the TCP/IP stack, offered as a priced feature for VM/ESA in 1997 but building on IBM's late-1980s TCP/IP implementations for VM, providing robust IPv4 support for VM guests with IPv6 support added in subsequent releases. The core component is the TCPIP service machine, a dedicated virtual machine that manages the protocol stack and hosts servers for FTP (including TFTP for high-speed transfers), Telnet (with tn3270 emulation for 3270 terminal access), and SMTP for email services, all configured via the PROFILE TCPIP file and supporting integration with the SFS and BFS file systems. As VM evolved from VM/ESA to z/VM starting in 2000, OpenExtensions's networking capabilities advanced with enhancements like SSL/TLS support in the TCP/IP server for secure connections (e.g., via the GSKIT security libraries), IPv6 addressing for modern internet protocols, and performance optimizations such as reduced I/O for SMTP and offloading to OSA adapters for higher throughput. In z/VM 7.4 (released September 2024), TCP/IP includes enhanced QDIO support for improved network performance and resiliency (function level 740, preinstalled). These improvements enable hybrid environments where guests, including Linux distributions, connect to cloud infrastructures, leveraging the TCP/IP stack for seamless data exchange and remote management without requiring separate hardware.
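A minimal sketch of a PROFILE TCPIP fragment along these lines; the device name, address 0600, and IP address are illustrative assumptions, and exact statement syntax varies by release:

  ; Define an OSA-Express adapter in QDIO mode and its network link
  DEVICE ETH0DEV OSD 0600
  LINK ETH0 QDIOETHERNET ETH0DEV
  ; Assign the stack's IPv4 home address to that link
  HOME 192.0.2.10 ETH0
  ; Autolog the FTP server machine and reserve its control port
  AUTOLOG
     FTPSERVE 0
  ENDAUTOLOG
  PORT
     21 TCP FTPSERVE
  ; Activate the device
  START ETH0DEV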

Versions and Compatibility

Release Timeline

The major releases of z/VM from version 4.3 onward marked significant advancements in support for new hardware, scalability, integration with guest operating systems like Linux, and performance optimizations. These versions evolved from the earlier VM/ESA lineage, which had transitioned to z/VM with the introduction of 64-bit addressing in prior releases.

z/VM 4.3, generally available on April 30, 2002, extended exploitation of the 64-bit z/Architecture, enabling the hypervisor to run on eServer zSeries processors such as the z990 and leveraging 64-bit virtual addressing for improved efficiency. z/VM 5.2, released on July 27, 2005, enhanced scalability for environments with large real memory configurations, allowing better utilization of up to 128 GB of central storage through improved paging and storage management mechanisms that reduced overhead in memory-constrained scenarios. z/VM 6.3, with general availability on July 26, 2013, focused on strengthened Linux guest integration via features like improved directory management, alongside enhanced security through expanded cryptographic support and compliance with Operating System Protection Profile requirements when using RACF. z/VM 7.1, generally available on September 21, 2018, introduced the continuous delivery model for more frequent feature updates and bolstered multi-system clustering support via enhancements to Single System Image (SSI), while delivering performance improvements such as reduced CPU overhead for dump processing and better resource dispatching. z/VM 7.3, released on September 16, 2022, provided compatibility with IBM z16 processors, including support for up to 4 TB of real memory and exploitation of hardware features like the Telum processor for optimized AI inferencing in virtualized workloads.

Support lifecycle management for these releases includes defined end-of-service (EOS) dates to guide migrations. For example, z/VM 6.4 reached EOS on March 31, 2021, while z/VM 7.1 reached EOS on March 31, 2023; later versions like 7.3 remain in service with ongoing updates.

Current Status and Features

IBM z/VM 7.4, the latest version of the operating system, became generally available on September 20, 2024. This release introduces a linear service model, which simplifies the application of updates by incorporating all prior preventive service recommended (PSR) fixes and enhancements into each new program temporary fix (PTF), reducing complexity and improving the service experience for users. Installation processes have been enhanced for better consumability, including support for USB flash drives as a distribution medium in place of physical DVDs, automated panel interfaces for electronic ISO images, and a requirement for current IBM Z hardware with 3270 console access. Additionally, z/VM 7.4 updates hardware support for compatibility with the IBM z16 family and later systems, including Architecture Level Set (ALS) advancements for improved performance on these platforms.

Key new features in z/VM 7.4 focus on modern workloads, including enhanced support for containerized, cloud-native applications such as Red Hat OpenShift and IBM Cloud Paks running in Linux guests under z/VM. For AI acceleration, the release leverages the embedded AI capabilities of the IBM z16's Telum processor, allowing efficient execution of inferencing workloads directly in hardware. Hybrid cloud APIs have been expanded to bridge traditional mainframe operations with cloud environments, supporting modernization efforts via tools like the IBM Cloud Infrastructure Center for private cloud management. Compatibility remains robust, with full backward support for legacy guest operating systems in ESA/390 mode and forward integration with z/OS 3.2, ensuring seamless operation of mixed environments including Linux distributions such as Red Hat Enterprise Linux, SUSE, and Ubuntu.

In terms of scalability, z/VM 7.4 supports up to 80 logical processors and 4 TB of real memory per image, with guest virtual machines capable of utilizing up to 2 TB of virtual storage, enabling larger single-system images and up to eight-member Single System Image (SSI) clusters for high-availability configurations. Ongoing development emphasizes sustainability through optimized resource utilization and the energy-efficient design of IBM Z hardware, as well as quantum-safe cryptography via support for Quantum-Safe APIs on Crypto Express 8S cards, preparing the platform for post-quantum security threats. These enhancements position z/VM 7.4 as a premier virtualization platform for hybrid cloud transitions while maintaining its core strengths in reliability.

Marketing and Adoption

Historical Marketing

IBM announced VM/370 on August 2, 1972, positioning it as a cost-effective solution for System/370 mainframes, enabling multiple users and virtual machines to share resources efficiently. The system was marketed to large multi-system installations, universities, and organizations migrating between operating systems like DOS and OS/VS, highlighting its support for up to 16 million bytes of virtual storage per user and its role in boosting productivity through concurrent OS execution and application testing. This initial promotion emphasized VM/370's compatibility with System/370 models such as the 135, 145, 155 II, 158, 165 II, and 168, framing it as a tool for growth without requiring extensive hardware upgrades.

In the 1980s, IBM's campaigns for VM/SP, introduced in 1980 with subsequent releases like Release 6 in 1988, focused on enhanced productivity via the Conversational Monitor System (CMS) and backward compatibility with earlier VM implementations. Marketing materials underscored VM/SP's interactive capabilities for time-sharing and its support for multiprocessor configurations with up to 32 channels, appealing to data processing managers seeking reliable, high-performance environments for diverse workloads. Key promotional efforts included technical documentation and user conferences, where demonstrations showcased CMS's ease of use for development and its integration with productivity tools like PROFS, an office suite released in 1981.

VM faced internal challenges from competition with MVS, IBM's batch-oriented system, leading to documented rivalries between development teams and efforts to broaden VM's appeal beyond specialized users. This competition prompted slogans and narratives aimed at democratizing access, though specific phrasing like "VM for the masses" emerged informally in community discussions rather than official campaigns. IBM countered through educational materials, including Redbooks that detailed VM's advantages in multi-OS environments, and trade show demos at events like SHARE conferences, which highlighted virtual machine isolation and guest OS support to attract enterprise adopters. For instance, VM/SP HPO guides promoted its migration benefits and performance optimizations for CMS-based applications.

By the VM/ESA era in the 1990s, marketing shifted from mainframe-specific messaging to broader enterprise themes, emphasizing system consolidation, scalability, and e-business enablement. VM/ESA was promoted for reducing hardware costs through resource pooling and supporting multiple guest systems, including early Linux ports, while conserving investments in legacy environments. This evolution was supported by fact sheets and performance reports that positioned VM/ESA as a flexible platform for adapting to changing business needs, marking a transition toward modern virtualization concepts.

Modern Usage and Integration

In contemporary enterprise environments, z/VM serves as the primary hypervisor for hosting Linux workloads on IBM Z and LinuxONE systems, enabling virtualization for mission-critical applications that demand reliability and scalability. This virtualization layer consolidates thousands of virtual machines onto a single mainframe, optimizing resource utilization and reducing infrastructure footprint compared to distributed x86 servers. Organizations leverage z/VM to run Linux distributions such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server, supporting workloads in transaction processing and data analytics where low-latency execution is essential.

z/VM integrates seamlessly with modern cloud and container technologies, facilitating hybrid cloud architectures. For instance, it supports IBM Cloud Pak for Data, allowing AI model development and deployment on virtual machines provisioned via z/VM within OpenShift environments on IBM Z. Kubernetes orchestration is enabled through Red Hat OpenShift on IBM Z, where z/VM provides the underlying virtualization for control-plane and worker nodes, bridging traditional virtual machines with containerized applications. Additionally, z/VM complements z/OS in hybrid setups, enabling shared workloads across mainframe and distributed systems for unified data management and integration via IBM Cloud Pak for Integration. In 2024, IBM announced z/VM 7.4, available from September 20, highlighting improvements in service delivery, AI integration, and support for hybrid cloud modernization.

Adoption of z/VM remains strong in sectors requiring robust security and scalability, such as finance and government. In the financial industry, including banking and insurance, IBM Z systems running z/VM underpin core operations at a large share of the world's major institutions, handling high-volume transactions with inherent reliability and isolation features. Government agencies utilize it for secure virtualization of sensitive data processing, with implementations in public-sector systems that must comply with stringent regulatory standards. Hundreds of organizations across these sectors deploy z/VM, drawn to its capacity for thousands of concurrent virtual machines per system.

Looking ahead, z/VM is poised to play a key role in emerging workloads, particularly AI and edge computing. It supports AI frameworks such as TensorFlow and PyTorch on Linux guests, accelerating model training and inference for real-time analytics in hybrid environments. For edge scenarios, z/VM enables deployment of AI workloads on compact IBM Z configurations, processing data closer to its sources in latency-sensitive industries. Sustainability efforts benefit from z/VM's efficient resource sharing on energy-optimized mainframes, reducing carbon footprints by consolidating workloads that would otherwise require multiple x86 servers.

Despite these advantages, adoption faces challenges, including a persistent shortage of mainframe expertise. The skills gap affects day-to-day operations, with retiring veterans outpacing new talent acquisition and driving increased reliance on automation and outside services for systems management. Migrating workloads from x86 hypervisors to z/VM also requires addressing complexities in resource planning, compatibility testing, and staff training, often necessitating specialized tools to avoid disruption in production environments.
