CICS
from Wikipedia

Other names: Customer Information Control System
Initial release: July 8, 1969
Stable release: CICS Transaction Server V6.1 / June 17, 2022[1]
Operating system: z/OS, z/VSE
Platform: IBM Z
Type: Teleprocessing monitor
License: Proprietary
Website: www.ibm.com/it-infrastructure/z/cics

IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE.

CICS family products are designed as middleware and support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects.[2] This processing is usually interactive (screen-oriented), but background transactions are possible.

CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use, particularly with respect to communication with diverse terminal devices.

Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that if for any reason a part of the transaction fails all recoverable changes can be backed out.
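The backout guarantee can be illustrated with a short sketch (Python, purely illustrative; CICS itself implements this with logging and syncpoints, not with this API):

```python
# Illustrative sketch only (not a CICS API): a unit of work that backs
# out all recoverable changes when any step of the transaction fails.
class UnitOfWork:
    def __init__(self, store):
        self.store = store
        self.undo_log = []                  # before-images for backout

    def update(self, key, value):
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def run(self, steps):
        try:
            for step in steps:
                step(self)
            self.undo_log.clear()           # commit: discard before-images
            return True
        except Exception:
            # back out: restore before-images in reverse order
            for key, old in reversed(self.undo_log):
                if old is None:
                    self.store.pop(key, None)
                else:
                    self.store[key] = old
            self.undo_log.clear()
            return False

def fail_step(uow):
    raise RuntimeError("simulated mid-transaction failure")

accounts = {"acct1": 100, "acct2": 50}
# A transfer whose second step fails: the earlier debit is backed out.
ok = UnitOfWork(accounts).run([lambda u: u.update("acct1", 70), fail_step])
```

After the failed run, the store is exactly as it was before the transaction started, which is the property the text describes.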

While CICS TS has its highest profile among large financial institutions, such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS. Other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications.

Recent CICS TS enhancements include new capabilities to improve the developer experience, including the choice of APIs, frameworks, editors, and build tools, together with updates in the key areas of security, resilience, and management. Earlier CICS TS releases added support for Web services and Java, event processing, Atom feeds, and RESTful interfaces.

History

Chart depicting high-level architecture of CICS (in French)

CICS was preceded by an earlier, single-threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed to allow these transactions to execute under CICS with no change to the original application programs. IBM's Customer Information Control System (CICS) was first developed in conjunction with Michigan Bell in 1966.[3] Ben Riggins was an IBM systems engineer at Virginia Electric Power Co. when he came up with the idea for the online system.[4]

CICS was originally developed in the United States at the IBM Development Center in Des Plaines, Illinois, beginning in 1966 to address requirements from the public utility industry. The first CICS product was announced in 1968, named Public Utility Customer Information Control System, or PU-CICS. It became clear immediately that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system.

IBM Hursley, where much of the CICS development has been done, 2008

For the next few years, CICS was developed in Palo Alto and was considered a less important "smaller" product than IMS which IBM then considered more strategic. Customer pressure kept it alive, however. When IBM decided to end development of CICS in 1974 to concentrate on IMS, the CICS development responsibility was picked up by the IBM Hursley site in the United Kingdom, which had just ceased work on the PL/I compiler and so knew many of the same customers as CICS. The core of the development work continues in Hursley today alongside contributions from labs in India, China, Russia, Australia, and the United States.

Early evolution


CICS originally only supported a few IBM-brand devices like the 1965 IBM 2741 Selectric (golf ball) typewriter-based terminal. The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later.

In the early days of IBM mainframes, computer software was free – bundled at no extra charge with computer hardware. The OS/360 operating system and application support software like CICS were "open" to IBM customers long before the open-source software initiative. Corporations like Standard Oil of Indiana (Amoco) made major contributions to CICS.

The IBM Des Plaines team tried to add support for popular non-IBM terminals like the ASCII Teletype Model 33 ASR, but the small low-budget software development team could not afford the $100-per-month hardware to test it. IBM executives incorrectly felt that the future would be like the past, with batch processing using traditional punch cards.

IBM reluctantly provided only minimal funding when public utility companies, banks, and credit-card companies demanded a cost-effective interactive system (similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservation system) that gave their telephone operators high-speed access to, and update of, customer information without waiting for overnight batch-processing punch-card systems.

When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP, the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. It was then given back to IBM for free distribution to others.

In a few years,[when?] CICS generated over $60 billion in new hardware revenue for IBM, and became their most-successful mainframe software product.

In 1972, CICS was available in three versions – DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7), for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines which ran OS/360.[5]

In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases), relocated to California and continued CICS development at IBM's Palo Alto Development Center. IBM executives did not recognize value in software as a revenue-generating product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM should provide its own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal (instead of the incompatible Intel chip and relatively immature MS-DOS).

Beginning of a CICSGEN stage one module, 1982

Because of the limited capacity of even large processors of that era every CICS installation was required to assemble the source code for all of the CICS system modules after completing a process similar to system generation (sysgen), called CICSGEN, to establish values for conditional assembly-language statements. This process allowed each customer to exclude support from CICS itself for any feature they did not intend to use, such as device support for terminal types not in use.

CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multi-threaded processing architecture, its relative simplicity for developing terminal-based real-time transaction applications, and many open-source customer contributions, including both debugging and feature enhancement.

Z notation


Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement.[6]

CICS as a distributed file server


In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments.[7]

In newer versions of CICS, support for DDM has been removed. Support for the DDM component of CICS z/OS was discontinued at the end of 2003, and was removed from CICS for z/OS in version 5.2 onward.[8] In CICS TS for z/VSE, support for DDM was stabilised at V1.1.1 level, with an announced intention to discontinue it in a future release.[9] In CICS for z/VSE 2.1 onward, CICS/DDM is not supported.[10]

CICS and the World Wide Web


CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade. CICS Web and Document APIs were enhanced in CICS TS V1.3 to enable web-aware applications to be written to interact more effectively with web browsers.

CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases. A multi-threaded JVM resource called the JVMSERVER was introduced in CICS TS version 4.1; this was further enhanced to use 64-bit JVM technology in version 5.1. Version 5.1 also saw the introduction of the WebSphere Liberty profile web container, and WebSphere Liberty was ultimately fully embedded into CICS Transaction Server in version 5.3. Numerous web-facing technologies could be hosted in CICS using Java; this ultimately resulted in the removal of the native CORBA and EJB technologies.

CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other Enterprise applications, and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL, and converting them into WSDL defined Web Services, with little or no program changes. This technology saw regular enhancements over successive releases of CICS.

CICS TS V4.1 and V4.2 saw further enhancements to web connectivity, including a native implementation of the Atom publishing protocol.

Many of the newer web-facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the SOAP for CICS technology preview SupportPac for TS V2.2, and the ATOM SupportPac for TS V3.1. This approach was used to introduce JSON support for CICS TS V4.2, a technology that went on to be integrated into CICS TS V5.2.

The JSON technology in CICS is similar to earlier SOAP technology, both of which allowed programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems.

Many partner products have also been used to interact with CICS. Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS.

Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems, and can access remote systems; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; devices, users and servers can interact with CICS using standards-based technologies; and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies.

MicroCICS


By January 1985, a consulting company founded in 1969, which had built "massive on-line systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became MicroCICS.[11] The initial focus was the IBM XT/370 and IBM AT/370.[12]

CICS Family


Although "CICS" is usually taken to mean CICS Transaction Server, the CICS Family refers to a portfolio of transaction servers, connectors (called CICS Transaction Gateway), and CICS Tools.

CICS on distributed platforms—not mainframes—is called IBM TXSeries. TXSeries is distributed transaction processing middleware. It supports C, C++, COBOL, Java™ and PL/I applications in cloud environments and traditional data centers. TXSeries is available on AIX, Linux x86, Windows, Solaris, and HP-UX platforms.[13] CICS is also available on other operating systems, notably IBM i and OS/2. The z/OS implementation (i.e., CICS Transaction Server for z/OS) is by far the most popular and significant.

Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS,[14][11] which was a single-user version of CICS designed for development use, the applications later being transferred to an MVS or DOS/VS system for production execution.[15][16] Later, in 1988, IBM released CICS/VM.[17][18] CICS/VM was intended for use on the IBM 9370, a low-end mainframe targeted at departmental use; IBM positioned CICS/VM running on departmental or branch office mainframes for use in conjunction with a central mainframe running CICS for MVS.[19]

CICS Tools


Provisioning, management and analysis of CICS systems and applications is provided by CICS Tools. This includes performance management as well as deployment and management of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS Tools are: CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS, and CICS Configuration Manager for z/OS.

Releases and versions


CICS Transaction Server for z/OS has used the following release numbers:

Version Announcement Date Release date End of Service Date Features
Unsupported: CICS Transaction Server for OS/390 1.1 1996-09-10[20] 1996-11-08 2001-12-31
Unsupported: CICS Transaction Server for OS/390 1.2 1997-09-09[20] 1997-10-24 2002-12-31
Unsupported: CICS Transaction Server for OS/390 1.3 1998-09-08[20] 1999-03-26 2006-04-30
Unsupported: CICS Transaction Server for z/OS 2.1 2001-03-13[21] 2001-03-30 2002-06-30
Unsupported: CICS Transaction Server for z/OS 2.2 2001-12-04[22] 2002-01-25 2008-04-30
Unsupported: CICS Transaction Server for z/OS 2.3 2003-10-28[23] 2003-12-19 2009-09-30
Unsupported: CICS Transaction Server for z/OS 3.1 2004-11-30[24] 2005-03-25 2015-12-31
Unsupported: CICS Transaction Server for z/OS 3.2 2007-03-27[25] 2007-06-29 2015-12-31
Unsupported: CICS Transaction Server for z/OS 4.1 2009-04-28[26] 2009-06-26 2017-09-30
Unsupported: CICS Transaction Server for z/OS 4.2 2011-04-05[27] 2011-06-24 2018-09-30
Unsupported: CICS Transaction Server for z/OS 5.1 2012-10-03[28] 2012-12-14 2019-07-01
Unsupported: CICS Transaction Server for z/OS 5.2 2014-04-07[29] 2014-06-13 2020-12-31
Unsupported: CICS Transaction Server for z/OS 5.3 2015-10-05[30] 2015-12-11 2021-12-31
Unsupported: CICS Transaction Server for z/OS 5.4 2017-05-16[31] 2017-06-16 2023-12-31
Unsupported: CICS Transaction Server for z/OS 5.5 2018-10-02[32] 2018-12-14 2025-09-30
Supported: CICS Transaction Server for z/OS 5.6 2020-04-07[33] 2020-06-12 Support for Spring Boot, Jakarta EE 8, Node.js 12. New JCICSX API with remote development capability. Security, resilience and management enhancements.
Supported: CICS Transaction Server for z/OS 6.1 2022-04-05[34] 2022-06-17 Support for Java 11, Jakarta EE 9.1, Eclipse MicroProfile 5, Node.js 12, TLS 1.3. Security enhancements and simplifications. Region tagging.
Latest version: CICS Transaction Server for z/OS 6.2 2024-04-09[35] 2024-06-14
Legend:
Unsupported
Supported
Latest version
Preview version
Future version

Programming


Programming considerations


Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS reentrant / reusable control programs meant that, with judicious "pruning," multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic core physical memory (including the operating system).

Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to versions of OS/360 in 1972, the 4K strategy became even more important as a way to reduce unproductive resource-contention overhead from paging and thrashing.

The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired. Many CICS application programs continued to be written in assembler language, even after COBOL and PL/I support became available.

With 1960s-and-1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts. When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all).

Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program (or use of operating system memory) was restricted (by convention only).
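The hazard that quasi-reentrancy avoids can be sketched as follows (an illustrative Python model of two interleaved transaction threads, not CICS code):

```python
# Illustrative sketch: why CICS application programs had to be
# quasi-reentrant. A "program" whose working storage is shared static
# data gives wrong answers when two transaction threads interleave;
# per-invocation (dynamic) storage does not.

shared_work_area = {"value": 0}        # static storage shared by all threads

def double_nonreentrant(x, yield_point):
    shared_work_area["value"] = x      # thread A parks its input here...
    yield_point()                      # ...and is suspended; thread B runs
    return shared_work_area["value"] * 2

def double_reentrant(x, yield_point):
    work = x                           # per-invocation working storage
    yield_point()
    return work * 2

def interleave(program):
    # Deterministically simulate thread B running program(99) while
    # thread A is suspended at its yield point.
    results = {}
    def thread_b():
        results["b"] = program(99, lambda: None)
    results["a"] = program(7, thread_b)
    return results["a"], results["b"]
```

With the non-reentrant version, thread A asks for 7 doubled but gets thread B's answer of 198, because B overwrote the shared work area while A was suspended; the reentrant version returns 14 correctly.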

Advertisement for CICS debugging product, 1978

Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or fail to use the necessary restrictive compile time options. This resulted in "non-re-entrant" code that was often unreliable, leading to spurious storage violations and entire CICS system crashes.

Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key, including the CICS kernel code. Program corruption and CICS control block corruption were frequent causes of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code behind complex transient timing errors could be a very difficult problem for operating-system analysts.

These shortcomings persisted for multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in TS V3.3, V4.1 and V5.2 with the Storage Protection, Transaction Isolation and Subspace features respectively, which utilize operating system hardware features to protect the application code and the data within the same address space even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions.

Macro-level programming


When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, a request to read a record from a file was made by a macro call to the CICS "File Control Program" and might look like this:

DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE,....etc.

This gave rise to the later terminology "Macro-level CICS."

When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus preparing an HLL application was effectively a "two-stage" compile: output from the preprocessor was fed into the HLL compiler as input.

COBOL considerations: unlike PL/I, IBM COBOL does not normally provide for the manipulation of pointers (addresses). In order to allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL), which are set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section, and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application, to allow access to the corresponding structure in the Linkage Section.[36]
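The effect of a BLL cell can be modelled in miniature: a BLL is just an address, and setting it makes a predeclared structure layout map onto storage that the runtime owns. This Python ctypes sketch is purely illustrative (the names CustomerRecord and cics_storage are invented for the example):

```python
# Illustrative sketch of the BLL idea using ctypes: the runtime hands
# the program a cell containing an address; "setting the BLL" means
# interpreting that address as the program's declared record layout,
# giving direct access to storage the program does not own.
import ctypes

class CustomerRecord(ctypes.Structure):       # the "Linkage Section" layout
    _fields_ = [("acct_no", ctypes.c_uint32),
                ("balance", ctypes.c_int32)]

# Storage owned by the "system", outside the application program.
cics_storage = CustomerRecord(acct_no=12345, balance=5000)

# The BLL cell: nothing more than the address of that storage.
bll_cell = ctypes.addressof(cics_storage)

# The application "sets the BLL": the address is reinterpreted as the
# record layout, so field accesses update the system-owned storage.
record = ctypes.cast(bll_cell, ctypes.POINTER(CustomerRecord)).contents
record.balance -= 700                         # updates cics_storage in place
```

The update made through `record` is visible in `cics_storage` because both names describe the same bytes, which is exactly what setting a BLL achieved for a COBOL Linkage Section item.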

Command-level programming


During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "Command-level CICS" which still supported the older programs but introduced a new API style to application programs.

A typical Command-level call might look like the following:

 EXEC CICS
     SEND MAPSET('LOSMATT') MAP('LOSATT')
 END-EXEC

The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and on the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So preparing application programs for later execution still required two stages. It was possible to write "mixed-mode" applications using both Macro-level and Command-level statements.

Initially, at execution time, the command-level commands were converted using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs. But when the CICS Kernel was re-written for TS V3, EXEC CICS became the only way to program CICS applications, as many of the underlying interfaces had changed.
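What the translation stage does can be sketched in a few lines. This illustrative Python fragment rewrites an EXEC CICS ... END-EXEC block into a CALL to a stub; the stub name DFHEI1 follows the convention used in real translated COBOL, but the generated argument list here is a simplified placeholder:

```python
# Illustrative sketch of the command-level pre-compile step: find each
# EXEC CICS ... END-EXEC block and replace it with a CALL to a stub
# subroutine. Real translators also build an argument list describing
# the command; that part is elided ("USING ...") in this sketch.
import re

EXEC_BLOCK = re.compile(r"EXEC\s+CICS\s+(.*?)\s*END-EXEC\.?",
                        re.DOTALL | re.IGNORECASE)

def translate(cobol_source):
    def to_call(match):
        command = " ".join(match.group(1).split())   # normalize whitespace
        return f"CALL 'DFHEI1' USING ... *> was: {command}"
    return EXEC_BLOCK.sub(to_call, cobol_source)
```

Running it on the SEND MAPSET example above yields a single CALL statement carrying the original command text in a trailing COBOL comment, which is the shape of output the HLL compiler then consumes.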

Run-time conversion


The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. This meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only.

By this time, there were perhaps millions of programs worldwide that had been in production for decades in many cases. Rewriting them often introduced new bugs without necessarily adding new features. There were a significant number of users who ran CICS V2 application-owning regions (AORs) to continue to run macro code for many years after the change to V3.

It was also possible to execute old Macro-level programs using conversion software such as APT International's Command CICS.[37]

New programming styles


Recent CICS Transaction Server enhancements include support for a number of modern programming styles.

CICS Transaction Server Version 5.6[38] introduced enhanced support for Java to deliver a cloud-native experience for Java developers. For example, the new CICS Java API (JCICSX) allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer's local workstation. A set of CICS artifacts on Maven Central enable developers to resolve Java dependencies using popular dependency management tools such as Apache Maven and Gradle. Plug-ins for Maven (cics-bundle-maven) and Gradle (cics-bundle-gradle) are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA, and Visual Studio Code. In addition, Node.js z/OS support is enhanced for version 12, providing faster startup, better default heap limits, updates to the V8 JavaScript engine, etc. Support for Jakarta EE 8 is also included.

CICS TS 5.5 introduced support for IBM SDK for Node.js, providing a full JavaScript runtime, server-side APIs, and libraries to efficiently build high-performance, highly scalable network applications for IBM Z.

CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so Java EE applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of Java EE applications.

In addition, CICS placed an emphasis on "wrapping" existing application programs inside modern interfaces so that long-established business functions can be incorporated into more modern services. These include WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions.

Transactions


A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM Z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing.

CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, Rexx, and Java.

Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks depending on the terminal type used. An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS.

 EXEC CICS
     RECEIVE MAPSET('LOSMATT') MAP('LOSATT') INTO(OUR-MAP)
 END-EXEC.

For technical reasons, the arguments to some command parameters must be quoted and some must not be, depending on what is being referenced. Most programmers work from a reference book until they get the hang of which arguments are quoted, or they use a "canned template" of example code that they copy, paste, and edit to change the values.

Example of BMS Map Code


Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set – a load module in a CICS load library – and a symbolic map set – a structure definition or DSECT in PL/I, COBOL, assembler, etc. which was copied into the source program.[39]

 LOSMATT DFHMSD TYPE=MAP,                                               X
                MODE=INOUT,                                             X
                TIOAPFX=YES,                                            X
                TERM=3270-2,                                            X
                LANG=COBOL,                                             X
                MAPATTS=(COLOR,HILIGHT),                                X
                DSATTS=(COLOR,HILIGHT),                                 X
                STORAGE=AUTO,                                           X
                CTRL=(FREEKB,FRSET)
 *
 LOSATT  DFHMDI SIZE=(24,80),                                           X
                LINE=1,                                                 X
                COLUMN=1
 *
 LSSTDII DFHMDF POS=(1,01),                                             X
                LENGTH=04,                                              X
                COLOR=BLUE,                                             X
                INITIAL='MQCM',                                         X
                ATTRB=PROT
 *
         DFHMDF POS=(24,01),                                            X
                LENGTH=79,                                              X
                COLOR=BLUE,                                             X
                ATTRB=ASKIP,                                            X
                INITIAL='PF7-          8-           9-          10-     X
                    11-            12-CANCEL'
 *
           DFHMSD   TYPE=FINAL
           END

Structure

Chart showing a particular task invocation of CICS, 2010

In the z/OS environment, a CICS installation comprises one or more "regions" (generally referred to as a "CICS Region"),[40] spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it is a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS). Upon restart, a parameter determines whether the start should be "Cold" (no recovery) or "Warm"/"Emergency" (using a warm shutdown, or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long time as all the definitions are re-processed.

Installations are divided into multiple address spaces for a wide variety of reasons, such as:

  • application separation,
  • function separation,
  • avoiding the workload capacity limitations of a single region, address space, or mainframe instance in the case of a z/OS Sysplex.

A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of "Terminal-Owning Regions" (TORs) that route transactions to multiple "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform file I/O; instead, a "File-Owning Region" (FOR) would perform the file I/O on behalf of transactions in the AOR, because at the time a VSAM file could support recoverable write access from only one address space.
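The TOR/AOR/FOR topology can be sketched as a simple router (illustrative Python only; the region, dataset, and record names are invented):

```python
# Illustrative sketch (invented names): a Terminal-Owning Region routes
# incoming transactions round-robin to Application-Owning Regions, and
# the AORs "function-ship" file requests to the one File-Owning Region
# that owns the dataset.
from itertools import cycle

class FileOwningRegion:
    def __init__(self):
        self.files = {"CUSTFILE": {"0001": "ALICE"}}   # stands in for VSAM

    def read(self, dataset, key):
        return self.files[dataset][key]

class ApplicationOwningRegion:
    def __init__(self, name, file_owner):
        self.name, self.file_owner = name, file_owner

    def run(self, tranid):
        # The application never opens the file itself; the request is
        # shipped to the FOR, the single address space owning the file.
        return (self.name, self.file_owner.read("CUSTFILE", "0001"))

class TerminalOwningRegion:
    def __init__(self, aors):
        self.next_aor = cycle(aors)    # simplistic round-robin routing

    def route(self, tranid):
        return next(self.next_aor).run(tranid)
```

Successive transactions land on alternating AORs, while every file access funnels through the single FOR, mirroring the single-writer constraint described above.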

Not all CICS applications use VSAM as the primary data source (or, historically, other single-address-space-at-a-time datastores such as CA Datacom); many use IMS/DB or Db2 as the database, and/or MQ as a queue manager. For all these cases, TORs can load-balance transactions to sets of AORs which then directly use the shared databases/queues. CICS supports XA two-phase commit between data stores, so transactions that span, for example, MQ, VSAM/RLS, and Db2 are possible with ACID properties.

CICS supports distributed transactions using the SNA LU6.2 protocol between address spaces, which can be running on the same or different clusters. This allows ACID updates of multiple datastores by cooperating distributed applications. In practice, there are issues if a system or communications failure occurs, because the transaction disposition (backout or commit) may be in doubt if one of the communicating nodes has not recovered. Thus the use of these facilities has never been very widespread.
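The in-doubt window mentioned above falls out of the two-phase commit protocol itself. The toy coordinator below (illustrative Python, not CICS's logged syncpoint implementation; all names are invented) shows why: once a participant has voted "prepared", it can neither commit nor back out on its own, so a failure before phase two strands it until resynchronization.

```python
# Toy two-phase commit coordinator, for illustration only.
class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "active"     # active -> prepared -> committed/backed-out

    def prepare(self):
        self.state = "prepared"   # vote yes; must now await the decision
        return True

    def commit(self):
        self.state = "committed"

    def backout(self):
        self.state = "backed-out"

def syncpoint(participants, fail_before_phase2=False):
    # Phase 1: collect votes from every resource manager.
    if not all(p.prepare() for p in participants):
        for p in participants:
            p.backout()
        return "backed-out"
    # A coordinator/link failure here strands every prepared participant:
    if fail_before_phase2:
        return "in-doubt"
    # Phase 2: broadcast the commit decision.
    for p in participants:
        p.commit()
    return "committed"

ok = [Participant("VSAM"), Participant("Db2")]
outcome_ok = syncpoint(ok)

stranded = [Participant("MQ"), Participant("Db2")]
outcome_bad = syncpoint(stranded, fail_before_phase2=True)
# stranded participants are "prepared": neither committed nor backed out
```

Holding locks while "prepared" is what makes real in-doubt units of work operationally painful, and why recovery requires the failed node to come back and resynchronize.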

Sysplex exploitation

A diagram showing one site's relationship between z/OS and CICS, 2010

At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of how to get CICS to exploit the new z/OS Sysplex mainframe line.

The Sysplex was to be based on CMOS (Complementary Metal Oxide Semiconductor) rather than the existing ECL (Emitter-Coupled Logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than CMOS, which was being developed by a keiretsu with high-volume use cases such as the Sony PlayStation to reduce the unit cost of each generation's CPUs. ECL was also expensive for users to run because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM[41]) that had inert gas pistons and needed to be plumbed with high-volume chilled water for cooling. However, the air-cooled CMOS technology's CPU speed was initially much slower than the ECL (notably the boxes available from the mainframe-clone makers Amdahl and Hitachi). This was especially concerning to IBM in the CICS context, as almost all the largest mainframe customers were running CICS and for many of them it was the primary mainframe workload.

To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload. However, a CICS address space, due to its quasi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time – even with use of MVS sub-tasks. Without enhanced parallelism, customers would tend to move to IBM's competitors rather than use Sysplex as they scaled up the CICS workloads. There was considerable debate inside IBM as to whether the right approach would be to break upward compatibility for applications and move to a model like IMS/DC which was fully reentrant, or to extend the approach customers had adopted to more fully exploit a single mainframe's power – using multi-region operation (MRO).

Eventually the second path was adopted after the CICS user community was consulted. The community vehemently opposed breaking upward compatibility given that they had the prospect of Y2K to contend with at that time and did not see the value in re-writing and testing millions of lines of mainly COBOL, PL/I, or assembler code.

The IBM-recommended structure for CICS on Sysplex was that at least one CICS Terminal-Owning Region was placed on each Sysplex node, dispatching transactions to many Application-Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources, they either used a Sysplex-exploiting datastore (such as IBM Db2 or IMS/DB) or concentrated, by function-shipping, the resource requests into singular-per-resource Resource-Owning Regions (RORs), including File-Owning Regions (FORs) for VSAM and CICS Data Tables, and Queue-Owning Regions (QORs) for MQ, CICS Transient Data (TD), and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of the operational complexity of configuring and managing many CICS regions.

In subsequent releases and versions, CICS was able to exploit new Sysplex facilities in VSAM/RLS[42] and MQ for z/OS,[43] and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex – the Coupling Facility (CF) – dispensing with the need for most RORs. The CF provides a mapped view of resources including a shared timebase, buffer pools, locks, and counters, with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and more reliable (utilizing a semi-synchronized backup CF for use in case of failure).

By this time, the CMOS line had individual boxes that exceeded the power available from the fastest ECL box, with more processors per CPU. When coupled together, 32 or more nodes could scale to two orders of magnitude greater total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes driven by one shared CICS/DB2 workload to support the vast volume of pre-dotcom-bubble web client inquiry requests.

This cheaper, much more scalable CMOS technology base, and the huge investment costs of having to both get to 64-bit addressing and independently produce cloned CF functionality drove the IBM-mainframe clone makers out of the business one by one.[44][45]

CICS Recovery/Restart


The objective of recovery/restart in CICS is to minimize, and if possible eliminate, damage done to the online system when a failure occurs, so that system and data integrity are maintained.[46] If the CICS region was shut down rather than failing, it will perform a "Warm" start, exploiting the checkpoint written at shutdown. The CICS region can also be forced to "Cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they happen to be in.

Under CICS, the following are some of the resources considered recoverable. For these resources to be recoverable, special options must be specified in the relevant CICS definitions:

  • VSAM files
  • CICS-maintained data tables (CMT)
  • Intrapartition TDQ
  • Temporary Storage Queue in auxiliary storage
  • I/O messages from/to transactions in a VTAM network
  • Other database/queuing resources connected to CICS that support XA two-phase commit protocol (like IMS/DB, Db2, VSAM/RLS)

CICS also offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include:

  • Dynamic Transaction Backout (DTB)
  • Automatic Transaction Restart
  • Resource Recovery using System Log
  • Resource Recovery using Journal
  • System Restart
  • Extended Recovery Facility
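Dynamic Transaction Backout, the first facility listed above, rests on logging a before-image ahead of each recoverable update and replaying those images in reverse if the task abends before its syncpoint. The sketch below is a minimal illustration of that idea in Python; the class and method names are invented, not CICS APIs.

```python
# Sketch of dynamic-transaction-backout-style recovery (illustrative only).
class RecoverableFile:
    def __init__(self, records):
        self.records = dict(records)
        self.log = []                  # before-images for the current unit of work

    def update(self, key, value):
        self.log.append((key, self.records.get(key)))   # log the before-image first
        self.records[key] = value

    def syncpoint(self):
        self.log.clear()               # commit: the before-images are discarded

    def backout(self):
        for key, before in reversed(self.log):          # undo, newest first
            if before is None:
                self.records.pop(key, None)
            else:
                self.records[key] = before
        self.log.clear()

acct = RecoverableFile({"A": 100, "B": 50})
acct.update("A", 70)   # debit one account
acct.update("B", 80)   # credit another
acct.backout()         # task abends mid-transfer: both changes are undone

acct.update("A", 60)
acct.syncpoint()       # committed: backout can no longer touch this update
acct.backout()         # no-op, nothing logged since the syncpoint
```

The same log, written to durable storage, is what lets a "Warm" or "Emergency" restart resolve in-flight units of work after a region failure.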

Components


Each CICS region comprises one major task on which every transaction runs, although certain services such as access to IBM Db2 data use other tasks (TCBs). Within a region, transactions are cooperatively multitasked – they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically.
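The cooperative multitasking described above can be mimicked with Python generators: each "transaction" yields control at every service call, and a single-threaded dispatcher interleaves the ready tasks. This is an illustrative analogy only, not how the CICS dispatcher is implemented.

```python
# Generator-based sketch of CICS-style cooperative multitasking on one thread.
from collections import deque

def transaction(name, steps, trace):
    """A well-behaved task: does a little work, then yields the CPU."""
    for i in range(steps):
        trace.append(f"{name}:{i}")
        yield                          # analogous to reaching a CICS service call

def dispatcher(tasks):
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)                 # run the task until it yields
            ready.append(task)         # re-queue it behind the others
        except StopIteration:
            pass                       # task ended (think EXEC CICS RETURN)

trace = []
dispatcher([transaction("T1", 2, trace), transaction("T2", 3, trace)])
# trace interleaves T1 and T2 even though only one "processor" is running
```

The analogy also shows the failure mode: a task that loops without yielding starves every other task, which is why CICS applications were expected to be well-behaved and why runaway-task detection exists.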

Each unique CICS "task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the Storage Control program (part of the CICS nucleus or "kernel"), which is analogous to an operating system.

A CICS system consists of the online nucleus, batch support programs, and applications services.[47]

Nucleus


The original CICS nucleus consisted of a number of functional modules written in System/370 assembler, a structure that persisted until V3:

  • Task Control Program (KCP)
  • Storage Control Program (SCP)
  • Program Control Program (PCP)
  • Program Interrupt Control Program (PIP)
  • Interval Control Program (ICP)
  • Dump Control Program (DCP)
  • Terminal Control Program (TCP)
  • File Control Program (FCP)
  • Transient Data Control Program (TDP)
  • Temporary Storage Control Program (TSP)

Starting in V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language – which is compiled into assembler.

The prior structure did not enforce separation of concerns and so had many inter-program dependencies, which led to bugs unless exhaustive code analysis was done. The new structure was more modular, and therefore more resilient, because it was easier to change without side effects. The first domains were often built with the name of the prior program but without the trailing "P" – for example, Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests; initially this proved expensive for frequently called domains (such as Trace), but by utilizing PL/AS macros these calls were in-lined without compromising the separate domain design.

In later versions, completely redesigned domains were added like the Logging Domain DFHLG and Transaction Domain DFHTM that replaced the Journal Control Program (JCP).

Support programs


In addition to the online functions CICS has several support programs that run as batch jobs.[48] : pp.34–35 

  • High-level language (macro) preprocessor
  • Command language translator
  • Dump utility – prints formatted dumps generated by CICS Dump Management
  • Trace utility – formats and prints CICS trace output
  • Journal formatting utility – prints formatted contents of CICS journal datasets

Applications services


The following components of CICS support application development.[48]: pp.35–37 

  • Basic Mapping Support (BMS) provides device-independent terminal input and output
  • APPC Support that provides LU6.1 and LU6.2 API support for collaborating distributed applications that support two-phase commit
  • Data Interchange Program (DIP) provides support for IBM 3770 and IBM 3790 programmable devices
  • 2260 Compatibility allows programs written for IBM 2260 display devices to run on 3270 displays
  • EXEC Interface Program – the stub program that converts calls generated by EXEC CICS commands to calls to CICS functions
  • Built-in Functions – table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval

Pronunciation


Different countries have differing pronunciations.[49]

  • Within IBM (specifically Tivoli) it is referred to as /ˈkɪks/.
  • In the US, it is more usually pronounced by reciting each letter: /ˌsiːˌaɪˌsiːˈɛs/.
  • In Australia, Belgium, Canada, Hong Kong, the UK and some other countries, it is pronounced /ˈkɪks/.
  • In Denmark, it is pronounced kicks.
  • In Finland, it is pronounced [kiks]
  • In France, it is pronounced [se.i.se.ɛs].
  • In Germany, Austria and Hungary, it is pronounced [ˈtsɪks] and, less often, [ˈkɪks].
  • In Greece, it is pronounced kiks.
  • In India, it is pronounced kicks.
  • In Iran, it is pronounced kicks.
  • In Italy, is pronounced [ˈtʃiks].
  • In Poland, it is pronounced [ˈkʲiks].
  • In Portugal and Brazil, it is pronounced [ˈsiks].
  • In Russia, it is pronounced kiks.
  • In Slovenia, it is pronounced kiks.
  • In Spain, it is pronounced [ˈθiks].
  • In Sweden, it is pronounced kicks.
  • In Uganda, it is pronounced kicks.
  • In Turkey, it is pronounced kiks.

References

from Grokipedia
Customer Information Control System (CICS) is a general-purpose transaction processing subsystem developed by IBM for the z/OS operating system, enabling the simultaneous execution of online applications for multiple users while managing resources, ensuring data integrity, and delivering fast response times. As a cornerstone of enterprise computing, CICS serves as a mixed-language platform that hosts transactional workloads in hybrid cloud environments, supporting languages such as COBOL and Java (including recent Java and Jakarta EE levels) for building scalable, secure applications. It provides essential services through its application programming interface (API), including data management for accessing files and databases like Db2, communications for terminal interactions, and diagnostic tools for monitoring and troubleshooting. Transactions in CICS are identified by four-character identifiers (TRANSIDs), such as CEMT for system management, and execute as tasks that can link programs or transfer data via mechanisms like the COMMAREA (up to 32 KB) or channels and containers. CICS excels in high-volume, mission-critical operations, processing millions of transactions with robust security features like TLS 1.3 support and compliance management, while optimizing resource usage to reduce costs in mainframe environments. Its architecture allows for resource sharing, user authorization, and prioritization, passing database requests to specialized managers and supporting both traditional procedural programs and modern Enterprise JavaBeans. Developers benefit from build and development tools such as Maven for enhanced productivity, alongside capabilities for API enablement, messaging integration, event streams, and AI-driven modernization. Widely used in industries such as banking, retail, and healthcare, CICS remains a vital platform for reliable, performant transaction processing on IBM Z systems.

History

Origins and Early Development

The development of the Customer Information Control System (CICS) began in 1966 at IBM's development center in Des Plaines, Illinois, initially to address requirements from public utility companies for efficient online transaction processing on mainframe systems. This effort evolved from earlier IBM initiatives, including the single-threaded Minimum Teleprocessing Communications System (MTCS), a precursor designed for multi-terminal environments under OS/VS1 and DOS/VS. Although CICS's foundational concepts drew from 1950s collaborations between IBM and American Airlines on the Semi-Automated Business Research Environment (SABRE) airline reservation system, the core development focused on general-purpose transaction management rather than airline-specific applications. In April 1968, IBM released the Public Utility Customer Information Control System (PUCICS) as free Type II code, providing initial support for Basic Telecommunications Access Method (BTAM) terminals and laying the groundwork for broader adoption. The first production release of CICS occurred on July 8, 1969, for the IBM System/360 under both DOS and OS/360 operating systems, priced at $600 per month and supporting up to 50 BTAM terminals, three file datasets, 100 programs, and 50 transaction types. This version emphasized high-volume online transaction processing (OLTP) for applications like airline ticketing in resource-constrained environments. Early enhancements in the early 1970s included compatibility with OS/VS and support for additional terminals, such as the IBM 2741 Selectric typewriter and the 2260 display station. By 1974, worldwide development responsibility shifted to IBM's Hursley Laboratory in the United Kingdom, where the team consolidated the product into a single architecture in CICS/VS Version 1.0 and expanded terminal support to include early video displays, improving interactivity for distributed users.
From its inception, CICS incorporated pseudo-conversational programming techniques to manage limited terminal storage and network bandwidth: each user interaction terminated a task and passed state data via mechanisms like a Temporary Storage Queue (TSQ) or Transient Data Queue (TDQ), simulating continuous dialogue without holding resources across exchanges. This approach was essential for resource efficiency in early OLTP scenarios. Key early adopters included airlines, leveraging CICS for reservation systems building on SABRE's legacy, and banks for high-volume financial transactions, with over 30% of worldwide terminals reportedly running CICS by 1975.

Key Evolutionary Milestones

In the 1980s, IBM introduced the Z notation as a formal specification language for defining CICS interfaces, leveraging set theory and predicate calculus to model system behaviors precisely. This approach was applied to key CICS modules during development at IBM's Hursley Laboratory, enabling rigorous verification that caught design flaws early and enhanced overall software reliability by reducing implementation errors in complex logic. During the same decade, CICS evolved into a distributed transaction processing system, allowing seamless access to VSAM datasets (such as key-sequenced (KSDS) and entry-sequenced (ESDS) files) and DL/I hierarchical databases across multiple regions via mechanisms like function shipping and transaction routing. This advancement supported multisystem environments on mainframes, improving resource sharing and scalability for enterprise workloads without requiring data replication. In the 1990s, CICS integrated with the emerging World Wide Web through CICS Web Support, which enabled the system to function as an HTTP server and client, processing web requests and responses while supporting interfaces akin to the Common Gateway Interface (CGI) for dynamic content generation. This feature allowed legacy CICS applications to expose services via web forms and handle persistent sessions, bridging mainframe transaction processing with internet-based access and facilitating early e-business implementations. To extend CICS beyond mainframes, IBM developed MicroCICS in the early 1990s, a lightweight variant optimized for reduced memory footprints on distributed platforms including OS/2 and AIX (via CICS/6000 on RS/6000 systems). Released starting with CICS Version 1.20 in 1990, it provided core transaction management for client-server architectures, enabling developers to deploy scaled-down CICS environments on personal computers and UNIX systems for testing, branch offices, or edge processing.
By the late 1990s, IBM formalized the CICS Family, encompassing variants like CICS/ESA for MVS/ESA environments (introduced in 1991) and CICS/6000 for open systems on AIX (announced in 1993), to unify transaction processing across heterogeneous platforms. This family architecture promoted application portability through shared APIs and intercommunication protocols, allowing applications to span mainframe and distributed systems while maintaining consistent reliability and security standards. Concurrently, the 1990s saw the emergence of specialized CICS tools, such as CICS/Architect, which streamlined system design, resource definition, and configuration using graphical interfaces and automated generation of control tables. These tools reduced manual coding errors and accelerated deployment cycles for complex CICS regions, supporting the shift toward more modular and maintainable architectures in enterprise settings.

Recent Developments and Modernization

In the 2010s, IBM shifted CICS toward a continuous-delivery model, beginning with CICS Transaction Server (TS) 5.1, which enabled more frequent enhancements through quarterly Authorized Program Analysis Reports (APARs) rather than waiting for major version releases. This approach allowed for rapid deployment of fixes, performance improvements, and new capabilities, reducing the time between innovations and helping organizations address evolving business needs without full system overhauls. By CICS TS 5.3, this model was fully formalized, supporting ongoing updates via service streams that integrated seamlessly with existing environments. CICS has seen deepened integration with IBM z Systems processors starting from the z13 in 2015, enhancing scalability and performance for high-volume transaction processing. These integrations leverage hardware advancements like improved compression and encryption acceleration, enabling CICS to handle larger workloads efficiently. In 2025, IBM announced full compatibility for CICS TS with the z17 mainframe, which introduces AI-optimized hardware such as the Spyre Accelerator, further boosting throughput and reducing latency for mission-critical applications. Additionally, enhancements in OMEGAMON for CICS, announced in May 2025, incorporate AI-driven insights for better DB2 correlation (linking transaction traces across CICS and DB2 for root-cause analysis) and zIIP optimization to offload specialty processing, minimizing general CPU usage. To modernize for contemporary workloads, CICS TS 5.2 discontinued support for legacy features such as Distributed Data Management (DDM), streamlining the system by removing outdated distributed file access capabilities that were no longer aligned with cloud-native priorities. This cleanup paved the way for hybrid cloud readiness, exemplified by CICS TS 6.1 released in June 2022, which embedded the WebSphere Liberty profile to facilitate containerized deployments on z/OS.
The Liberty integration supports Jakarta EE 10 and enables CICS regions to run Java applications in lightweight containers, bridging mainframe reliability with DevOps practices for easier portability to hybrid environments. The latest milestone, CICS TS 6.3 released on September 5, 2025, further advances modernization by adding support for Node.js 18, allowing developers to build reactive applications using modern JavaScript runtimes within CICS. It also enhances REST API tooling for simplified service exposure and consumption, including improved OpenAPI generation for better API documentation and testing. Moreover, event processing capabilities have been upgraded to support event-driven architectures, enabling asynchronous handling of business events with integration to Kafka and other streaming platforms for real-time responsiveness.

Architecture

Overall System Structure

CICS employs a modular, region-based architecture that enables efficient transaction processing within IBM z/OS and z/VSE operating environments. Each CICS region functions as a separate logical address space, providing isolation for resources such as programs, data files, and terminals while allowing coordinated operation across multiple regions. This design supports scalability by distributing workloads, with regions specialized for distinct functions: Terminal Owning Regions (TORs) manage input/output operations for connected terminals, Application Owning Regions (AORs) execute application programs, and File Owning Regions (FORs) handle data access and storage management. At a higher level, CICS organizes regions into a hierarchical structure through the CICSplex, a sysplex-wide environment that facilitates coordination and resource sharing across multiple systems. The CICSPlex System Manager (CPSM) oversees this structure, performing load balancing by dynamically routing transactions to optimal regions based on factors like performance metrics and system health. Resource management is centralized through definitions in the CICS System Definition (CSD) file, a VSAM dataset containing entries for programs, transactions, files, and other resources, enabling consistent configuration across regions. On z/OS, this supports multi-region operations for complex, distributed setups, whereas z/VSE emphasizes single-region configurations for simpler, resource-constrained environments. As of CICS TS 6.3 (September 2025), a YAML-driven configuration tool simplifies region setup, including resource definitions, system initialization parameters, and startup JCL. To optimize resource utilization, particularly in terminal interactions, CICS adopts a pseudo-conversational model, where transactions appear continuous to users but release terminal and storage resources between exchanges, minimizing main storage overhead and enabling higher concurrency.
This approach contrasts with fully conversational models by terminating and reinitializing tasks as needed, preserving state via mechanisms like communication areas or channels. Additionally, CICS integrates with external subsystems such as IMS and DB2 through intercommunication protocols, including Multiregion Operation (MRO) for intra-sysplex links and Intersystem Communication (ISC) over SNA or IP, allowing seamless data access and transaction distribution across heterogeneous environments.
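The CPSM-style dynamic routing described above (pick an optimal region based on health and load) can be reduced to a few lines. This is an invented illustration; the metric names and the tie-breaking policy are assumptions, not CPSM's actual algorithm.

```python
# Sketch of dynamic workload routing across AORs (illustrative only).
def route(aors):
    """Pick the healthy region with the fewest in-flight tasks."""
    candidates = [a for a in aors if a["healthy"]]
    if not candidates:
        raise RuntimeError("no healthy AOR available")
    return min(candidates, key=lambda a: a["tasks"])["name"]

regions = [
    {"name": "AOR1", "tasks": 42, "healthy": True},
    {"name": "AOR2", "tasks": 7,  "healthy": True},
    {"name": "AOR3", "tasks": 3,  "healthy": False},  # e.g. short-on-storage
]
chosen = route(regions)   # AOR3 is skipped despite being least loaded
```

A real router weighs more signals (response times, abend rates, affinities), but the shape is the same: filter out sick regions, then balance across the rest.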

Core Components and Nucleus

The nucleus of CICS is the principal control element, comprising a set of modular control programs and associated tables that manage essential system operations, including task dispatching, resource allocation, and inter-program communication within the region. It is loaded into the CICS region at initialization, with key modules optionally installed in the link pack area (LPA) for efficient sharing across regions, ensuring that application tasks execute in a controlled, recoverable manner. This central component is customized during system generation to support specific functional requirements. Key functional components within the nucleus include task control, which handles task scheduling, initiation, and termination to enable concurrent execution of multiple transactions while enforcing priorities and detecting runaway tasks. Terminal control manages operations with terminals and networks, interfacing with the Terminal Control Table (TCT) to route messages and maintain session integrity across devices such as 3270 terminals or Communications Server connections. Complementing these, file control provides access to VSAM and BDAM datasets for read, update, and browse operations, while program control oversees loading and linking of application programs using the Processing Program Table (PPT). Additionally, storage control allocates and deallocates dynamic memory, maintaining a storage cushion to prevent fragmentation and supporting user-defined limits for efficient resource utilization. Dynamic transaction routing is facilitated through nucleus services that leverage control tables like the PCT for defining transaction identifiers and routing rules, allowing workloads to be distributed across available resources without manual intervention. The Kernel Control Table (KCT) centralizes management data for kernel operations, enabling real-time adjustments to task dispatching and transaction flow in response to demands.
These mechanisms ensure scalable processing by dynamically assigning tasks to optimal execution paths based on availability and priority criteria. The nucleus evolved significantly with the introduction of CICS/ESA in the early 1990s, transitioning from single-region configurations to multi-region support that allowed interconnected CICS regions to operate collaboratively within a sysplex. This advancement enabled shared temporary storage queues across regions, permitting applications to access transient data pools without region-specific boundaries and improving data consistency in distributed environments. Further optimizations came with the integration of Coupling Facility structures, which facilitate high-speed data sharing for temporary storage and other resources in sysplex setups, reducing latency and enhancing throughput for large-scale transaction volumes.

Support Programs and Services

CICS provides a range of support programs that facilitate application debugging, system management, and resource handling within the region. The Execution Diagnostic Facility (EDF), invoked via the CEDF transaction, enables developers to intercept application programs at key points, including initiation, each CICS command execution, and termination, allowing for step-by-step analysis and modification of program flow without altering the source code. Similarly, the CICS-supplied transaction CEMT serves as the primary interface for online system administration, permitting users to inquire about and dynamically alter CICS resource parameters, such as terminals, files, and programs, through a command-line format that supports both inquiry (INQUIRE) and modification (SET) operations. For data queuing needs, CICS includes the Transient Data Program (TDP), which manages transient data queues for sequential storage and retrieval of data items destined for internal processing or external output, such as print files or restart records, with queues categorized as intrapartition (managed within VSAM datasets) or extrapartition (directed to sequential datasets). Complementing TDP, Temporary Storage (TS) queues offer a VSAM-based mechanism for temporary, recoverable data storage accessible by queue name, supporting both main storage for high-performance in-memory operations and auxiliary storage for persistence across CICS restarts, with options for shared queues in sysplex environments to enable distributed access. Interface services extend CICS connectivity to external systems, with the External CICS Interface (EXCI) providing an application programming interface for non-CICS programs running in MVS address spaces to establish sessions (pipes) with a CICS region and issue distributed program link (DPL) requests, facilitating synchronous or asynchronous calls to CICS applications from batch jobs or other subsystems.
For database integration, the EXCI-to-IMS pathway leverages the CICS-IMS Database Control (DBCTL) interface, allowing CICS transactions to access IMS hierarchical databases through DL/I calls without embedding IMS control regions, using predefined PSB and DBD resources to manage requests in a controlled, non-transactional manner. Monitoring capabilities in CICS are supported by built-in statistics collection and performance tools that generate System Management Facilities (SMF) records for offline analysis. The CICS Statistics facility captures resource usage metrics, such as transaction volumes and file I/O counts, written as SMF type 110 subtype 2 records, while the CICS Monitoring Facility integrates with these to provide detailed transaction-level data, including CPU times and response latencies, aiding in system tuning and bottleneck identification. As of CICS TS 6.3 (September 2025), OpenTelemetry support enables distributed tracing with new SIT parameters (e.g., OTELTRACE) and monitoring fields for enhanced observability. Utility programs further assist in maintenance tasks, exemplified by the BMS Macro Generation Utility (DFHBMSUP), which reconstructs original Basic Mapping Support (BMS) macro source statements from a compiled mapset load module, enabling modifications or backups when the original source is unavailable.

Programming Model

Supported Languages and Interfaces

CICS Transaction Server (TS) for z/OS supports a range of programming languages, enabling developers to create applications using both traditional and modern paradigms. The primary languages include COBOL, which utilizes the EXEC CICS command-level interface for accessing system services; PL/I; C and C++; and Assembler language, with CICS providing the necessary runtime support for these through Language Environment (LE). These languages have been foundational since early versions, allowing for efficient integration with mainframe environments. Modern additions expand CICS's versatility for contemporary development. Java support began with CICS TS 2.1 (general availability 1999) and the JCICS API was introduced in CICS TS 1.3 (general availability early 1999), providing a Java equivalent to the EXEC CICS API for direct access to CICS resources such as files, queues, and transactions. Enhancements to Java support continued in later versions, including CICS TS 4.1 (general availability June 2009). The JCICSX extension, added in CICS TS 5.6 (general availability June 2020), facilitates remote development and testing without full CICS deployment. Node.js integration began in CICS TS Version 5.5 (general availability December 2018), allowing JavaScript applications to run natively within the CICS address space using the IBM SDK for Node.js on z/OS, with support extending up to Node.js version 18 in CICS TS 6.3 (general availability September 2025). Additionally, REXX is supported for scripting tasks, offering a lightweight option for automation and simple application logic. CICS TS 6.3 further enhances support for Java 21, Spring Boot 3, and MicroProfile 6.1, enabling developers to leverage the latest frameworks for cloud-native applications. CICS accommodates mixed-language programming within a single transaction, leveraging Language Environment to enable seamless interoperation among supported languages like COBOL, PL/I, and C.
Language-specific translators convert command-level calls – EXEC CICS statements and their equivalents – into a common internal format, ensuring consistent access to CICS services regardless of the originating language. This capability supports hybrid applications that combine legacy and new code without requiring full rewrites. On the interfaces front, CICS maintains compatibility with traditional mainframe I/O mechanisms, including the System/370 channel interface for high-performance data access to peripherals and shared resources on z/OS. For modern connectivity, CICS exposes services through RESTful APIs facilitated by IBM z/OS Connect, which translates CICS transactions into JSON-based web services, enabling integration with cloud-native applications and microservices architectures. Event endpoints further enhance this by supporting asynchronous processing and event-driven interactions. CICS also provides built-in APIs for resource definition and management, allowing programmatic control over system configuration. Recent advancements underscore CICS's evolution toward enterprise standards. CICS TS 6.2 (general availability June 2024) achieved compliance with Jakarta EE 10 via the Liberty JVM server, permitting deployment of servlets, Enterprise JavaBeans (EJBs), and other components directly within CICS regions. This update builds on prior support, facilitating migration of web applications to the mainframe while maintaining security and scalability. CICS TS 6.3 extends this with support for Java 21 and additional modern frameworks.

Traditional Programming Approaches

Traditional programming in CICS relied on macro-level interfaces, where developers embedded DFH* macros directly into application source code written in COBOL, PL/I, or assembler language. These macros, prefixed with "DFH" (e.g., DFHPC for program control, DFHTC for terminal control), invoked CICS services for tasks such as file I/O, terminal interactions, and inter-program communication. For instance, the DFHCOMMAREA in macros like DFHPC TYPE=LINK facilitated data passing between programs by specifying a communication area within the task's work area, limited to 32,767 bytes in later versions but typically 4,096 bytes in early implementations for efficiency. This approach required manual management of storage addresses and response codes after each macro execution, and programs had to be quasi-reentrant, avoiding direct operating system calls to maintain predictability across multitasking environments.

Command-level programming, introduced to simplify development, allowed programmers to use high-level EXEC CICS statements instead of raw macros, such as EXEC CICS READ FILE('FILE1') RIDFLD(KEY) INTO(RECORD) END-EXEC for file access, EXEC CICS SEND MAP('MYMAP') FROM(MAPDATA) END-EXEC for screen output, or EXEC CICS RETURN END-EXEC to release control. These statements were processed by the CICS translator during program preparation, converting them into equivalent low-level calls in the source language before compilation; for example, EXEC CICS commands in COBOL were transformed into CALL statements to the DFHEI1 interface program. In assembler, the translator generated DFHECALL macros to invoke the CICS command processor, passing parameters via registers and the EXEC Interface Block (EIB). This pre-compile translation maintained compatibility with macro-level execution while abstracting complexity from developers.
At runtime, command-level programs executed through the translated code, which invoked the EXEC interface stub (e.g., DFHEI1 in COBOL) to bridge to CICS's macro-level nucleus, ensuring efficient processing without further conversion. The translator's output, combined with the EXEC interface, handled parameter setup and command dispatching, optimizing for the system's transaction-oriented architecture.

Key considerations in traditional programming included syncpoint management for transaction integrity, where developers issued explicit syncpoints (e.g., via EXEC CICS SYNCPOINT in command-level programs) to delineate units of work, triggering CICS's two-phase commit coordination with external resource managers like databases. In phase one (prepare), CICS polled participants for readiness; phase two (commit or backout) followed based on the votes, ensuring atomicity across resources but requiring careful bracketing to avoid partial updates. Error handling relied on RESP and RESP2 fields in commands, where RESP provided primary codes (e.g., DFHRESP(NORMAL) or DFHRESP(NOTFND)) and RESP2 offered secondary numeric detail (e.g., 1 for a remote resource failure), inspected after each command via IF statements or routed through HANDLE CONDITION labels to handle exceptions without abending the task.

Early CICS implementations exhibited limitations in programming models, particularly conversational programming, where tasks remained active during user "think time" to maintain state, leading to held resources, prolonged storage allocation, and reduced throughput in high-volume environments. Pseudo-conversational models addressed this by terminating the task after each screen send (e.g., via RETURN with COMMAREA), restarting on user input to simulate continuity while freeing resources, though they demanded meticulous state preservation via the COMMAREA or temporary storage to avoid data loss across invocations.
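The pseudo-conversational pattern can be sketched in a few lines. The following is an illustrative simulation, not CICS code: each call to `handle_input` stands in for one short-lived task, and the dictionary stands in for the COMMAREA returned on RETURN TRANSID; the field names and steps are invented for the example.

```python
# Illustrative sketch of the pseudo-conversational pattern (not CICS code).
# Each "task" runs to completion, saves its state in a COMMAREA-like dict,
# and ends; the next user input restarts the transaction with that state.

def handle_input(commarea, user_input):
    """One short-lived task: restore state, consume input, return new state."""
    state = dict(commarea)          # restore state saved by the prior task
    step = state.get("step", 0)
    if step == 0:
        state["name"] = user_input  # first screen captured a name
        state["step"] = 1           # next invocation expects an account number
    elif step == 1:
        state["account"] = user_input
        state["step"] = 2           # conversation complete
    return state                    # passed forward, as via RETURN ... COMMAREA

# Two screens means two separate task invocations, with no task alive between them.
commarea = {}
commarea = handle_input(commarea, "SMITH")    # first screen: name
commarea = handle_input(commarea, "ACCT-42")  # second screen: account
```

Between the two calls no task exists and no storage is held, which is precisely the resource saving that pseudo-conversational design buys over a conversational task waiting through user think time.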

Contemporary Programming Techniques

In recent years, CICS has evolved to support modern programming paradigms that emphasize developer productivity, integration with cloud-native technologies, and reduced reliance on traditional procedural coding. These techniques leverage APIs, event-driven architectures, and low-code approaches to enable agile development while maintaining compatibility with existing environments. Key advancements include simplified Java access, Node.js integration, event processing enhancements, container-like deployment models, and API-first strategies that minimize custom application logic.

The JCICSX Java API, introduced in CICS Transaction Server (TS) for z/OS Version 5.6 in June 2020, provides a streamlined interface for Java developers to access CICS resources without using the legacy EXEC CICS commands. It supports a subset of CICS services, including program linking, file operations, and transient data queues, and can operate both locally within CICS and remotely via Liberty servers. It incorporates modern features such as annotations for resource injection and dependency injection frameworks, allowing developers to declare CICS interactions declaratively rather than imperatively, which simplifies code maintenance and testing. For instance, annotations like @CicsProgram enable automatic mapping of method calls to CICS programs, reducing boilerplate code and facilitating integration with modern Java frameworks.

Node.js support in CICS bundles facilitates seamless integration of JavaScript applications, enabling developers to build services that interact with core CICS transaction logic. Introduced in CICS TS 5.5, this support packages code, dependencies, and configuration into deployable bundles, allowing asynchronous execution within the CICS region. Developers can leverage Node.js's async/await patterns for non-blocking I/O operations, such as invoking programs via the ibm-cics-api module or consuming services from existing CICS assets.
This approach supports microservices architectures by enabling event-driven workflows and JSON-based data exchange, with full interoperability with legacy applications and without requiring modifications to the underlying COBOL or PL/I code.

Event processing in CICS has advanced through event bindings and adapters, particularly for integration with streaming platforms like Apache Kafka via IBM Event Streams. Event binding, available since CICS TS 5.5, allows developers to define XML-based specifications that capture business events, such as transaction completions or data updates, directly from CICS applications without code changes. These events can be formatted and routed using built-in EP (event processing) adapters to IBM Event Streams, which leverages Kafka for high-throughput, real-time data pipelines. Enhancements in CICS TS 6.3 updates (general availability September 2025), including improved adapter support for schema evolution and fault-tolerant emission, enable scalable event-driven architectures for analytics and microservices, with events emitted to external consumers in formats like JSON or Avro.

CICS supports container-style application development through the Open Liberty profile, a lightweight runtime embedded in CICS JVM servers, which mimics container orchestration with hot deployment and DevOps workflows. Bundles enable zero-downtime updates by installing application artifacts dynamically, while tools like CICS Explorer provide IDE integration for building, testing, and deploying Liberty-based apps with Maven or Gradle support. This setup allows developers to package applications as OSGi bundles or WAR files, supporting features like hot-reload for iterative development and integration with z/OS Container Extensions for hybrid portability.

A shift toward code-light development is exemplified by z/OS Connect, which exposes CICS transactions and programs as RESTful APIs with minimal custom coding.
By generating API providers from OpenAPI specifications or CICS metadata, developers can create secure, discoverable endpoints for COMMAREA- or channel-based services without altering application source code. This reduces the need for hand-written wrappers, enabling hybrid integrations in which web, mobile, or cloud applications consume mainframe logic directly, with built-in support for JSON, OpenAPI, and Swagger tooling in CICS TS 5.6 and later.
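The core of this kind of enablement is mapping a fixed-layout, COMMAREA-style record to JSON. The sketch below illustrates the idea in the spirit of utilities like DFHLS2JS; the field names, offsets, and sample record are hypothetical, not generated CICS artifacts.

```python
# Hedged sketch: mapping a fixed-layout (COMMAREA-style) record to JSON,
# in the spirit of DFHLS2JS-generated mappings. Layout is invented.
import json

LAYOUT = [            # (JSON field name, offset, length) - assumed sample layout
    ("custId",   0,  8),
    ("custName", 8, 20),
    ("balance", 28, 10),
]

def record_to_json(record: str) -> str:
    """Slice fixed-position fields out of the record and emit a JSON document."""
    doc = {name: record[off:off + ln].strip() for name, off, ln in LAYOUT}
    return json.dumps(doc)

# Build a sample 38-byte record: 8-byte id, 20-byte padded name, 10-byte amount.
record = "00001234" + "SMITH, J.".ljust(20) + "0000150.25"
payload = record_to_json(record)
```

An API layer performing this translation is what lets a REST client receive `{"custId": ..., "custName": ...}` while the backing program continues to see its original fixed-format record.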

Transaction Processing

Transaction Lifecycle and Management

In CICS, a transaction represents the fundamental unit of work and is identified by a unique transaction identifier (TRANSID), consisting of 1 to 4 alphanumeric characters. This TRANSID serves as the entry point for processing, with CICS routing the transaction to the appropriate application program based on definitions stored in the Program Control Table (PCT). The PCT contains entries for each defined transaction, specifying attributes such as the initial program to execute, transaction class for priority assignment, and whether the transaction is enabled for routing or local execution.

The lifecycle of a CICS transaction begins with initiation, typically triggered by user input from a terminal, a web request, or an external call submitting the TRANSID to the CICS region. Upon receipt, CICS verifies the requesting device's communication status and the user's authorization before consulting the PCT to locate the transaction definition. If valid, CICS creates a new task under the management of the Terminal Control Program (TCP), which handles input/output operations and schedules the task for dispatch. This task encapsulates the transaction's execution environment, including user data and resource access rights. During execution, the task runs one or more programs sequentially or concurrently, with CICS coordinating resource requests such as file access or database queries to ensure isolation and consistency. The transaction reaches termination at a syncpoint, where the application issues an EXEC CICS SYNCPOINT command; CICS then commits all changes to recoverable resources if successful or aborts them via dynamic backout in case of failure, followed by task cleanup and resource deallocation. Multiple instances of the same transaction can execute concurrently as separate tasks, enabling high-volume processing.
CICS manages transactions through mechanisms that maintain system stability and performance, including resource locking to serialize access to shared data, deadlock detection to identify and resolve circular wait conditions, and priority scheduling to favor critical workloads. Locks are acquired dynamically on resources like VSAM files or DB2 tables during execution, with CICS enforcing contention resolution via timeouts configurable per transaction class. Deadlock detection in CICS is primarily based on timeouts configured via the DTIMOUT transaction attribute, which abends a suspended task after a specified interval if it remains inactive, helping to resolve potential deadlocks. Priority scheduling assigns transactions to classes (1-15, with lower numbers higher priority), influencing dispatch order and resource allocation to meet service-level agreements. On modern z17 hardware, these capabilities support transaction throughput scaling to hundreds of thousands per second for typical workloads, such as payment processing, demonstrating CICS's efficiency in high-volume environments.

For inter-transaction communication, CICS provides the communications area (COMMAREA), a contiguous buffer of up to 32 KB that can pass between linked programs within a task or to a subsequent transaction via an EXEC CICS LINK or RETURN command. For larger or more structured data, channels and containers offer an alternative, where a channel groups multiple named containers, each limited only by the available storage in the CICS region, enabling flexible data passing without the size constraints of the COMMAREA. These mechanisms support seamless data flow across transaction boundaries while maintaining structured access through channel definitions.

Transaction monitoring in CICS includes handling abends—abnormal terminations—with specific codes indicating failure types, such as ASRA for program checks like storage violations or arithmetic errors.
Upon an abend, CICS rolls back the task, logs details in the system message log, and optionally produces a transaction dump for diagnostics. Trace facilities, enabled via system initialization parameters or dynamic commands, capture detailed execution logs at various levels (e.g., API calls, resource accesses), aiding in debugging and problem diagnosis without unduly impacting production throughput.
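The COMMAREA-versus-channels trade-off above can be made concrete with a small simulation. This is a conceptual sketch, not the JCICS API: the `Channel` class and its methods are invented to show why grouping named containers sidesteps the single 32 KB buffer limit.

```python
# Conceptual sketch (invented names, not a CICS API): a channel groups
# named containers, avoiding the single contiguous 32 KB COMMAREA limit.
COMMAREA_LIMIT = 32 * 1024     # one contiguous buffer, at most 32 KB

class Channel:
    def __init__(self, name: str):
        self.name = name
        self.containers = {}                 # container name -> bytes

    def put(self, container: str, data: bytes):
        self.containers[container] = data    # bounded only by region storage

    def get(self, container: str) -> bytes:
        return self.containers[container]

ch = Channel("PAYROLL")
ch.put("EMPLOYEE", b"E" * (64 * 1024))       # 64 KB: too large for a COMMAREA
ch.put("CONTROL",  b"RUN=2025-09")           # small control data kept separate
```

Keeping bulk data and control data in separately named containers is also what gives channel-based interfaces their structure: each program reads only the containers it needs.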

Basic Mapping Support (BMS)

Basic Mapping Support (BMS) provides CICS application programs with a device-independent method for formatting and managing input/output operations on terminals, abstracting the complexities of terminal-specific data streams. By defining screen layouts through maps and mapsets, BMS enables developers to focus on application logic rather than hardware dependencies, supporting efficient screen I/O in transaction processing environments. This support is integral to CICS's terminal handling, where maps translate high-level definitions into formatted displays for devices like 3270 and 5250 terminals.

BMS relies on assembler macros to define physical and symbolic mapsets, which are assembled and loaded into CICS for use during transaction execution. The primary macros include DFHMSD for specifying the overall mapset characteristics, such as type (map or DSECT generation), mode (input, output, or inout), terminal type, and language; DFHMDI for defining individual maps within the mapset, including attributes like cursor positioning and validation; and DFHMDF for detailing fields within a map, covering position, length, and attributes. These macros generate two outputs: a physical mapset containing device-dependent formatting instructions (e.g., control characters for screen positioning and highlighting) and a symbolic mapset with COBOL or PL/I structures for data mapping in application programs. The assembly process uses the standard assembler, producing load modules installed via CICS resource definitions.

Key features of BMS include field-level attributes that control user interaction and presentation, such as protection (to make fields read-only), intensity (normal or bright), color options (e.g., blue, red, or green on color-capable 3270 terminals), and highlighting (underline, bold). For 3270 terminals, BMS handles structured fields for advanced formatting like partitioned emulation, while for 5250 terminals (common in IBM midrange environments), it supports similar block-mode operations through device emulation layers, ensuring compatibility without altering map definitions.
Additionally, BMS facilitates error handling by allowing insertion of messages into designated fields and supports field navigation via tab keys, enabling users to move through unprotected fields sequentially. These capabilities ensure robust, consistent screen interactions across supported terminals.

Screen I/O in CICS applications uses the SEND MAP and RECEIVE MAP commands to interact with BMS maps. The SEND MAP command outputs a formatted screen by specifying the map and mapset names, along with a data area containing symbolic map variables; options like ALARM (for an audible alert), FREEKB (to free the keyboard), and FRSET (to reset modified-data tags) enhance usability, while cursor positioning (CURSOR option) directs focus to specific fields. Conversely, the RECEIVE MAP command captures terminal input, mapping it into the application's symbolic map structure and returning the cursor position (via EIBCPOSN in the EXEC Interface Block) and the attention identifier (AID key pressed); it supports length specification to handle variable input sizes and integrates with message insertion for validation feedback. These commands execute within the transaction lifecycle, routing data streams transparently to the terminal.

A representative BMS macro definition for a simple inquiry mapset might appear as follows, targeting COBOL applications on 3270 terminals:

MYMAPS   DFHMSD TYPE=&SYSPARM,MODE=INOUT,TERM=3270,LANG=COBOL
MYMAP    DFHMDI SIZE=(24,80),LINE=1
NAME     DFHMDF POS=(1,1),LENGTH=30,ATTRB=(NORM,PROT)
         DFHMDF POS=(2,1),LENGTH=11,INITIAL='Enter Name:',ATTRB=(NORM,PROT)
INPUT    DFHMDF POS=(2,13),LENGTH=20,ATTRB=(UNPROT,IC)

This code defines a mapset (DFHMSD) with INOUT mode, a map (DFHMDI) of standard 3270 size, and fields for a protected label and an unprotected input area; assembly produces the necessary physical and symbolic components for use in CICS programs.

In modern CICS versions, BMS has evolved to support web and mobile integrations by generating HTML templates from existing map definitions, using procedures like DFHMAPT to translate 3270-like layouts into web forms while preserving field attributes as HTML elements (e.g., read-only inputs for protected fields). For data output, CICS extends BMS compatibility through utilities such as DFHLS2JS and DFHJS2LS, which map symbolic structures to JSON schemas, enabling BMS-defined data to be serialized as JSON for RESTful services and mobile apps without rewriting core maps. This facilitates hybrid deployments where legacy terminal transactions deliver content to web browsers or mobile devices via transformed outputs.
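The map-to-template translation described above can be sketched simply: protected fields become read-only HTML inputs, unprotected fields become editable ones. This is a hedged illustration of the idea, not DFHMAPT output; the field list mirrors the sample mapset, and the HTML shape is invented.

```python
# Hedged sketch of generating a web template from BMS-style field definitions.
# Protected fields map to read-only HTML inputs; unprotected fields stay editable.

FIELDS = [   # (field name, initial value, protected?) - mirrors the sample mapset
    ("NAME",  "",            True),
    ("LABEL", "Enter Name:", True),
    ("INPUT", "",            False),
]

def map_to_html(fields):
    """Render one <input> element per map field, preserving protection."""
    rows = []
    for name, initial, protected in fields:
        ro = " readonly" if protected else ""
        rows.append(f'<input name="{name}" value="{initial}"{ro}>')
    return "\n".join(rows)

form = map_to_html(FIELDS)
```

The point of the translation is that the attribute semantics of the original map (who may type where) survive into the web form, so the backing transaction logic needs no change.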

Application Examples and Use Cases

CICS has been instrumental in airline and travel reservation systems, enabling real-time booking transactions that update VSAM files while employing pseudo-conversational dialogs to maintain user sessions efficiently across multiple interactions. For instance, these systems handle seat availability queries and confirmations in high-volume environments, ensuring data consistency without locking resources for extended periods.

In banking and ATM networks, CICS supports high-availability transactions for account inquiries, balance checks, and fund transfers, often integrated with IBM MQ (formerly MQSeries) to facilitate asynchronous messaging between distributed systems. The Smarter Banking Showcase demonstrates this through a COBOL-based application on CICS TS 4.1, processing up to 800 transactions per second across channels like ATMs and branches, using DB2 for z/OS to manage 6 million client records and 12 million accounts. Event processing in CICS enhances fraud detection, for example by correlating check transactions in real time to identify kiting patterns across banks.

For retail point-of-sale (POS) operations, CICS powers inventory management and payment processing, utilizing Basic Mapping Support (BMS) to define screen flows for terminal interactions and the External CICS Interface (EXCI) for distributed calls to remote regions handling stock updates or authorization. This setup ensures rapid transaction throughput in stores, where sales data syncs with central inventory files, preventing inventory discrepancies during peak hours.

In modern insurance applications, CICS facilitates claims processing via APIs exposed through z/OS Connect Enterprise Edition, which transforms legacy programs into JSON-based services for handling payloads like policy details and claim submissions. A sample demonstrates a CICS application invoking a health insurance claim rule API over z/OS Connect, enabling seamless integration with external systems without modifying core logic.
A notable case study involves a financial firm migrating legacy applications from CICS to microservices using CICS TS 6.1, leveraging its enhanced Java 11 support via the IBM Semeru Runtime for containerized deployments. This transition, aided by automated refactoring tools, achieved over 50% faster development cycles by streamlining code conversion and reducing manual refactoring effort.

Advanced Capabilities

Sysplex and Distributed Operations

CICS integrates with the Parallel Sysplex environment through CICSPlex SM (CPSM), enabling scalability and workload distribution across multiple systems. This integration leverages the coupling facility for high-performance data sharing, including structures such as coupling facility data tables (CFDTs), named counters, shared temporary storage, and CPSM region status information. CFDTs, in particular, facilitate rapid sharing of working data across sysplex members with update integrity, supporting dynamic routing of transactions and program links to optimal regions. CPSM uses these mechanisms to monitor region states and route workloads efficiently, ensuring balanced processing without single points of failure.

Workload management in CICS sysplex environments is handled by CPSM's workload management (WLM) component, which provides transaction affinity, load balancing, and failover capabilities. Transaction affinity ensures that related transactions, such as those in a user-defined transaction group, remain routed to the same application-owning region (AOR) for session continuity, while allowing overrides for workload-balancing reasons. Load balancing is achieved by distributing dynamic transactions and program links across available CICS regions in a target scope, based on factors like CPU utilization and response times, to optimize resource use. For availability, CPSM supports session affinity and quiescing of target regions during overload, redirecting traffic to healthier systems; external data interfaces enable integration with sysplex-wide monitoring for seamless recovery.

Multi-region operation (MRO) provides inter-region communication among CICS regions within a single z/OS image or sysplex, using cross-memory services or XCF links rather than SNA networking. In MRO setups, terminal-owning regions (TORs) route transactions to AORs for application logic execution, which in turn access file-owning regions (FORs) for shared data.
This architecture supports AOR-FOR interactions for efficient data access without full sysplex coupling, reducing overhead compared to intersystem communication (ISC), which uses SNA LU 6.2 sessions, while maintaining transaction integrity. The use of sysplex and distributed operations in CICS enables horizontal scaling to manage peak loads by adding regions or systems, distributing workloads across LPARs for near-linear performance gains. In banking environments, this configuration provides continuous availability through Parallel Sysplex features like data sharing and dynamic workload routing, supporting high-volume transactions such as ATM processing and real-time account updates with minimal downtime. CICS TS 6.3 includes enhancements to sysplex operations, including support for sysplex caching for TLS 1.3 and optimizations for container-based traffic that improve handling of HTTP requests with container data, allowing efficient routing of modern web and API workloads in distributed environments.
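The routing decision CPSM makes can be illustrated with a toy balancer. This is a conceptual sketch only: the region names, load figures, and affinity table are invented, and real CPSM weighs many more factors (health, abend rates, link costs).

```python
# Illustrative sketch of CPSM-style dynamic routing: pick the least-loaded
# AOR, unless a transaction-group affinity pins the work to one region.

regions = {"AOR1": 0.75, "AOR2": 0.30, "AOR3": 0.55}  # current load factors (invented)
affinities = {"PAYG": "AOR3"}                         # transaction group -> pinned region

def route(tran_group: str) -> str:
    """Return the target region for a transaction group."""
    if tran_group in affinities:          # affinity overrides load balancing
        return affinities[tran_group]
    return min(regions, key=regions.get)  # otherwise choose the least-loaded region

target_balanced = route("INQG")   # no affinity: goes to the least-loaded region
target_pinned   = route("PAYG")   # affinity keeps related transactions together
```

The affinity check coming before the load comparison mirrors the text above: session continuity for a transaction group takes precedence, and balancing applies only to unpinned work.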

Recovery and Restart Mechanisms

CICS employs a two-phase commit protocol during syncpoint processing to ensure atomicity and consistency for units of work involving local and remote resources, such as VSAM files and DB2 databases. In the first phase (prepare), the syncpoint manager coordinates with resource managers to confirm readiness to commit changes, logging the necessary information in journals for recoverable resources to enable backout if needed. The second phase (commit) finalizes the updates only after all resources affirmatively respond, with journals capturing after-images to support recovery operations. This mechanism integrates with transaction syncpoints to maintain consistency across distributed environments.

CICS supports multiple restart types to address varying failure scenarios, balancing recovery completeness with operational downtime. A cold restart involves full reinitialization of all resources, discarding prior session data, and is suitable for initial startup or recovery from major failures or migrations. In contrast, a warm restart recovers from the last successful checkpoint, reinstating in-flight units of work and minimizing disruption by replaying logged activities since the checkpoint. An emergency restart provides partial recovery for critical situations, focusing on essential resources while deferring full reintegration, often used after uncontrolled shutdowns to expedite availability.

To optimize restart efficiency in multisystem environments, CICS utilizes global checkpointing through the Global Work Area (GWA) for coordinated state synchronization across regions, coupled with dynamic log management that automates journal allocation and retention based on system activity. This approach reduces restart times by preserving checkpoint data in shared structures and dynamically sizing logs to handle peak loads without manual intervention. Backward recovery, or backout, relies on the CICS system log to apply before-images and undo uncommitted changes during dynamic or emergency restarts, ensuring no partial updates persist.
Forward recovery complements this by replaying after-images from resource journals to restore committed data, particularly for VSAM datasets under Record Level Sharing (RLS) in sysplex configurations, where multiple CICS regions access shared files with built-in locking and buffering for concurrent operations.
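The syncpoint flow described above reduces to a short algorithm: every resource manager votes in the prepare phase, and only a unanimous yes leads to commit. The sketch below simulates that logic; the class and method names are illustrative, not a CICS interface.

```python
# Minimal two-phase-commit sketch of the syncpoint flow (illustrative only).
# Resource managers vote in phase one; all-yes commits, any-no backs out.

class Resource:
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit = name, can_commit
        self.state = "in-flight"

    def prepare(self) -> bool:   # phase one: log before-images, then vote
        return self.can_commit

    def commit(self):            # phase two, unanimous yes
        self.state = "committed"

    def backout(self):           # phase two, any no vote
        self.state = "backed-out"

def syncpoint(resources) -> str:
    if all(r.prepare() for r in resources):
        for r in resources:
            r.commit()
        return "committed"
    for r in resources:
        r.backout()
    return "backed-out"

ok  = syncpoint([Resource("VSAM"), Resource("DB2")])                    # both vote yes
bad = syncpoint([Resource("VSAM"), Resource("DB2", can_commit=False)])  # one votes no
```

Backing out every participant on a single negative vote is what guarantees that no partial update survives, the atomicity property the section attributes to syncpoint processing.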

Security and System Management

CICS provides robust security through integration with external security managers such as RACF and Top Secret, enabling user authentication and fine-grained resource protection. In RACF environments, users authenticate via the CESN transaction, supplying a userid and password, with options for persistent verification or identification-only modes configured in connection definitions. Top Secret similarly uses Accessor IDs (ACIDs) for authentication, mapping to RACF equivalents and supporting password substitutes like PassTickets. Resource access is controlled via profiles in classes like TCICSTRN for transactions, where administrators define permissions such as READ or UPDATE for specific transaction IDs, often using generic profiles (e.g., CICSTS54.CICS.**) to manage groups of related resources. Similarly, program and file resources are secured in classes like MCICSPPT and FCICSFCT, ensuring only authorized users can invoke or access them.

For secure communications, CICS supports SSL and TLS protocols over TCP/IP connections, including IPIC and web services, with configuration via keyrings and certificates managed by RACF. Application Transparent TLS (AT-TLS) offloads TLS processing to the TCP/IP stack, simplifying CICS setup while maintaining end-to-end encryption.

System management is facilitated by dedicated transactions for administrative tasks. The CEDA transaction enables online resource definition and installation, allowing alterations to the CICS system definition file (CSD) without system shutdown. CECI serves as the command-level interpreter, permitting interactive testing of EXEC CICS commands to verify syntax and behavior on a 3270 interface. CSMT handles message switching, routing transient data queues across regions or systems for operational notifications and alerts. Auditing in CICS relies on system logs and SMF records (e.g., type 110 records for CICS monitoring and statistics) to track user activities and resource access, supporting compliance with standards like PCI-DSS in financial applications.
Integration with IBM Security Guardium enhances this by collecting and analyzing CICS transaction and audit events, streaming them for real-time monitoring and regulatory reporting. Performance tuning involves optimizing buffer pools, such as VSAM LSR pools, based on monitoring metrics to minimize I/O and storage allocation overhead. Trace analysis uses CICS internal traces, supplemented by OMEGAMON for CICS, which provides real-time dashboards for task-level diagnostics and application performance insight. As of 2025, CICS TS 6.3 enhancements align with zero-trust principles, including API management in z/OS Connect for secure API exposure, expanded SMF auditing for compliance validation, and additional TCP/IP security improvements.
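The generic-profile idea mentioned above (a pattern like CICSTS54.CICS.** covering many resources) can be sketched with simple wildcard matching. This is a deliberately simplified illustration: real RACF matching has more elaborate rules for `*`, `**`, and `%` within qualifiers.

```python
# Hedged sketch of generic-profile matching in the spirit of the RACF example
# above. Simplified: '**' is treated as "match anything from here on".
import fnmatch

def profile_matches(profile: str, resource: str) -> bool:
    """Check a resource name against a (simplified) generic profile."""
    pattern = profile.replace("**", "*")   # collapse to an fnmatch-style wildcard
    return fnmatch.fnmatch(resource, pattern)

allowed = profile_matches("CICSTS54.CICS.**", "CICSTS54.CICS.APPL1.TRN1")
denied  = profile_matches("CICSTS54.CICS.**", "PRODCICS.APPL1.TRN1")
```

One generic profile guarding a whole naming subtree is the administrative payoff: permissions are granted once per pattern rather than once per transaction ID.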

Modern Integrations

Web and API Enablement

CICS has provided web support since 1996 through the CICS Web Interface, which enables the processing of HTTP requests directly within CICS regions, allowing legacy applications to serve HTTP clients without external gateways. This initial capability focused on basic HTTP server functionality, including support for dynamic HTML generation from existing programs using Basic Mapping Support (BMS) maps. Over time, CICS web support evolved to include more advanced features, such as acting as both an HTTP server and client for broader integration with web technologies.

A significant advancement occurred with CICS Transaction Server (TS) Version 5.3, which embedded the Liberty profile as a full servlet container within CICS. This integration allows Java EE web applications, including servlets and JavaServer Pages (JSPs), to run alongside traditional transactions, enabling hybrid workloads that bridge mainframe and modern paradigms. The Liberty profile in CICS TS 5.3 supports the Java EE 6 Web Profile, facilitating the deployment of RESTful services and web applications directly in the CICS environment.

For API enablement, CICS incorporates supplied utilities such as the JSON and XML assistants (e.g., DFHLS2JS) to handle data transformation between high-level language structures (like COBOL records) and web-friendly formats. These tools generate mappings for JSON schemas, enabling CICS programs to process and respond to API requests without manual parsing, which is essential for RESTful interactions. Additionally, URIMAP resources define URI patterns to route incoming HTTP requests, including REST calls, to specific CICS programs or pipelines, ensuring efficient dispatching based on path, method, and headers. This routing mechanism supports the creation of API endpoints that expose transactional logic as web services.
Integration with IBM z/OS Connect, with version 3.0 (generally available 2017) and enhanced by OpenAPI 3.0 support starting in 2022, further extends these capabilities by automating the generation of OpenAPI 3.0 specifications from existing programs in CICS. This allows developers to expose CICS services as RESTful APIs with minimal code changes, including support for asynchronous patterns through event-driven processing. z/OS Connect acts as a facade, translating modern requests into native CICS calls, which simplifies connectivity for distributed applications.

API security in CICS is bolstered by support for OAuth 2.0 and JSON Web Token (JWT) validation, integrated via z/OS Connect and CICS authentication exits. Incoming requests can be secured with access tokens obtained from external authorization servers, where CICS verifies the tokens before invoking programs, ensuring compliance with enterprise security standards. Rate limiting is configurable through z/OS Connect policies to prevent abuse and manage throughput for API endpoints.

A practical example involves converting a traditional BMS map-based inquiry transaction into a REST endpoint using URIMAPs and JSON mapping. This transformation allows mobile applications to retrieve data via HTTP GET requests, replacing terminal emulation with lightweight API calls that integrate seamlessly with modern front-ends. Such adaptations enable legacy CICS applications to support mobile clients efficiently, reducing the need for screen-scraping intermediaries.
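The URIMAP-style dispatch described above amounts to matching an incoming method and path against a table of patterns and handing off to a named program. The sketch below illustrates this; the table entries and the fallback value are invented for the example, not CICS definitions.

```python
# Conceptual sketch of URIMAP-style dispatch: match method + path patterns
# and route to a target program name. Entries are invented for illustration.
import re

URIMAPS = [                       # (HTTP method, path regex, target program)
    ("GET",  r"^/accounts/\d+$", "ACCTINQ"),
    ("POST", r"^/accounts$",     "ACCTADD"),
]

def dispatch(method: str, path: str) -> str:
    """Return the program name for the first matching URI map entry."""
    for m, pattern, program in URIMAPS:
        if m == method and re.match(pattern, path):
            return program
    return "NOMATCH"              # placeholder outcome when no map entry applies

prog = dispatch("GET", "/accounts/1234")
```

Because both the method and the path participate in matching, the same path can route a GET to an inquiry program and a POST to an update program, which is how transactional logic is exposed as distinct REST operations.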

Cloud and Hybrid Deployments

CICS has been adapted for containerized environments through integration with IBM z/OS Container Extensions (zCX), which enables the deployment of Linux on Z applications as Docker containers directly within a z/OS system. This allows CICS JVM servers to interact seamlessly with containerized components, such as Kafka clients running in CICS for event-driven processing. In CICS Transaction Server (TS) 6.1, enhanced support includes Liberty collectives for centralized management of multiple JVM servers and Jakarta EE 9 compatibility, facilitating the packaging of Liberty-based applications into pods via the IBM Z and Cloud Modernization Stack Operator. This operator automates CICS TS provisioning and lifecycle management on z/OS endpoints using Red Hat OpenShift Container Platform, enabling hybrid cloud topologies where CICS regions are orchestrated like containerized workloads.

Hybrid deployment models for CICS emphasize offloading non-critical transactions to public clouds while retaining core processing on IBM Z for performance and security. CICS Event Processing (EP) adapters capture and emit events from transactional workloads, formatting data for integration with cloud services via the IBM Z Digital Integration Hub (zDIH), which provides sub-second, low-latency access to mainframe data in hybrid environments. For AWS and Azure, this is achieved through connectors like the IBM CICS connector in Azure Logic Apps, allowing workflows to invoke CICS programs remotely, and patterns using z/OS Connect to expose CICS APIs as RESTful services for cloud-native applications on AWS. Offloading read-heavy logic reduces mainframe resource utilization, with caching mechanisms like Db2 Data Gate enabling zIIP-eligible, high-concurrency data sharing across hybrid setups.

DevOps practices in CICS deployments are supported by the CICS Build Toolkit, a command-line interface for automating the construction of CICS bundles, applications, and projects, integrating directly into CI/CD pipelines such as Jenkins.
The toolkit enables variable substitution in bundle definitions for environment-specific configurations, aligning with infrastructure-as-code principles by treating deployment artifacts as code stored in source-control repositories. Automated testing is facilitated through integration with broader z/OS toolchains, including IBM Developer for z/OS for unit testing of application components before bundling and deployment to hybrid environments. The toolkit runs on z/OS, Linux, and Windows, supporting Java 17+ builds via Gradle or Maven plugins to streamline agile development cycles.

Scalability in hybrid CICS environments leverages CICSPlex for elastic resource provisioning across multi-region topologies, allowing regions to be added or removed dynamically to handle varying workloads without downtime. Cloud bursting extends this capability by routing excess transactions to public clouds via EP adapters and zDIH, enabling seamless overflow from on-premises CICS regions to AWS or Azure instances during peak demand. The 2025 updates to OMEGAMON AI for CICS enhance cross-cloud monitoring with AI-driven anomaly detection for CPU and response times, supporting resource limiting to prevent overloads in burst scenarios and providing visibility into hybrid transaction flows. This setup sustains over 1 billion transactions daily in Parallel Sysplex configurations, with application multiversioning enabling zero-downtime scaling.

Key challenges in CICS hybrid deployments include maintaining data sovereignty and minimizing latency in cross-cloud interactions. Data sovereignty is addressed through encrypted channels, such as TLS-secured IPIC and MRO links, which support compliance with regional regulations by keeping sensitive mainframe data on premises while allowing controlled offloading. Latency mitigation relies on data caching via zDIH, which caches and replicates data near cloud endpoints for sub-millisecond access, reducing round-trip times in hybrid workflows. These solutions balance regulatory adherence with the performance demands of distributed architectures.
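The caching pattern behind such latency mitigation can be sketched as a time-bounded read-through cache. This is an illustrative sketch only: zDIH is a separate product with its own APIs, and the class, TTL value, and backend function here are invented for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch only: a read-through cache with a time-to-live, in the
// spirit of keeping recently read mainframe data near cloud consumers.
// zDIH itself is a separate product; nothing here is its API.
public class ReadCache<K, V> {
    private record Entry<V>(V value, long expiresAtMillis) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Function<K, V> backend; // e.g. a call back to the system of record
    private final long ttlMillis;

    public ReadCache(Function<K, V> backend, long ttlMillis) {
        this.backend = backend;
        this.ttlMillis = ttlMillis;
    }

    // Serve from the cache while fresh; otherwise re-read from the backend
    // and remember the result until its TTL expires.
    public V get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> e = entries.get(key);
        if (e == null || e.expiresAtMillis() <= now) {
            V value = backend.apply(key);
            entries.put(key, new Entry<>(value, now + ttlMillis));
            return value;
        }
        return e.value();
    }

    public static void main(String[] args) {
        // Hypothetical backend standing in for a mainframe read.
        ReadCache<String, String> cache = new ReadCache<>(k -> "record-for-" + k, 5_000);
        System.out.println(cache.get("ACCT1")); // first call reads through to the backend
        System.out.println(cache.get("ACCT1")); // repeat within the TTL is served locally
    }
}
```

The trade-off is the usual one: a longer TTL cuts more round trips to the system of record but widens the window in which cloud consumers may see stale data.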

Latest Releases and Enhancements

CICS Transaction Server (TS) 6.1, generally available in June 2022, introduced enhancements to its embedded Liberty profile, upgrading support to Jakarta EE 9 for improved Java application development and integration. This update enabled developers to use modern Java APIs and frameworks, including better compatibility with Eclipse MicroProfile, for building resilient microservices within CICS regions. The release also expanded bundle support through a new deployment mechanism tailored for Java build tools such as Gradle and Maven, streamlining the packaging and deployment of microservices-oriented applications.

CICS TS 6.2, released in June 2024, focused on operational resilience and modern networking capabilities. It added support for TLS 1.3 in sysplex environments, including sysplex caching of TLS 1.3 sessions to optimize performance across distributed CICS regions using TCP/IP workload balancing. For event-driven architectures, the version introduced system rules to monitor and manage queued transaction classes during surges, along with enhanced CICSPlex System Manager handling of type 71 ENF events for improved event processing and automation. Integration with OMEGAMON for CICS received AI optimizations, enabling predictive insights into transaction performance so that issues can be anticipated before they affect availability.

The most recent release, CICS TS 6.3, became generally available in September 2025, emphasizing developer productivity and exploitation of the z17 mainframe hardware. It extended Node.js support from version 18 (introduced in 6.2), allowing developers to build and deploy applications directly in CICS using the ibm-cics-api package to access resources such as VSAM and Db2. Advanced capabilities were bolstered through integration with z/OS Connect Enterprise Edition 3.0, with APAR PH68476 required for seamless HTTP service invocation and response handling in CICS TS 6.3 environments.
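Invoking a CICS program that has been exposed as a RESTful service typically reduces to an ordinary HTTP call from the client side. The following sketch builds such a request with the standard `java.net.http` API; the host name, path, and JSON payload shape are invented for the example, since a real service's contract comes from its published API definition.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

// Illustrative sketch only: constructing a JSON request for a CICS program
// exposed as a REST API (for example through z/OS Connect). The endpoint
// and payload fields below are hypothetical.
public class InvokeCics {
    static HttpRequest buildRequest(String host, String account) {
        String body = String.format("{\"accountNumber\":\"%s\"}", account);
        return HttpRequest.newBuilder(URI.create("https://" + host + "/banking/inquiry"))
                .timeout(Duration.ofSeconds(5))           // bound the round trip
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest("zosconnect.example.com", "12345678");
        // → POST https://zosconnect.example.com/banking/inquiry
        System.out.println(req.method() + " " + req.uri());
        // Sending it would be: HttpClient.newHttpClient().send(req, ...)
    }
}
```

From the cloud application's point of view the CICS program is just another HTTPS endpoint, which is what makes this pattern attractive for hybrid architectures.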
Furthermore, the release exploits z17 hardware features for enhanced throughput, with optimizations in the Java 21 runtime and MicroProfile 6.1 delivering up to 20% performance gains in mixed-language workloads compared with prior versions. IBM maintains CICS through a continuous delivery model, issuing quarterly APARs that deliver new functions, fixes, and patches without requiring full version upgrades. For instance, the July 2025 update included patches addressing vulnerabilities in embedded components, alongside participation in open beta programs for early access to upcoming features. Looking ahead, IBM's roadmap for CICS emphasizes deeper AI integration via tools such as OMEGAMON AI for CICS, enabling proactive management of transaction flows and resource utilization by 2027. In addition, alignment with IBM Z's quantum-safe initiatives will incorporate post-quantum cryptographic standards into CICS security features, protecting against emerging quantum threats through crypto-agile protocols.
