Message-oriented middleware
from Wikipedia

Message-oriented middleware (MOM) is software or hardware infrastructure that supports sending and receiving messages between distributed systems. It stands in contrast to streaming-oriented middleware, in which data is communicated as a sequence of bytes with no explicit message boundaries. Note that streaming protocols are almost always built on top of protocols that use discrete messages, such as frames (Ethernet), datagrams (UDP), packets (IP), and cells (ATM).

MOM allows application modules to be distributed over heterogeneous platforms and reduces the complexity of developing applications that span multiple operating systems and network protocols. The middleware creates a distributed communications layer that insulates the application developer from the details of the various operating systems and network interfaces. Application programming interfaces (APIs) that extend across diverse platforms and networks are typically provided by MOM.[1]

This middleware layer allows software components (applications, servlets, and other components) that have been developed independently and that run on different networked platforms to interact with one another. Applications distributed on different network nodes use the application interface to communicate. In addition, by providing an administrative interface, this new, virtual system of interconnected applications can be made fault tolerant and secure.[2]

MOM provides software elements that reside in all communicating components of a client/server architecture and typically support asynchronous calls between the client and server applications. MOM shields application developers from the complexity of the master-slave nature of the client/server mechanism.

Middleware categories


Middleware categories include remote procedure call (RPC)-based, object request broker (ORB)-based, and message-oriented middleware (MOM)-based systems. All these models make it possible for one software component to affect the behavior of another component over a network. They differ in that RPC- and ORB-based middleware create systems of tightly coupled components, whereas MOM-based systems allow for loose coupling of components. In an RPC- or ORB-based system, when one procedure calls another, it must wait for the called procedure to return before it can do anything else. In these mostly synchronous messaging models, the middleware functions partly as a super-linker, locating the called procedure on the network and using network services to pass function or method parameters to the procedure and then to return results.[2] Note that object request brokers also support fully asynchronous messaging via oneway invocations.[3]

Advantages


Central reasons for using a message-based communications protocol include its ability to store (buffer), route, or transform messages while conveying them from senders to receivers.

Another advantage of provider-mediated messaging between clients is that, by adding an administrative interface, performance can be monitored and tuned. Client applications are thus effectively relieved of every problem except that of sending, receiving, and processing messages. It is up to the code that implements the MOM system, and to the administrator, to resolve issues like interoperability, reliability, security, scalability, and performance.

Asynchronicity


Using a MOM system, a client makes an API call to send a message to a destination managed by the provider. The call invokes provider services to route and deliver the message. Once it has sent the message, the client can continue to do other work, confident that the provider retains the message until a receiving client retrieves it. The message-based model, coupled with the mediation of the provider, makes it possible to create a system of loosely coupled components.

MOM comprises a category of inter-application communication software that generally relies on asynchronous message-passing, as opposed to a request-response architecture. In asynchronous systems, message queues provide temporary storage when the destination program is busy or not connected. In addition, most asynchronous MOM systems provide persistent storage to back up the message queue. This means that the sender and receiver do not need to connect to the network at the same time (asynchronous delivery), and problems with intermittent connectivity are solved. It also means that should the receiver application fail for any reason, the senders can continue unaffected, as the messages they send will simply accumulate in the message queue for later processing when the receiver restarts.
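To make the queue-based model concrete, the following is a minimal sketch using the JMS 2.0 / Jakarta Messaging simplified API; the ConnectionFactory, the queue name "orders", and the payload are illustrative assumptions, and obtaining the ConnectionFactory is provider-specific (for example via JNDI).

    import jakarta.jms.ConnectionFactory;
    import jakarta.jms.JMSContext;
    import jakarta.jms.Queue;

    public class QueueSketch {
        // The ConnectionFactory comes from the MOM provider (e.g. a JNDI lookup); details vary.
        static void produce(ConnectionFactory cf) {
            try (JMSContext ctx = cf.createContext()) {
                Queue orders = ctx.createQueue("orders");          // illustrative destination name
                ctx.createProducer().send(orders, "order-123");    // returns as soon as the provider accepts the message
            }
            // The sender can continue immediately; the provider retains the message
            // until a consumer retrieves it, even if the consumer is offline right now.
        }

        static void consume(ConnectionFactory cf) {
            try (JMSContext ctx = cf.createContext()) {
                Queue orders = ctx.createQueue("orders");
                String body = ctx.createConsumer(orders).receiveBody(String.class, 5000); // wait up to 5 s
                System.out.println("processed " + body);
            }
        }
    }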

Routing


Many message-oriented middleware implementations depend on a message queue system. Some implementations permit routing logic to be provided by the messaging layer itself, while others depend on client applications to provide routing information or allow for a mix of both paradigms. Some implementations make use of broadcast or multicast distribution paradigms.

Transformation


In a message-based middleware system, the message received at the destination need not be identical to the message originally sent. A MOM system with built-in intelligence can transform and route messages to match the requirements of the sender or of the recipient.[4] In conjunction with the routing and broadcast/multicast facilities, one application can send a message in its own native format, and two or more other applications may each receive a copy of the message in their own native formats. Many modern MOM systems provide sophisticated message transformation (or mapping) tools that allow programmers to specify transformation rules through a simple GUI drag-and-drop operation.

Disadvantages


The primary disadvantage of many message-oriented middleware systems is that they require an extra component in the architecture, the message transfer agent (message broker). As with any system, adding another component can lead to reductions in performance and reliability, and can also make the system as a whole more difficult and expensive to maintain.

In addition, many inter-application communications have an intrinsically synchronous aspect, with the sender specifically wanting to wait for a reply to a message before continuing (see real-time computing and near-real-time for extreme cases). Because message-based communication inherently functions asynchronously, it may not fit well in such situations. That said, most MOM systems have facilities to group a request and a response as a single pseudo-synchronous transaction.

With a synchronous messaging system, the calling function does not return until the called function has finished its task. In a loosely coupled asynchronous system, the calling client can continue to load work upon the recipient until the resources needed to handle this work are depleted and the called component fails. Of course, these conditions can be minimized or avoided by monitoring performance and adjusting message flow, but this is work that is not needed with a synchronous messaging system. The important thing is to understand the advantages and liabilities of each kind of system. Each system is appropriate for different kinds of tasks. Sometimes, a combination of the two kinds of systems is required to obtain the desired behavior.

Standards


Historically, a lack of standards governing the use of message-oriented middleware caused problems. Most of the major vendors have their own implementations, each with its own application programming interface (API) and management tools.

One of the long-standing standards for message-oriented middleware is the X/Open group's XATMI specification (Distributed Transaction Processing: The XATMI Specification), which standardizes an API for interprocess communication. Known implementations of this API include ATR Baltic's Enduro/X middleware and Oracle's Tuxedo.

The Advanced Message Queuing Protocol (AMQP) is an approved OASIS[5] and ISO[6] standard that defines the protocol and formats used between participating application components, so implementations are interoperable. AMQP may be used with flexible routing schemes, including common messaging paradigms like point-to-point, fan-out, publish/subscribe, and request-response (these are intentionally omitted from v1.0 of the protocol standard itself and instead rely on the particular implementation and/or underlying network protocol for routing). It also supports transaction management, queuing, distribution, security, management, clustering, federation, and heterogeneous multi-platform support. Java applications typically use AMQP through the Java Message Service (JMS) API. Other implementations provide APIs for C#, C++, PHP, Python, Ruby, and other programming languages.

The High Level Architecture (HLA IEEE 1516) is an Institute of Electrical and Electronics Engineers (IEEE) and Simulation Interoperability Standards Organization (SISO) standard for simulation interoperability. It defines a set of services, provided through an API in C++ or Java. The services offer publish/subscribe based information exchange, based on a modular Federation Object Model. There are also services for coordinated data exchange and time advance, based on logical simulation time, as well as synchronization points. Additional services provide transfer of ownership, data distribution optimizations and monitoring and management of participating Federates (systems).

The MQ Telemetry Transport (MQTT) is an ISO standard (ISO/IEC 20922) supported by the OASIS organization. It provides a lightweight publish/subscribe reliable messaging transport protocol on top of TCP/IP, suitable for communication in M2M/IoT contexts where a small code footprint is required and/or network bandwidth is at a premium.

The Object Management Group's Data Distribution Service (DDS) provides message-oriented Publish/Subscribe (P/S) middleware standard that aims to enable scalable, real-time, dependable, high performance and interoperable data exchanges between publishers and subscribers.[7] The standard provides interfaces to C++, C++11, C, Ada, Java, and Ruby.

XMPP


The eXtensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on Extensible Markup Language (XML). Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet of Things applications such as the smart grid, and social networking services. Unlike most instant messaging protocols, XMPP is defined in an open standard and uses an open systems approach of development and application, by which anyone may implement an XMPP service and interoperate with other organizations' implementations. Because XMPP is an open protocol, implementations can be developed using any software license; although many server, client, and library implementations are distributed as free and open-source software, many freeware and proprietary software implementations also exist. The Internet Engineering Task Force (IETF) formed an XMPP working group in 2002 to formalize the core protocols as an IETF instant messaging and presence technology. The XMPP Working group produced four specifications (RFC 3920, RFC 3921, RFC 3922, RFC 3923), which were approved as Proposed Standards in 2004. In 2011, RFC 3920 and RFC 3921 were superseded by RFC 6120 and RFC 6121 respectively, with RFC 6122 specifying the XMPP address format. In addition to these core protocols standardized at the IETF, the XMPP Standards Foundation (formerly Jabber Software Foundation) is active in developing open XMPP extensions. XMPP-based software is deployed widely across the Internet, according to the XMPP Standards Foundation, and forms the basis for the Department of Defense (DoD) Unified Capabilities Framework.[8]

The Java EE programming environment provides a standard API called Java Message Service (JMS), which is implemented by most MOM vendors and aims to hide the particular MOM API implementations; however, JMS does not define the format of the messages that are exchanged, so JMS systems are not interoperable.

A similar effort is the actively evolving OpenMAMA project, which aims to provide a common API, especially for C clients. As of August 2012, it is mainly appropriate for distributing market-oriented data (e.g., stock quotes) over pub-sub middleware.

Message queuing


Message queues allow the exchange of information between distributed applications. A message queue can reside in memory or on disk storage. Messages stay in the queue until they are processed by a service consumer. Through the message queue, applications can be implemented independently: they do not need to know each other's location, nor wait for each other to be available in order to exchange messages.[9]
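The decoupling a queue provides can be illustrated, in-process only, with a standard java.util.concurrent blocking queue; this is a sketch of the idea rather than real MOM, since an actual middleware queue adds persistence, network distribution, and administration on top of the same pattern.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueAnalogy {
        public static void main(String[] args) {
            // Bounded buffer standing in for a middleware-managed queue.
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

            // Producer: hands the message to the queue and moves on.
            Thread producer = new Thread(() -> {
                try {
                    queue.put("invoice-42");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Consumer: picks messages up whenever it is ready.
            Thread consumer = new Thread(() -> {
                try {
                    String msg = queue.take();
                    System.out.println("processed " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
        }
    }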

from Grokipedia
Message-oriented middleware (MOM) is software or hardware that enables distributed applications to communicate and exchange data by sending and receiving messages asynchronously, without requiring direct connections between the communicating parties. This approach facilitates the integration of heterogeneous systems across different platforms, languages, and protocols by providing a standardized messaging layer that abstracts underlying complexities. The concepts of MOM were developed in the early 1990s, with the formation of the Message Oriented Middleware Association (MOMA) in 1993 to promote standards and interoperability. MOM emerged as an alternative to tightly coupled models like remote procedure calls (RPC), addressing limitations in scalability and reliability for large-scale distributed environments. At its core, MOM relies on a messaging provider—a central or distributed component that mediates message operations through application programming interfaces (APIs) and administrative tools—to ensure reliable delivery, routing, and storage of messages. Key features include asynchronous messaging, where senders and receivers operate independently without waiting for immediate responses, and message queuing mechanisms that store messages in a durable, ordered fashion (often first-in, first-out) until they are processed. The advantages of MOM make it particularly valuable in modern architectures, such as microservices, cloud-native applications, and hybrid environments, where it decouples components to enhance scalability, flexibility, and resilience against network failures or outages. For instance, by buffering messages during disruptions, MOM prevents message loss and allows systems to recover seamlessly, supporting high-throughput scenarios like processing millions of queries per hour. Standardization efforts, such as the Java Message Service (JMS) introduced in 1998, have addressed interoperability issues among proprietary MOM implementations, enabling broader adoption. Overall, MOM serves as a foundational technology for enterprise integration, event-driven architectures, and real-time data exchange in distributed systems.

Introduction

Definition and Purpose

Message-oriented middleware (MOM) is a type of software that enables distributed applications to communicate and exchange data by sending and receiving messages asynchronously, without requiring direct knowledge of each other's locations, states, or implementations. This approach promotes loose coupling between sender and receiver applications, allowing them to operate independently and evolve separately without impacting one another. By acting as an intermediary, MOM decouples the producers and consumers of messages, facilitating integration across heterogeneous environments that may involve diverse programming languages, platforms, and network conditions.

The primary purpose of MOM is to provide reliable and scalable communication mechanisms for enterprise-level distributed systems, where applications need to handle high volumes of data exchange without synchronous dependencies. It supports key functionalities such as message persistence to ensure durability against failures, intelligent routing based on criteria like priority or load balancing, and transformation to adapt message formats between incompatible systems. These features enable fault-tolerant operations, where messages are stored until successfully delivered, enhancing overall system resilience and performance in dynamic, large-scale deployments.

In contrast to synchronous middleware paradigms like remote procedure calls (RPC), which require immediate responses and tight coupling between caller and callee, MOM employs a non-blocking model that allows applications to continue processing without waiting for acknowledgments. Similarly, it differs from object request brokers (ORBs), such as those in CORBA, by avoiding the need for shared interfaces or direct object invocations, thus reducing complexity in heterogeneous setups. MOM originated as a response to the limitations of these earlier distributed systems, offering a more flexible alternative for asynchronous interactions in evolving enterprise architectures.

Historical Development

The origins of message-oriented middleware (MOM) trace back to the late 1960s and 1970s, when early systems laid the groundwork for asynchronous messaging in distributed environments. IBM's Customer Information Control System (CICS), first released in 1969, provided a foundation for handling online transactions through message-driven interactions in mainframe environments, evolving significantly during the 1980s to support distributed processing needs in banking and utilities. Similarly, Tandem Computers' NonStop systems, introduced in the 1970s and refined in the 1980s, utilized message passing as a core mechanism for fault-tolerant communication across multiprocessor clusters, enabling reliable transaction switching in high-availability applications. These systems emphasized decoupling applications via messages to manage synchronous and asynchronous workloads, setting precedents for modern MOM.

In the 1990s, MOM advanced with the development of dedicated message queuing products and standardization efforts to address interoperability in heterogeneous networks. IBM released MQSeries (later renamed IBM MQ) in 1993, introducing robust queue-based messaging for enterprise integration across platforms like AIX, OS/2, and mainframes, which facilitated decoupled application communication and became a cornerstone for transaction processing. Concurrently, the X/Open Consortium published the preliminary XATMI specification in 1993 as part of its Distributed Transaction Processing framework, defining APIs for message-oriented transaction management that promoted portability and reliability in client-server architectures. These innovations shifted focus from rigid synchronous calls to flexible, store-and-forward messaging models, driven by the growth of client-server computing.

The 2000s saw MOM influenced by the rise of service-oriented architecture and web services, leading to API standards and open protocols that enhanced cross-language integration. Sun Microsystems released the Java Message Service (JMS) specification version 1.0 in 1998, providing a standardized API for Java applications to interact with MOM providers, supporting both point-to-point and publish-subscribe patterns to simplify enterprise messaging. In 2003, JPMorgan Chase initiated the development of the Advanced Message Queuing Protocol (AMQP) as an open, wire-level standard for interoperable messaging, aiming to commoditize enterprise middleware beyond proprietary queues.

Post-2010 developments integrated MOM with cloud and big data ecosystems, extending its scope to high-throughput streaming. Apache Kafka, originally developed at LinkedIn and open-sourced in 2011 through the Apache Incubator, emerged as a distributed event streaming platform that built on MOM principles for scalable, real-time data pipelines in large-scale environments. The AMQP 1.0 specification achieved OASIS standardization in 2012, formalizing its role in secure, platform-agnostic messaging. Additionally, the MQTT protocol, initially created in 1999 for constrained networks, was adopted as an ISO/IEC standard (ISO/IEC 20922) in 2016, boosting its use in IoT and low-bandwidth scenarios. These milestones reflected MOM's evolution toward resilient, scalable systems for modern distributed architectures.

Core Concepts

Key Components

Message-oriented middleware (MOM) systems rely on several core elements to facilitate asynchronous communication between distributed applications. At the heart of these systems is the message broker, which serves as a central hub for routing messages between producers and consumers, ensuring decoupling and reliable delivery. Producers, which are application components that generate and send messages, interact with the broker to publish content, while consumers retrieve and process those messages. Queues provide temporary storage for messages in point-to-point scenarios, allowing messages to be held until a specific consumer acknowledges receipt, whereas topics enable broadcasting to multiple subscribers in publish-subscribe models. Clients encompass both producers and consumers, utilizing APIs provided by the MOM to connect to the broker and exchange messages.

Supporting these core elements are additional components that enhance functionality and reliability. Transformers handle format conversion between disparate message protocols or data structures, ensuring compatibility across heterogeneous systems. Routers implement intelligent delivery logic, directing messages based on predefined rules such as content filtering or priority levels. Persistence stores, often implemented using databases or file systems, maintain message durability by journaling undelivered content, preventing loss during system failures.

The interaction model in MOM systems follows a store-and-forward paradigm, where producers send messages to the broker, which stores them in queues or topics and forwards them to consumers upon subscription or polling. Delivery rules, such as acknowledgments, ensure guaranteed receipt, with the broker managing retries and error handling to maintain reliability. This asynchronous approach allows producers and consumers to operate independently, with point-to-point queuing used for targeted delivery and publish-subscribe for one-to-many distribution.

Security in MOM components is integrated at multiple levels to protect message integrity and confidentiality. Authentication mechanisms verify the identity of producers and consumers before allowing broker access, often using mechanisms such as SASL. Encryption secures message payloads in transit via TLS/SSL, while access controls restrict queue or topic subscriptions based on roles and permissions. These features collectively safeguard against unauthorized interception or tampering within the MOM infrastructure.

Message Exchange Patterns

Message exchange patterns in message-oriented middleware (MOM) define the fundamental ways in which messages are sent, received, and processed between distributed applications, enabling asynchronous communication while accommodating various reliability and interaction needs. These patterns leverage MOM components such as brokers, queues, and topics to route and manage messages without direct point-to-point connections between sender and receiver.

The request-reply pattern facilitates two-way exchanges in asynchronous MOM environments by allowing a sender (requestor) to dispatch a request and await a corresponding reply from a receiver (replier). In this pattern, the requestor sends the request message to a channel, typically using point-to-point or publish-subscribe mechanisms, and the replier processes it before sending a reply back via a dedicated reply channel. To handle the asynchronous nature, correlation identifiers are assigned to requests, enabling the requestor to match incoming replies to specific outgoing requests, while timeouts prevent indefinite blocking by discarding unfulfilled requests after a predefined period. This pattern is particularly useful for scenarios mimicking synchronous remote procedure calls, such as querying a service for data or confirming an operation, without requiring tight coupling between applications.

In contrast, the one-way or fire-and-forget pattern involves transmitting a message to a MOM channel without expecting or waiting for any acknowledgment or reply, allowing immediate continuation of other tasks. The messaging system assumes responsibility for delivery, often retrying until successful, which decouples the sender from receiver availability and processing time. This simple approach suits non-critical notifications, such as log events or status updates, where high throughput is prioritized over immediate confirmation.

Transactional messaging ensures atomicity in MOM operations by grouping multiple message sends, receives, or related actions into a single unit that either fully succeeds or fully fails, preventing partial updates in distributed systems. In standards like the Java Message Service (JMS), transacted sessions support local transactions via commit and rollback methods, while XA-compliant sessions enable distributed two-phase commits to coordinate MOM with other resources, such as databases, using a transaction manager. For instance, XA transactions prepare all involved resources before committing, ensuring consistency across message queues and external systems. This pattern is essential for use cases requiring atomicity, like financial transfers involving message acknowledgments and database writes.

Error handling in MOM incorporates mechanisms like retry policies and dead-letter queues (DLQs) to manage failed message deliveries or processing attempts gracefully. Retry mechanisms configure delays and maximum attempt limits, redelivering messages upon failures such as rollbacks or timeouts to allow temporary issues to resolve. If retries are exhausted without success, the message is routed to a DLQ—a dedicated queue for undeliverable or "poison" messages—enabling administrators to inspect, reprocess, or discard them manually. These features, configurable in implementations like ActiveMQ, maintain system robustness by isolating errors without halting overall message flow.
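A minimal request-reply sketch using JMS, assuming a provider ConnectionFactory and an illustrative "quotes" queue; the temporary queue, correlation ID, and 5-second timeout follow the pattern described above.

    import jakarta.jms.*;
    import java.util.UUID;

    public class RequestReplySketch {
        // Requestor: send a request, then wait (with a timeout) for the correlated reply.
        static String request(ConnectionFactory cf) throws JMSException {
            try (JMSContext ctx = cf.createContext()) {
                Queue requests = ctx.createQueue("quotes");          // illustrative queue name
                TemporaryQueue replyTo = ctx.createTemporaryQueue(); // private reply channel
                TextMessage req = ctx.createTextMessage("price of ACME?");
                req.setJMSReplyTo(replyTo);
                req.setJMSCorrelationID(UUID.randomUUID().toString());
                ctx.createProducer().send(requests, req);
                Message reply = ctx.createConsumer(replyTo).receive(5000); // null if the timeout expires
                return reply == null ? null : ((TextMessage) reply).getText();
            }
        }

        // Replier: echo the correlation ID so the requestor can match the reply.
        static void reply(ConnectionFactory cf) throws JMSException {
            try (JMSContext ctx = cf.createContext()) {
                Message req = ctx.createConsumer(ctx.createQueue("quotes")).receive();
                TextMessage rep = ctx.createTextMessage("42.50");
                rep.setJMSCorrelationID(req.getJMSCorrelationID());
                ctx.createProducer().send(req.getJMSReplyTo(), rep);
            }
        }
    }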

Types

Point-to-Point Messaging

Point-to-point messaging, also known as queue-based messaging, is a fundamental pattern in message-oriented middleware (MOM) where producers send messages to a specific queue and consumers retrieve them from the front of that queue in a load-balanced fashion across multiple consumers. In this model, multiple producers can submit messages to the same queue, but each message is delivered to and consumed by exactly one consumer, ensuring exclusive processing and preventing duplication. The queue acts as an intermediary, storing messages persistently until acknowledged by the consumer, which supports asynchronous communication and decouples the sender from the receiver.

Key characteristics include first-in, first-out (FIFO) ordering, where messages are processed in the sequence they arrive, maintaining reliability for ordered workflows. Message selectors allow consumers to filter incoming messages based on predefined criteria, such as headers or properties, enabling targeted consumption without irrelevant data. Temporary queues facilitate reply patterns by creating short-lived, dynamically generated queues that are deleted after use, ideal for request-response interactions without permanent infrastructure. Exactly-once delivery is typically achieved through acknowledgments, where the consumer confirms receipt and processing before the message is removed from the queue, minimizing loss or duplication even in failure scenarios.

Common use cases involve scenarios requiring reliable, ordered task distribution, such as order processing in e-commerce systems, where messages represent customer orders routed to available fulfillment services for exactly-once execution. In financial transaction processing, point-to-point queues ensure secure, fault-tolerant handling of payments or account updates, with load balancing across consumers to manage high volumes without bottlenecks. A representative pattern is the work queue, often used in microservices architectures to distribute computational jobs, such as image resizing or data analysis tasks, across a pool of worker consumers that pull messages from a shared queue for parallel execution. This approach scales horizontally by adding consumers, balancing load while preserving message integrity through acknowledgments and FIFO semantics.
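The following is a minimal worker sketch in JMS illustrating a competing-consumer work queue with a message selector and explicit client acknowledgment; the "image.resize" queue, the "priority > 3" selector (a custom message property), and the processing step are illustrative assumptions.

    import jakarta.jms.*;

    public class WorkerSketch {
        static void runWorker(ConnectionFactory cf) throws JMSException {
            // CLIENT_ACKNOWLEDGE: the message is removed from the queue only after acknowledge().
            try (JMSContext ctx = cf.createContext(JMSContext.CLIENT_ACKNOWLEDGE)) {
                Queue work = ctx.createQueue("image.resize");                  // shared work queue
                JMSConsumer worker = ctx.createConsumer(work, "priority > 3"); // selector on a custom int property
                while (!Thread.currentThread().isInterrupted()) {
                    Message job = worker.receive(1000);  // poll; other workers on the same queue share the load
                    if (job == null) continue;
                    // ... process the job here ...
                    job.acknowledge();                   // confirm only after successful processing
                }
            }
        }
    }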

Publish-Subscribe Messaging

Publish-subscribe (pub-sub) messaging is a pattern in message-oriented middleware (MOM) where publishers send messages to specific topics without knowledge of individual recipients, while subscribers register interest in those topics to receive copies of relevant messages. In this model, a broker acts as an intermediary, routing messages from publishers to all matching subscribers asynchronously, enabling one-to-many or many-to-many communication. Publishers broadcast a single message to a topic, and the broker delivers independent copies to each subscriber based on their subscriptions, which can use filters or wildcards for topic matching (e.g., "sports.*" to match any sports-related subtopic).

Key characteristics of pub-sub messaging include support for hierarchical topics, which organize subjects in a tree-like structure for refined filtering, such as "sports/football/teams" allowing subscriptions at various levels of granularity. Content-based routing extends this by evaluating message attributes against subscriber predicates, rather than relying solely on topic names, to enable more dynamic filtering. Subscriptions can be shared, where multiple subscribers collectively consume messages from a topic (often for load balancing), or exclusive, where a single subscriber receives all messages for that topic. Durable subscriptions ensure that messages published during a subscriber's offline period are retained and delivered upon reconnection, supporting reliable delivery for intermittent consumers.

Common use cases for pub-sub messaging include event notifications in real-time analytics systems, where data streams like sensor readings are disseminated to multiple processing nodes, and stock tickers that broadcast price updates to trading applications and dashboards simultaneously. These scenarios benefit from the pattern's ability to handle high-velocity events and support offline consumers via durable subscriptions, ensuring no message loss in such environments. For scalability, pub-sub systems often employ sharding of topics across multiple brokers, partitioning messages into subsets distributed over nodes to manage high fan-out ratios and throughput; for instance, Apache Kafka partitions topics to enable parallel processing and horizontal scaling across clusters. This approach allows independent scaling of publishers and subscribers, handling traffic spikes without centralized bottlenecks, as demonstrated in enterprise MOM platforms supporting millions of concurrent operations per hour.
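A minimal durable-subscriber sketch in JMS; the client ID, topic name "ticker.nasdaq", and subscription name are illustrative assumptions, and the durable subscription means messages published while this subscriber is offline are delivered when it reconnects.

    import jakarta.jms.*;

    public class DurableSubscriberSketch {
        static void subscribe(ConnectionFactory cf) {
            JMSContext ctx = cf.createContext();
            ctx.setClientID("dashboard-1");                  // required for an unshared durable subscription
            Topic prices = ctx.createTopic("ticker.nasdaq"); // illustrative topic name
            // The provider remembers the subscription "nasdaq-dashboard" between connections.
            JMSConsumer sub = ctx.createDurableConsumer(prices, "nasdaq-dashboard");
            sub.setMessageListener(msg -> System.out.println("tick received"));
            // ctx is intentionally left open so the listener keeps receiving messages.
        }

        static void publish(ConnectionFactory cf) {
            try (JMSContext ctx = cf.createContext()) {
                Topic prices = ctx.createTopic("ticker.nasdaq");
                ctx.createProducer().send(prices, "AAPL 190.25"); // every current and durable subscriber gets a copy
            }
        }
    }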

Advantages

Asynchronous Processing

In message-oriented middleware (MOM), asynchronous processing allows producers to send messages without blocking or waiting for an immediate response from consumers, enabling the sender to continue its operations independently. Messages are temporarily stored in intermediary buffers, such as queues, which handle bursts of traffic by queuing them until consumers are ready to retrieve and process them at their own pace. This mechanism decouples the temporal aspects of communication, where the producer and consumer operate without synchronizing with each other, relying on the MOM provider to manage delivery semantics like at-most-once or exactly-once guarantees when configured.

A key benefit of this asynchronous approach is enhanced throughput in high-load environments, as it prevents bottlenecks caused by synchronous waits; for instance, web frontends can dispatch user requests to backend services via MOM without halting the user-facing request flow, allowing the system to handle thousands of concurrent operations efficiently. Buffers mitigate overload by absorbing spikes in message volume, ensuring stable performance even when consumers experience variable processing times due to resource constraints or workload variations. This decoupling fosters resilient architectures, where components like microservices communicate reliably without direct dependencies on each other's availability.

Implementation of asynchronous processing in MOM typically involves non-blocking APIs that facilitate sending and event-driven receiving. In standards like the Java Message Service (JMS) 2.0 and later, producers can use an asynchronous send with a CompletionListener on a MessageProducer to dispatch messages without blocking, providing callback-based completion semantics, while consumers register MessageListener objects that invoke an onMessage() callback upon delivery, integrating seamlessly with threading models like single-threaded sessions or multi-threaded message-driven beans (MDBs) for concurrent handling. These listeners align with event loops in application frameworks, polling or receiving push notifications from the MOM provider to process messages asynchronously, often leveraging container-managed threads to avoid manual thread management. Without persistence enabled in the buffers, however, asynchronous delivery risks message loss during failures, though the core focus remains on non-blocking flow control.
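A minimal sketch of the non-blocking send and listener-based receive described above, using the JMS 2.0 simplified API (JMSProducer.setAsync); the queue name and payload are illustrative assumptions.

    import jakarta.jms.*;

    public class AsyncSketch {
        static void sendWithoutBlocking(ConnectionFactory cf) {
            JMSContext ctx = cf.createContext();
            Queue queue = ctx.createQueue("events"); // illustrative destination
            ctx.createProducer()
               .setAsync(new CompletionListener() {
                   @Override public void onCompletion(Message msg) {
                       // Invoked once the provider has accepted the message.
                   }
                   @Override public void onException(Message msg, Exception e) {
                       // Invoked if the send ultimately fails; retry or log here.
                   }
               })
               .send(queue, "user-signed-up");       // returns immediately, before the provider confirms
        }

        static void receiveViaListener(ConnectionFactory cf) {
            JMSContext ctx = cf.createContext();
            Queue queue = ctx.createQueue("events");
            // onMessage() runs on a provider-managed thread as messages arrive.
            ctx.createConsumer(queue).setMessageListener(msg -> System.out.println("handled event"));
        }
    }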

Scalability and Reliability

Message-oriented middleware (MOM) achieves scalability primarily through horizontal scaling mechanisms that distribute workload across multiple nodes. By clustering brokers, MOM systems such as Apache Kafka allow the addition of servers to handle increased message volumes without downtime, enabling seamless expansion in distributed environments. Partitioning of queues or topics further enhances this by dividing data into subsets that can be processed in parallel across brokers, supporting high-throughput scenarios such as event ingestion. Elastic resource allocation is facilitated through dynamic reconfiguration, where resources like partitions can be rebalanced automatically as cluster size changes, ensuring efficient load distribution in cloud-native deployments.

Reliability in MOM is underpinned by configurable delivery semantics that guarantee message handling under varying conditions. Systems support at-least-once delivery, where messages are ensured to arrive but may be duplicated; at-most-once, allowing potential loss but no duplicates; and exactly-once semantics, which prevent both loss and duplication through idempotent operations and transactional commits. Persistence to disk ensures messages are durably stored before acknowledgment, mitigating loss from transient failures, while replication across multiple nodes—often with a configurable factor such as three—provides redundancy for recovery. In clustered setups, such as RabbitMQ's quorum queues, this replication maintains consistency by mirroring queue contents across a majority of nodes.

Fault tolerance mechanisms in MOM bolster system resilience against node failures. Heartbeats, sent at regular intervals (e.g., every few seconds), monitor the health of consumers and brokers, triggering alerts or reassignments if connectivity lapses. Failover to backup brokers is automated via leader election in replicated partitions, allowing the cluster to continue operations without interruption. Idempotency support in producers and consumers handles retries gracefully, ensuring that duplicate processing does not alter outcomes in exactly-once scenarios.

Performance metrics in MOM emphasize throughput, often measured in messages per second, to quantify scalability in distributed systems. Performance benchmarks demonstrate that MOM systems can achieve throughputs ranging from thousands to millions of messages per second, depending on configuration and hardware. Durability guarantees are validated through replication metrics, where systems like Kafka maintain consistency across failures while sustaining high throughput in geo-distributed clusters.
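As one concrete example of configuring delivery guarantees, the sketch below sets up an Apache Kafka producer for durable, idempotent sends; the broker addresses, topic name, and key/value are illustrative assumptions, while the configuration keys shown (acks, enable.idempotence, retries) are standard Kafka producer settings.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ReliableProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // illustrative cluster
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all");                // wait for all in-sync replicas to persist the record
            props.put("enable.idempotence", "true"); // broker deduplicates retried sends
            props.put("retries", Integer.MAX_VALUE); // keep retrying transient failures

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("payments", "order-42", "captured"),
                        (metadata, exception) -> {
                            if (exception != null) {
                                // Delivery ultimately failed; alert or fall back here.
                            }
                        });
            }
        }
    }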

Disadvantages

Performance Trade-offs

Message-oriented middleware (MOM) introduces additional latency compared to direct synchronous communication methods, primarily due to the overhead of queuing messages, routing them through brokers, and ensuring persistence for reliability. For instance, in broker-based systems like ActiveMQ and OpenMQ, end-to-end latency increases with message size, ranging from milliseconds for small payloads (e.g., 10 KB) to higher values for larger ones (e.g., 2048 KB), as queuing and disk persistence operations add delays not present in direct API calls. Persistence mechanisms, such as logging to disk in systems like IBM WebSphere MQ, can reduce throughput by factors related to disk-to-memory access time ratios (around 10^5), exacerbating latency under load.

Throughput in MOM is limited by broker bottlenecks during high-volume scenarios, where central components like queue managers become saturated, capping sustainable rates at levels such as 2000 messages per second in analyzed models. Serialization and deserialization costs further constrain throughput, as converting messages to wire formats (e.g., XML or binary) and back imposes CPU overhead that scales with payload size and complexity, leading to throughput degradation in publish-subscribe patterns for large messages. Benchmarks using SPECjms2007 show peak throughputs up to 14,000 messages per second, but these drop as the number of message destinations increases due to routing overhead in the broker.

MOM systems consume significant resources, including memory for message buffering in queues, which can reach optimal sizes around 10^5 bytes to minimize drops but still require substantial allocation under saturation. CPU usage rises for transformations and routing decisions, with spikes observed in ActiveMQ during peak loads, while network bandwidth is strained by replication for fault tolerance, adding overhead proportional to the number of replicas.

To mitigate these trade-offs, administrators can tune batch sizes to balance latency and throughput, as larger batches in systems like Kafka improve efficiency for high-volume streams but may increase individual message delays. Compression techniques reduce serialization costs and network usage by shrinking message sizes, though they introduce minor CPU overhead for encoding/decoding.
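The batching and compression knobs mentioned above look like the following in an Apache Kafka producer configuration; the specific values are illustrative starting points rather than recommendations, while the keys (batch.size, linger.ms, compression.type) are standard producer settings.

    import java.util.Properties;

    public class ThroughputTuningSketch {
        static Properties throughputTunedConfig() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // illustrative
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("batch.size", 131072);       // bytes buffered per partition before a batch is sent
            props.put("linger.ms", 10);            // wait up to 10 ms to fill a batch: latency traded for throughput
            props.put("compression.type", "lz4");  // shrink payloads on the wire at some CPU cost
            return props;
        }
    }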

Implementation Challenges

Implementing message-oriented middleware (MOM) systems presents several practical difficulties, particularly in large-scale distributed environments where reliability and scalability are paramount. Configuration complexity arises from the need to define numerous parameters for queues, such as names, sizes, sorting algorithms, and quality-of-service (QoS) settings like at-most-once or at-least-once delivery, which must be meticulously tuned to match application requirements. In clustered deployments, additional challenges include establishing secure connections via SSL/TLS and certificates, configuring load balancing across multiple brokers, and integrating persistent storage mechanisms at both sender and receiver ends to ensure durability. Monitoring setups further complicate this process, requiring real-time oversight of message throughput and queue depths to prevent bottlenecks in high-volume scenarios, such as handling millions of queries per hour.

Debugging MOM systems is hindered by their asynchronous and distributed nature, making it difficult to trace message flows across multiple components and diagnose partial failures like network outages or host crashes. Binary protocols in some implementations obscure visibility into payloads and routing decisions, whereas text-based alternatives like STOMP facilitate easier inspection but may introduce overhead. Without robust diagnostic tools, developers often resort to application-level logging for tracking, which becomes cumbersome in environments spanning thousands of hosts and diverse networks, including unreliable links. Handling non-persistent queues exacerbates these issues, as message loss during failures requires manual reconstruction of event correlations, demanding advanced distributed tracing capabilities that are not always natively supported.

Integration hurdles in MOM deployments stem from interoperability limitations, such as the lack of direct compatibility between different Java Message Service (JMS) providers, necessitating bridge queues or additional middleware layers to enable cross-vendor communication. Vendor lock-in arises from proprietary extensions required for advanced features like transactional support or atomic grouping with business processes, complicating migrations and schema evolution for evolving message formats. Testing asynchronous behaviors poses further challenges, as simulating distributed failures and ensuring end-to-end integrity often involves porting applications to support producer-consumer patterns, which can tightly couple monitoring and messaging components if not designed modularly.

Operational costs for MOM systems are elevated due to the demand for specialized skills in tuning, monitoring, and maintaining high-availability clusters, including ongoing infrastructure investments for features like load-balanced brokers. Development expenses are high, as integrating MOM requires expertise in handling database loads from stored procedures and mitigating uneven load distribution across servers, particularly in global networks with varying latency. Maintenance overhead increases with the need for regular configuration rollouts and firewall adjustments during expansions, underscoring the importance of open-source tools to reduce long-term proprietary dependencies.

Standards and Protocols

Java Message Service (JMS)

The Java Message Service (JMS) is a standard Java API that enables applications to create, send, receive, and read messages using enterprise messaging systems, facilitating asynchronous communication in distributed environments. Developed by Sun Microsystems and first released in 1998 as part of the Java Platform, Enterprise Edition (Java EE), JMS provides a portable, vendor-neutral abstraction over underlying message-oriented middleware providers. It defines interfaces for both point-to-point messaging via queues and publish-subscribe messaging via topics, allowing developers to interact with messaging systems without being tied to specific implementations.

Key features of JMS include connection factories, which serve as entry points for establishing connections to a messaging provider; sessions, which manage the lifecycle of message production and consumption within a single-threaded context; message selectors, which allow consumers to filter incoming messages based on SQL-like expressions; and delivery modes that offer persistent delivery for guaranteed message retention (e.g., via durable storage) or non-persistent delivery for higher performance without durability guarantees. These elements ensure reliable, scalable messaging while abstracting provider-specific details, such as queue management or topic subscriptions.

The JMS API revolves around core components for message handling: message producers (or senders) that publish messages to destinations like queues or topics using the MessageProducer interface, and message consumers (or receivers) that subscribe to and retrieve messages via the MessageConsumer interface, supporting both synchronous and asynchronous receipt patterns. Transactions are supported through JMS sessions, which can operate in auto-acknowledge, client-acknowledge, or transacted modes, enabling atomic commit or rollback operations across multiple messages to maintain data integrity in enterprise workflows. This design promotes loose coupling between application components, making it suitable for integration patterns in complex systems.

JMS has seen widespread adoption in enterprise Java development due to its integration into Java EE (now Jakarta EE) specifications—subsequent versions include Jakarta Messaging 3.0 (October 2020) and 3.1 (September 2022), aligning with Jakarta EE 9 and 10, respectively, featuring package namespace updates from javax.jms to jakarta.jms and compatibility improvements—and its role as a foundational standard for asynchronous processing. It significantly influences frameworks like Spring, where the JmsTemplate and listener containers simplify JMS usage by handling resource management and exception translation, enabling declarative message-driven endpoints via annotations such as @JmsListener. Similarly, open-source providers such as Apache ActiveMQ Artemis implement the full JMS 1.1 and 2.0 specifications, with support for Jakarta Messaging 3.1, leveraging JMS interfaces for multi-protocol support in enterprise messaging scenarios.
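As a sketch of the Spring integration mentioned above, the following shows a JmsTemplate send and a declarative @JmsListener receiver; the destination name and payload are illustrative, and the example assumes a Spring application with JMS support enabled (for example via auto-configuration or @EnableJms) and a ConnectionFactory bean already in place.

    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class OrderMessaging {
        private final JmsTemplate jmsTemplate;

        public OrderMessaging(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        // Producer side: JmsTemplate handles connection, session, and exception translation.
        public void submitOrder(String orderId) {
            jmsTemplate.convertAndSend("orders", orderId); // "orders" is an illustrative destination
        }

        // Consumer side: the listener container invokes this method for each incoming message.
        @JmsListener(destination = "orders")
        public void onOrder(String orderId) {
            System.out.println("processing order " + orderId);
        }
    }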

Advanced Message Queuing Protocol (AMQP)

The Advanced Message Queuing Protocol (AMQP) is an application-layer protocol designed for reliable, interoperable message queuing and delivery across heterogeneous systems in message-oriented middleware environments. It facilitates secure, efficient communication between messaging clients and brokers, supporting patterns like point-to-point and publish-subscribe while ensuring message ordering, durability, and error handling. As a wire-level protocol, AMQP abstracts the complexities of queuing and routing, enabling seamless integration in enterprise applications without vendor lock-in.

AMQP originated in 2003 when iMatix Corporation, under contract from JPMorgan Chase, developed the initial OpenAMQ broker and protocol specification to address proprietary messaging silos in financial systems. Early iterations, such as AMQP 0-8 (2006) and 0-9/0-9-1 (2008), introduced key concepts like channels and basic routing, with the latter becoming widely implemented. The protocol evolved significantly with AMQP 1.0, approved as an OASIS standard on October 30, 2012, which shifted focus to a more abstract, peer-to-peer link model for broader interoperability. This version was subsequently ratified as ISO/IEC 19464 in 2014, defining a binary protocol for business messaging over TCP or other transports.

At its core, AMQP 1.0 operates as a binary, frame-based protocol where messages are exchanged via structured frames containing performatives for control and transfers for data. These frames establish connections as full-duplex channels over TCP (default port 5672), supporting authentication via SASL and encryption via TLS; sessions multiplex bidirectional communication within a connection, enforcing sequential ordering and flow control; and links provide unidirectional paths for message delivery between endpoints, identified by handles for state recovery. Routing in AMQP is managed through node addresses and filters on links, allowing dynamic path selection, while earlier versions like 0-9-1 incorporate exchanges—such as direct (key-matched routing), topic (pattern-based with wildcards), and fanout (broadcast to all bound queues)—to direct messages to queues based on bindings.

Key features enhance reliability and efficiency: credit-based flow control at the link level regulates transmission by granting "credits" to senders, preventing overload and enabling backpressure; heartbeats, enforced through idle timeouts and empty frames, detect connection failures; and federation is supported by allowing messages to relay across multiple brokers or nodes via chained links and sessions. These mechanisms ensure robust operation in distributed setups, with AMQP also supporting durable delivery to queues for resilience against failures. AMQP is particularly valued for broker-to-broker communication in federated topologies, forming the foundation for implementations like RabbitMQ, which leverages AMQP 0-9-1 with extensions for advanced queuing.
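To illustrate the exchange/binding/queue model of AMQP 0-9-1 concretely, below is a small sketch using the RabbitMQ Java client; the host, exchange, queue, and routing keys are illustrative assumptions.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class AmqpRoutingSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // illustrative broker address
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // A durable topic exchange routes on pattern-matched routing keys.
                channel.exchangeDeclare("market.data", "topic", true);
                // Durable, non-exclusive, non-auto-delete queue bound with a wildcard pattern.
                channel.queueDeclare("equities", true, false, false, null);
                channel.queueBind("equities", "market.data", "stocks.#");
                // Any message whose routing key starts with "stocks." lands in the "equities" queue.
                channel.basicPublish("market.data", "stocks.nasdaq.AAPL", null,
                        "190.25".getBytes(StandardCharsets.UTF_8));
            }
        }
    }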

Message Queuing Telemetry Transport (MQTT)

Message Queuing Telemetry Transport (MQTT) is a lightweight publish-subscribe messaging protocol designed for efficient communication in resource-constrained environments, particularly over unreliable or low-bandwidth networks. Developed originally by IBM's Andy Stanford-Clark and Arcom's Arlen Nipper in 1999 to monitor oil pipelines via satellite links, MQTT addressed the need for a simple, bandwidth-efficient transport mechanism. It was later standardized by OASIS as version 3.1.1 in October 2014, establishing it as an open specification for Internet of Things (IoT) applications. In 2016, MQTT achieved international recognition as ISO/IEC 20922, further solidifying its role in machine-to-machine communication. Version 5.0, released as an OASIS standard on March 7, 2019, introduced enhancements such as detailed reason codes, user properties for metadata, shared subscriptions, and built-in request-response patterns, while maintaining backward compatibility with 3.1.1; both versions coexist in deployments as of 2025.

At its core, MQTT employs a publish-subscribe model over TCP/IP, where clients connect to a central broker to send (publish) messages on specific topics or receive (subscribe) messages matching those topics, enabling decoupled one-to-many distribution. The protocol supports three quality-of-service (QoS) levels to balance reliability and efficiency: QoS 0 provides "at most once" delivery with no acknowledgment, suitable for non-critical data; QoS 1 ensures "at least once" delivery via a PUBACK response from the receiver; and QoS 2 guarantees "exactly once" delivery through a four-way handshake involving PUBREC, PUBREL, and PUBCOMP packets to eliminate duplicates. This design minimizes overhead while allowing applications to select the appropriate assurance level based on network conditions and data importance. MQTT 5.0 extends these with improved error reporting and flow control.

Key features contribute to MQTT's minimalism and robustness. The protocol maintains a small footprint, with control packets featuring a fixed header of at least 2 bytes (one for packet type and flags, plus the minimum remaining-length field), enabling efficient transmission even on devices with limited processing power. Retained messages allow publishers to set a RETAIN flag, prompting the broker to store the last such message per topic for delivery to new subscribers, ensuring immediate state awareness without repeated publications. Additionally, the Last Will and Testament (LWT) mechanism lets a client specify a will message during connection; if the client disconnects unexpectedly, the broker publishes this will on the designated topic to notify subscribers of the failure.

MQTT finds primary application in IoT ecosystems, where its low overhead facilitates connectivity for sensors, actuators, and remote devices in scenarios like smart metering and industrial monitoring. Its adaptability has extended usage to mobile applications, leveraging the protocol's efficiency for battery-constrained environments, and to web-based systems through MQTT over WebSockets, which encapsulates messages in WebSocket frames for browser-native real-time interactions without custom plugins.
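A small sketch with the Eclipse Paho Java client showing QoS, a retained reading, and a Last Will registration; the broker URL, client ID, and topic names are illustrative assumptions.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MqttSketch {
        public static void main(String[] args) throws MqttException {
            MqttClient client = new MqttClient("tcp://broker.example.com:1883", "sensor-42");

            MqttConnectOptions opts = new MqttConnectOptions();
            // Last Will: published by the broker if this client disconnects unexpectedly (QoS 1, retained).
            opts.setWill("sensors/42/status", "offline".getBytes(), 1, true);
            client.connect(opts);

            MqttMessage reading = new MqttMessage("21.7".getBytes());
            reading.setQos(1);          // at-least-once delivery
            reading.setRetained(true);  // broker keeps the last reading for late subscribers
            client.publish("sensors/42/temperature", reading);

            // Subscribe with a wildcard; the callback runs for every matching message.
            client.subscribe("actuators/42/#", 1,
                    (topic, msg) -> System.out.println(topic + " -> " + new String(msg.getPayload())));
        }
    }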

Implementations

Open-Source Examples

Apache ActiveMQ, particularly its Artemis implementation, is a multi-protocol open-source message broker that provides enterprise features for reliable messaging. It offers full compliance with the Java Message Service (JMS) 1.1 and 2.0 specifications, enabling seamless integration with Java-based applications. Additionally, ActiveMQ Artemis supports multiple protocols, including AMQP 1.0 for interoperability with diverse clients, MQTT versions 3.1, 3.1.1, and 5 for IoT scenarios, and STOMP versions 1.0, 1.1, and 1.2 for web and cross-language messaging. In the ActiveMQ Classic variant, virtual topics serve as logical destinations that map to physical queues, allowing multiple consumers to receive topic messages via queue semantics for improved scalability with durable subscriptions. Network connectors in Classic facilitate distributed setups by enabling brokers to form networks that support federated queues and topics across multiple nodes, enhancing fault tolerance and load distribution.

RabbitMQ is an open-source message broker primarily based on the AMQP 0-9-1 protocol, which defines a robust model for routing messages through exchanges to queues. It excels in flexible routing via exchange types such as direct, topic, fanout, and headers, where bindings determine how messages with specific routing keys are directed to queues or other exchanges, supporting complex patterns like pattern matching with wildcards. RabbitMQ includes plugins to extend support for MQTT, allowing it to handle lightweight pub-sub messaging for resource-constrained devices by mapping MQTT topics to AMQP exchanges and queues. Built on the Erlang/OTP platform, RabbitMQ supports clustering across multiple nodes to share queues, exchanges, and virtual hosts, providing high availability through features like mirrored queues and automatic failover during network partitions.

Apache Kafka functions as a distributed event streaming platform that is frequently utilized as message-oriented middleware for high-volume, real-time data pipelines. Its architecture organizes messages into topics, which are append-only logs partitioned across brokers to enable horizontal scaling and ordered event processing. Partitions distribute the workload, with messages assigned to specific partitions based on keys to preserve ordering, while replication ensures durability across multiple brokers. Consumer groups allow multiple consumers to collaboratively process partitions in parallel, balancing load and enabling fault-tolerant consumption without duplicating efforts. Kafka achieves high throughput—often exceeding millions of messages per second—through its log-based storage and zero-copy techniques, making it suitable for decoupling producers and consumers in large-scale systems.

NATS is a lightweight, high-performance open-source messaging system designed for cloud-native environments, emphasizing simplicity and speed in distributed communications. At its core, NATS implements a publish-subscribe model using subject-based addressing, where publishers send messages to subjects and subscribers receive them without direct coupling, supporting many-to-many interactions efficiently. It also facilitates request-reply patterns, allowing synchronous-style communication by treating replies as publications to temporary subjects, ideal for service-to-service calls in microservices architectures. NATS's minimal footprint and single-binary deployment enable sub-millisecond latency and throughput in the millions of messages per second, with no external dependencies or complex configuration in basic setups.
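The consumer-group mechanics described for Kafka look roughly like this in the Java client; the broker address, group ID, and topic are illustrative assumptions, and offsets are committed automatically with the default settings.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerGroupSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // illustrative
            props.put("group.id", "fraud-checkers");          // members of this group split the partitions
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("payments"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        // Each record is processed by exactly one member of the group.
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                r.partition(), r.offset(), r.value());
                    }
                }
            }
        }
    }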

Commercial Solutions

Commercial message-oriented middleware (MOM) solutions are products designed for enterprise environments, offering vendor support, service-level agreements (SLAs), and advanced features tailored for high-stakes applications such as financial services and large-scale integrations. These solutions emphasize robustness, scalability, and seamless integration with existing enterprise infrastructure, distinguishing them from open-source alternatives by providing dedicated support and optimization services.

IBM MQ, formerly known as WebSphere MQ, is a robust message queuing system that supports both point-to-point and publish-subscribe messaging patterns across multiple platforms, including z/OS for mainframe environments. It provides deep transactional support through features like two-phase commit, ensuring message reliability in distributed systems. IBM MQ's multi-platform compatibility enables seamless communication between heterogeneous systems, making it suitable for complex enterprise architectures.

TIBCO Enterprise Message Service (EMS) is a JMS-compliant messaging platform optimized for high-performance real-time data exchange, particularly in financial services where low-latency order management and event processing are critical. It supports fault-tolerant clustering through primary and backup server configurations that share data stores, ensuring continuous availability and message delivery even during failures. This clustering capability enhances reliability in mission-critical deployments.

Solace PubSub+ offers a hybrid hardware and software event broker platform, delivering ultra-low latency messaging for high-throughput scenarios. Its hardware appliances maintain predictable low latency at high message rates, while the software variant runs on commodity hardware for flexible deployments. The platform supports multiple protocols, including MQTT and AMQP, facilitating interoperability in diverse ecosystems.

Oracle Advanced Queuing (AQ) is tightly integrated with the Oracle Database, providing persistent messaging capabilities that leverage database features for storage and propagation of messages across queues. It enables developers to use PL/SQL for enqueue and dequeue operations, supporting transactional consistency within Oracle environments. This integration allows AQ to function as a native extension for database-centric applications requiring reliable message handling.

Applications

Enterprise Integration

Message-oriented middleware (MOM) plays a pivotal role in enterprise integration by facilitating asynchronous communication between disparate business systems, enabling seamless data flow across organizational boundaries. In service-oriented architecture (SOA), MOM supports the orchestration of services through standardized messaging protocols, allowing applications to invoke and respond to services without tight coupling. This is achieved via patterns such as message channels and routers, which route service requests reliably across heterogeneous environments. Similarly, in event-driven architectures (EDA), MOM enables real-time updates by publishing events to queues or topics, where subscribers process them independently, thus supporting reactive business processes like inventory adjustments triggered by sales events.

A key application of MOM in enterprise integration involves linking enterprise resource planning (ERP) and customer relationship management (CRM) systems, such as integrating SAP with Salesforce using message queues. For instance, order data captured in Salesforce can be queued via MOM platforms like MuleSoft's Anypoint MQ, transformed, and routed to SAP for inventory and fulfillment processing, ensuring synchronized updates without synchronous dependencies. In order-fulfillment workflows, MOM coordinates multi-step processes by queuing messages for stages like verification, allocation, and shipping notifications, allowing systems to handle variable loads and failures gracefully. These examples demonstrate MOM's utility in automating end-to-end business operations, reducing manual interventions.

MOM provides significant benefits in enterprise contexts by decoupling legacy systems from modern applications, permitting independent evolution without disrupting existing infrastructure. This extends to microservice communication, where services exchange messages via brokers instead of tight APIs, enhancing scalability and fault tolerance in distributed environments. Additionally, MOM addresses challenges in handling heterogeneous data formats through built-in transformation capabilities, such as message translators that convert payloads between formats like XML and JSON during transit. These features mitigate integration complexities, ensuring interoperability across diverse protocols and schemas.

Internet of Things (IoT)

Message-oriented middleware (MOM) plays a pivotal role in IoT ecosystems by handling the data generated by sensors across vast networks of devices. This relies on asynchronous communication patterns, particularly the publish-subscribe (pub-sub) model, in which sensors publish data streams to topics and subscribed applications consume them without direct coupling, enabling scalable decoupling in resource-constrained environments. For command dissemination, such as instructions to actuators, pub-sub allows brokers to route messages to many devices efficiently, supporting the low-latency interactions essential for responsive IoT operations; protocols such as MQTT, optimized for bandwidth-limited networks, are commonly employed in this context.

In practical applications, MOM integrates sensors in smart cities to stream data for real-time urban management: readings on traffic flow are published to central systems that analyze them and issue control commands to traffic lights, reducing congestion and improving safety. Similarly, in industrial IoT (IIoT), MOM supports predictive maintenance by aggregating telemetry from machinery and distributing it via pub-sub to analytics engines that detect anomalies and trigger preemptive alerts or adjustments, thereby minimizing downtime in manufacturing settings.

Large IoT deployments demand MOM solutions capable of processing billions of messages per day to accommodate the explosive growth of connected devices, estimated at 21.1 billion globally, which generates massive data volumes requiring robust queuing mechanisms. To address this, edge processing within MOM architectures filters and aggregates data locally at gateways or edge nodes before forwarding it to central brokers, significantly reducing the load on cloud-based intermediaries and mitigating latency in bandwidth-constrained scenarios. Security in MOM for IoT emphasizes device authentication through mechanisms such as certificate-based mutual TLS to verify identities in distributed networks, preventing unauthorized access to sensitive data streams. Additionally, encrypted channels, such as TLS over pub-sub protocols, protect the confidentiality and integrity of messages traversing heterogeneous IoT setups, safeguarding against interception in untrusted environments.
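The following sketch shows the pub-sub pattern described above using the Eclipse Paho MQTT client for Java over TLS. The broker URL, client ID, and topic names are illustrative assumptions, and the example relies on the JVM's default trust store rather than a full mutual-TLS setup.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class IotTelemetrySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker; "ssl://" selects TLS on the conventional MQTT-over-TLS port.
        String brokerUrl = "ssl://broker.example.com:8883";
        MqttClient client = new MqttClient(brokerUrl, "sensor-gateway-01", new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        options.setSocketFactory(javax.net.ssl.SSLSocketFactory.getDefault());
        client.connect(options);

        // Subscribe to actuator commands with QoS 1 (at-least-once delivery).
        client.subscribe("city/traffic/lights/command", 1,
                (topic, message) -> System.out.println(topic + ": " + new String(message.getPayload())));

        // Publish a sensor reading to a telemetry topic.
        MqttMessage reading = new MqttMessage("{\"vehiclesPerMinute\": 37}".getBytes());
        reading.setQos(1);
        client.publish("city/traffic/flow/sensor-17", reading);

        client.disconnect();
    }
}
```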

Cloud and Microservices Integration

Message-oriented middleware (MOM) has evolved to support cloud-native architectures through fully managed, serverless services that eliminate infrastructure management while enabling scalable, decoupled communication. Amazon Simple Queue Service (SQS) provides a serverless queuing solution as MOM, supporting a virtually unlimited number of queues and messages with features such as standard and FIFO queues for at-least-once or exactly-once delivery, and it integrates with AWS Lambda for event-driven workflows. Similarly, Amazon Simple Notification Service (SNS) functions as a serverless pub/sub MOM, allowing publishers to fan out messages to multiple subscribers, including AWS services such as SQS and Lambda, with support for up to 12.5 million subscriptions per topic. Azure Service Bus offers a managed enterprise MOM with queues for point-to-point messaging and topics for publish-subscribe patterns, incorporating advanced capabilities such as duplicate detection, dead-letter queues, and transactional support via AMQP 1.0 and JMS 2.0.

In microservices environments, MOM facilitates resilient asynchronous communication by decoupling services and enabling non-blocking interactions, often integrated with service meshes for enhanced observability and traffic management. For instance, Istio provides a service-mesh layer that manages inter-service communication in Kubernetes, allowing MOM protocols to handle asynchronous APIs while enforcing policies for retries, circuit breaking, and secure routing. This integration promotes fault-tolerant designs in which microservices use MOM for event-driven patterns, reducing latency dependencies and improving overall system resilience in containerized deployments.

Developments in the 2020s have focused on Kubernetes-native operations for MOM brokers, with operators such as Strimzi enabling declarative management of Apache Kafka clusters on Kubernetes, including automated rolling upgrades, topic provisioning, and security configuration via TLS and OAuth 2.0. Strimzi supports scaling through node pool adjustments and Cruise Control-based partition rebalancing, facilitating dynamic capacity management in cloud environments. In multi-cloud setups, it leverages Kafka MirrorMaker 2 for data replication across clusters, supporting active/active configurations for high availability.

Key benefits of MOM in cloud and microservices architectures include support for polyglot persistence, where services employ diverse data stores, such as relational databases for transactions and event stores for high-volume events, while MOM handles the asynchronous data flows between them. Additionally, cross-region replication enhances disaster recovery; for example, Azure Service Bus provides geo-replication for automatic failover across regions, ensuring message durability and minimal downtime during outages. These features collectively enable scalable, resilient architectures that align with cloud-native principles.
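As a concrete illustration of serverless queuing, the sketch below sends and receives a message with Amazon SQS using the AWS SDK for Java v2. It assumes credentials and a default region are configured in the environment; the queue URL and message body are placeholders.

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.*;

public class SqsSketch {
    public static void main(String[] args) {
        // Placeholder queue URL; replace with the URL of an existing queue.
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue";

        try (SqsClient sqs = SqsClient.create()) {
            // Producer side: enqueue an order event.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody("{\"orderId\": \"42\", \"status\": \"NEW\"}")
                    .build());

            // Consumer side: long-poll for up to 10 seconds, then delete processed messages.
            ReceiveMessageResponse response = sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .maxNumberOfMessages(5)
                    .waitTimeSeconds(10)
                    .build());
            for (Message m : response.messages()) {
                System.out.println("Processing: " + m.body());
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(m.receiptHandle())
                        .build());
            }
        }
    }
}
```

Because the broker is fully managed, the application code deals only with sending, receiving, and deleting messages, mirroring the division of responsibility that MOM promotes.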

Real-Time and Edge Computing

In message-oriented middleware (MOM), real-time enhancements focus on enabling immediate, low-latency message delivery for time-sensitive applications. Streaming integrations such as Kafka Streams allow continuous, in-stream computation on incoming messages, facilitating real-time analytics and decision-making without requiring separate processing systems. This capability is particularly relevant in 5G networks, where low-latency protocols integrated with MOM handle network demands by streaming data at sub-millisecond latencies, enabling ultra-reliable low-latency communication (URLLC) for applications such as remote surgery or autonomous vehicles. For instance, Apache Kafka's architecture supports end-to-end latencies under 100 milliseconds when optimized for event-driven processing, making it a cornerstone of real-time MOM in distributed systems.

Edge computing extends MOM to decentralized environments by deploying lightweight protocols closer to data sources, reducing reliance on central infrastructure. Protocols such as MQTT-SN (MQTT for Sensor Networks) provide a compact variant of MQTT optimized for resource-constrained edge devices, using gateways to support publish-subscribe messaging over non-TCP transports such as UDP and minimizing overhead in IoT edge scenarios. In fog computing paradigms, MOM provides intermediate processing layers between edge devices and the cloud, enabling local message aggregation and filtering that alleviate bandwidth constraints and improve responsiveness. This approach is evident in deployments where fog nodes use MOM to process data on-site, reducing cloud dependency in latency-sensitive IoT networks while maintaining reliability through store-and-forward mechanisms. Self-adaptive distributed MOM further supports dynamic edge topologies by automatically adjusting to node failures or load variations, ensuring resilient messaging in heterogeneous environments.

By 2025, trends in MOM emphasize AI-driven routing and hybrid edge-cloud topologies to power autonomous systems. AI algorithms integrated into MOM brokers enable intelligent routing by predicting traffic patterns and optimizing delivery paths in real time, improving efficiency in edge AI workloads where decisions must be made locally without cloud round-trips. Hybrid topologies combine edge MOM for immediate processing with cloud-side synchronization, supporting autonomous applications such as robotic fleets by distributing message flows across layers. These advances align with broader shifts toward agentic AI at the edge, where MOM facilitates event-driven coordination in decentralized autonomous networks.

Despite this progress, MOM in real-time and edge contexts faces key challenges, including synchronizing state across distributed edge nodes and central systems amid varying network conditions. Edge devices often struggle with consistency because messaging is asynchronous, requiring reconciliation protocols to merge local updates with central state without conflicts. Intermittent connectivity exacerbates this: message queues may accumulate messages while a device is offline, creating backlogs that can delay real-time operations. Resilient queuing in MQTT-based MOM mitigates this by buffering messages and retrying delivery upon reconnection, and in fog-enhanced setups, QoS-aware routing that prioritizes critical messages during disruptions helps preserve overall system reliability.
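The in-stream computation described above can be expressed with the Kafka Streams DSL. The following is a minimal sketch that filters anomalous machine telemetry into an alert topic; the broker address, topic names, and the "anomaly" flag in the payload are illustrative assumptions, and the example assumes the kafka-streams library is on the classpath.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class TelemetryFilterSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "telemetry-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // In-stream computation: forward only anomalous readings to an alert topic,
        // without a separate batch-processing system.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> readings = builder.stream("machine-telemetry");
        readings.filter((machineId, payload) -> payload.contains("\"anomaly\":true"))
                .to("maintenance-alerts");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```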
