
What is RDQM (Replicated Data Queue Manager) and what are its system requirements?

•RDQM (Replicated Data Queue Manager) is an HA solution that is available on Linux platforms.
•An RDQM configuration consists of three servers in an HA group, each with an instance of the queue manager.
•One instance is the running queue manager, which synchronously replicates its data to the other two instances.


•If the server running this queue manager fails, another instance of the queue manager starts and has current data to operate with.
•The three instances of the queue manager can optionally share a floating IP address, so clients only need to be configured with a single IP address.


•Only one instance of the queue manager can run at any one time.
•RDQM uses a three-node system with quorum to avoid the split-brain situation. Nodes that can communicate with at least one of the other nodes form a quorum.


•A queue manager cannot run on a node that is not connected to at least one other node, so at least two of the three nodes must be running at all times.

•MQ Console (Web) does not support replicated data queue managers.
•MQ Explorer can be used with RDQM but this does not display information specific to the RDQM features.

•The group configuration of the 3 nodes is handled by Pacemaker (https://clusterlabs.org/pacemaker/)
•The replication between the three nodes is handled by DRBD (Distributed Replicated Block Device). (https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/)

RDQM Architecture

•DRBD is a software-based, shared-nothing, replicated storage solution that mirrors the content of block devices (hard disks, partitions, logical volumes, etc.) between hosts.
•DRBD mirrors data
–in real time: Replication occurs continuously while applications modify the data on the device
–transparently: Applications need not be aware that the data is stored on multiple hosts


•DRBD’s core functionality is implemented by way of a Linux kernel module.
•DRBD provides a driver for a virtual block device, so DRBD sits right near the bottom of a system’s I/O stack.

•Pacemaker is a high-availability cluster resource manager.

•It achieves maximum availability for your cluster services (a.k.a. resources) by detecting and recovering from node- and resource-level failures, making use of the messaging and membership capabilities provided by Corosync.

Each node requires a volume group named drbdpool. A separate logical volume is allocated from this volume group for each queue manager. For best performance, SSDs are recommended to host these logical volumes.
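As an illustration, the drbdpool volume group might be created on each node like this (the device name /dev/sdb1 is an assumption; substitute the disk or partition you have set aside for replicated data):

```shell
# Create a physical volume on the dedicated disk (device name is an example).
pvcreate /dev/sdb1

# Create the volume group RDQM expects; the name drbdpool is mandatory.
vgcreate drbdpool /dev/sdb1
```

The crtmqm command later carves a logical volume for each queue manager out of this volume group, so the disk should be sized for all the queue managers you plan to run in the HA group.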

System Requirements for RDQM

•Each node may need up to three network interfaces that are used for configuring the RDQM support:
–Primary interface for Pacemaker to monitor the HA group
–Alternate interface for Pacemaker to monitor the HA group
–Synchronous data replication interface. This should have sufficient bandwidth to support the replication requirements given the expected workload of all of the replicated data queue managers running in the HA group
•The above interfaces can be configured as:
–Same IP address for all 3 interfaces
–Separate IP address for each interface
–Same IP address for primary and alternate and a separate IP for the replication interface
•For maximum fault tolerance, these interfaces should be independent NICs.

•DRBD requires that each node in the HA group has a valid internet host name (uname -n), as defined by RFC 952 (https://datatracker.ietf.org/doc/html/rfc952) and amended by RFC 1123 (https://datatracker.ietf.org/doc/html/rfc1123)
•If there is a firewall between nodes in the HA group, the firewall must allow traffic between the nodes on a range of ports:
–7000-7100 (DRBD Ports)
–5404-5407 (Pacemaker Ports)
–1414 (MQ default port) [this will change depending on queue manager configuration]
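On a distribution that uses firewalld (an assumption; adjust for your firewall tooling), the ports above might be opened like this:

```shell
# Allow DRBD replication traffic between the nodes.
firewall-cmd --permanent --add-port=7000-7100/tcp

# Allow Pacemaker/Corosync cluster communication (Corosync uses UDP).
firewall-cmd --permanent --add-port=5404-5407/udp

# Allow the MQ listener; change 1414 to match your queue manager configuration.
firewall-cmd --permanent --add-port=1414/tcp

firewall-cmd --reload
```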
•If the system uses SELinux in a mode other than permissive, DRBD must be allowed to run:
–semanage permissive -a drbd_t

•It is recommended that the three nodes in the HA group be located in the same data center.
•IBM supports a latency of up to 5 ms; however, some applications cannot tolerate more than 1 to 2 ms of latency.
•RDQM can be optionally configured to have a floating IP address to enable clients to use the same IP address for RDQM regardless of which node is active.
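A floating IP address can be attached to an RDQM with the rdqmint command; the queue manager name, address, and interface below are placeholders:

```shell
# Add (-a) floating IP 192.0.2.10 for queue manager QM1 on local interface eth0.
rdqmint -m QM1 -a -f 192.0.2.10 -l eth0
```

Clients then connect to 192.0.2.10 regardless of which node is currently running the queue manager.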
•The mqm user must be able to use sudo to run commands on each of the 3 nodes.
–Configure sudo for the mqm user to run the following commands with root authority:
•/opt/mqm/bin/crtmqm
•/opt/mqm/bin/dltmqm
•/opt/mqm/bin/rdqmadm
•/opt/mqm/bin/rdqmstatus
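One way to grant this (a sketch; the drop-in file name is a choice, not a requirement) is a sudoers fragment:

```shell
# /etc/sudoers.d/mqm (example file name) -- always edit with: visudo -f /etc/sudoers.d/mqm
mqm ALL=(root) NOPASSWD: /opt/mqm/bin/crtmqm, /opt/mqm/bin/dltmqm, /opt/mqm/bin/rdqmadm, /opt/mqm/bin/rdqmstatus
```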
•It is recommended to configure password-less SSH for the mqm user, which means the mqm user must have the same UID on all three servers.
•The MQ admin user (usually mqm) must be a member of both the mqm and haclient groups (the haclient group is created during installation of Pacemaker).
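These checks and settings might be scripted as follows (the host names are examples; group membership assumes Pacemaker is already installed):

```shell
# Verify the mqm UID matches across nodes (compare the output on all three).
id -u mqm

# Add mqm to the haclient group created by the Pacemaker installation.
usermod -aG haclient mqm

# Set up password-less SSH for mqm (run as the mqm user on each node).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id mqm@node2.example.com
ssh-copy-id mqm@node3.example.com
```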
