The JMS API provides a separate domain for each messaging approach: point-to-point and publish/subscribe. The point-to-point domain is built around the concepts of queues, senders, and receivers. The publish/subscribe domain is built around the concepts of topics, publishers, and subscribers. Additionally, the API provides a unified domain with common interfaces that enable the use of both queues and topics; this domain defines the concepts of producers and consumers. The classic sample uses a very simple, centralized configuration made of one server hosting a queue and a topic. The server is administratively configured to accept connection requests from the anonymous user.
JMS clustering aims to provide both scalability and high availability for JMS access. This document gives an overview of the JORAM capabilities for clustering a JMS application in the J2EE context. The load-balancing and fail-over mechanisms are described, and a user guide explaining how to build such a configuration is provided. You can find further information in the JORAM documentation here.
Getting started:

We will begin with some generalities:
The following scenario and general settings are proposed:
But why should we use a clustered topic?
A non-hierarchical topic might also be distributed among many servers. Such a topic, to be considered a single logical topic, is made of topic representatives, one per server. This architecture allows a publisher to publish messages on any representative of the topic. In our example, the publisher works with the representative on server 0. If a subscriber subscribed to any other representative (on server 1 in our example), it will still get the messages produced by the publisher.
Load balancing of topics is very useful because it lets you distribute topic subscriptions across the cluster.
Now let us see in more detail how to configure everything.
<?xml version="1.0"?>
<config>
  <domain name="D1"/>
  <server id="0" name="S0" hostname="localhost">
    <network domain="D1" port="16301"/>
    <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
    <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16010"/>
  </server>
  <server id="1" name="S1" hostname="localhost">
    <network domain="D1" port="16302"/>
    <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
    <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16020"/>
  </server>
</config>
<Cluster>
  <Topic name="mdbTopic" serverId="0">
    <freeReader/>
    <freeWriter/>
    <jndi name="mdbTopic"/>
  </Topic>
  <Topic name="mdbTopic" serverId="1">
    <freeReader/>
    <freeWriter/>
    <jndi name="mdbTopic"/>
  </Topic>
  <freeReader/>
  <freeWriter/>
  <reader user="anonymous"/>
  <writer user="anonymous"/>
</Cluster>
#!/bin/ksh
export JONAS_BASE=$PWD/jb1
cp $JONAS_ROOT/examples/output/ejbjars/newsamplemdb.jar $JONAS_BASE/ejbjars/
jonas admin -a newsamplemdb.jar -n node1
export JONAS_BASE=$PWD/jb2
cp $JONAS_ROOT/examples/output/ejbjars/newsamplemdb.jar $JONAS_BASE/ejbjars/
jonas admin -a newsamplemdb.jar -n node2
jclient newsamplemdb.MdbClient
ClientContainer.info : Starting client...
JMS client: tcf = TCF:localhost-16010
JMS client: topic = topic#1.1.1026
JMS client: tc = Cnx:#0.0.1026:5
MDBsample is Ok

In addition, you can see on your different JOnAS instances:
Message received: Message6
MdbBean onMessage
Message received: Message10
MdbBean onMessage
Message received: Message9
The fact that you see messages on the two different JOnAS instances you have launched shows the load balancing. It is more visible for a queue, because one part of the messages goes to the first JOnAS base and the other part to the second JOnAS base.
Globally, load balancing in the context of queues may seem meaningless compared to load balancing topics. It would be a bit like load balancing a stateful session bean instance (which just requires failover). But the JORAM distributed architecture makes it possible to distribute the load of queue access between several JORAM server nodes.
Here is a little picture of what happens for the queue and the messages: a load-balancing message queue may be needed for a high rate of messages. A clustered queue is a cluster of queues exchanging messages depending on their load. We have a cluster of two queues. A heavy producer accesses its local queue and sends messages. The local queue quickly becomes loaded and decides to forward messages to the other queue of its cluster, which is not under heavy load.
For this case some parameters must be set:
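The meaning of these properties can be sketched as annotated XML (the descriptions below follow the JORAM documentation; double-check them against your JORAM version, as the exact semantics may differ):

  <!-- period: delay (in ms) between two load evaluations of the queue -->
  <property name="period" value="10000"/>
  <!-- producThreshold: number of pending messages above which the queue
       starts forwarding messages to the other queues of the cluster -->
  <property name="producThreshold" value="50"/>
  <!-- consumThreshold: number of pending receive requests above which
       the queue asks the other queues of the cluster for messages -->
  <property name="consumThreshold" value="2"/>
  <!-- autoEvalThreshold: if true, the thresholds are re-evaluated
       automatically by the queue -->
  <property name="autoEvalThreshold" value="false"/>
  <!-- waitAfterClusterReq: minimum delay (in ms) between two cluster
       requests -->
  <property name="waitAfterClusterReq" value="1000"/>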
If you want further information, you can look at the JORAM documentation.
The scenario for the queue is similar to the previous one. In fact, steps 1) and 2) are the same. When you arrive at step 3), make the following changes:
<Cluster>
  <Queue name="mdbQueue" serverId="0" className="org.objectweb.joram.mom.dest.ClusterQueue">
    <freeReader/>
    <freeWriter/>
    <property name="period" value="10000"/>
    <property name="producThreshold" value="50"/>
    <property name="consumThreshold" value="2"/>
    <property name="autoEvalThreshold" value="false"/>
    <property name="waitAfterClusterReq" value="1000"/>
    <jndi name="mdbQueue"/>
  </Queue>
  <Queue name="mdbQueue" serverId="1" className="org.objectweb.joram.mom.dest.ClusterQueue">
    <freeReader/>
    <freeWriter/>
    <property name="period" value="10000"/>
    <property name="producThreshold" value="50"/>
    <property name="consumThreshold" value="2"/>
    <property name="autoEvalThreshold" value="false"/>
    <property name="waitAfterClusterReq" value="1000"/>
    <jndi name="mdbQueue"/>
  </Queue>
  <freeReader/>
  <freeWriter/>
  <reader user="anonymous"/>
  <writer user="anonymous"/>
</Cluster>
An HA server is actually a group of servers, one of which is the master server that coordinates the other slave servers. An external server that communicates with the HA server is actually connected to the master server.
Each replicated Joram server executes the same code as a standard server except for the communication with the clients.
In our case, the collocated clients use a client module (newsamplemdb). If the server replica is the master, then the connection is active, enabling the client to use the HA JORAM server. If the replica is a slave, then the connection opening is blocked until the replica becomes the master.
When you want to create a Joram HA configuration you will have to change several files.
A clustered server is defined by the element "cluster". A cluster owns an identifier and a name defined by the attributes "id" and "name" (exactly like a standard server). Two properties must be defined:
<?xml version="1.0"?>
<config>
  <domain name="D1"/>
  <property name="Transaction" value="fr.dyade.aaa.util.NullTransaction"/>
  <cluster id="0" name="s0">
    <property name="Engine" value="fr.dyade.aaa.agent.HAEngine"/>
    <property name="nbClusterExpected" value="1"/>
For each replica, an element "server" must be added. The attribute "id" defines the identifier of the replica inside the cluster. The attribute "hostname" gives the address of the host where the replica is running. The network is used by the replica to communicate with external agent servers, i.e. servers located outside the cluster that are not replicas.
Here is the whole configuration for the a3servers.xml file of our first JOnAS instance, jb1:
<?xml version="1.0"?>
<config>
  <domain name="D1"/>
  <property name="Transaction" value="fr.dyade.aaa.util.NullTransaction"/>
  <cluster id="0" name="s0">
    <property name="Engine" value="fr.dyade.aaa.agent.HAEngine"/>
    <property name="nbClusterExpected" value="1"/>
    <server id="0" hostname="localhost">
      <network domain="D1" port="16300"/>
      <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
      <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16010"/>
      <service class="org.objectweb.joram.client.jms.ha.local.HALocalConnection"/>
    </server>
    <server id="1" hostname="localhost">
      <network domain="D1" port="16301"/>
      <service class="org.objectweb.joram.mom.proxies.ConnectionManager" args="root root"/>
      <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16020"/>
      <service class="org.objectweb.joram.client.jms.ha.local.HALocalConnection"/>
    </server>
  </cluster>
</config>

Here we have a cluster with id = 0 and name s0. The file is exactly the same for the second instance of JOnAS.
<?xml version="1.0"?>
<JoramAdmin>
  <AdminModule>
    <collocatedConnect name="root" password="root"/>
  </AdminModule>
  <ConnectionFactory className="org.objectweb.joram.client.jms.ha.tcp.HATcpConnectionFactory">
    <hatcp url="hajoram://localhost:16010,localhost:16020"
           reliableClass="org.objectweb.joram.client.jms.tcp.ReliableTcpClient"/>
    <jndi name="JCF"/>
  </ConnectionFactory>
  <ConnectionFactory className="org.objectweb.joram.client.jms.ha.tcp.QueueHATcpConnectionFactory">
    <hatcp url="hajoram://localhost:16010,localhost:16020"
           reliableClass="org.objectweb.joram.client.jms.tcp.ReliableTcpClient"/>
    <jndi name="JQCF"/>
  </ConnectionFactory>
  <ConnectionFactory className="org.objectweb.joram.client.jms.ha.tcp.TopicHATcpConnectionFactory">
    <hatcp url="hajoram://localhost:16010,localhost:16020"
           reliableClass="org.objectweb.joram.client.jms.tcp.ReliableTcpClient"/>
    <jndi name="JTCF"/>
  </ConnectionFactory>
Each connection factory has its own specification: one for queues, one for topics, and one with no specific type. Each time, you have to put in the hatcp url the URLs of the two instances; in our case they are localhost:16010 and localhost:16020. This allows the client to switch to the other instance when the first one dies.
After this definition you can create the user, the queue and topic you want to have.
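As an illustration (a hypothetical continuation of joramAdmin.xml, following the same conventions as the destination declarations shown earlier in this document; the names and the exact User element syntax should be checked against your JORAM version):

  <User name="anonymous" password="anonymous" serverId="0"/>

  <Queue name="mdbQueue" serverId="0">
    <freeReader/>
    <freeWriter/>
    <jndi name="mdbQueue"/>
  </Queue>

  <Topic name="mdbTopic" serverId="0">
    <freeReader/>
    <freeWriter/>
    <jndi name="mdbTopic"/>
  </Topic>
</JoramAdmin>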
First, in order for the cluster to be recognized, you will have to declare a new parameter in these files.
<config-property>
  <config-property-name>ClusterId</config-property-name>
  <config-property-type>java.lang.Short</config-property-type>
  <config-property-value>0</config-property-value>
</config-property>
Here the name is not really appropriate, but we keep it for coherence. It actually represents a replica, so it would have been better to call it replicaId.
Consequently, for the first JOnAS instance you will have to use the snippet above as-is, and for the second instance you will have to change the value to 1 (to indicate that it is another replica).
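For the second instance, the element is identical except for the value:

<config-property>
  <config-property-name>ClusterId</config-property-name>
  <config-property-type>java.lang.Short</config-property-type>
  <config-property-value>1</config-property-value>
</config-property>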
First you will have to launch the two JOnAS bases. You can create a runHa.sh script in which you put the following code:
export JONAS_BASE=$PWD/jb1
export CATALINA_BASE=$JONAS_BASE
rm -f $JONAS_BASE/logs/*
jonas start -win -n node1 -Ddomain.name=HA
Then do the same for the second JOnAS base. After that, launch your scripts.
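For the second base, the script is the same with jb2 substituted (assuming node2 is the node name used when the bean was deployed):

export JONAS_BASE=$PWD/jb2
export CATALINA_BASE=$JONAS_BASE
rm -f $JONAS_BASE/logs/*
jonas start -win -n node2 -Ddomain.name=HA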
You will see that one of the two JOnAS bases (the slower one) will be in a waiting state when reading joramAdmin.xml:
JoramAdapter.start : - Collocated JORAM server has successfully started.
JoramAdapter.start : - Reading the provided admin file: joramAdmin.xml
whereas the other one launches successfully.
Then you will have to launch (through a script or not) the newsamplemdb example:
jclient -cp /JONAS_BASE/jb1/ejbjars/newsamplemdb.jar:/JONAS_ROOT/examples/classes -carolFile clientConfig/carol.properties newsamplemdb.MdbClient
Now you can see that messages are sent to the JOnAS base that was launched first. Launch the client again and kill the current JOnAS instance. You will see that the second JOnAS automatically wakes up and takes care of the remaining messages. This is what we wanted!
This is a proposal for building a MDB clustering based application.
This is like contested queues, i.e. you have more than one receiver on different machines receiving from the same queue. This load-balances the work done by the queue receivers, not the queue itself.
The HA mechanism can be mixed with the load balancing policy based on clustered destinations. The load is balanced between several HA servers. Each element of a clustered destination is deployed on a separate HA server.
Here is the proposed configuration you would have to make (proposed because it has not been verified).
It still has to be tested; here are the different tests that would have to be made: