Overview
In a recent blog post, Clustering in JBoss AS7/EAP 6, we showed how basic clustering in the new EAP 6 and JBoss AS 7 can be used. EAP 6 is essentially AS 7 with official Red Hat support. The cluster we described in that post was small and simple. This post covers much more complex cluster structures, how to build them, and how the new domain-mode can be used for such clusters. There are multiple ways to build and manage bigger JBoss cluster environments. We will describe two of them: one uses separation techniques that are also applicable to older JBoss versions, the other uses an Infinispan feature called distribution.
Scalability vs. Availability
The main challenge when building a cluster is to make it both highly available and scalable.
Availability for a cluster means: if one node fails, all sessions on that node will be served seamlessly by another node. This can be achieved through session-replication, which is preconfigured and enabled in the ha profile in the domain.xml. Flat replication means that all sessions are copied to all other nodes: if you have four nodes with 1 GB of memory each, your cluster can effectively only use 1 GB of memory because every node stores copies of all the others, i.e. your cluster will not have 4*1 GB = 4 GB of memory. If you add more nodes to this cluster you will not get more memory; you will even lose some due to the replication overhead. You will, however, get more availability and, more importantly, more network traffic, because all changes need to be redistributed to all other nodes. Let us call this cluster topology full-replication.
Scalability means that if you add more nodes to your cluster, you get more computing power from it. By computing power we mean both CPU power and memory. Consider a cluster with a bunch of identical nodes that do not know about each other. A load-balancer ensures that every node gets work to do. That concept scales very well, but if a node crashes, all of its data is lost: bad luck for the user who just filled a big shopping-cart. This cluster concept has another advantage. You could drain all sessions from one node, update the application, JBoss or the operating system, bring it back up, and continue with the next node. This would not work with the ha-cluster because of serialVersionUIDs and a lot of other possible incompatibilities. Granted, for more complex live-updates including database schema changes or other nasty things it is not that easy, but updates on this cluster topology tend to be easier than on the first one. Let us call this cluster topology no-replication.
As the names full-replication and no-replication already suggest, both cluster topologies are extremes, but they illustrate a simple fact: increasing availability will not increase the computing power of a cluster, at least not in terms of memory, and just increasing the computing power will not increase the availability of a cluster. In this sense the dimensions scalability (computing power) and availability are orthogonal. Increasing both aspects at the same time is more complex and will be covered in the next sections.
Scalable HA Clusters
As already mentioned, we will present two ways to build a cluster that is both highly available and scalable. The first uses a concept we call sub-clusters, and the second uses a feature of the Infinispan cache called distribution.
Using Sub-Clusters
Topology Concept
This cluster will be a scalable cluster built of multiple sub-clusters. These sub-clusters will be highly available, i.e. the nodes within one sub-cluster replicate each other's sessions, but the nodes of different sub-clusters do not. The complete cluster is scaled up by adding additional sub-clusters.
How many sub-clusters you use and how many nodes each of them contains depends on the application you will be running. The first thing to note is that the size of the sub-clusters is bounded from below by your availability requirements, and the number of sub-clusters is bounded from below by your need for computational power. The different sub-clusters can be distributed widely over the internet; maybe you have a few sub-clusters locally at your company and some more at multiple cloud-providers or other server-farms. A single sub-cluster reaching across bigger infrastructure boundaries, however, is a bad idea because of the performance penalty you will experience.
We recommend making use of the domain-mode. The sub-clusters will then be represented by server-groups.
Setting up an example cluster
For this example we will use two sub-clusters with two nodes each. Let us start by setting up a domain with five servers: the first as domain-controller and four normal hosts. You can read the previous post of this series to learn how to do this. Each sub-cluster will be represented by a server-group, so let us build two server-groups: subcluster1 and subcluster2.
Now edit the host.xml on your nodes and add servers to your groups, for example as shown below. If you start your servers you will observe a bad thing: all four servers replicate each other, so your cluster is not scalable but more available than we intended.
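For illustration, the servers section of a node's host.xml could look roughly like this; the server name is just a made-up example and the port-offset depends on your setup:

<servers>
    <!-- this host runs one member of the first sub-cluster -->
    <server name="server-one" group="subcluster1" auto-start="true">
        <socket-bindings port-offset="0"/>
    </server>
</servers>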
Preventing uncontrolled replication
By default, clustered JBoss servers within the same network will find each other and replicate all sessions of all applications they have in common. If we want to form multiple sub-clusters we need to prevent that behaviour: we only want specific servers to replicate each other. JBoss servers stick together because they use the same multicast addresses, so we only need to change these. This is the standard ha-sockets socket-binding-group from the domain.xml:
<socket-binding-group name="ha-sockets" default-interface="public">
    <!-- Needed for server groups using the 'ha' profile -->
    <socket-binding name="ajp" port="8009"/>
    <socket-binding name="http" port="8080"/>
    <socket-binding name="https" port="8443"/>
    <socket-binding name="jgroups-diagnostics" port="0" multicast-address="224.0.75.75" multicast-port="7500"/>
    <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
    <socket-binding name="jgroups-tcp" port="7600"/>
    <socket-binding name="jgroups-tcp-fd" port="57600"/>
    <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
    <socket-binding name="jgroups-udp-fd" port="54200"/>
    <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
    <socket-binding name="osgi-http" interface="management" port="8090"/>
    <socket-binding name="remoting" port="4447"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="localhost" port="25"/>
    </outbound-socket-binding>
</socket-binding-group>
As you can see, there are four multicasts configured: jgroups-diagnostics, jgroups-mping, jgroups-udp and modcluster. jgroups-diagnostics and modcluster should keep their values for all sub-clusters. jgroups-mping and jgroups-udp, on the other hand, need to be different for each sub-cluster. Note that their multicast-address is set to the value of the property jboss.default.multicast.address. We will set a different value for that property in each server-group (i.e. sub-cluster) later.
Mod-Cluster
When you leave the modcluster multicast at its default value, all nodes of all sub-clusters will be seen by the Apache side of mod_cluster. mod_cluster will then do load-balancing across many nodes which all have the same application deployed, and it assumes that it can do fail-over and all that stuff on all nodes of all sub-clusters. What we need to do is configure the modcluster subsystem of the JBoss servers so that each sub-cluster ends up in a different load-balancing-group; fail-over then stays within a sub-cluster whose nodes actually share the sessions. We can do that in the domain.xml in our relevant profile as follows:
<subsystem xmlns="urn:jboss:domain:modcluster:1.1">
    <mod-cluster-config advertise-socket="modcluster" connector="ajp" load-balancing-group="${mycluster.modcluster.lbgroup:StdLBGroup}">
        <!-- some more stuff -->
    </mod-cluster-config>
</subsystem>
Setting the properties
We use a property as the value for the load-balancing-group, just as we did for the multicasts. That allows us to reuse the same profile and socket-binding-group with different values, i.e. we do not need to create a new profile/socket-binding-group for each sub-cluster, which would be annoying. Setting properties for a server-group is really easy and self-explanatory:
<server-groups>
    <server-group name="subcluster1" profile="ha">
        <system-properties>
            <property name="jboss.default.multicast.address" value="230.0.1.1"/>
            <property name="mycluster.modcluster.lbgroup" value="LBGroup1"/>
        </system-properties>
        <socket-binding-group ref="ha-sockets"/>
    </server-group>
    <server-group name="subcluster2" profile="ha">
        <system-properties>
            <property name="jboss.default.multicast.address" value="230.0.1.2"/>
            <property name="mycluster.modcluster.lbgroup" value="LBGroup2"/>
        </system-properties>
        <socket-binding-group ref="ha-sockets"/>
    </server-group>
</server-groups>
Note: You can set properties in various places of the configuration. Look here for more information.
How to scale the cluster
First, how to scale up: if you have not done so in advance, you need to add a new server-group on your domain-controller. In the last blog post we showed how to deploy an application through the command-line interface (cli). Now we will use the cli to create a new server-group on a running domain-controller. So connect the cli to your domain-controller and execute the following three commands:
/server-group=subcluster3:add(profile=ha,socket-binding-group=ha-sockets)
/server-group=subcluster3/system-property=jboss.default.multicast.address:add(value=230.0.1.3)
/server-group=subcluster3/system-property=mycluster.modcluster.lbgroup:add(value=LBGroup3)
This will create a new server-group named subcluster3 which looks just like subcluster1 and subcluster2 from above.
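If you want to double-check the result, a standard cli read operation on the new server-group shows the profile, socket-binding-group and the system properties you just set:

/server-group=subcluster3:read-resource(recursive=true)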
Now there are only two things to do:
- Create and configure the members of the new sub-cluster. Do not forget to assign the servers to the new server-group.
- Add the required deployments to the new server-group. (Both steps can also be done through the cli, as sketched below.)
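A minimal cli sketch for both steps; the host name host5, the server name server-five and the archive cluster-demo.war are made-up examples, so adjust them to your environment:

/host=host5/server-config=server-five:add(group=subcluster3)
/host=host5/server-config=server-five:start
deploy cluster-demo.war --server-groups=subcluster3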
That’s it, you just scaled your cluster up by adding a new sub-cluster.
When scaling the cluster down you will need to take care of existing sessions; that is why you cannot just kill all the servers of that sub-cluster. The first step is to ensure that the sub-cluster(s) you want to take down will not get any fresh sessions. When using mod_cluster this is pretty simple: there are various disable buttons, and the big "Disable Nodes" button marked in the screenshot below is the right one. If you click that button, mod_cluster will no longer direct new sessions into our sub-cluster, but existing sessions will still be processed there. Now you need to wait until all these sessions vanish. Then you can stop the servers via the cli or the web-console, for example as sketched below. The existing but unused server-group does not hurt anybody and can perhaps be reused when you scale the cluster up again later, but you can also delete it.
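If you prefer the cli over the web-console for this last step, the servers of a drained sub-cluster can be stopped with a single domain operation (subcluster2 is just the example group from above):

/server-group=subcluster2:stop-servers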
A different approach: Infinispan-distribution
Infinispan is a distributed cache and plays a major role in JBoss clustering. As explained in the first post of this series, the standard ha profile (and the standalone-ha.xml) uses Infinispan for four purposes:
- distributing and caching web-sessions over the cluster (container: web)
- distributing and caching stateful-session-beans over the cluster (container: ejb)
- 2nd level cache for hibernate (container: hibernate)
- distribution of some general objects over the cluster (container: cluster)
Infinispan has several different modes; two of them are relevant for this post. Replication is the standard mode and means that whenever an object in a cache-container changes, it is redistributed to all cluster-nodes, so every node holds the same data. That mode does not scale up in the dimension of memory. The second mode is called distribution. This mode does scale up: you define how many copies of an object the cluster will hold. That number is a constant and does not grow with the number of cluster-nodes, so your cluster scales up in the dimension of memory.
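A small back-of-the-envelope example (ignoring replication overhead): with four nodes of 1 GB each and two copies per object, the cluster can hold roughly 4 * 1 GB / 2 = 2 GB of session data, and a fifth node raises that to about 2.5 GB. Under full-replication the same cluster would stay at 1 GB no matter how many nodes you add.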
Infinispan is a key-value-based cache, and in the distributed mode it uses consistent-hashing to determine on which cluster-nodes the constant number of copies, num_copies, will be located. If an object O has been put into the distributed cache on node N, it will not necessarily be put into the local cache of N: consistent-hashing in general does not ensure that an object ends up in the local cache, only that it ends up in num_copies caches. On the other hand you can activate a 1st-level-cache for the distributed cache. That 1st-level-cache will cache remote objects for a configurable amount of time (l1-lifespan).
Topology Concept
This cluster will not have a very complex topology. You should set up two independent domains to keep the possibility of live-updates open. Each domain will have one productive server-group containing all nodes. The nodes will have Infinispan-distribution enabled for the cache-containers cluster, web and ejb. That's it. The cluster is scaled up by simply adding more nodes to that server-group.
There are two important tuning parameters for this cluster: num_copies and the number of cluster-nodes. num_copies is bounded from below by your availability requirements, because the bigger num_copies, the more nodes can die without data-loss. The number of cluster-nodes in turn is bounded from below by num_copies, because the cluster cannot hold num_copies copies of an object with fewer than num_copies nodes. The number of cluster-nodes is also bounded from below by your requirements on the computational power of your cluster.
How to configure distribution
This approach is rather easy to configure, at least in theory. Just open your domain.xml, locate the Infinispan-subsystem in the relevant profile and:
- For the container cluster convert the replicated-cache to a distributed-cache.
- For the container web change the default-cache to dist
- For the container ejb change the default-cache to dist
Note that you can control the number of copies of a cached object with the attribute owners of the distributed-cache tag. That attribute's default value is "2".
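To illustrate, the web cache-container in the ha profile could then look roughly like this. This is a sketch based on the stock AS 7.1 configuration; attributes such as batching, file-store and l1-lifespan may differ in your version:

<cache-container name="web" aliases="standard-session-cache" default-cache="dist" module="org.jboss.as.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <!-- keep two copies of every session on nodes chosen by consistent hashing -->
    <distributed-cache name="dist" mode="ASYNC" owners="2" l1-lifespan="0" batching="true">
        <file-store/>
    </distributed-cache>
</cache-container>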
With AS 7 this currently does not work because of a bug we found: AS7-4881. During the writing of this post the bug was fixed on trunk. You can download the source from GitHub and compile AS 7 yourself, or just download a nightly build from here to try it out.
Summary
Clustering of a bigger environment requires more detailed configuration. We covered two different approaches to building a more complex cluster. One thing you should keep in mind is that simply adding nodes to a cluster does not always make the cluster more powerful. Another important thing to keep in mind is third-party systems such as databases: if you use just one database-server, then at some point your cluster will not scale up any more because the database-server cannot handle more requests. Watch out for these third-party dependencies; even a mail-server could become a problem at some point.
We still have not covered two important topics: messaging and EJB calls from a Java client (i.e. not from a web-application through mod_cluster). The next post will cover load-balancing and fail-over of standalone remote EJB clients.
Any questions or feedback? If so feel free to comment on this post or contact us via email:
- heinz.wilming (at) akquinet.de
- immanuel.sims (at) akquinet.de