Enabling Cluster Link automatically activates distributed caching. The cache is
distributed across multiple Liferay DXP nodes running concurrently, with cache events propagated between the nodes by Cluster Link replication. The Ehcache global settings are in the `portal.properties` file.
By default Liferay does not copy cached entities between nodes. If an entity is deleted or changed, for example, Cluster Link sends a remove message to the other nodes to invalidate this entity in their local caches. Requesting that entity on another node results in a cache miss; the entity is then retrieved from the database and put into the local cache. Entities added to one node’s local cache are not copied to local caches of the other nodes. An attempt to retrieve a new entity on a node which doesn’t have that entity cached results in a cache miss. The miss triggers the node to retrieve the entity from the database and store it in its local cache.
To enable Cluster Link, add this property to `portal-ext.properties`:
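```
cluster.link.enabled=true
```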
Cluster Link depends on JGroups and provides an API for nodes to communicate. It can:
- Send messages to all nodes in a cluster
- Send messages to a specific node
- Invoke methods and retrieve values from all, some, or specific nodes
- Detect membership and notify when nodes join or leave
When you start Liferay DXP in a cluster, a log message shows your cluster’s name (e.g., `cluster=liferay-channel-control`):
```
-------------------------------------------------------------------
GMS: address=oz-52865, cluster=liferay-channel-control, physical address=192.168.1.10:50643
-------------------------------------------------------------------
```
Cluster Link contains an enhanced algorithm that provides one-to-many type communication between the nodes. This is implemented by default with JGroups’s UDP multicast, but unicast and TCP are also available.
When you enable Cluster Link, Liferay DXP’s default clustering configuration is
enabled. This configuration defines IP multicast over UDP. Liferay DXP uses two
groups of channels from JGroups
to implement this: a control group and a transport group. If you want to
customize the channel properties, you can do so in `portal-ext.properties`:
```
cluster.link.channel.name.control=[your control channel name]
cluster.link.channel.properties.control=[your control channel properties]
```
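The transport channel can be customized the same way. This is a sketch that assumes the default single transport channel indexed `0`; the key names are inferred by analogy with the control channel keys above, so verify them against your version’s `portal.properties`:

```
cluster.link.channel.name.transport.0=[your transport channel name]
cluster.link.channel.properties.transport.0=[your transport channel properties]
```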
Please see JGroups’s documentation for channel properties. The default configuration sets many properties whose settings are discussed there.
Multicast broadcasts to all devices on the network. Clustered environments on
the same network communicate with each other by default. Messages and
information (e.g., scheduled tasks) sent between them can lead to unintended
consequences. Isolate such cluster environments by either separating them
logically or physically on the network, or by configuring each cluster’s
portal-ext.properties to use different sets of
multicast group address and port values.
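For example, one environment could keep the defaults while another overrides the multicast groups in its `portal-ext.properties`. This is a sketch; the values below are placeholders, and the exact keys and their defaults are listed in the Multicast section of `portal.properties`:

```
multicast.group.address["cluster-link-control"]=[group address 1]
multicast.group.port["cluster-link-control"]=[port 1]
multicast.group.address["cluster-link-udp"]=[group address 2]
multicast.group.port["cluster-link-udp"]=[port 2]
multicast.group.address["cluster-link-mping"]=[group address 3]
multicast.group.port["cluster-link-mping"]=[port 3]
```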
JGroups sets a bind address automatically. If you want to set the address manually, you can use the following properties. By default, they’re set to `localhost`:
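```
cluster.link.bind.addr["cluster-link-control"]=localhost
cluster.link.bind.addr["cluster-link-transport"]=localhost
```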
In some configurations, however, `localhost` is bound to the internal loopback address (`127.0.0.1` or `::1`) rather than the host’s real address. If for some reason you need this configuration, you can make Liferay DXP auto-detect its real address with this property:
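```
cluster.link.autodetect.address=www.google.com:80
```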
Set it to connect to some other host that’s contactable by your server. By default, it points to Google, but this may not work if your server is behind a firewall. If you set the address manually using the properties above, you don’t need to set the auto-detect address.
Your network configuration may preclude the use of multicast over UDP, so below are some other ways you can get your cluster communicating. Note that these methods are all provided by JGroups.
If you’re binding to a real IP address instead of using `localhost`, make sure the right IP addresses are declared using these properties:
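For example (a sketch; it assumes the node’s real interface address is `192.168.1.10`), the bind address properties from above would point to that address:

```
cluster.link.bind.addr["cluster-link-control"]=192.168.1.10
cluster.link.bind.addr["cluster-link-transport"]=192.168.1.10
```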
Test your system under load and then optimize your settings if necessary.
If your network configuration or the sheer distance between nodes prevents you from using UDP Multicast clustering, you can configure TCP Unicast. You must use this if you have a firewall separating any of your nodes or if your nodes are in different geographical locations.
Add a parameter to your app server’s JVM:
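```
-Djgroups.bind_addr=[node IP address]
```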
Use the node’s IP address.
Now you must determine the discovery protocol the nodes should use to find each other. You have four choices:
- TCPPing
- JDBCPing
- S3_Ping
- Rackspace_Ping
If you aren’t sure which one to choose, use `TCPPing`. It’s used in the rest of these steps; the others are covered below.
Extract the `tcp.xml` file from `[Liferay Home]/osgi/marketplace/Liferay Foundation - Liferay Portal - Impl.lpkg/com.liferay.portal.cluster.multiple-[version].jar/lib/jgroups-[version].Final.jar` to a location accessible to Liferay DXP. Use this file on all your nodes.
If you’re vertically clustering (i.e., you have multiple servers running on the same physical or virtual system), you must change the port on which discovery communicates for all nodes other than the first one, to avoid TCP port collisions. To do this, modify the TCP tag’s `bind_port` parameter:
```
<TCP bind_port="[some unused port]" ... />
```
Since the default port is `7800`, provide some other unused port.
Add the parameter `singleton_name="liferay_cluster"` to the same tag. This merges the transport and control channels to reduce the number of thread pools. See the JGroups documentation for further information.
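Put together, the opening tag on a second node might look something like this (a sketch only; `7801` is just an example of an unused port, and the other attributes from the stock `tcp.xml` are elided):

```
<TCP bind_port="7801" singleton_name="liferay_cluster" ... />
```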
Usually, no further JGroups configuration is required. However, in one specific case, if (and only if) cluster nodes are deployed across multiple networks, the parameter `external_addr` must be set on each host to the external (public IP) address of the firewall. This kind of configuration is usually only necessary when nodes are geographically separated. By setting it, clustered nodes deployed to separate networks (e.g., separated by different firewalls) can communicate with each other. This configuration may be flagged in security audits of your system. See the JGroups documentation for more information.
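For example (a sketch; the address below is a placeholder for your firewall’s public IP), the attribute goes on the same TCP tag:

```
<TCP bind_port="7800" external_addr="203.0.113.10" ... />
```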
Save the file. Then modify that node’s `portal-ext.properties` file to point to it:
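A sketch of what this looks like, assuming the file was saved to `/opt/liferay/tcp.xml` (the path is a placeholder) and that both the control channel and the first transport channel should use the unicast configuration:

```
cluster.link.channel.properties.control=/opt/liferay/tcp.xml
cluster.link.channel.properties.transport.0=/opt/liferay/tcp.xml
```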
You’re now set up for Unicast over TCP clustering! Repeat this process for each node you want to add to the cluster.
Rather than use TCP Ping to discover cluster members, you can use a central
database accessible by all the nodes to help them find each other. Cluster
members write their own and read the other members’ addresses from this
database. To enable this configuration, replace the `TCPPING` tag with the corresponding `JDBC_PING` tag:
```
<JDBC_PING
    connection_url="jdbc:mysql://[DATABASE_IP]/[DATABASE_NAME]?useUnicode=true&amp;characterEncoding=UTF-8&amp;useFastDateParsing=false"
    connection_username="[DATABASE_USER]"
    connection_password="[DATABASE_PASSWORD]"
    connection_driver="com.mysql.jdbc.Driver"/>
```
The above example uses MySQL as the database. For further information about JDBC Ping, please see the JGroups Documentation.
Amazon S3 Ping can be used for servers running on Amazon’s EC2 cloud service. Each node uploads a small file to an S3 bucket, and all the other nodes read the files from this bucket to discover the other nodes. When a node leaves, its file is deleted.
To configure S3 Ping, replace the `TCPPING` tag with the corresponding `S3_PING` tag:
```
<S3_PING
    secret_access_key="[SECRETKEY]"
    access_key="[ACCESSKEY]"
    location="ControlBucket"/>
```
Supply your Amazon keys as values for the parameters above. For further information about S3 Ping, please see the JGroups Documentation.
JGroups supplies other means for cluster members to discover each other, including Rackspace Ping, BPing, File Ping, and others. Please see the JGroups Documentation for information about these discovery methods.
It’s recommended to test your system under a load that best simulates the kind of traffic your system must handle. If you serve a lot of message board messages, your script should reflect that. If web content is the core of your site, your script should reflect that too.
As a result of a load test, you may find that the default distributed cache settings aren’t optimized for your site. In this case, tweak the settings using a module. You can install the module on each node and change the settings without taking down the cluster.
We’ve made this as easy as possible by creating the project for you. Download the project and unzip it into a Liferay Workspace, in the workspace’s `modules` folder. To override your cache settings, you only need to modify one Ehcache configuration file in the project.
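Assuming the standard module layout (an assumption; the exact path may differ in the project you downloaded), the file to edit is:

```
src/main/resources/ehcache/override-liferay-multi-vm-clustered.xml
```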
In the sample project, this file contains a configuration for the
object which handles sites. You may wish to add other objects to the cache; in
fact, the default file caches many other objects. For example, if you have
a vibrant community, a large portion of your traffic may be directed at the
message boards portlet, as mentioned above. To cache the threads on the message
boards, configure a block for the `MBMessageImpl` class:
```
<cache
    eternal="false"
    maxElementsInMemory="10000"
    name="com.liferay.portlet.messageboards.model.impl.MBMessageImpl"
    overflowToDisk="false"
    timeToIdleSeconds="600"
>
</cache>
```
You can preserve the default settings while customizing them with your own
by extracting Liferay’s cluster configuration file and putting it into
your module project. You’ll find it in the `com.liferay.portal.cache.ehcache.impl.jar` file in the `[Liferay Home]/osgi/portal` folder. The file you want is `liferay-multi-vm-clustered.xml`, in the `/ehcache` folder inside the `com.liferay.portal.cache.ehcache.impl.jar` file. Once you have the file, replace the contents of the `override-liferay-multi-vm-clustered.xml` file above with the contents of this file. Now you’ll be using the default configuration as a starting point.
Once you’ve made your changes to the cache, save the file, then build and deploy the module, and your settings override the defaults. In this way, you can tweak your cache settings so that your cache performs optimally for the type of traffic generated by your site. You don’t have to restart your server to change the cache settings. This is a great benefit, but beware: since Ehcache doesn’t allow changes to cache settings while the cache is alive, reconfiguring a cache while the server is running flushes the cache.