Managing Distributed Cache in Liferay DXP

This article documents the configuration options for managing distributed cache within Liferay Digital Experience Platform (DXP).

Table of Contents 

  1. Distributed Cache Basics
    1. How Liferay Does Distributed Caching
  2. Specific Configurations
    1. Default (MPING + UDP/IP Multicast)
    2. TCP Transports (Unicast)
      1. JDBC PING
      2. TCP PING
      3. S3 PING

Depending on the needs of an environment, Liferay DXP has two ways of implementing a cache.

  1. SingleVM Pool (for environments in which the cache is tied to a single instance of Liferay) 
  2. MultiVM Pool (for environments in which the cache is distributed among different nodes in a cluster) 

Continue reading below to learn about the configuration options for the latter: distributed caching. 

Distributed Cache Basics

The distributed cache requires two distinct and important steps: 

  1. Discovery: Finding out how many members are in the cluster so that communication links can be created between clustered servers. 
  2. Transport: Sending cache change events between servers. 

The Liferay platform's default distributed cache implementation is Ehcache. Ehcache can be configured to use the Liferay platform's ClusterLink layer, an efficient replication mechanism, for both discovery and transport. ClusterLink internally uses JGroups as its underlying technology. 

This overview provides specific instructions for configuring the Liferay platform's distributed cache. Either of two protocols (UDP or TCP) can be used to send messages to and receive messages from the network. If an environment cannot use UDP/multicast, a unicast protocol stack must be used instead (e.g. JDBC for discovery and TCP for transport). 

Note: In previous versions of the Liferay platform, you needed the Liferay Ehcache Cluster app and the property ehcache.cluster.link.replication.enabled=true. Both of these are unnecessary in DXP, so please undeploy the Ehcache Cluster portlet and remove the property from your portal-ext.properties, as they may interfere with DXP's clustering mechanisms.

Specific Configurations

The following are specific configurations for different methods of implementing a distributed cache. Example files are included, but keep in mind that they will likely need to be tweaked for your specific environment.

Default (MPING + UDP/IP Multicast)

  1. Configure portal-ext.properties with this property: cluster.link.enabled=true
  2. Tweak multicast addresses/ports according to network specifications

See the example below, taken from the portal.properties file that is in operation by default. As stated above, the addresses and ports may need to be tweaked depending on the network; a portal-ext.properties override sketch follows the listing. 

##
## Multicast
##

    # Consolidate multicast address and port settings in one location for easier
    # maintenance. These settings must correlate to your physical network
    # configuration (i.e. firewall, switch, and other network hardware matter)
    # to ensure speedy and accurate communication across a cluster.
    #
    # Each address and port combination represent a conversation that is made
    # between different nodes. If they are not unique or correctly set, there
    # will be a potential of unnecessary network traffic that may cause slower
    # updates or inaccurate updates.
    #
    #
    # See the property "cluster.link.channel.properties.control".
    #
    multicast.group.address["cluster-link-control"]=239.255.0.1
    #multicast.group.address["cluster-link-control"]=ff0e::8:8:1
    multicast.group.port["cluster-link-control"]=23301
    #
    # See the properties "cluster.link.channel.properties.transport.0" and
    # "cluster.link.channel.system.properties".
    #
    multicast.group.address["cluster-link-udp"]=239.255.0.2
    #multicast.group.address["cluster-link-udp"]=ff0e::8:8:2
    multicast.group.port["cluster-link-udp"]=23302
    #
    # See the property "cluster.link.channel.system.properties".
    #
    multicast.group.address["cluster-link-mping"]=239.255.0.3
    #multicast.group.address["cluster-link-mping"]=ff0e::8:8:3
    multicast.group.port["cluster-link-mping"]=23303
    #
    # See the properties "ehcache.multi.vm.config.location" and
    # "ehcache.multi.vm.config.location.peerProviderProperties".
    #
    multicast.group.address["multi-vm"]=239.255.0.5
    multicast.group.port["multi-vm"]=23305

This configuration uses MPING for discovery and UDP for transport. In general, if UDP can be used, there is no need to use or configure any other protocol.

TCP Transports (Unicast)

To use TCP for transport, you must select a custom discovery protocol: JDBC_PING, TCP_PING, MPING, S3_PING (Amazon only), RACKSPACE_PING (Rackspace only). 

JDBC_PING

  1. Configure portal-ext.properties with this property: cluster.link.enabled=true

    (we’ll be adding cluster.link.channel.properties in a later step)

  2. Add each node's IP address via a JVM parameter: -Djgroups.bind_addr=<node_address> (see the sketch after this list)
    1. For Windows environments, add this property to setenv.bat
    2. For Linux/Unix environments, add it to setenv.sh
  3. Create the JDBC discovery configuration file
    1. Extract the tcp.xml file and place it somewhere on the classpath. Use a file archiver tool like 7-Zip to open the nested archives:
      1. Navigate to $liferay_home/osgi/marketplace
      2. Open the Liferay Foundation.lpkg package
      3. Open the com.liferay.portal.cluster.multiple-[version number].jar
      4. In the /lib folder, open the jgroups-3.6.4.Final.jar
      5. The tcp.xml file is located here
    2. Rename the copied tcp.xml file to jdbc_ping_config.xml 
    3. In the file, replace:
      <TCPPING async_discovery="true"
      initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
      port_range="2"/>
      with:
      <JDBC_PING
      connection_url="jdbc:mysql://[DATABASE_IP]/[DATABASE_NAME]?useUnicode=true&amp;characterEncoding=UTF-8&amp;useFastDateParsing=false"
      connection_username="[DATABASE_USER]"
      connection_password="[DATABASE_PASSWORD]"
      connection_driver="com.mysql.jdbc.Driver"/>
  4. Sharing a transport between multiple channels: (Note: this should not be used for Liferay DXP 7.2 fixpack 1+ bundles)
    1. Add singleton_name="liferay_tcp_cluster" to the TCP tag in jdbc_ping_config.xml
  5. Point Liferay at the configuration files by adding the following portal-ext.properties:
    cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/jdbc_ping_config.xml
    cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/jdbc_ping_config.xml
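
For step 2, a minimal sketch of the JVM parameter, assuming a Tomcat bundle where setenv.sh appends to CATALINA_OPTS (the IP address is a placeholder; each node uses its own address):

    # setenv.sh (Linux/Unix)
    CATALINA_OPTS="${CATALINA_OPTS} -Djgroups.bind_addr=192.168.1.101"

On Windows, add the same -Djgroups.bind_addr parameter to CATALINA_OPTS in setenv.bat.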

JDBC_PING Note: If the database user does not have the ability to create tables, the JGROUPSPING table must be created manually in advance (a sketch of the default schema follows). For more information, see the JDBC_PING documentation.
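
For reference, a sketch of that table based on the default schema JDBC_PING initializes when it is allowed to create tables (MySQL syntax shown; column types may need adjusting for other databases):

    CREATE TABLE JGROUPSPING (
        own_addr     VARCHAR(200) NOT NULL,
        cluster_name VARCHAR(200) NOT NULL,
        ping_data    VARBINARY(5000) DEFAULT NULL,
        PRIMARY KEY (own_addr, cluster_name)
    );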

TCP_PING

  1. Configure portal-ext.properties with this property: cluster.link.enabled=true

    (we’ll be adding cluster.link.channel.properties in a later step)

  2. Add the cluster addresses via JVM parameters: -Djgroups.bind_addr=<node_address> and -Djgroups.tcpping.initial_hosts=<node1address>[port1],<node2address>[port2]... (see the sketch at the end of this section)
    1. For Windows environments, add these properties to setenv.bat
    2. For Linux/Unix environments, add them to setenv.sh
  3. Create the TCP discovery configuration file
    1. Extract the tcp.xml file and place it in a convenient location:
      1. Navigate to $liferay_home/osgi/marketplace
      2. Open the Liferay Foundation.lpkg package
      3. Open the com.liferay.portal.cluster.multiple-[version number].jar
      4. In the /lib folder, open the jgroups-3.6.4.Final.jar
      5. The tcp.xml file is located here
    2. Because TCP_PING is the default discovery method in tcp.xml, the only setting you may need to modify is the TCP bind_port (necessary only if you are running several nodes on the same machine).
  4. Sharing a transport between multiple channels: (Note: this should not be used for Liferay DXP 7.2 fixpack 1+ bundles)
    1. Add singleton_name="liferay_tcp_cluster" into the TCP tag in the tcp.xml
  5. Point Liferay at the configuration files by adding the following portal-ext.properties:
    cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/tcp.xml
    cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/tcp.xml

This configuration uses TCP_PING for discovery and TCP for transport. For TCP_PING, you must pre-specify all members of the cluster. This is not an auto-discovery protocol (i.e., dynamically adding or removing cluster members is not supported).
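
For step 2, a minimal setenv.sh sketch, assuming a Tomcat bundle and a two-node cluster (all addresses and ports below are placeholders; every member of the cluster must appear in initial_hosts):

    # setenv.sh (Linux/Unix); setenv.bat takes the same -D parameters
    CATALINA_OPTS="${CATALINA_OPTS} -Djgroups.bind_addr=192.168.1.101"
    CATALINA_OPTS="${CATALINA_OPTS} -Djgroups.tcpping.initial_hosts=192.168.1.101[7800],192.168.1.102[7800]"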

S3_PING

  1. Configure portal-ext.properties with this property: cluster.link.enabled=true

    (we’ll be adding cluster.link.channel.properties in a later step)

  2. Add the IP addresses of the cluster nodes via JVM parameters: -Djgroups.bind_addr=<node_address>
    1. For Windows environments this property will be added to the setenv.bat
    2. For Linux/Unix environments, this property can be added to setenv.sh
  3. Create the S3 discovery configuration file
    1. Extract the tcp.xml file and place it in a convenient location.
      1. Navigate to $liferay_home/osgi/marketplace
      2. Open the Liferay Foundation.lpkg package
      3. Open the com.liferay.portal.cluster.multiple-[version number].jar
      4. In the /lib folder, open the jgroups-3.6.4.Final.jar
      5. The tcp.xml file is located here
    2. Rename the copied tcp.xml file to s3_ping_config.xml
    3. In the file, replace:
      <TCPPING async_discovery="true"
      initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
      port_range="2"/>
      with:
      <S3_PING secret_access_key="SECRETKEY" access_key="ACCESSKEY" location="ControlBucket"/>
  4. Sharing a transport between multiple channels: (Note: this should not be used for Liferay DXP 7.2 fixpack 1+ bundles)
    1. Add singleton_name="liferay_tcp_cluster" to the TCP tag in s3_ping_config.xml
  5. Point Liferay at the configuration files by adding the following portal-ext.properties:
    cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/s3_ping_config.xml 
    cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/s3_ping_config.xml

Note: A sample configuration file is attached below. This configuration uses S3_PING for discovery and TCP for transport. S3_PING is only applicable for Amazon Web Services.
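
The location attribute names the S3 bucket the nodes use to store their discovery information, so it is safest to make sure the bucket exists ahead of time. It can be created, for example, with the AWS CLI (the bucket name and region below are placeholders; whatever name you choose must match the location value in the configuration file):

    aws s3 mb s3://my-liferay-control-bucket --region us-east-1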

Additional Information

Debugging Tips

Test High-Level, then Test Low-Level
As a general rule, the best way to troubleshoot a clustering issue is to test at a high level to verify functionality and then walk through the stack at a low level.

  1. High-Level Verification
    1. Change content in one node through the UI (add a portlet, change a field in a user profile) and see if it shows up in another node after a page refresh. If it works, you're done.
  2. Low-Level Testing
    1. The MulticastServerTool may be used to determine if there is a heartbeat on the network. To utilize this tool:
      1. Create a folder called Multicast in your [LIFERAY_HOME] folder.
      2. Copy the following three JAR files to your Multicast folder:
        1. CATALINA_BASE/lib/ext/portal-kernel.jar
        2. CATALINA_BASE/webapps/ROOT/WEB-INF/lib/util-java.jar
        3. CATALINA_BASE/webapps/ROOT/WEB-INF/lib/commons-logging.jar
      3. From a command prompt in the Multicast folder, call the multicast server tool. If you're using the default settings, the command will be:
        1. java -cp util-java.jar;portal-kernel.jar;commons-logging.jar com.liferay.util.transport.MulticastServerTool 239.255.0.5 23305 5000
      4. If the network is equipped for UDP, you should see "heartbeats" appear in the output.
    2. Similarly, the MulticastClientTool may be called from the same folder on a different node in the cluster.
      1. From the Multicast folder on that node, call the MulticastClientTool. For default settings, the command will be (the command below uses the Windows classpath separator; see the note after this list for Linux/Unix):
        1. java -cp util-java.jar;portal-kernel.jar;commons-logging.jar com.liferay.util.transport.MulticastClientTool -h 239.255.0.5 -p 23305
      2. In the output, you should see the com.liferay.util.transport.MulticastDatagramHandler process, along with changing characters underneath it.
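
The two commands above use the Windows classpath separator (;). On Linux/Unix, use : instead, for example:

    java -cp util-java.jar:portal-kernel.jar:commons-logging.jar com.liferay.util.transport.MulticastServerTool 239.255.0.5 23305 5000
    java -cp util-java.jar:portal-kernel.jar:commons-logging.jar com.liferay.util.transport.MulticastClientTool -h 239.255.0.5 -p 23305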

Add JGroups Logging

  1. Turn log4j logging to ALL for the org.jgroups.protocols.pbcast category (through the UI, or via a log4j override file as sketched after this list)
  2. This will add verbose statements to the logs regarding both control channel (heartbeat) and transport traffic. For more on logging within the Liferay platform, see Liferay DXP Clustering in our user guide.
  3. If issues persist and the network is still a potential cause, the built-in JGroups test client/sender classes may be used to verify whether the underlying JGroups communication is working, independently of Liferay.
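
As an alternative to the UI, the same category can be enabled through a log4j override file. A sketch, assuming the portal-log4j-ext.xml override mechanism (placed in ROOT/WEB-INF/classes/META-INF/) and the log4j 1.x syntax bundled with the platform:

    <?xml version="1.0"?>
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
        <!-- Verbose logging for JGroups control (heartbeat) and transport traffic -->
        <category name="org.jgroups.protocols.pbcast">
            <priority value="ALL" />
        </category>
    </log4j:configuration>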

Check Proxy & Firewall Settings

  1. Usually, no further JGroups configuration is required. However, in one specific case, if and only if cluster nodes are deployed across multiple networks, the parameter external_addr must be set on each host to the external (public) IP address of the firewall. Setting this allows cluster nodes deployed on separate networks (such as those separated by different firewalls) to communicate with each other. Keep in mind that exposing a public address in this way may draw additional scrutiny during InfoSec audits. See the JGroups documentation, and the sketch below.
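
A sketch of how the attribute might be added to the TCP element of the unicast configuration file (keep whatever attributes your TCP element already has and only add external_addr; bind_port 7800 is the tcp.xml default and 203.0.113.10 is a placeholder for the firewall's public IP address):

    <TCP external_addr="203.0.113.10"
         bind_port="7800"/>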

Helpful Links

  1. Reliable Multicasting with JGroups Toolkit
  2. Unicast with JGroups
  3. Ehcache: http://www.ehcache.org/documentation/