Managing Distributed Cache in Liferay Portal EE

Liferay Support does not recommend or endorse specific third-party products over others. Liferay is not responsible for any instructions herein or referenced regarding these products. Any implementation of these principles is the responsibility of the subscriber.

Depending on the needs of an environment, Liferay Portal has two ways of implementing a cache:

  1. SingleVM Pool: For environments in which the cache is tied to a single instance of Liferay.
  2. MultiVM Pool: For environments in which the cache is distributed among different nodes in a cluster.

This article discusses configuration options for the latter: MultiVM distributed caching.

Contents

  1. Distributed Cache Basics
    1. How Liferay Does Distributed Caching
  2. Configuration Overview
    1. 6.1 GA3 and 6.2
    2. 6.1 GA2 and earlier
  3. Specific Configurations
    1. Default (MPING + UDP)
    2. TCP Transports
      1. JDBC_PING
      2. TCP_PING
      3. S3_PING

Affected Products

Liferay Portal 6.0 EE SP2; 6.1.x EE; 6.2.x EE

Resolution

Distributed Cache Basics

The distributed cache requires two distinct steps:

Discovery: Finding out how many members are in the cluster so that communication links can be created between clustered servers.

Transport: Sending cache change events and cached objects between servers.

Liferay's default distributed cache implementation is Ehcache. Ehcache can be configured to use one of a few algorithms:

  1. Algorithm 1 (default)
    1. Discovery: UDP/IP multicast
    2. Transport: RMI/TCP/IP unicast
  2. Algorithm 2 (JGroups):
    1. Discovery: User selectable
    2. Transport: User selectable
      • Note that Algorithm 2 requires special changes to the cache configuration files

Within Liferay Enterprise Edition, there is a third, more efficient algorithm for replication. This algorithm utilizes Liferay's ClusterLink layer for both discovery and transport; ClusterLink uses JGroups as its underlying technology.
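
For reference, Algorithm 1 corresponds to Ehcache's standard RMI-based replication. The fragment below is a generic Ehcache sketch (not taken from Liferay's shipped configuration files); the multicast address, port, and TTL are illustrative values only:

    <!-- Generic Ehcache RMI replication sketch; illustrative values only. -->
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,multicastGroupPort=4446,timeToLive=1"
    />
    <cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    />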

Configuration Overview

In this overview, we've created general instructions for configuring Liferay's distributed cache. For more specific instructions, see the Specific Configurations heading below.

I. Liferay Portal 6.1 GA3 and 6.2

  1. Configure portal-ext.properties (in accordance with the Liferay User Guide)
    • ehcache.cluster.link.replication.enabled=true
    • cluster.link.enabled=true
  2. Deploy ehcache-cluster-web.war
    • For multicast communication, these steps are all that is required. For unicast, however, continue with steps 3-5.
  3. Add necessary JVM parameters
    • -Djgroups.bind_addr=<node_address>
    • -Djgroups.tcpping.initial_hosts=<node1address>[port1],<node2address>[port2]...
  4. Create configuration files based on Discovery Protocol
    • This is achieved by extracting the tcp.xml file from CATALINA_BASE\webapps\ROOT\WEB-INF\lib\jgroups.jar and tweaking it in accordance with the desired discovery protocol (specifics for each method below). If using relative paths, make sure the tcp.xml is copied to the classpath (CATALINA_BASE\webapps\ROOT\WEB-INF\classes\).
  5. Point Liferay at the configuration files with the following portal-ext.properties:
    • cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/control_file.xml
    • cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/transport_file.xml
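
Taken together, a unicast setup on 6.1 GA3/6.2 typically ends up with portal-ext.properties entries like the sketch below. The file paths are placeholders and must be adapted to wherever the JGroups configuration files actually live (classpath-relative or absolute):

    # Enable ClusterLink and ClusterLink-based Ehcache replication.
    cluster.link.enabled=true
    ehcache.cluster.link.replication.enabled=true

    # Placeholder paths to the custom JGroups configuration files (step 4).
    cluster.link.channel.properties.control=/custom_jgroups/control_file.xml
    cluster.link.channel.properties.transport.0=/custom_jgroups/transport_file.xml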

II. Liferay Portal 6.1 GA2 and Earlier

  1. Configure portal-ext.properties (in accordance with the Liferay User Guide)
    • ehcache.cluster.link.replication.enabled=true
    • cluster.link.enabled=true
  2. Deploy ehcache-cluster-web
    • For multicast communication, these steps are all that is required. For unicast, however, continue with steps 3-5.
  3. Add necessary JVM parameters.
    • -Djgroups.bind_addr=<node_IP_address> 
    • -Djgroups.tcpping.initial_hosts=<node1address>[port1],<node2address>[port2]...
    • For Liferay 6.1 and below, you will need to add -Djava.net.preferIPv4Stack=true due to IPv6 compatibility issues in earlier versions of JGroups.
  4. Create configuration files based on Discovery Protocol
    • This is achieved by extracting the tcp.xml file from CATALINA_BASE\webapps\ROOT\WEB-INF\lib\jgroups.jar and tweaking it in accordance with the desired discovery protocol (specifics for each method below). If using relative paths, make sure the tcp.xml is copied to the classpath (CATALINA_BASE\webapps\ROOT\WEB-INF\classes\).
  5. Point Liferay at the configuration files with the following portal-ext.properties:
    • cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/control_file.xml
    • cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/transport_file.xml
  6. (For Liferay 6.1): Create a cache configuration hook to modify liferay-multi-vm-clustered.xml and hibernate-clustered.xml files.
    • Note that there may be double periods in certain versions of liferay-multi-vm-clustered.xml, as described in LPS-28163. For example, you may see ".." between EntityCache and com, as in the entry below:

      <cache
      eternal="false"
      maxElementsInMemory="100000"
      name="com.liferay.portal.kernel.dao.orm.EntityCache..com.liferay.portal.model.impl.ResourceActionImpl"
      overflowToDisk="false"
      timeToIdleSeconds="600"
      >

    • SocialEquitySettingLocalServiceImpl should be referenced in Liferay 6.0 and below, while SocialActivitySettingLocalServiceImpl should be referenced in Liferay 6.1 EE and above.
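
Whether the modified cache files are delivered by a hook plugin or copied directly onto the classpath, Liferay ultimately has to be pointed at them. A minimal sketch, assuming the modified files end up under CATALINA_BASE\webapps\ROOT\WEB-INF\classes\myehcache\ (a placeholder location), is to reference them in portal-ext.properties:

    # Placeholder classpath-relative locations of the modified cache files.
    ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml
    net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml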

Setup Summary

To summarize the steps above, if your environment supports UDP/multicast, then the only configuration that is required is the port settings for the multicast communication. However, if an environment is unable to utilize UDP/multicasting, then a different unicast protocol stack must be utilized (e.g. JDBC for discovery and TCP for transport).

Specific Configurations

The following are specific configurations for different methods of implementing a distributed cache. Example files are included, but keep in mind that they will likely need to be tweaked for your specific environment. 

Default (MPING + UDP/IP Multicast)

  1. Configure portal-ext.properties
    • cluster.link.enabled=true
    • ehcache.cluster.link.replication.enabled=true
  2. Deploy the ehcache-cluster-web
  3. Tweak multicast addresses/ports according to network specifications

See the example below (taken from the default portal.properties; as stated above, the addresses and ports may need to be tweaked depending on the network).
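
These are the values as they appear in a stock 6.1/6.2 portal.properties; verify them against the portal.properties shipped with your version before relying on them:

    multicast.group.address["cluster-link-control"]=239.255.0.1
    multicast.group.port["cluster-link-control"]=23301

    multicast.group.address["cluster-link-udp"]=239.255.0.2
    multicast.group.port["cluster-link-udp"]=23302

    multicast.group.address["cluster-link-mping"]=239.255.0.3
    multicast.group.port["cluster-link-mping"]=23303

    multicast.group.address["hibernate"]=239.255.0.4
    multicast.group.port["hibernate"]=23304

    multicast.group.address["multi-vm"]=239.255.0.5
    multicast.group.port["multi-vm"]=23305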

This configuration uses MPING for discovery and UDP for transport. In general, if UDP can be used, there is no need to use or configure any other protocol.

TCP Transports

To use TCP for transport, you must select a custom discovery protocol: JDBC_PING, TCP_PING, MPING, S3_PING (Amazon only), or RACKSPACE_PING (Rackspace only).

JDBC_PING

Note that JDBC_PING is only available for Liferay 6.2 and above (and also Liferay 6.1 EE GA3 with the platform-12-6130 fix pack or above installed), as these versions use JGroups 3.2.6.

  1. Configure portal-ext.properties
    1. cluster.link.enabled=true
    2. ehcache.cluster.link.replication.enabled=true
    3. (We will be adding cluster.link.channel.properties in a later step.)
  2. Deploy ehcache-cluster-web plugin
  3. Add in necessary JVM parameters
    1. -Djgroups.bind_addr=<node_address>
    2. -Djgroups.tcpping.initial_hosts=<node1address>[port1],<node2address>[port2]...
    3. For Windows environments, these properties are added to setenv.bat; for Linux/UNIX environments, they are added to setenv.sh (see the setenv.sh sketch after this list).
  4. Create JDBC Discovery configuration file
    1. Extract tcp.xml from CATALINA_BASE\webapps\ROOT\WEB-INF\lib\jgroups.jar and place it somewhere on the classpath.
    2. Rename the copied tcp.xml file to jdbc_ping_config.xml.
    3. In the file, replace:
      <TCPPING timeout="3000"
      initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
      port_range="1"
      num_initial_members="3"/>

      with:
      <JDBC_PING
      connection_url="jdbc:mysql://[DATABASE_IP]/[DATABASE_NAME]?useUnicode=true&amp;characterEncoding=UTF-8&amp;useFastDateParsing=false"
      connection_username="[DATABASE_USER]"
      connection_password="[DATABASE_PASSWORD]"
      connection_driver="com.mysql.jdbc.Driver"/>
  5. Point Liferay at the configuration files by adding the following portal-ext.properties:
    • cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/jdbc_ping_config.xml
    • cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/jdbc_ping_config.xml
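
As a sketch of step 3 on a Tomcat bundle under Linux/UNIX, the JVM parameters can be appended to CATALINA_OPTS in setenv.sh; the addresses and ports below are examples only and differ per node (setenv.bat takes the equivalent form on Windows):

    # setenv.sh -- example values only; adjust bind_addr per node.
    CATALINA_OPTS="$CATALINA_OPTS -Djgroups.bind_addr=192.168.1.10"
    CATALINA_OPTS="$CATALINA_OPTS -Djgroups.tcpping.initial_hosts=192.168.1.10[7800],192.168.1.11[7800]"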

Lastly, if the database user does not have permission to create tables, the JGROUPSPING table will need to be created manually in advance (see the schema sketch below). For more information, see the JDBC_PING documentation.
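
For reference, the sketch below mirrors the default table layout that JGroups 3.2.x JDBC_PING creates on its own (its initialize_sql default); treat it as a starting point and adjust the column types for your database:

    CREATE TABLE JGROUPSPING (
        own_addr     varchar(200) NOT NULL,
        cluster_name varchar(200) NOT NULL,
        ping_data    varbinary(5000) DEFAULT NULL,
        PRIMARY KEY (own_addr, cluster_name)
    );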

TCP_PING

  1. Configure portal-ext.properties
    1. cluster.link.enabled=true
    2. ehcache.cluster.link.replication.enabled=true
    3. (We will be adding cluster.link.channel.properties in a later step.)
  2. Deploy ehcache-cluster-web plugin
  3. Add in necessary JVM parameters
    1. -Djgroups.bind_addr=<node_address>
    2. -Djgroups.tcpping.initial_hosts=<node1address>[port1],<node2address>[port2]...
    3. For Windows environments these properties will be added to setenv.bat. For Linux/Unix environments, the properties can be added to setenv.sh.
  4. Create TCP discovery configuration files
    1. Extract tcp.xml from CATALINA_BASE\webapps\ROOT\WEB-INF\lib\jgroups.jar and place it somewhere on the classpath.
    2. Copy the file and rename one copy tcp_ping_control.xml. Rename the other copy tcp_ping_transport.xml.
    3. Because TCP_PING is the default discovery method in tcp.xml, the only settings to modify are those below (HostA and HostB are given as example hosts; a combined sketch appears after this section)
      1. TCP bind_port
      2. The IP addresses/ports in
        1. <TCPPING timeout="3000"
        2. initial_hosts="HostA[7800],HostB[7801]"
        3. port_range="1"
        4. num_initial_members="3"/>
    4. Add singleton_name="liferay_tcp_cluster" into the TCP tag
  5. Point Liferay at the configuration files by adding the following portal-ext.properties:
    1. cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/tcp_ping_control.xml
    2. cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/tcp_ping_transport.xml

This configuration uses TCP_PING for discovery and TCP for transport. For TCP_PING, you must pre-specify all members of the cluster; it is not an auto-discovery protocol, so dynamically adding or removing cluster members is not supported.
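
Putting steps 4.3 and 4.4 together, the relevant fragment of tcp_ping_control.xml / tcp_ping_transport.xml might look like the sketch below. HostA, HostB, and the ports are examples, and the remaining TCP attributes are left exactly as extracted from tcp.xml (indicated by the ellipsis):

    <TCP bind_port="7800"
         singleton_name="liferay_tcp_cluster"
         ... />

    <TCPPING timeout="3000"
             initial_hosts="${jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}"
             port_range="1"
             num_initial_members="3"/>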

S3_PING

  1. Configure portal-ext.properties
    1. cluster.link.enabled=true
    2. ehcache.cluster.link.replication.enabled=true
    3. (We will be adding cluster.link.channel.properties in a later step.)
  2. Deploy ehcache-cluster-web plugin
  3. Add in necessary JVM parameters
    1. -Djgroups.bind_addr=<node_address>
    2. -Djgroups.tcpping.initial_hosts=<node1address>[port1],<node2address>[port2]...
    3. For Windows environments these properties will be added to setenv.bat. For Linux/Unix environments, the properties can be added to setenv.sh.
  4. Create S3 Discovery configuration file:
    1. Extract tcp.xml from CATALINA_BASE\webapps\ROOT\WEB-INF\lib\jgroups.jar and place it in a convenient location.
    2. Rename the copied tcp.xml file to s3_ping_config.xml.
    3. In the file, replace:
      1. <TCPPING timeout="3000" initial_hosts="HostA[7800],HostB[7801]" port_range="1" num_initial_members="3"/>
    4. with
      1. <S3_PING secret_access_key="SECRETKEY" access_key="ACCESSKEY" location="ControlBucket"/>
  5. Point Liferay at the configuration files by adding the following portal-ext.properties:
    1. cluster.link.channel.properties.control=[CONFIG_FILE_PATH]/s3_ping_config.xml
    2. cluster.link.channel.properties.transport.0=[CONFIG_FILE_PATH]/s3_ping_config.xml

Note that the only change relative to the extracted tcp.xml is the S3_PING element shown in step 4; a variation that avoids hard-coding the AWS credentials is sketched below.

This configuration uses S3_PING for discovery and TCP for transport. S3_PING is only applicable for Amazon Web Services.
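
Since hard-coding AWS credentials in a configuration file is often undesirable, the same ${system.property:default} substitution that JGroups uses for jgroups.tcpping.initial_hosts can be applied here. The jgroups.s3.* property names below are arbitrary examples, not standard names; they simply have to match the -D parameters passed to the JVM (e.g. -Djgroups.s3.access_key=... in setenv.sh):

    <S3_PING access_key="${jgroups.s3.access_key:}"
             secret_access_key="${jgroups.s3.secret_access_key:}"
             location="${jgroups.s3.bucket:ControlBucket}"/>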

Additional Information

Debugging Tips

Test High-Level, then Test Low-Level

As a general rule, the best way to troubleshoot a clustering issue is to test at a high level to verify functionality and then walk through the stack at a low-level.

  1. High-Level Verification:
    1. Change content in one node through the UI (add a portlet, change a field in a user profile) and see if it shows up in another node after a page refresh. If it works, you're done.
  2. Low-Level Testing:
    1. The MulticastServerTool may be used to determine if there is a heartbeat on the network. To utilize this tool:
      1. Create a folder called Multicast in your [LIFERAY_HOME] folder
      2. Copy the following three JAR files to your Multicast folder:
        1. CATALINA_BASE/webapps/ROOT/WEB-INF/lib/commons-logging.jar
        2. CATALINA_BASE/webapps/ROOT/WEB-INF/lib/util-java.jar
        3. CATALINA_BASE/lib/ext/portal-service.jar
      3. From a command prompt in the Multicast folder, call the multicast server tool. If you're using the default settings, the command will be (on Linux/UNIX, use ':' instead of ';' as the classpath separator):
        1. java -cp util-java.jar;portal-service.jar;commons-logging.jar com.liferay.util.transport.MulticastServerTool 239.255.0.5 23305 5000
      4. If the network is equipped for UDP, you should see "heartbeats" appear in the output.
      5. Similarly, the MulticastClientTool may be called from the same folder on a different node in the cluster.
      6. From the Multicast folder on that node, call the MulticastClientTool. For the default settings, the command will be:
        1. java -cp util-java.jar;portal-service.jar;commons-logging.jar com.liferay.util.transport.MulticastClientTool -h 239.255.0.5 -p 23305
      7. In the output, you should see com.liferay.util.transport.MulticastDatagramHandler messages, as well as changing characters underneath them.
    2. Add Debugging Properties
      1. cluster.executor.debug.enabled=true
      2. This property logs very useful information about cluster JOIN/DEPART events. In a smaller cluster, these events should be rare enough that this property may be left on in production.
    3. Add jgroups Logging
      1. Turn log4j logging to ALL (through the UI) for the category org.jgroups.protocols.pbcast (a file-based alternative is sketched after this list)
      2. This will add verbose statements to the logs regarding both control channel (heartbeat) and transport traffic. For more on Liferay logging, see the user guide.
      3. If issues persist and the network is still a potential cause, the built-in JGroups test client/sender classes may be used to help verify whether the network itself is functioning properly.
    4. Check Proxy and Firewall Settings
      1. Usually, no further JGroups configuration is required. However, in one specific case, if (and only if) cluster nodes are deployed across multiple networks, the parameter external_addr must be set on each host to the external (public) IP address of the firewall. This allows clustered nodes deployed to separate networks (such as those separated by different firewalls) to communicate with one another, though it may draw additional attention in InfoSec audits. For more information, see the JGroups documentation.
    5. Helpful Links
      1. Reliable Multicasting with JGroups Toolkit
      2. Known Issue: NAKACK could potentially cause memory leaks.
      3. Ehcache: http://www.ehcache.org/documentation/user-guide/configuration
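
Regarding the jgroups logging step above, a file-based alternative to the UI is to place a portal-log4j-ext.xml under CATALINA_BASE\webapps\ROOT\WEB-INF\classes\META-INF\ (Liferay's standard log4j override mechanism). A minimal sketch, matching the category named above:

    <?xml version="1.0"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
        <!-- Verbose JGroups discovery/transport logging; remove once debugging is finished. -->
        <category name="org.jgroups.protocols.pbcast">
            <priority value="ALL" />
        </category>
    </log4j:configuration>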