Installing Liferay Portal in a Clustered Environment

Many enterprise environments utilize clustering for both scalability and availability. This article provides specific instructions for installing a basic configuration of Liferay Portal in a pre-existing clustered environment.

A common misconception is that by configuring Liferay Portal, a high-availability / clustered environment is created automatically. However, by definition, a clustered environment includes load balancers, clustered application servers, and databases. Once the clustered environment is set up, Liferay Portal can then be installed into that environment. This article extends the Liferay Clustering section of the User Guide by giving further instructions.

Users can also determine whether the cluster uses multicast or unicast settings. By default, Liferay Portal uses multicast clustering. Users can change the multicast port numbers in the portal properties so that they do not conflict with other instances running on the same network. If the user decides to use unicast clustering, several options supported by Liferay are available: TCP, Amazon S3, File Ping, and JDBC Ping. The last option, JDBC Ping, is available only for Liferay Portal 6.2.x EE and above.
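As a hedged illustration (assuming the 6.1.x/6.2.x property names and arbitrary placeholder port numbers), the multicast ports can be overridden in portal-ext.properties like this:

```properties
# Assumed 6.1.x/6.2.x multicast port keys; pick ports that do not
# collide with other Liferay environments on the same network segment.
multicast.group.port["cluster-link-control"]=23301
multicast.group.port["cluster-link-transport-0"]=23302
multicast.group.port["cluster-link-mping"]=23303
multicast.group.port["hibernate"]=23304
multicast.group.port["multi-vm"]=23305
```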


To set up a fully clustered environment, ensure the following:

  1. Cluster Activation Keys are deployed on each node.
  2. All nodes point to the same Liferay database or database cluster.
  3. The Documents and Media repository is accessible to all nodes of the cluster.
  4. Search indexes are configured for replication, or a separate search server is used.
  5. The cache is distributed.
  6. Hot deploy folders are configured for each node if server farms are not used.

Cluster Activation Keys

Each node in the cluster needs a cluster activation key deployed in order for Liferay Portal to run properly. See the documentation on obtaining a Cluster Activation Key for more information.

Additionally, Cluster Link must be enabled for cluster activation keys to work. To do this, set the following in the portal properties file:
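A minimal sketch, assuming the standard portal-ext.properties override file:

```properties
# Enable Cluster Link so cluster activation keys can be validated
cluster.link.enabled=true
```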


Database Configuration

Make sure all nodes point to the same Liferay database. Configure the JDBC connection in the portal properties or directly on the application server.
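For illustration, a typical portal-ext.properties JDBC configuration might look like the following; the host name, database name, and credentials are placeholders:

```properties
# Hypothetical example: point every node at the same MySQL database.
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://dbhost/lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=liferay
jdbc.default.password=liferay
```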

To Test:

  1. Start both Tomcat instances (Nodes 1 and 2) sequentially, so that the Quartz Scheduler can elect a master node.
  2. Log in and add a portlet (e.g. Hello Velocity) to Node 1.
  3. On Node 2, refresh the page.

The addition should show up on Node 2. Repeat with the nodes reversed to test the other node.

Document and Media Library Sharing

Please note that the following properties are specifically for use with AdvancedFileSystemStore.

Set the following in the portal properties file.

For 6.0.x:
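A sketch of the 6.0.x configuration, assuming the hook-based property names of that release; the shared storage path is a placeholder:

```properties
# 6.0.x uses the hook-based Document Library API.
dl.hook.impl=com.liferay.documentlibrary.util.AdvancedFileSystemHook
dl.hook.file.system.root.dir=/mnt/shared/document_library
```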



For 6.1.x and 6.2.x:
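A sketch of the 6.1.x/6.2.x configuration, assuming the store-based property names of those releases; the shared storage path is a placeholder:

```properties
# 6.1.x and 6.2.x use the store-based Document Library API.
dl.store.impl=com.liferay.portlet.documentlibrary.store.AdvancedFileSystemStore
dl.store.file.system.root.dir=/mnt/shared/document_library
```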

All nodes in the cluster must use identical properties when referencing the Document Library. Otherwise, data corruption and indexing issues may occur if each node references a separate Document Library repository.

To Test:

  1. On Node 1, upload a document to the Document Library.
  2. On Node 2, download the document.

If successful, the document should download. Repeat with the nodes reversed.

Note 1: Advanced File System Store is an available option for high-availability environments. Besides Advanced File System Store, there are other options for sharing the Document and Media Library. Keep in mind that the different types of file stores cannot communicate with each other, so changing from one to another will leave the portal unable to read previously uploaded files. To change the type of store while preserving previously uploaded files, execute a File Store migration.

Note 2: If storing documents in the file system is not an option, then from 6.1.x and above the DBStore storage method is available by setting the following portal property:
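A sketch, assuming the stock DBStore class name:

```properties
# Store Document Library binaries in the Liferay database (6.1.x+)
dl.store.impl=com.liferay.portlet.documentlibrary.store.DBStore
```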

Note 3: JCRStore on a database is another option. Because Jackrabbit does not create indexes on its own tables, performance may degrade over time; users must manually create indexes on the primary key columns of all Jackrabbit tables.

Note 4: The number of connections to the database is another factor. Consider increasing the number of database connections to the application server.

Note 5: For an in-depth description of each type of file store, see the Administrator's Guide for Liferay Portal 6.0.x (page 354). Also see Liferay Portal 6.2.x Clustering documentation, or the Guide for Document and Media Library article.

Search and Index Sharing

Set the following in the portal properties file:
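A sketch of the two properties, assuming the stock Lucene index replication mechanism over Cluster Link:

```properties
# Replicate Lucene index writes to the other nodes via Cluster Link
cluster.link.enabled=true
lucene.replicate.write=true
```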


Every node keeps a local index that must be synced to the other nodes in the cluster, so these two properties must be set on every node.

To Test:

  1. On Node 1, go to Control Panel -> Users and create a new user.
  2. On Node 2, go to Control Panel -> Users and verify that the new user has been created.

If successful, the new user will display in the other node without needing to re-index. Do the same test with the nodes reversed.

Note 1: Solr is an option for index sharing since the indexes would be located on a dedicated enterprise search engine and server. In this case the lucene.replicate.write=true property shouldn't be used.

Note 2: By default, Liferay uses Lucene as its indexing engine. Many advanced configurations are available in the Lucene Search section of the portal properties. Choose the options best suited to your environment. For advanced configurations, refer to Lucene's performance documentation.

Distributed Caching

Distributed caching allows a Liferay cluster to share cache content among multiple cluster nodes via Ehcache.

To enable the default Liferay Cluster Link mechanism, set the following portal property and deploy the Ehcache Cluster Web Plugin (available on Liferay Marketplace):
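A sketch, assuming the stock Cluster Link cache replication property:

```properties
# Replicate Ehcache content over Cluster Link
# (cluster.link.enabled=true must also be set, as described above)
ehcache.cluster.link.replication.enabled=true
```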

Liferay has a specific article on Managing Liferay Portal's Distributed Cache.

If you are considering unicast clustering, please note the following: Liferay Portal uses jgroups.jar version 3.2.10 from Liferay Portal 6.1 EE GA3 SP1 and above; earlier versions use jgroups.jar version 2.8.1. There are differences between the two versions, so ensure that the appropriate version of tcp.xml is used as a base when configuring the channels.

For users on previous versions of Liferay Portal such as 6.0 EE SP2, 6.1 EE GA1, and 6.1 EE GA2, it is highly recommended to configure the hibernate-cluster.xml and liferay-multi-vm-clustered.xml files in production environments, since performance will be better than with Ehcache's default cache replication techniques.

Note 1: Ehcache offers many settings that control how particular objects are cached, and users should tune these settings for their needs. See the Distributed Caching section of the User Guide for more on the caching settings. For advanced optimization and configuration, refer to the Ehcache documentation.

Note 2: To learn more about Ehcache's default cache replication techniques, or to learn how to deploy a tuned cache configuration to the portal, see the Advanced Ehcache Configuration knowledge base article.

Hot Deploy Folders

Keep in mind that by default all deployable plugins must be deployed separately to all nodes.

However, every application server has a way of configuring "server farms" so that deploying to one location causes deployment to all nodes. Please see each application server's documentation for instructions.

Other Issues to Check

On some operating systems, IPv4 and IPv6 addresses are mixed, which prevents clustering from working. To solve this, add the following JVM startup parameter:
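A sketch, assuming a Tomcat bundle where JVM options are appended in setenv.sh:

```shell
# Force the JVM (and therefore JGroups-based clustering) onto the IPv4 stack
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
```

The same flag can instead be added wherever your application server collects JVM startup arguments.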

Several PropertySettingJobFactory WARN messages may appear in the logs. By default, Liferay stores information in the JobDataMap within the QuartzSchedulerEngine class, but org.quartz.simpl.PropertySettingJobFactory expects only certain fields. To suppress the warning messages, set the logging level for that class to ERROR.
