If you’re running Liferay DXP as a clustered environment and you want to use remote staging, you must configure it properly for a seamless experience. In this tutorial, you’ll learn how to set up remote staging in an example clustered environment scenario. The example environment assumes you have
- A Staging instance with database configurations and a file repository different from the cluster nodes
- A balancer responsible for managing the traffic flow between the cluster’s nodes
- Two nodes, each running a Liferay app server (e.g., App Server 1 and App Server 2), both connected to the same database
The steps below also assume your web tier, application tier, and cluster environment are already configured. You may need to adjust the configurations in this tutorial to work with your specific configuration.
You must secure the communication between your nodes and the Staging server. Add the following property to both app servers' and the Staging server's `portal-ext.properties` files:
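A sketch of the expected entry, based on Liferay's shared-secret tunneling configuration; the `[your-shared-secret]` value is a placeholder you must replace, and you should verify the property name against the portal.properties reference for your DXP version:

```properties
# Shared secret used to authenticate tunnel requests between servers.
# Use the same value on every app server and on the Staging server.
tunneling.servlet.shared.secret=[your-shared-secret]
```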
This secret key denies other portals access to your configured portal servers. If you'd like to set your secret key using hexadecimal encoding, also set the following property in your `portal-ext.properties` file:
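For example (property name assumed from Liferay's portal.properties reference; confirm it for your DXP version):

```properties
# Interpret the shared secret value as a hexadecimal-encoded string.
tunneling.servlet.shared.secret.hex=true
```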
You must allow the connection between the configured IPs of your app servers and the Staging server. Open both of your app servers' `portal-ext.properties` files and add the following properties:
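A sketch of the expected entry, using the `SERVER_IP` constant and the `[STAGING_IP]` placeholder (the property name is taken from Liferay's portal.properties reference; verify it for your version):

```properties
# Allow tunnel requests from localhost, this server, and the Staging server.
tunnel.servlet.hosts.allowed=127.0.0.1,SERVER_IP,[STAGING_IP]
```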
The `[STAGING_IP]` variable must be replaced with the Staging server's IP address. The `SERVER_IP` constant can remain as-is for this property; it's automatically replaced with the Liferay server's IP address.
If you’re validating IPv6 addresses, you must configure the app server’s JVM not to force the use of IPv4 addresses. For example, if you’re using Tomcat, add the appropriate JVM argument to Tomcat’s startup configuration.
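A minimal sketch, assuming a Tomcat bundle where JVM options are set in `setenv.sh` (use `setenv.bat` on Windows); the file location is an assumption about your setup, but `java.net.preferIPv4Stack` is the standard JVM property controlling this behavior:

```shell
# setenv.sh -- do not force IPv4, so IPv6 addresses can be used and validated
CATALINA_OPTS="${CATALINA_OPTS} -Djava.net.preferIPv4Stack=false"
```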
Restart both app servers for the new properties to take effect.
Configure the TunnelAuthVerifier property for your nodes’ app servers. There are two ways to do this:
- Via a `.config` file (recommended): In the `$LIFERAY_HOME/osgi/configs` folder of one of your node Liferay DXP instances, create (if necessary) a `com.liferay.portal.security.auth.verifier.tunnel.module.configuration.TunnelAuthVerifierConfiguration-default.config` file and insert the properties listed below. Creating one `.config` file configures all cluster nodes the same way. For more information on `.config` files, see the Understanding System Configuration Files article.
```properties
enabled=true
hostsAllowed=127.0.0.1,SERVER_IP,STAGING_IP
serviceAccessPolicyName=SYSTEM_USER_PASSWORD
urlsIncludes=/api/liferay/do
```
- Via System Settings: Navigate to Control Panel → Configuration → System Settings → Foundation → Tunnel Auth Verifiers. Click the /api/liferay/do configuration entry and add the Staging server's IP address to the Hosts allowed field. If you choose to configure the TunnelAuthVerifier this way, you must do it for each node (e.g., App Server 1 and App Server 2).
On your Staging instance, navigate to the Site Administration portion of the Product Menu and select Publishing → Staging. Then select Remote Live.
In the Remote Host/IP field, enter the IP address of your web tier's balancer. Configuring the Staging instance with the balancer's IP ensures the environment remains available when publishing from staging to live.
Enter the port on which the balancer is running into the Remote Port field.
Enter the remote site's ID into the Remote Site ID field. The site ID is the same on all your app servers, since the nodes share the same database.
Navigate to the Site Administration portion of the Product Menu and select Site Settings to find the site ID.
Save the Remote Live settings.
That’s it! You’ve configured remote staging in your clustered environment.