Step 1: Restart and Check Cluster Services

After upgrading MapR Core using either a manual offline or rolling upgrade method (not upgrading with the MapR Installer) and upgrading your ecosystem components, configure and restart the cluster and services.

Note: This task is applicable only to manual offline and rolling upgrade methods.
Important: Before restarting cluster services, upgrade any existing ecosystem packages to versions compatible with the upgraded MapR release. For more information, see MEP Components and OS Support.

This procedure configures and restarts the cluster and services, including ecosystem components, remounts the NFS share, and checks that all packages have been upgraded on all nodes.

After you have upgraded packages on all nodes, perform this procedure on all nodes to restart the cluster; upon completion, MapR Core services are running on all nodes. In this procedure, you configure each node in the cluster without changing the list of services that will run on the node. If you want to change the list of services, do so after completing the upgrade. After finishing this procedure, run non-trivial health checks, such as performance benchmarks relevant to the cluster's typical workload or a suite of common jobs; it is a good idea to run these checks while the cluster is otherwise idle.

  1. Merge any custom edits that you made to your cluster environment variables into the new /opt/mapr/conf/ file before restarting the cluster. The upgrade process replaces your original /opt/mapr/conf/ file with a new copy appropriate for the MapR version to which you are upgrading; the new file does not include any custom edits you made to the original. However, a backup of your original file is saved as /opt/mapr/conf/<timestamp>. Before restarting the cluster, add any custom entries from /opt/mapr/conf/<timestamp> to /opt/mapr/conf/, and copy the updated file to all other nodes in the cluster.
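The merge described in step 1 can be scripted. The sketch below is a runnable demonstration using temporary stand-in files and made-up entries, not the real environment file paths; substitute the backed-up and newly installed files on your nodes.

```shell
#!/bin/sh
set -e
# Stand-ins for the timestamped backup and the newly installed
# environment file; contents here are made-up examples.
old=$(mktemp)   # plays the role of the saved backup copy
new=$(mktemp)   # plays the role of the freshly installed file

printf '%s\n' 'export JAVA_HOME=/usr/lib/jvm/java' \
              'export MY_CUSTOM_OPT=1' > "$old"
printf '%s\n' 'export JAVA_HOME=/usr/lib/jvm/java' > "$new"

# Append every line present only in the backup (your custom edits)
# to the new file; -F literal match, -x whole line, -v invert.
grep -Fxv -f "$new" "$old" >> "$new"
```

After the merge, review the result by hand before copying it to the other nodes; line-based merging cannot detect entries that were changed rather than added.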
  2. On each node in the cluster, remove the mapruserticket file. For manual upgrades, the file must be removed to ensure that impersonation works properly. The mapruserticket file is re-created automatically when you restart Warden. For more information, see Installation and Upgrade Notes (MapR 6.1.0).
    # rm /opt/mapr/conf/mapruserticket
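Because the ticket file must be removed on every node, this step is often scripted. A minimal sketch, assuming a hypothetical space-separated node list and passwordless SSH; it is shown in dry-run form (each remote command is printed, not executed) so it can be reviewed first.

```shell
#!/bin/sh
set -e
# Hypothetical node list -- substitute your cluster's hostnames.
NODES="centos55 centos56 centos57 centos58"
# Dry run: RUN=echo prints each command instead of executing it.
# Set RUN="" (with passwordless SSH configured) to run for real.
RUN="echo"
out=$(for host in $NODES; do
  $RUN ssh "$host" rm -f /opt/mapr/conf/mapruserticket
done)
printf '%s\n' "$out"
```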
  3. If you are upgrading from MapR Core 6.0.x to MapR 6.1 or later, create the ssl_truststore.pem and ssl_keystore.pem files. These files are used by the MapR Data Access Gateway, Grafana, and Hue components. This step is necessary only for manual upgrades because upgrades performed with the MapR Installer distribute the files automatically. Use these commands:
    1. Use the utility to generate the files:
      /opt/mapr/server/ convert -N /opt/mapr/conf/ssl_truststore /opt/mapr/conf/ssl_truststore.pem
      /opt/mapr/server/ convert -N /opt/mapr/conf/ssl_keystore /opt/mapr/conf/ssl_keystore.pem
    2. Copy the files to the /opt/mapr/conf directory on all nodes in the cluster.
  4. On each node in the cluster, run the configuration script with the -R option:
    # /opt/mapr/server/ -R
  5. If ZooKeeper is installed on the node, start it.
    # service mapr-zookeeper start
  6. Start Warden.
    # service mapr-warden start
  7. Run a simple health-check targeting the file system and MapReduce services only. Address any issues or alerts that might have come up at this point.
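The choice of health check is left open here. A small harness in this style can wrap whichever checks apply; on a real cluster the wrapped commands would be, for example, `hadoop fs -ls /` for the file system and a bundled example MapReduce job for the compute side (jar paths vary by release, so none is shown). The harness is demonstrated with stand-in local commands so the sketch is runnable anywhere.

```shell
#!/bin/sh
set -e
# Tiny smoke-test harness: run a command, print PASS/FAIL with a label.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}
# Stand-in checks; on the cluster these would be file-system and
# MapReduce commands instead.
check "root listing" ls /
check "missing path" ls /no/such/path
```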
  8. Set the new cluster version in the /opt/mapr/MapRBuildVersion file by running the following command on any node in the cluster.
    # maprcli config save -values {mapr.targetversion:"`cat /opt/mapr/MapRBuildVersion`"}
  9. Verify the new cluster version.
    For example:
    # maprcli config load -keys mapr.targetversion
  10. Remount the MapR NFS share.
    The following example assumes that the cluster is mounted at /mapr:
    # mount -o hard,nolock <hostname>:/mapr /mapr
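If the mount should persist across reboots, the same options can go into /etc/fstab. A sketch of the equivalent entry, with <hostname> as a placeholder exactly as in the example above:

```
<hostname>:/mapr  /mapr  nfs  hard,nolock  0  0
```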
  11. Run commands, as shown in the following example, to verify that the packages were upgraded successfully.
    Check the following:
    • All expected nodes show up in a cluster node list, and the expected services are configured on each node.
    • A master CLDB is active, and all nodes return the same result.
    • Only one ZooKeeper service claims to be the ZooKeeper leader, and all other ZooKeepers are followers.
    For example:
    # maprcli node list -columns hostname,csvc
    hostname configuredservice ip
    centos55 nodemanager,cldb,fileserver,hoststats
    centos56 nodemanager,cldb,fileserver,hoststats
    centos57 fileserver,nodemanager,hoststats,resourcemanager
    centos58 fileserver,nodemanager,webserver,nfs,hoststats,resourcemanager
    ...more nodes...
    # maprcli node cldbmaster
    ServerID: 8851109109619685455 HostName: centos56
    # service mapr-zookeeper qstatus
    JMX enabled by default
    Using config: /opt/mapr/zookeeper/zookeeper-3.4.5/conf/zoo.cfg
    Mode: follower
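The ZooKeeper check above (exactly one leader, all others followers) is easy to script once the Mode: line has been collected from each node. A sketch that parses captured sample text; on a live cluster the input would come from running the qstatus command on every ZooKeeper node.

```shell
#!/bin/sh
set -e
# Captured sample output: one Mode line per ZooKeeper node.
modes=$(cat <<'EOF'
Mode: follower
Mode: leader
Mode: follower
EOF
)
leaders=$(printf '%s\n' "$modes" | grep -c '^Mode: leader$')
followers=$(printf '%s\n' "$modes" | grep -c '^Mode: follower$')
echo "leaders=$leaders followers=$followers"
```

A healthy ensemble yields leaders=1 with every remaining node counted as a follower; any other leader count warrants investigation before proceeding.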