MapR 5.0 Documentation: Perform the MapR Installer Post-Upgrade Steps

After using the MapR Installer to upgrade the cluster, complete the post-upgrade steps for each component that was upgraded: 

MapR Core

Complete the following post-upgrade step for MapR Core:

  •  Manually update configuration files:
    • On all nodes, manually merge new configuration settings from the /opt/mapr/conf.new/warden.conf file into the /opt/mapr/conf/warden.conf file.

    • On all nodes, manually merge new configuration settings from the files in the /opt/mapr/conf/conf.d.new/ directory to the files in the /opt/mapr/conf/conf.d/ directory.

      When you upgrade Hadoop common, a new directory is created for the new Hadoop common version, and the configuration files in the existing /opt/mapr/hadoop/hadoop-2.x.x directory are automatically copied into the active directory associated with the new hadoop-2.x.x version. For example, when you upgrade from 4.1 to 5.1, configuration files that were in the hadoop-2.5.1 directory are copied into the hadoop-2.7.0 directory.
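      For example, a quick way to see which settings differ before merging is to compare the new defaults with the active configuration on each node (a minimal sketch; use whatever merge workflow you prefer):

      diff /opt/mapr/conf/warden.conf /opt/mapr/conf.new/warden.conf
      diff -r /opt/mapr/conf/conf.d/ /opt/mapr/conf/conf.d.new/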

Drill

Complete the following post-upgrade step for Drill:

  1. If you had advanced configurations, complete the following steps to restore the configuration:
    • Restore the Storage Plugins

      1. Start the Web Console. The Drill node that you use to access the Web Console must be a node that is currently running the Drillbit process.

      2. Click Storage.

      3. Click Update next to a storage plugin.

      4. Copy the configuration for the storage plugin from the text file that you saved into the Configuration window, and click Update.

      5. Repeat steps 3 and 4 for each storage plugin configuration that you want to restore.

    • Restore the drill-override.conf
      In most cases, you can restore drill-override.conf by replacing the new version in Drill’s /conf directory with the version of the file that you previously backed up.
      To do so, navigate to the directory where you saved drill-override.conf, and copy drill-override.conf to the Drill /conf directory, replacing the existing file.

      cp drill-override.conf /opt/mapr/drill/drill-<version>/conf
    • Restore the drill-env.sh
      The latest version of drill-env.sh may contain some new configurations for Drill. If you backed up this file, you can merge the saved version with the latest version in /opt/mapr/drill/drill-<version>/conf to preserve your modifications and the new configurations.
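      For example, if you know where the backed-up file was saved, you can review the differences before merging them by hand (a sketch only; <backup_dir> is a placeholder for your backup location):

      diff <backup_dir>/drill-env.sh /opt/mapr/drill/drill-<version>/conf/drill-env.sh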
       

  2. If the Drillbit Down alarm is raised in the MapR Control System (MCS), start the Drillbit:

    maprcli node services -name drill-bits -action start -nodes <node host names separated by a space>

Flume

Complete the following post-upgrade step for Flume:

  • Migrate Custom Configurations (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/flume/flume-<version>/conf/). 

HBase

Complete the following post-upgrade steps for HBase:

  1. Migrate Custom Configurations (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/hbase/hbase-<version>/conf/).

  2. If the HBase Thrift Server is installed, restart it:

    maprcli node services -name hbasethrift -action restart -nodes <ThriftServer_node_hostname>
  3. If HBase REST Server is installed, perform the following:
    1. Update the values of the following properties in /opt/mapr/conf/conf.d/warden.hbaserest.conf to reflect the new HBase version:
      • service.command.start

      • service.command.stop

      • service.command.monitorcommand

      • service.logs.location

      For example, if you upgraded to HBase 0.98.12, the values should be set as follows:
      service.command.start=/opt/mapr/hbase/hbase-0.98.12/bin/hbase-daemon.sh start rest -p 8080 --infoport 8085
      service.command.stop=/opt/mapr/hbase/hbase-0.98.12/bin/hbase-daemon.sh stop rest
      service.command.monitorcommand=/opt/mapr/hbase/hbase-0.98.12/bin/hbase-daemon.sh status rest
      service.logs.location=/opt/mapr/hbase/hbase-0.98.12/logs
    2. Restart the HBase REST Server:

      maprcli node services -name hbaserest -action restart -nodes <node_hostname> 

Hive

Complete the following post-upgrade steps for Hive:

  1. Kill the old Hive process.
    1. Run the following command to check for Hive processes that are running:

      ps aux | grep hive
    2. Kill any processes associated with older Hive versions. For example, if you upgraded to Hive 1.0 and a process is still running a hive-webhcat-0.13.0-mapr-1508.jar, you need to kill that process.

      kill <pid>
  2. Update the Hive Metastore.
    1. Refer to the README file in the /opt/mapr/hive/hive-<version>/scripts/metastore/upgrade/<metastore_database> directory after upgrading Hive for directions on updating your existing metastore_db schema to work with the new Hive version. 
      When you complete the step to run the schema upgrade scripts, run the following scripts:

      1. For upgrades from Hive 0.13 to 1.0:

        1. upgrade-0.13.0-to-0.14.0.<metastore_database>.sql

        2. upgrade-0.14.0-to-1.1.0.<metastore_database>.sql

      2. For upgrades from Hive 0.13 to 1.2.1:
        1. upgrade-0.13.0-to-0.14.0.<metastore_database>.sql

        2. upgrade-0.14.0-to-1.1.0.<metastore_database>.sql

        3. upgrade-1.1.0-to-1.2.0.<metastore_database>.sql

      3. For upgrades from Hive 1.0 to 1.2.1:
        • upgrade-1.1.0-to-1.2.0.<metastore_database>.sql

      Run the metastore upgrade script from the /opt/mapr/hive/hive-<version>/scripts/metastore/upgrade/<metastore_database> directory. The script sources files from this directory. If you run the script from another location, it will fail. 
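      For example, with a MySQL metastore the scripts are typically applied from within the mysql client (a sketch only; the database name hive, the user, and an upgrade path from Hive 0.13 are assumptions):

      cd /opt/mapr/hive/hive-<version>/scripts/metastore/upgrade/mysql
      mysql -u <hive_db_user> -p
      mysql> use hive;
      mysql> source upgrade-0.13.0-to-0.14.0.mysql.sql;
      mysql> source upgrade-0.14.0-to-1.1.0.mysql.sql;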
    2. After the upgrade, verify that the metastore database update completed successfully. For example, use these diagnostic tests:

      • Run the show tables command in Hive and make sure it returns a complete list of all your Hive tables.
      • Perform simple SELECT operations on Hive tables that existed before the upgrade.
      • Perform filtered SELECT operations on Hive tables that existed before the upgrade.
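      A minimal sketch of these checks from the command line (replace <existing_table> and <column> with a table and column that existed before the upgrade):

      hive -e "show tables;"
      hive -e "select * from <existing_table> limit 5;"
      hive -e "select count(*) from <existing_table> where <column> is not null;"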
  3. Migrate Hive Configuration (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/hive/hive-<version>/conf/)
     
  4. Restart Hive Services

    1. Make a list of nodes on which Hive Metastore and HiveServer2 Services are configured.

    2. Issue the maprcli node services command, specifying the nodes on which the Hive services are configured. 

       maprcli node services -name hivemeta -action restart -nodes <space delimited list of nodes>
       maprcli node services -name hs2 -action restart -nodes <space delimited list of nodes>

HttpFS

Complete the following post-upgrade steps for HttpFS:

  1. Migrate Httpfs Configuration (optional)
    Migrate any custom configuration settings into the new default files. For example, update the following files:
    •  /opt/mapr/httpfs/httpfs-1.0/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/web.xml
    • /opt/mapr/httpfs/httpfs-1.0/share/hadoop/httpfs/tomcat/conf/server.xml

    • /opt/mapr/httpfs/httpfs-1.0/share/hadoop/httpfs/tomcat/conf/tomcat-users.xml

     
  2. Start the HttpFS Service

    sudo -u mapr /opt/mapr/httpfs/httpfs-1.0/sbin/httpfs.sh start

Hue

Complete the following post-upgrade steps for Hue:

  1. For upgrades to Hue 3.8.1, kill the old Hue processes.

    1. Run the following command to check for Hue processes that are running:

      ps aux | grep hue
    2. Kill any processes associated with older Hue versions. For example, if any Hue 3.7 process is running, you need to kill that process.

      kill <pid>
  2. Update the hue.ini (/opt/mapr/hue/hue-<latest version>/desktop/conf/hue.ini) file with the following:

    1. Changes that you made in your existing or backed-up hue.ini file.

    2. If you have Hive 0.13 installed:

      • Change thrift_version to 5 in the beeswax section.

      • Change use_get_log_api to true in the beeswax section.
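      For example, with Hive 0.13 the beeswax section of hue.ini would include entries like the following (layout is illustrative; keep any other beeswax settings you already have):

      [beeswax]
        thrift_version=5
        use_get_log_api=true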
         

  3. If Hue is configured to use the SQLite database, perform the following steps:

    1. If the Hue node runs on Ubuntu, install sqlite3.

      apt-get install sqlite3
    2. Run the following commands:

      cd /opt/mapr/hue/hue-<new_version>/desktop
      sudo sqlite3 desktop.db
      DELETE FROM django_content_type;
  4.  Upload the JSON dump file to the Hue database: 

    For MySQL, PostgreSQL, or Oracle:
    cd /opt/mapr/hue/hue-<new_version>/build/env/bin/
    source /opt/mapr/hue/hue-<new_version>/build/env/bin/activate
    ./hue loaddata dump-hue-<old_version>.json
    For SQLite:
    cd /opt/mapr/hue/hue-<new_version>/desktop
    mv desktop.db desktop.db.old
    sqlite3 desktop.db < ~/dump-hue-<old_version>-sqlite.bak

    For example, run the following commands to upload the Hue 3.8.1 JSON dump file for MySQL into the Hue 3.9.0 installation directory: 

    cd /opt/mapr/hue/hue-3.9.0/build/env/bin/
    source /opt/mapr/hue/hue-3.9.0/build/env/bin/activate 
    ./hue loaddata dump-hue-3.8.1.json


  5. Update the old database schema so that it is compatible with the upgraded version:

    source /opt/mapr/hue/hue-<new_version>/build/env/bin/activate
    /opt/mapr/hue/hue-<new_version>/build/env/bin/hue syncdb --noinput
    /opt/mapr/hue/hue-<new_version>/build/env/bin/hue migrate --merge

    For example, run the following commands to update the database schema so that it is compatible with Hue 3.9.0:

    source /opt/mapr/hue/hue-3.9.0/build/env/bin/activate
    /opt/mapr/hue/hue-3.9.0/build/env/bin/hue syncdb --noinput
    /opt/mapr/hue/hue-3.9.0/build/env/bin/hue migrate --merge
  6. If you are using Hadoop MRv1, complete the following steps to establish communication between Hue and the JobTracker processes:

    1. Remove existing Hue plugins from the MapReduce lib directory:

      rm /opt/mapr/hadoop/hadoop-0.20*/lib/hue-plugins-*.jar
    2. Copy new Hue plugins to the MapReduce lib directory:

      cp /opt/mapr/hue/hue-<version>/desktop/libs/hadoop/java-lib/hue-plugins-*.jar /opt/mapr/hadoop/hadoop-0.20*/lib/

      For example, run the following commands to copy the Hue plugin for Hue 3.9.0:

      cp /opt/mapr/hue/hue-3.9.0/desktop/libs/hadoop/java-lib/hue-plugins-*.jar /opt/mapr/hadoop/hadoop-0.20*/lib/
  7. Restart the Hue service:

    maprcli node services -name hue -action restart -nodes <ip_address>

Mahout

Complete the following step for Mahout:

  • Migrate Mahout Configuration (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/mahout/mahout-<version>/conf/) 

Oozie

Complete the following post-upgrade steps for Oozie:

  1. Migrate Oozie Configuration  (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/oozie/oozie-<version>/conf/). For example, if you use a MySQL database with Oozie, make sure that the oozie-site.xml contains the correct properties for MySQL. 
     

  2. For upgrades from Oozie 4.0.x to Oozie 4.2.0, run the following command to upgrade the database schema:

    # /opt/mapr/oozie/oozie-<version>/bin/ooziedb.sh upgrade -run
  3. If your Oozie installation is configured to use a MySQL or Oracle database and you selected a new Oozie version:
    1. Copy the JDBC driver jar file to the following directory:

      /opt/mapr/oozie/oozie-<oozie version>/libext
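      For example, for a MySQL database (the driver file name and its source location are assumptions; use the JDBC driver that matches your database):

      cp /path/to/mysql-connector-java-<version>.jar /opt/mapr/oozie/oozie-<oozie version>/libext/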
    2. Run the following command to upgrade the database schema:

      # /opt/mapr/oozie/oozie-<version>/bin/ooziedb.sh upgrade -run

      This step is not required if you ran this command in step 2.

       

    3. Execute the oozie-setup.sh script to add the driver jar file to the Oozie WAR file:

      # sudo /opt/mapr/oozie/oozie-<version>/bin/oozie-setup.sh -hadoop <version> /opt/mapr/hadoop/hadoop-<version> prepare-war -extjs /tmp/ext-2.2.zip
  4. Start any Oozie coordinators that you stopped before the upgrade.
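    For example, if the coordinators were suspended rather than killed, they can be resumed with the Oozie CLI (the Oozie URL and job ID are placeholders):

    {OOZIE_HOME}/bin/oozie job -oozie http://<oozie_node>:11000/oozie -resume <coordinator_job_id>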
  5. As of Oozie 4.1.0-1601 and Oozie 4.2.0-1601, if the oozie.service.WorkflowAppService.system.libpath property in oozie-site.xml does not use the default value (/oozie/share/), you must perform the following steps to update the shared libraries:

    1. Based on the cluster MapReduce mode, run one of the following commands to copy the new Oozie shared libraries to MapR-FS: 

      • YARN:

        sudo -u mapr {OOZIE_HOME}/bin/oozie-setup.sh sharelib create -fs maprfs:/// -locallib /opt/mapr/oozie/oozie-<version>/share2

      • Classic:

        sudo -u mapr {OOZIE_HOME}/bin/oozie-setup.sh sharelib create -fs maprfs:/// -locallib /opt/mapr/oozie/oozie-<version>/share1
    2. Run the following command to update the Oozie classpath with the new shared libraries:

      sudo -u mapr {OOZIE_HOME}/bin/oozie admin -sharelibupdate

Pig

Complete the following post-upgrade step for Pig:

  • Migrate Custom Configurations (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/pig/pig-<version>/conf/).

Sentry

Complete the following post-upgrade step for Sentry:

  • Migrate Custom Configurations (optional)
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/sentry/sentry-<version>/conf/).

Spark

Follow the post-upgrade steps for the Spark mode that you use.

Spark Standalone Mode

Complete the following post-upgrade steps for Spark Standalone:

  1. Migrate Custom Configurations (optional) 
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/spark/spark-<version>/conf).
     
  2. If Spark SQL is configured to work with Hive, copy the hive-site.xml file into the conf directory (/opt/mapr/spark/spark-<version>/conf).
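    For example, assuming hive-site.xml lives in the Hive conf directory referenced earlier in this document:

    cp /opt/mapr/hive/hive-<version>/conf/hive-site.xml /opt/mapr/spark/spark-<version>/conf/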

  3. Run the following commands to configure the slaves:
    1. Copy the /opt/mapr/spark/spark-<version>/conf/slaves.template into /opt/mapr/spark/spark-<version>/conf/slaves
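      For example:

      cp /opt/mapr/spark/spark-<version>/conf/slaves.template /opt/mapr/spark/spark-<version>/conf/slaves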
    2. Add the hostnames of the Spark worker nodes. Put one worker node hostname on each line. For example:

      localhost
      worker-node-1
      worker-node-2
  4. Restart all of the Spark slaves as the mapr user:

    /opt/mapr/spark/spark-<version>/sbin/start-slaves.sh spark://<comma-separated list of spark master hostname: port> 

Spark on YARN

Complete the following post-upgrade steps for Spark on YARN:

  1. Migrate Custom Configurations (optional) 
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/spark/spark-<version>/conf). Also, if you previously configured Spark to use the Spark JAR file from a location on MapR-FS, you need to copy the latest JAR file to MapR-FS and reconfigure the path to the JAR file in the spark-defaults.conf file. See Configure Spark JAR Location.
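    For example, a sketch of copying the assembly JAR to MapR-FS and pointing spark-defaults.conf at it (the /apps/spark destination and the spark.yarn.jar property for Spark 1.x are assumptions; see Configure Spark JAR Location for the documented procedure):

    hadoop fs -mkdir -p /apps/spark
    hadoop fs -put /opt/mapr/spark/spark-<version>/lib/spark-assembly-*.jar /apps/spark/
    # then set in /opt/mapr/spark/spark-<version>/conf/spark-defaults.conf:
    # spark.yarn.jar maprfs:///apps/spark/spark-assembly-<version>.jar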

  2. If Spark SQL is configured to work with Hive, copy the hive-site.xml file into the conf directory (/opt/mapr/spark/spark-<version>/conf).

  3. Start the spark-historyserver services (if installed):

    maprcli node services -nodes <node-ip> -name spark-historyserver -action start

Sqoop1

Complete the following post-upgrade step for Sqoop1:

  • Migrate Custom Configurations
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/sqoop/sqoop-<version>/conf/)

Sqoop2

Complete the following post-upgrade steps for Sqoop2:

  1. If Sqoop was upgraded to a newer component version, copy the repository backup into the sqoop directory (/opt/mapr/sqoop/sqoop-<version>) for the new version.
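    For example (a sketch only; <backup_dir> is wherever you saved the repository before the upgrade, and the repository directory name assumes the default Derby-based setup):

    cp -r <backup_dir>/repository /opt/mapr/sqoop/sqoop-<version>/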
     

  2. On each Sqoop2 server node, run the upgrade utility.

    /opt/mapr/sqoop/sqoop-<version>/bin/sqoop-tool upgrade

    When the upgrade utility completes successfully, the following message displays: Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
     

  3. If Sqoop is not running, start the Sqoop Server.

    maprcli node services -name sqoop2 -action start -nodes <space delimited list of Sqoop2 server nodes> 

Storm

Complete the following post-upgrade steps for Storm:

  1. Migrate Custom Configurations
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/storm/storm-<version>/conf/).

  2. Restart the Storm services

    1. Make a list of nodes on which Storm is configured (including the nimbus daemon and the supervisor daemons).
    2. Issue the maprcli node services command, specifying the nodes on which Storm is configured.

      maprcli node services -name nimbus -action restart -nodes <space delimited list of nodes>
      maprcli node services -name storm-ui -action restart -nodes <space delimited list of nodes>
      maprcli node services -name supervisor -action restart -nodes <space delimited list of nodes>
  3. Resubmit the topology

    storm jar <jar-path> <main class>
