Post-Upgrade Steps for Spark

After you upgrade Spark with the MapR Installer, perform the following steps.

Post-Upgrade Steps for Spark Standalone Mode

  1. Migrate Custom Configurations (optional).
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/spark/spark-<version>/conf).
  2. If Spark SQL is configured to work with Hive, copy the hive-site.xml file into the conf directory (/opt/mapr/spark/spark-<version>/conf).
  3. Perform the following steps to configure the slaves:
    1. Copy the /opt/mapr/spark/spark-<version>/conf/slaves.template file to /opt/mapr/spark/spark-<version>/conf/slaves.
    2. Add the hostnames of the Spark worker nodes. Put one worker node hostname on each line.
      For example:
      localhost
      worker-node-1
      worker-node-2
  4. Restart all the Spark slaves as the mapr user:
    /opt/mapr/spark/spark-<version>/sbin/start-slaves.sh spark://<comma-separated list of Spark master hostname:port>
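Steps 2 through 4 above can be sketched as a single shell session. The scratch directories, hostnames, and the master URL below are stand-ins for illustration only; on a real node, SPARK_HOME would be /opt/mapr/spark/spark-<version> and HIVE_CONF would be the Hive conf directory.

```shell
#!/bin/sh
set -e
# Scratch dirs stand in for the real install paths (both hypothetical here).
SPARK_HOME="$(mktemp -d)"
HIVE_CONF="$(mktemp -d)"
mkdir -p "$SPARK_HOME/conf"
printf '<configuration/>\n' > "$HIVE_CONF/hive-site.xml"           # stand-in file
printf '# one worker hostname per line\n' > "$SPARK_HOME/conf/slaves.template"

# Step 2: copy hive-site.xml into the Spark conf directory.
cp "$HIVE_CONF/hive-site.xml" "$SPARK_HOME/conf/"

# Step 3: copy the slaves template into place, then append one worker
# hostname per line (the hostnames here are illustrative).
cp "$SPARK_HOME/conf/slaves.template" "$SPARK_HOME/conf/slaves"
cat >> "$SPARK_HOME/conf/slaves" <<'EOF'
localhost
worker-node-1
worker-node-2
EOF

# Step 4 (on a real cluster only, run as the mapr user):
# "$SPARK_HOME/sbin/start-slaves.sh" spark://master-host:7077
```

On a cluster node you would skip the scratch-directory setup and run only the two cp commands, the slaves edits, and start-slaves.sh.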

Post-Upgrade Steps for Spark on YARN

  1. Migrate Custom Configurations (optional).
    Migrate any custom configuration settings into the new default files in the conf directory (/opt/mapr/spark/spark-<version>/conf). Also, if you previously configured Spark to use the Spark JAR file from a location on the MapR-FS, copy the latest JAR file to the MapR-FS and reconfigure the path to the JAR file in the spark-defaults.conf file. See Configure Spark JAR Location.
  2. If Spark SQL is configured to work with Hive, copy the hive-site.xml file into the conf directory (/opt/mapr/spark/spark-<version>/conf).
  3. Start the spark-historyserver service (if installed):
    maprcli node services -nodes <node-ip> -name spark-historyserver -action start
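The YARN steps above can be sketched the same way. The spark.yarn.jar property name, the maprfs:/// path, and the node IPs below are illustrative assumptions, not values from this page; the maprcli command is echoed as a dry run rather than executed.

```shell
#!/bin/sh
set -e
# Scratch file stands in for /opt/mapr/spark/spark-<version>/conf/spark-defaults.conf.
SPARK_DEFAULTS="$(mktemp)"

# Step 1: after copying the new Spark JAR to MapR-FS, point Spark at it.
# The property name and the maprfs path are illustrative assumptions.
echo 'spark.yarn.jar maprfs:///apps/spark/spark-assembly.jar' >> "$SPARK_DEFAULTS"

# Step 3: start the history server. NODES is a hypothetical node list;
# on a real cluster, drop the echo to run the command.
NODES="10.10.100.1,10.10.100.2"
echo "maprcli node services -nodes ${NODES} -name spark-historyserver -action start"
```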