Upgrade Spark Standalone

If you installed Spark with the MapR Installer, use the latest version of the MapR Installer to perform the upgrade. 

The following instructions explain how to upgrade an existing installation of Spark 1.x. Spark will be installed in a new subdirectory under /opt/mapr/spark.

  1. Update repositories.
    MapR's rpm and deb repositories always contain the Spark version recommended for the release of the MapR core associated with that repository. You can connect to an internet repository or prepare a local repository with any version of Spark you need. You can also manually download packages. 

    If you plan to install from a repository, complete the following steps on each node where Spark is installed:

    1. Verify that the repository is configured correctly. See Preparing Packages and Repositories for information about setting up your ecosystem repository.
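
      For reference, a yum repository entry for the MapR ecosystem typically looks like the following. The file name and baseurl shown here are examples only; use the repository URL that matches your MapR release, as described in Preparing Packages and Repositories.

      # /etc/yum.repos.d/maprtech.repo (example)
      [maprecosystem]
      name=MapR Ecosystem Packages
      baseurl=http://package.mapr.com/releases/ecosystem-5.x/redhat
      enabled=1
      gpgcheck=0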

    2. Update the repository cache.

      On RedHat and CentOS...

      yum clean all

      On Ubuntu...

      apt-get update

  2. Back up any custom configuration files in your Spark environment. These cannot be upgraded automatically. For example, if Spark SQL is configured to work with Hive, copy the /opt/mapr/spark/spark-<version>/conf/hive-site.xml file to a backup directory.
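
    For example, assuming a backup directory of /tmp/spark-conf-backup (the path is illustrative; any location outside /opt/mapr/spark works):

    mkdir -p /tmp/spark-conf-backup
    cp /opt/mapr/spark/spark-<version>/conf/hive-site.xml /tmp/spark-conf-backup/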
  3. Shut down the spark-master service and, if it is running, the spark-historyserver service:

    maprcli node services -nodes <node-ip> -name spark-master -action stop
    maprcli node services -nodes <node-ip> -name spark-historyserver -action stop
  4. As the mapr user, stop the slaves:

    /opt/mapr/spark/spark-<version>/sbin/stop-slaves.sh
  5. Install the Spark packages.

    On RedHat / CentOS...
    yum update mapr-spark mapr-spark-master mapr-spark-historyserver
    On Ubuntu...
    apt-get install mapr-spark mapr-spark-master mapr-spark-historyserver
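
    To confirm which Spark packages and versions are now installed, you can query the package manager:

    On RedHat / CentOS...
    rpm -qa | grep mapr-spark
    On Ubuntu...
    dpkg -l | grep mapr-spark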
  6. Configure the slaves:
    1. Copy /opt/mapr/spark/spark-<version>/conf/slaves.template to /opt/mapr/spark/spark-<version>/conf/slaves.
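
      For example, with the same <version> placeholder used throughout these steps:

      cp /opt/mapr/spark/spark-<version>/conf/slaves.template /opt/mapr/spark/spark-<version>/conf/slaves
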
    2. Add the hostnames of the Spark worker nodes, one per line. For example:

      localhost
      worker-node-1
      worker-node-2
  7. Run configure.sh:

    /opt/mapr/server/configure.sh -R
  8. Migrate custom configurations (optional).

    Merge any custom settings from the files you backed up in step 2 into the new default files in the conf directory (/opt/mapr/spark/spark-<version>/conf).
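
    For example, to restore the hive-site.xml that was backed up in step 2 (using the illustrative backup path from that step):

    cp /tmp/spark-conf-backup/hive-site.xml /opt/mapr/spark/spark-<version>/conf/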

  9. Start the spark-master service and, if installed, the spark-historyserver service:

    maprcli node services -nodes <node-ip> -name spark-master -action start
    maprcli node services -nodes <node-ip> -name spark-historyserver -action start
  10. Restart all the spark slaves as the mapr user:

    /opt/mapr/spark/spark-<version>/sbin/start-slaves.sh
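
    To verify that the workers restarted, you can check each slave node for a running Worker process (jps ships with the JDK) or browse the Spark Master web UI, which listens on port 8080 by default:

    jps | grep Worker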