MapR 5.0 Documentation : Offline MapR Core Upgrade Using Manual Steps

This topic describes how to upgrade MapR core to 5.0 using manual steps while the cluster is offline.

Before you Upgrade

Before you upgrade the packages, perform the following steps:

1. Verify JDK Version Support

If you are upgrading from 3.0 to 5.0, upgrade the JDK version to version 7 or 8 before upgrading the packages. See the JDK Support Matrix.

2. Prepare Packages and Repositories

When upgrading you can install packages from:

  • MapR’s Internet repository
  • A local repository
  • Individual package files

Prepare the repositories or package files on every node, according to your chosen upgrade method. If keyless SSH is set up for the root user, you can prepare the repositories or package files on a single node instead.
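
For example, if keyless SSH is set up for the root user, a short loop can push a prepared repository file to every node (a hypothetical sketch; nodes.txt stands in for your own list of node hostnames):

for host in $(cat nodes.txt); do
    scp /etc/yum.repos.d/maprtech.repo root@${host}:/etc/yum.repos.d/
done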

When setting up a repository for the new version, leave in place the repository for the existing version because you might still need it as you prepare to upgrade.

2a. Configure Repositories or Download Packages

Using MapR's Internet Repository

The MapR repository on the internet provides all of the packages required to install a MapR cluster using native tools such as yum on Red Hat or CentOS, apt-get on Ubuntu, or zypper on SUSE. Installing from MapR's repository is generally the easiest installation method, but requires the greatest amount of bandwidth. With this method, each node is connected to the internet to download the required packages.

To set up repositories, complete the steps listed for your Linux distribution: 

Adding the MapR repository on Red Hat or CentOS
  1. Change to the root user or use sudo.
  2. Create a text file called maprtech.repo in the /etc/yum.repos.d/ directory with the following content, replacing <version> with the version of MapR that you want to install:

    [maprtech]
    name=MapR Technologies
    baseurl=http://package.mapr.com/releases/<version>/redhat/
    enabled=1
    gpgcheck=0
    protect=1
    
    [maprecosystem]
    name=MapR Technologies
    baseurl=http://package.mapr.com/releases/ecosystem-5.x/redhat
    enabled=1
    gpgcheck=0
    protect=1
    

    (See the Release Notes for the correct paths for all past releases.)

  3. If your connection to the Internet is through a proxy server, you must set the http_proxy environment variable before installation:

    http_proxy=http://<host>:<port>
    export http_proxy
    

    You can also set the value for the http_proxy environment variable by adding the following section to the /etc/yum.conf file:

    proxy=http://<host>:<port>
    proxy_username=<username>
    proxy_password=<password>
    

The EPEL (Extra Packages for Enterprise Linux) repository contains dependencies for the mapr-metrics package on Red Hat/CentOS. If your Red Hat/CentOS cluster does not use the mapr-metrics service, you can skip EPEL configuration.

To enable the EPEL repository on CentOS or Red Hat 6.x:

  1. Download the EPEL repository:

    wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
  2. Install the EPEL repository:

    rpm -Uvh epel-release-6*.rpm

To enable the EPEL repository on CentOS or Red Hat 7.x:

  1. Download the EPEL repository:

    wget http://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
  2. Install the EPEL repository:

    rpm -Uvh epel-release-7*.rpm
Adding the MapR repository on SUSE
  1. Change to the root user or use sudo.
  2. Use the following command to add the repository for MapR packages, replacing <version> with the version of MapR that you want to install:

    zypper ar http://package.mapr.com/releases/<version>/suse/ maprtech
    
  3. Use the following command to add the repository for MapR ecosystem packages:

    zypper ar http://package.mapr.com/releases/ecosystem-5.x/suse/ maprecosystem
    

    (See the MapR Release Notes for the correct paths for all past releases.)

  4. If your connection to the Internet is through a proxy server, you must set the http_proxy environment variable before installation:

    http_proxy=http://<host>:<port>
    export http_proxy
    
  5. Update the system package index by running the following command:

    zypper refresh
  6. MapR packages require a compatibility package in order to install and run on SUSE. Execute the following command to install the SUSE compatibility package:

    zypper install mapr-compat-suse
Adding the MapR repository on Ubuntu
  1. Change to the root user or use sudo.
  2. Add the following lines to /etc/apt/sources.list, replacing <version> with the version of MapR that you want to install:

    deb http://package.mapr.com/releases/<version>/ubuntu/ mapr optional
    deb http://package.mapr.com/releases/ecosystem-5.x/ubuntu binary/
    

    (See the MapR Release Notes for the correct paths for all past releases.)

  3. Update the package indexes.

    apt-get update
    
  4. If your connection to the Internet is through a proxy server, add the following lines to /etc/apt/apt.conf:

    Acquire {
      Retries "0";
      HTTP {
        Proxy "http://<user>:<password>@<host>:<port>";
      };
    };
    
Using a Local Repository

You can set up a local repository on each node to provide access to installation packages. With this method, nodes do not require internet connectivity. The package manager on each node installs from packages in the local repository. To set up a local repository, nodes need access to a running web server to download the packages.
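
For example, on Red Hat or CentOS, a stock Apache instance that serves /var/www/html is sufficient (a minimal sketch; adjust for your distribution and web server):

# yum install -y httpd
# service httpd start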

The following instructions create a single repository that includes both MapR components and the Hadoop ecosystem components:

Creating a local repository on Red Hat or CentOS
  1. Log in as root on the node or use sudo.
  2. Create the following directory if it does not exist: /var/www/html/yum/base
  3. On a computer that is connected to the internet, download the following files, substituting the appropriate <version> and <datestamp>:

    http://package.mapr.com/releases/v<version>/redhat/mapr-v<version>GA.rpm.tgz
    http://package.mapr.com/releases/ecosystem-5.x/redhat/mapr-ecosystem-5.x-<datestamp>.rpm.tgz
    

    (See MapR Repositories and Package Archives for the correct paths for all past releases.)

  4. Copy the files to /var/www/html/yum/base on the node, and extract them there.

    tar -xvzf mapr-v<version>GA.rpm.tgz
    tar -xvzf mapr-ecosystem-5.x-<datestamp>.rpm.tgz
    
  5. Create the base repository headers:

    createrepo /var/www/html/yum/base
    

    When finished, verify the content of the new /var/www/html/yum/base/repodata directory: filelists.xml.gz, other.xml.gz, primary.xml.gz, repomd.xml

To add the repository on each node

Create a text file called maprtech.repo in the /etc/yum.repos.d directory with the following content:

[maprtech]
name=MapR Technologies, Inc.
baseurl=http://<host>/yum/base
enabled=1
gpgcheck=0
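
To confirm that the node can see the new local repository, you can refresh the cache and list the configured repositories (an optional, hedged check):

# yum clean all
# yum repolist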

The EPEL (Extra Packages for Enterprise Linux) repository contains dependencies for the mapr-metrics package on Red Hat/CentOS. If your Red Hat/CentOS cluster does not use the mapr-metrics service, you can skip EPEL configuration.

To enable the EPEL repository on CentOS or Red Hat 6.x:

  1. On a computer that is connected to the internet, download the EPEL repository:

    wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
  2. Install the EPEL repository:

    rpm -Uvh epel-release-6*.rpm

To enable the EPEL repository on CentOS or Red Hat 7.x:

  1. Download the EPEL repository:

    wget http://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
  2. Install the EPEL repository:

    rpm -Uvh epel-release-7*.rpm
Creating a local repository on SUSE
  1. Log in as root on the node or use sudo.
  2. Create the following directory if it does not exist: /var/www/html/zypper/base
  3. On a computer that is connected to the internet, download the following files, substituting the appropriate <version> and <datestamp>:

    http://package.mapr.com/releases/v<version>/suse/mapr-v<version>GA.rpm.tgz
    http://package.mapr.com/releases/ecosystem-5.x/suse/mapr-ecosystem-5.x-<datestamp>.rpm.tgz
    

    (See MapR Repositories and Package Archives for the correct paths for all past releases.)

  4. Copy the files to /var/www/html/zypper/base on the node, and extract them there.

    tar -xvzf mapr-v<version>GA.rpm.tgz
    tar -xvzf mapr-ecosystem-5.x-<datestamp>.rpm.tgz
    
  5. Create the base repository headers:

    createrepo /var/www/html/zypper/base
    

    When finished, verify the content of the new /var/www/html/zypper/base/repodata directory: filelists.xml.gz, other.xml.gz, primary.xml.gz, repomd.xml

To add the repository on each node

Issue the following command to add the repository for MapR packages and the MapR ecosystem packages, substituting the appropriate <host>:

zypper ar http://<host>/zypper/base/ maprtech
Creating a local repository on Ubuntu

To create a local repository

  1. Log in as root on the machine where you will set up the repository.
  2. Change to the directory /root and create the following directories within it:

    ~/mapr
    .
    ├── dists
    │   └── binary
    │       └── optional
    │           └── binary-amd64
    └── mapr
    
  3. On a computer that is connected to the Internet, download the following files, substituting the appropriate <version> and <datestamp>.

    http://package.mapr.com/releases/v<version>/ubuntu/mapr-v<version>GA.deb.tgz
    http://package.mapr.com/releases/ecosystem-5.x/ubuntu/mapr-ecosystem-5.x-<datestamp>.deb.tgz
    

    (See MapR Repositories and Package Archives for the correct paths for all past releases.)

  4. Copy the files to /root/mapr/mapr on the node, and extract them there.

    tar -xvzf mapr-v<version>GA.deb.tgz
    tar -xvzf mapr-ecosystem-5.x-<datestamp>.deb.tgz
    
  5. Navigate to the /root/mapr/ directory.
  6. Use dpkg-scanpackages to create Packages.gz in the binary-amd64 directory:

    dpkg-scanpackages . /dev/null | gzip -9c > ./dists/binary/optional/binary-amd64/Packages.gz
  7. Move the entire /root/mapr directory to the default directory served by the HTTP server (for example, /var/www) and make sure the HTTP server is running.
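
    To confirm that the repository is being served, you can fetch the package index from another node (a hedged check, assuming the repository directory now lives under /var/www):

    # curl -s http://<host>/mapr/dists/binary/optional/binary-amd64/Packages.gz | zcat | head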

To add the repository on each node

  1. Add the following line to /etc/apt/sources.list on each node, replacing <host> with the IP address or hostname of the node where you created the repository:

    deb http://<host>/mapr binary optional
    
  2. On each node, update the package indexes (as root or with sudo).

    apt-get update

    After performing these steps, you can use apt-get to install MapR software and Hadoop ecosystem components on each node from the local repository.

Using a Local Path with rpm or deb Package Files

You can download package files, store them locally, and then install MapR from the files. This option is useful for clusters that are not connected to the internet.

This method requires that you pre-install the MapR package dependencies on each node in order for MapR installation to succeed. See Packages and Dependencies for MapR 5.0.0 for a list of the dependency packages required for the MapR services that you are installing. Manually download the packages and install them.

To install MapR from downloaded package files, complete the following steps:

  1. Using a machine connected to the internet, download the tarballs for the MapR core components and the Hadoop ecosystem components, substituting the appropriate <platform>, <version>, and <datestamp> (the download URLs follow the same pattern as in the local repository instructions above).
  2. Extract the tarball to a local directory, either on each node or on a local network accessible by all nodes.

    tar -xvzf mapr-v<version>GA.rpm.tgz
    tar -xvzf mapr-ecosystem-5.x-<datestamp>.rpm.tgz
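
After extraction, install or upgrade the packages for each node's planned services directly from the package files. The following is a hedged sketch for Red Hat/CentOS; the package list is illustrative only, and dependencies must already be pre-installed as noted above:

# rpm -Uvh mapr-core-*.rpm mapr-fileserver-*.rpm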

2b. Update Repository Cache

If you plan to install from a repository, update the repository cache on all nodes.

On Red Hat and CentOS
# yum clean all
On Ubuntu
# apt-get update
On SUSE
# zypper refresh

Upgrading MapR Core

When you perform an offline upgrade from 3.x, 4.0.x, or 4.1 to 5.0, the package upgrade process follows the sequence below.

Perform these steps on all nodes in the cluster as the root user or use sudo.

For larger clusters, these steps are commonly performed on all nodes in parallel using scripts and/or remote management tools.
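
For example, one common pattern is to stage an upgrade script on each node and launch it everywhere at once (a hypothetical sketch; nodes.txt and upgrade-node.sh are placeholders for your own node list and script):

for host in $(cat nodes.txt); do
    ssh root@${host} 'bash /tmp/upgrade-node.sh' &
done
wait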

1. Halt Jobs and Applications

As defined by your upgrade plan, halt activity on the cluster in the following sequence before you begin upgrading packages:

  1. Notify stakeholders.
  2. Stop accepting new jobs and applications.
  3. Terminate any running jobs and applications.
    The following commands can be used to list and terminate MapReduce jobs:

    # mapred job -list
    # mapred job -kill <job-id>
    # mapred job -kill-task <task-id>

    The following commands can be used to list and terminate YARN applications:

    # yarn application -list 
    # yarn application -kill <ApplicationId>

    You might also need specific commands to terminate custom applications.

At this point the cluster is ready for maintenance but still operational. The goal is to perform the upgrade and get back to normal operation as safely and quickly as possible.

2. Stop Cluster Services

The following sequence will stop cluster services gracefully. When you are done, the cluster will be offline. The maprcli commands used in this section can be executed on any node in the cluster.

2a. Disconnect NFS Mounts

Unmount the MapR NFS share from all clients connected to it, including other nodes in the cluster. This allows all processes accessing the cluster via NFS to disconnect gracefully. Assuming the cluster is mounted at /mapr:

# umount /mapr
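
If the unmount fails because the share is busy, you can identify the processes that still hold the mount open before retrying (a hedged example using standard Linux tools):

# fuser -m /mapr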


2b. Stop Ecosystem Component Services

Stop ecosystem component services on each node in the cluster.

  1. Run the maprcli node list command to display the services on each node in the cluster:

    # maprcli node list -columns hostname,csvc
  2. Stop ecosystem component services.
    For example, you can use the following command to stop Hue, Oozie, and Hive:

    # maprcli node services -multi '[{ "name": "hue", "action": "stop"}, { "name": "oozie", "action": "stop"}, { "name": "hs2", "action": "stop"}]' -nodes <hostnames>

2c. Stop MapR Core Services

Stop MapR core services in the following sequence.

  1. Note where CLDB and ZooKeeper services are installed.

    # maprcli node list -columns hostname,csvc
    centos55 tasktracker,hbmaster,cldb,fileserver,hoststats 10.10.82.55
    centos56 tasktracker,hbregionserver,cldb,fileserver,hoststats 10.10.82.56
    ...more nodes...
    centos98 fileserver,zookeeper 10.10.82.98
    centos99 fileserver,webserver,zookeeper 10.10.82.99
    
    
  2. Stop Warden on all nodes with CLDB installed:

    # service mapr-warden stop
    
  3. Stop Warden on all remaining nodes:

    # service mapr-warden stop
    
  4. Stop ZooKeeper on all nodes where it is installed:

    # service mapr-zookeeper stop
    
  5. Verify that MapR services are not running, for example with the checks shown below.
    At this point, maprcli commands will not work and the browser-based MapR Control System will be unavailable.
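
    For example, you can confirm on each node that Warden and ZooKeeper are stopped (exact output varies by distribution):

    # service mapr-warden status
    # service mapr-zookeeper status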

3. Upgrade to a Supported JDK Version

As of version 4.1, JDK 7 or 8 is required. See the JDK Support Matrix.
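
For example, verify the active JDK on each node:

# java -version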

4. Upgrade Packages and Configuration Files

Perform the following steps to upgrade the MapR core packages on every node.

  1. Install the following MapR package key: http://package.mapr.com/releases/pub/maprgpg.key
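
    For example, you can import the key with your package manager's standard mechanism (a hedged example):

    # rpm --import http://package.mapr.com/releases/pub/maprgpg.key

    On Ubuntu:

    # wget -O - http://package.mapr.com/releases/pub/maprgpg.key | apt-key add -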

  2. Upgrade the following MapR core and Hadoop common packages on all nodes where they exist:

    Note: If you are upgrading from 3.x, the mapr-core-internal, mapr-hadoop-core, mapr-mapreduce1, and mapr-mapreduce2 packages do not exist in 3.x; they are installed for the first time during this upgrade.

    • mapr-cldb
    • mapr-core
    • mapr-core-internal
    • mapr-fileserver
    • mapr-hadoop-core
    • mapr-historyserver
    • mapr-jobtracker
    • mapr-mapreduce1
    • mapr-mapreduce2
    • mapr-metrics
    • mapr-nfs
    • mapr-nodemanager
    • mapr-resourcemanager
    • mapr-tasktracker
    • mapr-webserver
    • mapr-zookeeper
    • mapr-zk-internal

      Do not use a wildcard such as "mapr-*" to upgrade all MapR packages, which could erroneously include Hadoop ecosystem components such as mapr-hive and mapr-pig.

    • On Red Hat / CentOS:
      yum update mapr-cldb mapr-core mapr-core-internal mapr-fileserver mapr-hadoop-core mapr-historyserver mapr-jobtracker mapr-mapreduce1 mapr-mapreduce2 mapr-metrics mapr-nfs mapr-nodemanager mapr-resourcemanager mapr-tasktracker mapr-webserver mapr-zookeeper mapr-zk-internal
    • On Ubuntu, run the following commands. The first command returns a list of the MapR packages installed on the node; remove any ecosystem packages (such as mapr-hive) from the list, then run apt-get install on the remaining packages:
      dpkg --list | grep "mapr" | grep -P "^ii" | awk '{ print $2}' | tr "\n" " "
      apt-get install <package-list>
    • On SUSE:
      zypper update mapr-cldb mapr-compat-suse mapr-core mapr-core-internal mapr-fileserver mapr-hadoop-core mapr-historyserver mapr-jobtracker mapr-mapreduce1 mapr-mapreduce2 mapr-metrics mapr-nfs mapr-nodemanager mapr-resourcemanager mapr-tasktracker mapr-webserver mapr-zookeeper mapr-zk-internal

      The command above only upgrades the packages that are installed on the node. On Red Hat / CentOS or SUSE, the command may issue warnings that certain packages are not installed; however, the packages on the node will be upgraded correctly and no additional packages will be installed.

  3. Verify that packages were installed successfully on all nodes. Confirm that there were no errors during installation, and check that /opt/mapr/MapRBuildVersion contains the expected value.
    Example:

    # cat /opt/mapr/MapRBuildVersion
    5.0.0.xxxxx.GA

  4. Manually update configuration files and license text files:

    • On all nodes, manually merge new configuration settings from the /opt/mapr/conf.new/warden.conf file into the /opt/mapr/conf/warden.conf file. (A diff example follows these steps.)

    • On all nodes, manually merge new configuration settings from the files in the /opt/mapr/conf/conf.d.new/ directory to the files in the /opt/mapr/conf/conf.d/ directory.

      On CLDB nodes where MapR licenses are enforced, the BaseLicense and BaseLicenseNfsClient licenses are automatically picked up after upgrading.


      When you upgrade Hadoop common, a new directory is created for the new Hadoop common version, and the configuration files in the existing /opt/mapr/hadoop/hadoop-2.x.x directory are automatically copied into the active directory associated with the new hadoop 2.x.x directory. For example, when you upgrade from 4.0.1 to 5.0, configuration files that were in the hadoop-2.4.1 directory are copied into the hadoop-2.7.0 directory.
  5. Correct the value of mapred.tasktracker.ephemeral.tasks.ulimit in the mapred-site.xml file.

    1. Open the mapred-site.xml file, which is located in the /opt/mapr/hadoop/hadoop-0.20.2/conf directory.
    2. Locate the following entry:

       <property>
         <name>mapred.tasktracker.ephemeral.tasks.ulimit</name>
         <value>4294967296></value>
         <description>Ulimit (bytes) on all tasks scheduled on an ephemeral slot MapRConf</description>
       </property>

    3. Remove the stray > after the numeric value, if it exists.
      If you do not remove this character and the property is not commented out, the TaskTracker service may fail to start.
    4. Save any changes to the file.
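
To review what changed before merging the configuration files in step 4, you can diff each old file against its new counterpart (a hedged example):

# diff /opt/mapr/conf/warden.conf /opt/mapr/conf.new/warden.conf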

Be sure to upgrade any existing ecosystem packages to versions compatible with the upgraded MapR core before configuring and restarting cluster services. See the Ecosystem Support Matrix for more information.

5. Restart Cluster Services

After you have upgraded packages on all nodes, perform the following sequence on all nodes to restart the cluster.

5a. Restart MapR Core Services

  1. Run the configure.sh script with the -R option.

    # /opt/mapr/server/configure.sh -R

    The -R option configures the node without changing the list of services that will run on the node. Add or remove services after you upgrade the cluster.

  2. If ZooKeeper is installed on the node, start it:

    # service mapr-zookeeper start
    
  3. Start Warden:

    # service mapr-warden start
    

At this point, MapR core services are running on all nodes.

5b. Run Simple Health Check

Run simple health checks that target only the file system and MapReduce services. Address any issues or alerts that might come up at this point.
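
For example (a hedged illustration; any lightweight file system and service checks you normally use will do):

# hadoop fs -ls /
# maprcli node list -columns hostname,svc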

5c. Set the New Cluster Version

After restarting MapR services on all nodes, issue the following command on any node in the cluster to update and verify the configured version. The version of the installed MapR software is stored in the file /opt/mapr/MapRBuildVersion.

# maprcli config save -values {mapr.targetversion:"`cat /opt/mapr/MapRBuildVersion`"}

You can verify that the command worked, as shown in the example below.

# maprcli config load -keys mapr.targetversion
mapr.targetversion
5.0.0.xxxxx.GA

5d. Restart Compatible Ecosystem Components

Restart ecosystem component services that are compatible with the upgraded version of MapR. See the Interoperability Matrix for details.

For example, you can use the following command to start Hue, Oozie, and Hive:

# maprcli node services -multi '[{ "name": "hue", "action": "start"}, { "name": "oozie", "action": "start"}, { "name": "hs2", "action": "start"}]' -nodes <hostnames>

5e. Remount NFS

Remount the MapR NFS share. The following example assumes that the cluster is mounted at /mapr:

# mount -o hard,nolock <hostname>:/mapr /mapr

6. Verify Success on Cluster Nodes

Below are some simple checks to confirm that the packages have upgraded successfully:

  • All expected nodes show up in a cluster node listing, and the expected services are configured on each node.
    For example:

    # maprcli node list -columns hostname,csvc
    hostname configuredservice ip
    centos55 tasktracker,hbmaster,cldb,fileserver,hoststats 10.10.82.55
    centos56 tasktracker,hbregionserver,cldb,fileserver,hoststats 10.10.82.56
    centos57 fileserver,tasktracker,hbregionserver,hoststats,jobtracker 10.10.82.57
    centos58 fileserver,tasktracker,hbregionserver,webserver,nfs,hoststats,jobtracker 10.10.82.58
    ...more nodes...
    
  • A master CLDB is active, and all nodes return the same result.
    For example:

    # maprcli node cldbmaster
    cldbmaster
    ServerID: 8851109109619685455 HostName: centos56
    
  • Only one ZooKeeper service claims to be the ZooKeeper leader, and all other ZooKeepers are followers.
    For example:

    # service mapr-zookeeper qstatus
    JMX enabled by default
    Using config: /opt/mapr/zookeeper/zookeeper-3.4.5/conf/zoo.cfg
    Mode: follower
    

At this point, MapR packages have been upgraded on all nodes.

7. Run Non-Trivial Health Checks

At this point, you can run non-trivial health checks, such as performance benchmarks relevant to the cluster's typical workload or a suite of common jobs. It is a good idea to run these checks while the cluster is idle.
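
For example, a small standard example job can serve as a smoke test (a hedged example; the examples jar name and path vary with the Hadoop version installed under /opt/mapr/hadoop):

# hadoop jar /opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 4 1000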

After you Upgrade from 3.x

Consider the following items after you upgrade from 3.x to 5.0:

Prepare Cluster to Run MapReduce v1 Jobs

After the upgrade, the cluster runs the Hadoop 2.x architecture and starts up with MapReduce v2 (YARN) as the default operating mode. Before you run MapReduce v1 jobs on the cluster, you may need to recompile them due to API changes, and you may want to consider changing the default MapReduce operating mode.

  • Determine whether you must recompile MapReduce v1 jobs. See Recompiling MapReduce V1 Applications for information about when a recompile is required.
  • MapReduce v1 jobs will not run unless you change the default MapReduce mode or submit them with the appropriate command. If you plan to run only MapReduce v1 jobs on the cluster, consider changing the default MapReduce mode to classic:

    maprcli cluster mapreduce set -mode classic

For more information about the MapReduce mode and how it affects job submission, see Managing the MapReduce Mode.

Prepare Cluster to Run YARN Applications (optional)

YARN services are required to run YARN applications, such as MapReduce v2 and other applications that can run on YARN.

If you want to run YARN applications on the cluster, complete the following steps:

  1. Install YARN roles such as ResourceManager, NodeManager, and the HistoryServer on cluster nodes. See Adding Roles to a Node and Planning the Cluster.
    You may want to install ResourceManager on existing JobTracker nodes and NodeManager on existing TaskTracker nodes.  
  2. Run configure.sh to re-configure each cluster and client node.
    For example, the following configure.sh syntax can be used on cluster nodes to configure three ResourceManager nodes (one active and two standby) and one HistoryServer node:

    /opt/mapr/server/configure.sh -C <CLDB node list> -Z <ZK node list> -RM <hostname1,hostname2,hostname3> -HS <hostname1> [additional parameters]

    You may choose a different syntax based on your requirements. For example, on a 4.1 or later cluster, you do not need to specify the -RM option if you want to use zero configuration failover for the ResourceManager. See configure.sh for more details.

  3. If you want YARN to be the default MapReduce mode, run the following command to check the current mode:

    maprcli cluster mapreduce get

    If the mode is not yarn, run the following command to change it:

    maprcli cluster mapreduce set -mode yarn

Review Other Upgrade Considerations for This Upgrade Path

Upgrade all ecosystem packages, regardless of the ecosystem version you are running. See Upgrade Ecosystem with Manual Steps.

  • Install the same version (or a later version supported by MapR 5.0) of each ecosystem package on the upgraded cluster.

  • MapR re-releases major versions of ecosystem products on a monthly basis. If you have an existing version of a component that matches a major supported version of that product, you still need to upgrade to the latest ecosystem release of that major version. For example, you may have hive-12-1403 (mapr-hive-0.12.23716-1.noarch.rpm) installed, but you need to upgrade to a more recently released version: hive-12-1410 (mapr-hive-0.12.201411191459-1.noarch.rpm).

After you Upgrade from 4.0.1

Consider the following upgrade item after you upgrade from 4.0.1 to 5.0:

  • When you upgrade Hadoop common, a new directory is created for the new Hadoop common version, and the configuration files in the existing /opt/mapr/hadoop/hadoop-2.x.x directory are automatically copied into the active directory associated with the new hadoop 2.x.x directory. For example, when you upgrade from 4.0.1 to 5.0, configuration files that were in the hadoop-2.4.1 directory are copied into the hadoop-2.7.0 directory.

  • Any script that points to one or more files within the /opt/mapr/hadoop/hadoop-2.4.1 directory must be updated to point to the corresponding files in the new hadoop 2.x.x directory.
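
    For example, to locate scripts that still reference the old directory (a hedged example; adjust the search path for your environment):

    # grep -rl "hadoop-2.4.1" /path/to/your/scripts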
