Mounting a Single-Node Cluster via NFS

With MapR, you can export the Hadoop cluster via NFS and mount it as a read/write volume, either from the machine where you installed MapR or from a different machine.

  • If you are mounting from the machine where you installed MapR, replace <host> in the steps below with localhost.
  • If you are mounting from a different machine, make sure the machine where you installed MapR is reachable over the network (a quick check is sketched below), and replace <host> in the steps below with that machine's hostname.
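
For example, a quick reachability check from the remote machine might look like this (mapr-node is a placeholder for your MapR machine's hostname):

    # Send one ping to confirm the MapR host answers on the network
    ping -c 1 mapr-node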

Try the following steps to see how it works:

  1. Change to the root user (or use sudo for the following commands).
  2. See what is exported from the machine where you installed MapR:
     showmount -e <host>
  3. Set up a mount point for the NFS share:
     mkdir /mapr
  4. Mount the cluster via NFS:
     mount <host>:/mapr /mapr
     To keep the mount across reboots, see the /etc/fstab sketch after these steps.
     Tip: If you get an error such as "RPC: Program not registered", the NFS service is probably not running. See Setting Up MapR NFS for troubleshooting tips.

  5. List the contents of the mount point, and notice that the cluster's directories (including any you created earlier with Hadoop) are there:
    # ls /mapr
    Found 3 items
    drwxr-xr-x   - pconrad supergroup          0 2011-01-03 13:50 /foo
    drwxr-xr-x   - pconrad supergroup          0 2011-01-04 13:57 /user
    drwxr-xr-x   - mapred supergroup           0 2010-11-25 09:41 /var

  6. Try creating a directory via NFS:
    mkdir /mapr/foo/bar
  7. List the contents of /foo:
    hadoop fs -ls /foo
    Notice that Hadoop can see the directory you just created via NFS; the short copy example after these steps takes the same idea one step further.
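
Because the cluster is mounted read/write, any standard Linux tool can now work with cluster files. A minimal sketch (the hosts file is just an arbitrary example to copy):

    # Copy a local file into the cluster through the NFS mount,
    # then read it back with the Hadoop command-line tools
    cp /etc/hosts /mapr/foo/bar/
    hadoop fs -cat /foo/bar/hosts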
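
If you want the mount to survive reboots, one common approach is an /etc/fstab entry along these lines (a sketch; hard and nolock are mount options often recommended for NFS access to MapR, and <host> is your MapR machine as above):

    # Example /etc/fstab entry -- adjust the options for your environment
    <host>:/mapr  /mapr  nfs  rw,hard,nolock  0  0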

If the machine is already running an NFS server, MapR will not run its own NFS gateway. In that case you cannot mount the single-node cluster via NFS, but your existing NFS exports remain available.
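
To check whether another NFS server is already registered on the machine, you can query the portmapper (a generic diagnostic, not specific to MapR):

    # List registered RPC services and look for an nfs entry
    rpcinfo -p localhost | grep -w nfs

If this prints an nfs entry before MapR is started, another NFS server is running, and MapR will not start its own gateway.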

Next: A Tour of the MapR Control System