Removing one or more Nodes

Describes how to decommission a node from service.

Before you start:
  1. Drain the node of data by moving it to the /decommissioned physical topology. All data on a node in the /decommissioned topology is migrated to volumes and nodes in the /data topology.
  2. Run the following command to check if a given volume is present on the node:
    maprcli dump volumenodes -volumename <volume> -json | grep <ip:port>
    Run this command for each non-local volume in your cluster to verify that the node being removed is not storing any volume data.
  3. If the node you are removing is a CLDB or ZooKeeper node, install CLDB or ZooKeeper on another node and run configure.sh with the -C and -Z options.

    This ensures that the ZooKeeper quorum is maintained and that enough CLDB nodes remain available for high availability.
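The volume check in step 2 can be scripted so that every volume is tested in one pass. The sketch below is a hedged example: the sample endpoint, the helper names, and the use of maprcli volume list to enumerate volumes are assumptions to adapt to your cluster.

```shell
#!/bin/sh
# Hypothetical sketch -- NODE_ENDPOINT and the helper names are placeholders,
# not from the documentation. Adapt to your cluster before use.
NODE_ENDPOINT="${NODE_ENDPOINT:-10.10.100.1:5660}"   # fileserver ip:port of the node being removed

# Succeeds (exit 0) when the volume still reports the node among its storage nodes.
volume_on_node() {
  maprcli dump volumenodes -volumename "$1" -json | grep -q "$NODE_ENDPOINT"
}

check_all_volumes() {
  # Enumerate every volume name (skipping the header line) and warn for each
  # one that still has data on the node.
  for vol in $(maprcli volume list -columns volumename | tail -n +2); do
    if volume_on_node "$vol"; then
      echo "WARNING: volume $vol still has data on $NODE_ENDPOINT"
    fi
  done
}

# Only query a live cluster; on a workstation maprcli will not exist.
if command -v maprcli >/dev/null 2>&1; then
  check_all_volumes
fi
```

Any WARNING line indicates the node still holds volume data and is not yet safe to remove.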

You can remove one or more nodes using MCS and the CLI.

Removing Multiple Nodes Using the MapR Control System

To remove one or more nodes:

  1. Log in to MCS and click Nodes.
  2. Select the nodes from the list in the Nodes pane and click Remove Node(s).
    The Remove Node(s) dialog displays.
  3. Verify the list of nodes to remove and click Remove Nodes.

Removing a Node Using the MapR Control System

To remove a node:

  1. Go to the Viewing Node Details page and click Remove Node.
    The Remove Node(s) confirmation dialog displays.
  2. Click Remove Node.

Removing one or more Nodes Using the CLI or REST API

Use the node remove command to remove one or more server nodes from the cluster. To run this command, you must have full control (fc) or administrator (a) permission. The syntax is:

maprcli node remove -nodes <node names>
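As the heading notes, the same operation is available over the REST API, whose endpoints mirror the CLI commands. The sketch below is illustrative only: the MCS hostname, port 8443, the credentials, the node names, and the build_remove_url helper are all assumptions introduced for this example.

```shell
#!/bin/sh
# Hedged sketch: build the REST call that mirrors `maprcli node remove`.
# The host, port, credentials, and node names below are placeholders.
build_remove_url() {
  # REST parameters are the CLI options, with spaces URL-encoded as %20.
  echo "https://$1:8443/rest/node/remove?nodes=$(echo "$2" | sed 's/ /%20/g')"
}

# Issue the request only against a reachable cluster, for example:
#   curl -k -u mapr:mapr "$(build_remove_url cldb-host 'node-04 node-05')"
```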

After you issue the node remove command, wait several minutes to ensure that the nodes have been completely removed.
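Rather than waiting a fixed number of minutes, you can poll until the removed nodes drop out of the node listing. This is a hedged sketch: the node names, the 30-second interval, and the helper names are assumptions, not part of the product.

```shell
#!/bin/sh
# Hypothetical sketch: remove nodes, then poll `maprcli node list` until
# none of the removed hostnames appear. NODES is a placeholder list.
NODES="${NODES:-node-04 node-05}"

remove_nodes() {
  maprcli node remove -nodes "$NODES"
}

nodes_gone() {
  # Succeeds once none of the removed hostnames appear in the listing.
  listing=$(maprcli node list -columns hostname)
  for n in $NODES; do
    echo "$listing" | grep -q "$n" && return 1
  done
  return 0
}

# Only run against a live cluster; removal is asynchronous, so poll.
if command -v maprcli >/dev/null 2>&1; then
  remove_nodes
  until nodes_gone; do sleep 30; done
fi
```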

Tip: To ensure that a node that is removed does not rejoin the cluster on reboot, either remove all MapR packages from the node, or remove the cluster configuration that is present in /opt/mapr/conf/mapr-clusters.conf on the node.
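A minimal sketch of the tip's second option, run on the removed node itself. CONF_FILE is parameterized here only so the snippet can be exercised safely outside a cluster; on a real node it resolves to the path named in the tip.

```shell
#!/bin/sh
# Hedged sketch: run on the node AFTER it has been removed from the cluster.
CONF_FILE="${CONF_FILE:-/opt/mapr/conf/mapr-clusters.conf}"

prevent_rejoin() {
  # Deleting mapr-clusters.conf removes the cluster configuration, so the
  # node cannot locate and rejoin the cluster on reboot.
  rm -f "$CONF_FILE"
}

# The more thorough alternative is to remove all MapR packages instead,
# e.g. on a Red Hat-family node:
#   yum remove 'mapr-*'
```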