Volumes, Snapshots, and Mirrors

A volume is a management entity that logically organizes a cluster’s data. Since a container always belongs to exactly one volume, that container’s replicas all belong to the same volume as well. Volumes do not have a fixed size, and they occupy no disk space until MapR-FS writes data to a container within the volume. A large volume may contain anywhere from 50 to 100 million containers.

The CLI and REST API provide functionality for volume management. Typical use cases include volumes for specific users, projects, development, and production environments. For example, if an administrator needs to organize data for a special project, the administrator can create a specific volume for the project. MapR-FS organizes all containers that store the project data within the project volume. A cluster can have many volumes.
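As a sketch of the CLI workflow, a project volume might be created with the `maprcli` tool; the volume name, mount path, and quota below are illustrative values, not part of any default configuration:

```shell
# Create a volume for the project and mount it in the global namespace.
# Name, path, and quota are example values; run on a node with the
# MapR client configured.
maprcli volume create -name project.alpha \
  -path /projects/alpha \
  -quota 500G

# Verify the volume and its mount point.
maprcli volume info -name project.alpha
```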

A volume’s topology defines which racks or nodes the volume includes. The topology describes the locations of nodes and racks in the cluster.

The following image represents a volume that spans a cluster:

Node Topology

Volume topology is based on node topology. You define volume topology after you define node topology. When you set up node topology, you can group nodes by rack or switch. MapR-FS uses node topology to determine where to replicate data for continuous access to the data in the event of a rack or node failure.
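The two steps above can be sketched with `maprcli`: first place a node in the rack hierarchy, then restrict a volume to that topology. The server ID, topology path, and volume name are hypothetical:

```shell
# Assign a node to a rack path (look up server IDs with
# `maprcli node list -columns id`; the ID below is illustrative).
maprcli node move -serverids 547819249997313015 -topology /data/rack1

# Restrict a volume to that topology so its containers (and their
# replicas) are placed on nodes under /data/rack1.
maprcli volume move -name project.alpha -topology /data/rack1
```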

Distributed Metadata

For each volume, MapR-FS creates a Name container that stores the volume’s namespace and file chunk locations, along with inodes for the objects in the filesystem. The file system stores the metadata for files and directories in the Name container, which is updated with each write operation. When a volume has more than 50 million inodes, the system raises an alert that the volume is reaching the maximum recommended size.

Local Volumes

Local volumes are confined to one node and are not replicated. Local volumes are part of the cluster’s global namespace and are accessible on the path /var/mapr/local/<host>.


Snapshots

Snapshots enable you to roll back to a known good data set. A snapshot is a read-only image of a volume that provides point-in-time recovery. Snapshots only store changes to the data stored in the volume, and as a result make extremely efficient use of the cluster’s disk resources. Snapshots preserve access to historical data and protect the cluster from user and application errors. You can create a snapshot manually or automate the process with a schedule.

The following image represents a mirror volume and a snapshot created from a source volume:

New write operations on a volume with a snapshot are redirected to preserve the original data. Snapshots only store the incremental changes in a volume’s data from the time the snapshot was created. The storage used by a volume's snapshots does not count against the volume's quota.
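A manual snapshot can be taken from the command line; the volume and snapshot names below are illustrative:

```shell
# Create a point-in-time, read-only snapshot of a volume.
maprcli volume snapshot create -volume project.alpha \
  -snapshotname alpha-2024-06-01

# List the volume's snapshots to confirm it was created.
maprcli volume snapshot list -volume project.alpha
```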

Mirror Volumes

MapR provides built-in mirroring so that you can meet recovery time objectives and automatically mirror data for backup. You can create local or remote mirror volumes to mirror data between clusters, data centers, or between on-premises and public cloud infrastructures.

Mirror volumes are read-only copies of a source volume. You can create local (on the same cluster) or remote (on a different cluster) mirror volumes, and control the schedule for mirror refreshes, from the MapR Control System (MCS) or from the command line.

When a mirror volume is created, MapR-FS creates a temporary snapshot of the source volume. The mirroring process reads content from the snapshot into the mirror volume. The source volume remains available for read and write operations during the mirroring process. The initial mirroring operation copies the entire source volume. Subsequent mirroring operations only update the differences between the source volume and the mirror volume.
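The create-then-mirror sequence described above looks roughly like the following; the volume and cluster names are placeholders:

```shell
# Create a mirror volume whose source is project.alpha on the
# cluster my.cluster.com (names are illustrative).
maprcli volume create -name project.alpha.mirror \
  -source project.alpha@my.cluster.com \
  -type mirror

# Start the mirroring operation. The first run copies the entire
# source volume; subsequent runs transfer only the differences.
maprcli volume mirror start -name project.alpha.mirror
```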

The mirroring operation never consumes all of the available network bandwidth, and throttles back when other processes need more network bandwidth. Mirrors are atomically updated at the mirror destination. The mirror does not change until all bits are transferred, at which point all the new files, directories, and blocks are atomically moved into their new positions in the mirror volume. MapR-FS replicates source and mirror volumes independently of each other.

Mirror volumes can be promoted to read-write volumes. The main use case for this feature is to support disaster-recovery scenarios in which a read-only mirror needs to be promoted to a read-write volume so that it can become the primary volume for data storage. In addition, read-write volumes that were mirrored to other volumes can be made into mirrors (to establish a mirroring relationship in the other direction). You can also convert read-write volumes back to read-only mirrors.
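In a disaster-recovery scenario, the promotion and reversal described above are volume-type changes; assuming the illustrative names from earlier, a sketch might look like:

```shell
# Promote a read-only mirror to a read-write volume so it can
# become the primary volume for data storage.
maprcli volume modify -name project.alpha.mirror -type rw

# Later, convert it back into a read-only mirror of the source
# to re-establish the mirroring relationship.
maprcli volume modify -name project.alpha.mirror \
  -type mirror -source project.alpha@my.cluster.com
```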

Authorization with Volumes: Intelligent Policy Management

The MapR File System uses the volume as its unit of management. A volume is a logical unit that you create to apply policies to a set of files, directories, tables, and sub-volumes. You can create volumes for each user, department, or project. Mirror volumes and volume snapshots provide data recovery and data protection functionality.

Volumes can enforce disk usage limits, set replication levels, establish ownership and control permissible actions, and measure the cost generated by different projects or departments. When you set policies on a volume, all files contained within the volume inherit the policies set on the volume. Other Hadoop distributions require administrators to manage policies at the file level.

You can manage volume permissions through one of the following:

  • Access Control Lists (ACLs) in the MapR Control System or from the command line. ACLs can be used to control administrative access to volumes.
  • Access Control Expressions (ACEs) in the MapR Control System or from the command line. ACEs can be used to control data access using boolean expressions.
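As a sketch of both mechanisms from the command line (the volume, user, and group names are hypothetical):

```shell
# ACL: grant a user full administrative control (fc) of a volume.
maprcli acl edit -type volume -name project.alpha -user jdoe:fc

# ACEs: control data access with boolean expressions -- here, only
# members of the analysts group may read, and only jdoe may write.
maprcli volume modify -name project.alpha \
  -readAce 'g:analysts' -writeAce 'u:jdoe'
```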

You can also set read, write, and execute permissions on a file or directory for users and groups, either with standard UNIX commands (when the volume is mounted through NFS) or with standard hadoop fs commands.
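For example, both paths lead to the same POSIX-style permissions; the NFS mount point, cluster name, and directory below are illustrative:

```shell
# Via an NFS mount of the cluster (mount point is an example):
chmod 750 /mapr/my.cluster.com/projects/alpha/reports
chown jdoe:analysts /mapr/my.cluster.com/projects/alpha/reports

# Or with the equivalent hadoop fs commands:
hadoop fs -chmod 750 /projects/alpha/reports
hadoop fs -chown jdoe:analysts /projects/alpha/reports
```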