Resolved Issues

The following MapR issues, which were reported by customers, are resolved in Version 5.1.0.

Installation and Configuration

14230: The difficulty of verifying that files were installed by a MapR package has been resolved. On the Ubuntu platform, you can check the md5sums against the installed files by running src/support/tools/. On the RHEL platform, you can use sha256sum against the installed files.
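
The same kind of verification can be reproduced by hand with the standard checksum tools. The sketch below uses a throwaway file and manifest; the file and manifest names are illustrative, not the actual MapR package paths.

```shell
# Sketch: verifying a file against a recorded checksum manifest, the same
# mechanism package verification relies on. Names here are placeholders.
cd "$(mktemp -d)"
printf 'example payload\n' > pkgfile

# A package manifest records one checksum per installed file:
md5sum pkgfile > manifest.md5          # Ubuntu-style md5 manifest
sha256sum pkgfile > manifest.sha256    # RHEL-style sha256 manifest

# Verification compares the installed file against the manifest:
MD5_RESULT=$(md5sum -c manifest.md5)         # "pkgfile: OK"
SHA_RESULT=$(sha256sum -c manifest.sha256)   # "pkgfile: OK"
```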

19532: Web browsers no longer fail to connect to the MCS due to a Weak Ephemeral Diffie-Hellman Key.

19742: Restarting warden no longer resets permissions on the root directory to 777.

19757: Fixed the issue that caused the pullcentralconfig script to fail with the following error:
Failed to move central config file, error Invalid cross-device link.

19790: The warden service starts successfully when you reboot the operating system with the abrt-ccpp component enabled.

19808: When you install MapR, the TLSv1 protocol is now automatically disabled for services that run a web server.

20543: When you install MapR on a CentOS node, the nfs-utils package will be installed with the mapr-nfs package if it is not already installed on the system.

20695: When you install MapR on a CentOS node, nss version 3.19 is installed as a dependency to prevent SSL connection errors.

20916: Fixed the issue causing a map task to report a successful completion instead of a failure when the combiner thread fails with an out-of-memory error.

21134: When you install MapR on a node, the tcp_syn_retries parameter is now automatically set to 4.
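
For reference, this kernel parameter is exposed under /proc/sys and can be inspected without privileges; only reading is shown below, since writing requires root. The fallback value in the sketch is illustrative.

```shell
# Sketch: inspecting the tcp_syn_retries kernel parameter that the MapR
# installer now sets. Reading needs no privileges; writing requires root.
SYN_RETRIES=$(cat /proc/sys/net/ipv4/tcp_syn_retries 2>/dev/null || echo 4)
echo "tcp_syn_retries=$SYN_RETRIES"

# To apply the value MapR configures (run as root):
#   sysctl -w net.ipv4.tcp_syn_retries=4
# and persist it, for example in /etc/sysctl.conf:
#   net.ipv4.tcp_syn_retries = 4
```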

21147: hdfs-default.xml and hdfs-site.xml are now included in the hadoop-hdfs-*.jar to resolve the issue where Oozie is unable to run hadoop commands when centralized logging is enabled.

21230: You can now enable insecure protocols, such as TLSv1 and SSLv3, that MapR disables by default.

21523: For systems with SSDs, MapR now automatically configures the Linux IO scheduler for the drives in a storage pool based on the disk type. That is, on SSD-based systems, MapR will set the scheduler type to NOOP.
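
The Linux interfaces involved can be sketched as follows; the device name is a placeholder, and the non-SSD scheduler choice is illustrative rather than MapR's documented behavior.

```shell
# Sketch: how Linux exposes the IO scheduler and disk type per block device.
# "sda" is a placeholder device name; adjust for your storage pool drives.
DEV=sda
SCHED_FILE="/sys/block/$DEV/queue/scheduler"
ROT_FILE="/sys/block/$DEV/queue/rotational"

# 1 = rotational (HDD), 0 = non-rotational (SSD); default to 0 if absent.
ROTATIONAL=$(cat "$ROT_FILE" 2>/dev/null || echo 0)

if [ "$ROTATIONAL" -eq 0 ]; then
    SCHEDULER=noop       # what MapR now selects for SSD-backed pools
else
    SCHEDULER=deadline   # illustrative choice for spinning disks
fi
echo "would set $DEV scheduler to $SCHEDULER"

# Applying it requires root:
#   echo "$SCHEDULER" > "$SCHED_FILE"
```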


MapR-DB

11646: Improved auto-compaction to compact segments that have completely expired and to mark partitions in need of TTL compaction.

19388: Added C-API functionality that allows the scanner to return results for a particular timestamp or between two timestamps.

19430, 21638: MapR-DB applications failed with a segmentation fault because of memory corruptions in the C API.

19501: Truncating a table with the HBase shell truncate command no longer leads to a Stale File Handle error when the table is subsequently accessed.


MapR-FS

11828: Fixed the issue that caused write operations on a volume to hang.

14447: Fixed the issue that caused NFS client IOs to break during a VIP failover.

14482: MapR-FS native C APIs now support impersonation.

16679: Fixed the issue that caused CLDB to consider itself unstable and go into restart mode as a result of unknown containers in memory.

17298: Fixed the issue that caused a memory leak after closing a file using the MapR-FS C API.

17642: Fixed the issue that caused space reservation for container resynchronization to be out of sync with the amount of space needed.

17762: Fixed the issue where the disksetup script did not populate /lib/udev/rules.d/99-mapr-disk.rules, causing disk setup to fail due to wrong device permissions.

18193: Fixed the issue that caused read pages to be reused after they were freed.

18981: Application calls to hdfsGetHosts() no longer fail with a “could not get fidmap” error message.

19027: ShimLoader now uses a NULL value when the Thread Context Class Loader (TCCL) is not set.

19231: Fixed the issue that caused the client to crash while handling a mix of compressed and uncompressed data in the same write request.

19568: Fixed the issue that caused mirroring to fail with the following error: MapInCidsToMirrorCids failed for volume

19628: The mapr user now has the permission necessary to stop/start the fileserver in order to add disks from the MCS or maprcli.

20377: Fixed the issue that caused a buffer overflow even when a limit was set on ByteBuffer.

20648: When you put a node into maintenance mode, the -timeoutminutes setting no longer times out before the set timeout duration.

20671: Fixed the issue that caused mirror intra-volume resynchronization to use slow resync slots (rather than fast resync slots) when there is no data change.

20816: A JBoss application was failing with the following error: java.lang.UnsatisfiedLinkError:

20843: Fixed the issue related to MFS crashing, which caused multiple MFS servers to go down and volume data to be unavailable until the MFS nodes were restarted.

21041: Increasing the RpcTimeout value in the core.xml file will no longer disable hardmount, which allows the FileClient to try all CLDBs.

21171: Fixed the issue related to MFS restarting many times with an "mfs is potentially deadlocked" error.

21224: The snapshot delete operation will now be throttled to prevent it from competing with other operations.

21259: Fixed the issue that caused MFS to crash on multiple nodes while restoring a volume dump.

21335: When a mirror volume was created on a separate cluster from the source volume, a dump of the mirror was created, and the dump was copied to the source cluster, an attempted restore of the dump would hang. This sequence of operations is successful in Version 5.1.

21372: The issue causing degradation in performance during a scheduled snapshot delete operation has been fixed.

21407: A restore of a dump from a standard volume no longer hangs. MFS authentication issues that caused this problem have been resolved.

21696: The TOO_MANY_CONTAINERS alarm will now be raised when there are containers above the threshold limit. With this fix:

  • The threshold limit for generating the alarm is now on a per-node basis, but must be configured cluster-wide.
  • The threshold limit for RW containers can be configured and is, by default, 50K. The threshold limit for RW and snapshot containers combined is 10 times the threshold limit for RW containers (for example, 10x50K=500K).
  • You can no longer set and retrieve maxContainers using the maprcli node modify command.

21900: In the event of a node restart during a snapshot schedule, MFS will now keep accurate count of stale containers to ensure snapshot schedules run smoothly.


NFS

17115: When the debug log level was set in the nfsserver.conf file, this value did not take effect. In Version 5.1, the log level is read from the configuration file.

18153: During a manual failover operation, NFS response time increased when the OS was rebooted.

19939: Base and POSIX client licenses can now be applied even if a partner package is installed before initializing the cluster.

20013: The nfsserver.log file was not being re-created after being removed at the end of its retention time. This file is now rolled over periodically or when its size exceeds the configured value of 1GB.

20322: Fixed the issue related to the GID list that caused MFS to return an error. The NFS server was not adding the GID sent by the NFS client to its list of GIDs.

20514: Reduced the latency that NFS clients experience in the event of an uncontrolled node failure.


19592: When a volume name is provided as input, MapR now validates the input and rejects invalid input such as a script, which prevents an alert from appearing in the browser.

19743: The maprcli alarm commands no longer return unsupported system alarms, and the CLDB no longer raises the unsupported alarms.

POSIX Client

19978: Fixed the issue related to latency that POSIX clients experience in the event of an uncontrolled node failure.

21337: When the mapr-loopbacknfs service was restarted, its status was reported as FAILED. This misleading status message is no longer printed.

20558: When you remove nodes with the fileserver and nfsserver roles using the MapR Control System or the maprcli node remove -hostids <hostnames> -service <nfsserver> command, the nodes no longer appear in the NFS nodes tab of the MCS or when you run the maprcli node list -nfsnodes 1 command.

21277: Fixed the issue related to the shared memory segment that prevented the loopbacknfs service from starting after installation.


YARN

18672: YARN applications no longer fail when label-based scheduling and time-based resource reservations are enabled at the same time.

20026: Resolved the issue related to UserGroupInformation.initialize() attempting to resolve the KDC realm even in unsecured mode, which caused applications using Hadoop to appear to hang if the network card was disconnected.

20287: Fixed the issue related to YARN Resource Manager crashing when FileSystemStateStore fails to delete the files.

20419: YARN-3493 was backported to Hadoop 2.7.0 to fix an exception in the Resource Manager.

20898: A YARN application timed out because some of the reducers became stuck in LOCALIZED state.

21006: When the number of vcores in a queue was specified with a decimal value, the FairScheduler read the value after the decimal point as the vcores value.

21338: YARN job client no longer fails when it tries to connect to a port that is not within the range defined by the property.

21146: An attempt to copy files including symbolic links failed even when the distcp -i option (ignore failures) was used.

22261: After a ResourceManager restart, the Scheduler tab in the RM UI now shows correct information about the job that is running (below the Cluster Metrics and User Metrics tabs).

22242: The Resource Manager no longer fails at startup with a null pointer exception.


19926: Warden no longer silently sets isDB to true in cases where configure.sh was run with the -noDB option.


MapReduce

14265: When the TaskTracker starts up, it sets the reserved physical memory based on the initial calculation from warden. However, when an adjustment was made to the slots (for example, based on CPU, memory, or MFS disks), the reserved physical memory was not adjusted downwards. In version 5.1, the TaskTracker physical memory limit is set after slots calculation.

20117: The JobTracker UI no longer fails with a NullPointerException whenever a job with invalid configuration is submitted to the JobTracker.

20498: Fixed the JobTracker performance problem caused by a cached configuration object. Now, the object is created statically and used for all task completion events.

20725: When an application such as Oozie spawns a job within a job, job failure no longer occurs due to static order initialization of the Configuration object.

21107: Fixed the issue related to the JobClient failing with the error "Connection timed out" on the client node.

21226: Multiple users can access the JobTracker view in the MCS if you disable the new mapred.jobtracker.ui.showcounters JobTracker option to prevent the CPU/memory counters from displaying.


20644, 20854: Fixed the issue that caused mirrors to fail with multiple schedules stopped. Diagnostics with the maprcli schedule list command showed a 0 value in the inuse column even when there were multiple associated volumes.

21713: Fair Scheduler queue names that contain spaces no longer cause the Resource Manager to shut down. In Version 5.1, leading and/or trailing spaces in queue names are trimmed internally and associated applications proceed. A queue name that contains only spaces (no other characters) causes applications to fail with an exception.


16095: The MapR UI Jetty web server has been updated and now provides an option to disable the SSLv3 protocol.

20658: The MCS no longer hangs or logs the following error upon login:
<URL> returns invalid code: 404
<DATE> <TIME>,088 ERROR com.mapr.baseutils.URLProbingUtility


Security

14197: The mapr superuser, designated administrative users, and users with cluster ACL permissions set to at least login can now access the CLDB view in the MCS when security is enabled.

19823: MapR now supports selective auditing of certain filesystem and table operations.

20061: You do not have to list the cluster name as the first entry in the mapr-clusters.conf file to log into a secure cluster with Kerberos enabled.
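
As background, each line of mapr-clusters.conf names a cluster followed by its CLDB nodes; with this fix, the secure cluster no longer needs to be the first entry. The cluster names, host names, and secure flags in this sketch are illustrative:

```
remote.cluster.com secure=false cldbA.example.com:7222
my.cluster.com secure=true cldb1.example.com:7222 cldb2.example.com:7222
```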

21326: Access of a protected file over NFS was denied but not audited in the file server. This operation is now audited.

21327: Fixed the issue that caused connections to hosts on secure clusters to fail.

21510: JBoss applications no longer fail with the following error:

22012: After applying a 4.0.2 patch, only the mapr user could run the hadoop fs -ls / command.

22332: Version 3.2.1 of the Apache commons-collections library contained a remote execution vulnerability. To fix this problem, MapR Version 5.1 packages use Version 3.2.2 of the library (commons-collections-3.2.2.jar).