MapR Kubernetes Ecosystem 1.0 Release Notes

These notes describe the first release of the MapR Kubernetes Ecosystem.

You may also be interested in the Kubernetes Release Notes.

Version: 1.0
Release Date: December 2019
MapR Version Interoperability: Compatible with MapR 5.2.2*, 6.0.x, and 6.1.0
Kubernetes Compatibility**: Kubernetes 1.13, 1.14, 1.15, and 1.16
OpenShift Compatibility: 4.1 and 4.2
CSI Driver Compatibility: Version 1.0. For more information, see the MapR Container Storage Interface (CSI) Storage Plugin Release Notes.
Operators:
  • cspaceoperator-1.0.0 supports MapR Kubernetes Ecosystem 1.0
  • drill-operator-1.0.0 supports MapR Drill 1.15.0 and 1.16.0
  • spark-operator-2.4.4 supports MapR Spark 2.4.4

*Metrics are not supported on MapR 5.2.2 clusters.

**Kubernetes alpha features are not supported.

New in this Release

This first release of the MapR Kubernetes Ecosystem includes Spark and Drill operators that run in a Kubernetes environment and leverage data stored on a cloud-based or on-premises MapR Data Platform.

Note the following limitations:

  • All nodes in the Kubernetes cluster must use the same Linux OS. Configuration files are available to support the following Linux distributions:
    • CentOS
    • Red Hat (use the CentOS configuration file)
    • Ubuntu
  • The Basic POSIX client package is included by default when you install the MapR Container Storage Interface (CSI) Storage Plugin. The Platinum POSIX client can be enabled by specifying a parameter in the Pod spec. Only the FUSE-based POSIX client is supported. NFSv3 and NFSv4 are not supported.
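As a sketch of how the Platinum POSIX client might be enabled, assuming the plugin reads the setting from a CSI volumeAttributes key (the key name platinum and the driver name shown below are assumptions, not confirmed by these release notes; consult the CSI plugin documentation for the exact parameter and where it is specified):

```yaml
# Hypothetical fragment: enabling the Platinum POSIX client for a CSI volume.
# The attribute name "platinum" and the driver name are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mapr-csi-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: com.mapr.csi-kdf
    volumeHandle: mapr-csi-pv
    volumeAttributes:
      platinum: "true"
```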

Known Issues

Note the following known issues:


  • K8S-1115: Bootstrap errors. During bootstrapping, if the kubectl client version does not match the Kubernetes cluster version, bootstrapping can fail with an error such as "ERROR: Could not run kubectl apply -f <dir>". Workaround: Before running the bootstrapping utility, ensure that your kubectl client version matches your Kubernetes cluster version. To display the version information, run:
    kubectl version
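The skew check can be scripted before bootstrapping. The helper below is a minimal sketch (the function names are hypothetical, not part of the bootstrap utility) that compares the minor versions reported for the client and the cluster:

```shell
#!/bin/sh
# Sketch: compare kubectl client and Kubernetes server minor versions.
# minor_of extracts the minor version from a "vMAJOR.MINOR.PATCH" string,
# such as the gitVersion values printed by `kubectl version`.
minor_of() {
  echo "$1" | sed 's/^v//' | cut -d. -f2
}

check_skew() {
  client="$1"
  server="$2"
  if [ "$(minor_of "$client")" = "$(minor_of "$server")" ]; then
    echo "versions match"
  else
    echo "version mismatch: client $client vs server $server"
  fi
}

check_skew "v1.14.3" "v1.14.8"   # same minor version: safe to bootstrap
```

In practice, the two inputs would come from the `kubectl version` output on the machine running the bootstrap utility.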


  • MapR Kubernetes Ecosystem containers currently run as the root user.


  • K8S-1039: Drill does not support plain authentication, but it does support MapR-SASL. In addition, if the underlying MapR storage cluster is nonsecure, Drill must also be nonsecure. Workaround: None.
  • K8S-1039: Drill clients cannot connect to Drill on Kubernetes clusters by using the ZooKeeper connection string. Workaround: Use the direct Drillbit connection instead. Note that the connection string requires you to specify a port, which is appended to the IP address with a colon in between.
    For a secure system, use this connection string:
    "jdbc:drill:drillbit=<ip address>:<port>;auth=maprsasl"
    For a non-secure system, use this connection string:
    "jdbc:drill:drillbit=<ip address>:<port>" -n <user account> -p <password>
    The port is defined in the userport field of the drill-cr-full.yaml file and is 21010 by default. The userport must be unique among all the clusters to which the Kubernetes cluster connects. For example, if your Kubernetes cluster connects to five other clusters, each of those five clusters must have a different userport.
  • K8S-1116: Note the following auto-scaling known issues:
    • Auto-scaling does not work for on-premises clusters.
    • When auto-scaling is enabled for on-premises clusters, a manual scale-down of the Drill pods (for example, from 8 pods to 6 pods) does not work. Workaround: Set maxcount equal to count (essentially disabling auto-scaling), and reapply the CR.
    • For Drill clusters on GKE: If there are PENDING Drill pods, the Kubernetes cluster is likely oversubscribed. The Drill operator might have trouble retrieving metrics in this scenario, and auto-scaling might not function correctly. For example, the Drill cluster might not scale down even after the CPU load has decreased. Workaround: To prevent oversubscription, set count and maxcount to lower values.
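The auto-scaling workarounds for K8S-1116 above adjust count and maxcount in the Drill custom resource. A sketch of the relevant fragment follows; the placement of the fields under spec is an assumption based on the drill-cr-full.yaml file named above:

```yaml
# Hypothetical fragment of a Drill CR showing the scaling fields.
# Setting maxcount equal to count effectively disables auto-scaling.
spec:
  count: 4        # desired number of Drill pods
  maxcount: 4     # auto-scaling upper bound; equal to count here
```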
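The direct-connection strings shown for K8S-1039 above can be assembled from the Drillbit address and the userport. A minimal shell sketch (the drill_jdbc_url helper is hypothetical, for illustration only):

```shell
#!/bin/sh
# Sketch: build a direct-connection JDBC URL for Drill on Kubernetes.
# Usage: drill_jdbc_url <ip-address> [port] [secure]
drill_jdbc_url() {
  ip="$1"
  port="${2:-21010}"   # 21010 is the default userport
  if [ "$3" = "secure" ]; then
    echo "jdbc:drill:drillbit=${ip}:${port};auth=maprsasl"
  else
    echo "jdbc:drill:drillbit=${ip}:${port}"
  fi
}

drill_jdbc_url 10.10.30.5 21010 secure
# → jdbc:drill:drillbit=10.10.30.5:21010;auth=maprsasl
```

On a nonsecure system, the user account and password are passed separately (for example, with the -n and -p options), as shown in the connection strings above.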


  • K8S-1032: Running Spark jobs as the root user does not produce metrics in Collectd because the root user is locked in the Spark driver pod. Workaround: To produce metrics for a Spark job, run the job as a non-root user.
  • K8S-1098: On a nonsecure cluster, there are no metrics in Grafana for a non-mapr user. Workaround: Use either of the following:
    • Run the Spark job with the same user as the cluster admin user (typically mapr). This is usually the easiest workaround.
    • Run the following command to open the metrics streams to all users:
      maprcli stream edit -path /var/mapr/mapr.monitoring/metricstreams/0 -produceperm p -consumeperm p -topicperm p -copyperm p


  • K8S-1082: Spark driver fails to start on OpenShift. When you run a Spark job on OpenShift, the driver fails to start, and the following error is generated: adduser: Permission denied. The error occurs because MapR Kubernetes Ecosystem containers run as the root user, which OpenShift does not allow. Workaround: On the OpenShift cluster, run the following command to allow pods to run with root-user access:
    oc adm policy add-scc-to-group anyuid system:authenticated
  • K8S-1090: Applying a CSpace CR can fail on OpenShift. Security Context Constraints (SCCs) created for a CSpace are not removed when the CSpace is deleted, which can cause a subsequent CSpace CR to fail. Workaround: After deleting a CSpace, OpenShift users should ask an administrator to manually delete any SCCs that remain. Use these steps:
    1. Use the OpenShift get command to find all the SCCs:
      oc get scc
    2. Use the delete command to delete an SCC:
      oc delete scc <scc_name>
      For more information about managing SCCs, see Managing Security Context Constraints.
  • K8S-1136: Spark metrics are not available when Spark pods are running on OpenShift. When this happens, errors such as the following can be generated in the Collectd log:
    [2019-12-19 16:45:35] [error] SparkPlugin: Failed to get metrics for service: IdURL at url: http://mapr:mapr@pyspark-with-dependencies-mapr-driver:4040/api/v1/applications: java.lang.Exception: Failed : HTTP error code : 403
    java.lang.Exception: Failed : HTTP error code : 403
        at org.collectd.RestConnection.getResponse(
        at org.collectd.plugins.SparkPlugin.getSparkId(
        at org.collectd.plugins.SparkPlugin.readSparkMetrics(
    [2019-12-19 16:45:35] [error] SparkPlugin: Failed to read json with error: : java.lang.Exception: SparkPlugin: Failed to get spark_id from IdURL
    java.lang.Exception: SparkPlugin: Failed to get spark_id from IdURL
        at org.collectd.plugins.SparkPlugin.readSparkMetrics(
    Workaround: None.

Resolved Issues