Running Spark Applications in Compute Spaces

During bootstrapping, the MapR Spark operator (based on the open-source Kubernetes Operator for Apache Spark) is installed on the Kubernetes cluster in the spark-operator namespace. The Spark operator creates Spark jobs from custom resources submitted through the Kubernetes API. The v1beta2 version of the Spark operator API currently supports Spark version 2.4.4.
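As a sketch of how a job is described to the operator, the following is a minimal SparkApplication custom resource. The image name, application name, and JAR path are illustrative placeholders, not values shipped with the product; the field names follow the open-source operator's v1beta2 API.

```yaml
# Hypothetical example: a minimal SparkApplication custom resource.
# Image, names, and the JAR path are placeholders for illustration.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi                 # placeholder application name
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: my-registry/spark:2.4.4 # placeholder image with Spark 2.4.4
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples.jar"
  sparkVersion: "2.4.4"
  driver:
    cores: 1
    memory: "512m"
  executor:
    instances: 2
    cores: 1
    memory: "512m"
```

Once applied with `kubectl apply -f`, the operator watches for the resource and launches the corresponding Spark driver and executor pods.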

Known Limitations

The following Spark features are not supported in the current version of Spark on Kubernetes:
  • Spark Thrift server*
  • Kerberos authentication
  • Dynamic Resource Allocation and the External Shuffle Service
  • Local File Dependency Management
  • Spark Application Management
  • Job Queues and Resource Management
*The Spark Thrift server is not supported through the Spark operator, and non-operator workflows are not currently supported.