Deploying the Spark Application

The MapR Kubernetes Ecosystem supports various types of Spark jobs. To create a pod in the CSpace namespace for your Spark application, complete the following steps:
  1. (Required for connecting to a secure storage cluster) Create and deploy MapR credentials for the Spark application. See Creating MapR Credentials for Spark Applications in Compute Spaces.
  2. Create a Custom Resource (CR) for a Spark job, or modify a sample Spark Custom Resource. For more information, see Creating and Deploying a Custom Resource for Spark Applications.
  3. Run the following command to create a Spark job using a Spark custom resource file:
    kubectl apply -f <path to Spark custom resource file>
    
  4. Run the following command to verify that the Spark application was submitted:
    kubectl get pods -n <Spark cspace>
    
    A driver pod is created and enters the Running state. For additional information, see Running Spark on Kubernetes.
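
For reference, a Spark Custom Resource such as the one applied in step 3 might look like the following minimal sketch. It assumes the SparkApplication CRD schema used by the open-source Spark Operator; the apiVersion, image, namespace, class, file path, and version values are illustrative placeholders, so consult the sample Spark Custom Resources shipped with your CSpace for the exact schema and values:

```yaml
# Minimal SparkApplication CR sketch; all names, paths, and versions are
# illustrative placeholders, not values taken from the product documentation.
apiVersion: sparkoperator.k8s.io/v1beta2   # assumed operator API group/version
kind: SparkApplication
metadata:
  name: spark-pi                 # name of the Spark job
  namespace: sample-cspace       # your CSpace namespace
spec:
  type: Scala
  mode: cluster
  image: <your Spark image>
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples.jar"
  driver:
    cores: 1
    memory: "512m"
  executor:
    instances: 2
    cores: 1
    memory: "512m"
```

Saving a file like this and passing its path to the kubectl apply command in step 3 submits the job; the driver and executor pods then appear in the output of the kubectl get pods command in step 4.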