Requirements for Using Admission Webhook on OpenShift

The Spark driver and executor pods can use volume mounts through the admission webhook that is provided during spark-operator deployment. For example, for Spark-Hive jobs, the hive-site configuration is mounted as a volume into the driver pod. The admission webhook is not enabled by default on OpenShift. Before using the admission webhook provided by the spark-operator, you must do the following to enable it on OpenShift:
  1. Modify the /etc/origin/master/master-config.yaml file on each OpenShift master node to enable the ValidatingAdmissionWebhook and MutatingAdmissionWebhook plugins by adding the following to the admissionConfig section:
    admissionConfig:
      pluginConfig:
        ValidatingAdmissionWebhook:
          configuration:
            kind: DefaultAdmissionConfig
            apiVersion: v1
            disable: false
        MutatingAdmissionWebhook:
          configuration:
            kind: DefaultAdmissionConfig
            apiVersion: v1
            disable: false
  2. Restart the API server and controllers by running the command that is appropriate for the OpenShift installation method in your environment. For RPM-based installations:
    systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
    For installations where the control plane runs as static pods (OpenShift 3.10 and later):
    /usr/local/bin/master-restart api
    /usr/local/bin/master-restart controllers
  3. Rerun the Spark job, and validate that the Spark driver pod has the hive-site-volume volume in the Volumes section of its pod description. For example:
    Volumes:
      hive-site-volume:
        Type:        ConfigMap (a volume populated by a ConfigMap)
        Name:        mapr-hivesite<xxx>-cm
        Optional:    false
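
  The steps above can be verified from the command line. The commands below are a minimal sketch; the webhook configuration name, pod name, and namespace are placeholders that depend on your spark-operator deployment and job:

    # Confirm that the mutating webhook is registered with the API server
    # (the spark-operator's entry should appear in the list after the restart).
    oc get mutatingwebhookconfigurations

    # Inspect the driver pod and check its Volumes section for hive-site-volume.
    # Replace <driver-pod> and <namespace> with the values from your Spark job.
    oc describe pod <driver-pod> -n <namespace>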