Requirements for Running the Sample Spark Applications

To run the sample Spark Hive Metastore job using the mapr-spark-hive.yaml file in the examples/spark/ directory, you must first create a warehouse directory for the Spark application on the MapR cluster:
hadoop fs -mkdir /spark-warehouse
To run the sample word-count Spark job using the mapr-spark-wc.yaml file in the examples/spark/ directory, you must first create a text file named input.txt and place it in the /tmp directory on the MapR Filesystem. For example:
maprfs:///tmp/input.txt
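
A minimal sketch of preparing the input file might look like the following. The sample text is illustrative; any non-empty text file works, and this assumes the hadoop client on your node is configured to reach the MapR Filesystem:

```shell
# Create a small local text file to feed the word-count job
# (the contents here are only an example)
echo "hello spark hello mapr" > input.txt

# Copy the file to the /tmp directory on the MapR Filesystem
hadoop fs -put input.txt /tmp/input.txt

# Optionally verify that the file is in place
hadoop fs -cat maprfs:///tmp/input.txt
```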

For information about the available sample Spark applications, see Creating and Deploying a Custom Resource for Spark Applications.