Apache Spark and MapR Database JSON Integration


Apache Spark is an open source big data processing framework that is widely used for analytics on streaming and batch workloads. Spark is fully supported on MapR, and it typically uses data in the form of large files. With the Spark/MapR Database connectors, you can use MapR Database as a data source and as a data destination for Spark jobs.

MapR Database is a high performance NoSQL database, which supports two primary data models: JSON documents and wide column tables. A Spark connector is available for each data model. The Native Spark Connector for MapR Database JSON provides APIs to access MapR Database JSON documents from Apache Spark, using the Open JSON Application Interface (OJAI) API. To access the wide column data model, which is often referred to as “MapR Database Binary,” the Spark HBase and MapR Database Binary Connector should be used. This blog article will describe examples of the connector for MapR Database JSON.


Big data applications are moving towards data sets with flexible (or no predefined) schemas. Hence, the OJAI API was introduced in the MapR Database 5.1 release. The OJAI API is the set of interfaces that allows the application to manipulate structured, semi-structured, or unstructured data. Please refer to the following GitHub repository for more details on the OJAI API: https://github.com/ojai/ojai.

With the new release (MapR 5.2, MEP 3.0), a new connector was developed to integrate MapR Database JSON tables with Spark. This connector uses the OJAI API internally to access and mutate the tables. It is this connector API that will be explored further in this blog post.

The Spark/MapR Database JSON Connector API

In the MapR Ecosystem Pack (MEP) 3.0 release, the Native Spark Connector for MapR Database JSON supports loading data from a MapR Database table as a Spark Resilient Distributed Dataset (RDD) of OJAI documents and saving a Spark RDD into a MapR Database JSON table. (An RDD is the base format for storing data for use by Spark.)

Here are the interfaces for loading the JSON table into an RDD:
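A minimal sketch of this call, assuming a SparkContext named sc (the table path is illustrative):

    import com.mapr.db.spark._   // brings loadFromMapRDB onto SparkContext

    // Returns an RDD of OJAI documents, one per row of the JSON table
    val docs = sc.loadFromMapRDB("/tmp/UserInfo")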


The above function also supports another variant wherein one can directly load the documents as an RDD of Scala objects:
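A sketch of the typed variant, assuming a Person case class whose fields mirror the document schema:

    // Returns an RDD of Person objects instead of OJAI documents
    val people = sc.loadFromMapRDB[Person]("/tmp/UserInfo")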


Below is the API for saving the objects into a MapR Database table:
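A sketch of the save call, assuming an RDD named people holding OJAI documents or bean objects (the table path is illustrative):

    import com.mapr.db.spark._   // brings saveToMapRDB onto RDDs

    // Persist the RDD into the MapR Database JSON table;
    // createTable, bulkInsert, and idFieldPath are optional parameters
    people.saveToMapRDB("/tmp/UserInfo", createTable = true)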


The above function (i.e., saveToMapRDB) also accepts several self-explanatory parameters:

- createTable – create the table before saving the documents; an exception is thrown if the table already exists. The default value is false.
- bulkInsert – save a group of rows of data at once into a MapR Database table. The default value is false.
- idFieldPath – the field to be used as the document key. The default value is "_id".

Similar to loading the document into a Scala bean class, one can save an RDD of user-specified Scala class objects into the MapR Database JSON table.

Saving Objects in a MapR Database JSON Table

To access the connector API, you must import the Scala package com.mapr.db.spark._. All the required implicit definitions are included in the com.mapr.db.spark package.

Below is the code that saves the RDD of Person objects into the MapR Database JSON table:

    import com.mapr.db.spark._
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setAppName("json app")
    val sc = new SparkContext(conf)
    val people = sc.parallelize(getUsers())
    people.saveToMapRDB("/tmp/UserInfo", createTable = true)

Here is the getUsers function, which allocates Person objects:

    def getUsers(): Array[Person] = {
      val users: Array[Person] = Array(

        Person("DavUSCalif", "David", "Jones",
               Seq("football", "books", "movies"),
               Map("city" -> "milpitas", "street" -> "350 holger way", "Pin" -> 95035)),

        Person("PetUSUtah", "Peter", "pan",
               Seq("boxing", "music", "movies"),
               Map("city" -> "salt lake", "street" -> "351 lake way", "Pin" -> 89898)),

        Person("JamUSAriz", "James", "junior",
               Seq("tennis", "painting", "music"),
               Map("city" -> "phoenix", "street" -> "358 pond way", "Pin" -> 67765)),

        Person("JimUSCalif", "Jimmy", "gill",
               Seq("cricket", "sketching"),
               Map("city" -> "san jose", "street" -> "305 city way", "Pin" -> 95652)),

        Person("IndUSCalif", "Indiana", "Jones",
               Seq("squash", "comics", "movies"),
               Map("city" -> "sunnyvale", "street" -> "35 town way", "Pin" -> 95985)))

      users
    }


Loading Data from a MapR Database JSON Table

The code provided below will load the documents from the "/tmp/UserInfo" table into an RDD and print them:

    val usersInfo = sc.loadFromMapRDB("/tmp/UserInfo").collect
    usersInfo.foreach(println)

Here is the result from the printing of usersInfo documents:

    {"_id":"DavUSCalif","address":{"Pin":95035,"city":"milpitas","street":"350 holger way"},"first_name":"David","interests":["football","books","movies"],"last_name":"Jones"}
    {"_id":"IndUSCalif","address":{"Pin":95985,"city":"sunnyvale","street":"35 town way"},"first_name":"Indiana","interests":["squash","comics","movies"],"last_name":"Jones"}
    {"_id":"JamUSAriz","address":{"Pin":67765,"city":"phoenix","street":"358 pond way"},"first_name":"James","interests":["tennis","painting","music"],"last_name":"junior"}
    {"_id":"JimUSCalif","address":{"Pin":95652,"city":"san jose","street":"305 city way"},"first_name":"Jimmy","interests":["cricket","sketching"],"last_name":"gill"}
    {"_id":"PetUSUtah","address":{"Pin":89898,"city":"salt lake","street":"351 lake way"},"first_name":"Peter","interests":["boxing","music","movies"],"last_name":"pan"}

Projection Pushdown and Predicate Pushdown for the Load API

The “load” API of the connector also supports “select” and “where” clauses. These can be used to push down the projection of a subset of fields and/or to filter out documents using a condition.

Here is an example on how to use the “where” clause to restrict the rows:

    val usersLivingInMilpitas = sc.loadFromMapRDB("/tmp/UserInfo")
      .where(field("address.city") === "milpitas")

Similarly, if one wants to project only first_name and last_name fields, the following code will generate the required output:

    val namesOfUsers = sc.loadFromMapRDB("/tmp/UserInfo")
      .select("first_name", "last_name")

Setting up the Project to Use the Spark/MapR Database Connector

To access the loadFromMapRDB and saveToMapRDB API, the following Maven package and artifactId information is required in the project’s pom.xml file:
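The exact coordinates should be verified against the MapR repository; for the MEP 3.0 release the connector dependency looks roughly like this (the version shown is an assumption):

    <dependency>
      <groupId>com.mapr.db</groupId>
      <artifactId>maprdb-spark</artifactId>
      <version>5.2.1-mapr</version>
    </dependency>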


To add the Spark core dependency into the pom.xml file:
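A typical entry, assuming the Scala 2.11 build of Spark that shipped with MEP 3.0 (adjust the version to match your cluster):

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>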


MapR-specific jars are located in the mapr-releases repository. The following repository information should be included in the pom.xml file so that Maven can download the dependencies:
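For example, using MapR's public Maven repository:

    <repositories>
      <repository>
        <id>mapr-releases</id>
        <url>http://repository.mapr.com/maven/</url>
        <snapshots><enabled>true</enabled></snapshots>
        <releases><enabled>true</enabled></releases>
      </repository>
    </repositories>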


The code for the example application can be accessed here.


Once the data is loaded as an RDD of either OJAI documents or Scala bean objects, it can be processed further using Spark transformations. The data loaded from MapR Database tables can also be enriched with data from other data sources.

The Spark connector will be further enhanced to support the DataFrame and Dataset APIs, which will enable you to use Spark SQL and Spark Streaming to transform data from MapR Database JSON tables seamlessly.

This blog post was published April 27, 2017.
