The telecommunications industry is on the verge of a major transformation through the use of advanced analytics and big data technologies like the MapR Data Platform. The MapR Guide to Big Data in Telecommunications is designed to help you understand the trends and technologies behind this data-driven telecommunications revolution. Download your complimentary copy here.
This post will help you get started using Apache Spark Streaming to consume and publish messages with MapR Event Store via the Kafka API. Spark Streaming is an extension of the core Spark API that enables continuous data stream processing. MapR Event Store is a distributed messaging system for streaming event data at scale; it enables producers and consumers to exchange events in real time via the Apache Kafka 0.9 API, and it integrates with Spark Streaming via the Kafka direct approach. This post is a simple how-to example; if you are new to Spark Streaming and the Kafka API, you might want to read these first:
The example data set is from the Telecom Italia 2014 Big Data Challenge. It consists of aggregated mobile network data generated by the Telecom Italia cellular network over the cities of Milano and Trento. The data measures the location and level of interaction of users with the mobile phone network, based on mobile events that occurred on the network over two months in 2013. Projects in the challenge used this data to provide insights and to identify and predict mobile phone-based location activity trends and patterns of a population in a large metropolitan area.
The data records are in TSV format; an example line is shown below:
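Each record carries the grid square id, a time interval, a country code, and activity measures (SMS in/out, calls in/out, internet traffic). The record below is illustrative, not an actual line from the data set:

```
1	1383260400000	39	0.23	0.11	0.05	0.10	1.25
```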
First, you import the packages needed to integrate MapR Streams (now called MapR Event Store) with Spark Streaming and Spark SQL.
In order for Spark Streaming to read messages from MapR Event Store, you need to import classes from org.apache.spark.streaming.kafka.v09. In order for Spark Streaming to write messages to MapR Event Store, you need to import classes from org.apache.spark.streaming.kafka.producer._
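A minimal set of imports might look like the following sketch; the exact package names depend on the MapR Spark/Kafka 0.9 integration artifacts on your classpath:

```scala
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
// MapR Kafka 0.9 direct-stream consumer
import org.apache.spark.streaming.kafka.v09.KafkaUtils
// MapR Spark Streaming Kafka producer (adds sendToKafka on RDDs/DStreams)
import org.apache.spark.streaming.kafka.producer._
```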
A Scala CallDataRecord case class defines the schema corresponding to the TSV records. The parseCallDataRecord function parses the tab-separated values into a CallDataRecord instance.
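A sketch of the case class and parser; the field names here are assumptions based on the data set's columns, and empty fields are treated as zero activity:

```scala
// Schema for one Telecom Italia CDR line (field names are illustrative)
case class CallDataRecord(squareId: Int, timeInterval: Long, countryCode: Int,
                          smsInActivity: Double, smsOutActivity: Double,
                          callInActivity: Double, callOutActivity: Double,
                          internetTrafficActivity: Double)

// Parse one tab-separated line; empty fields default to 0.0
def parseCallDataRecord(line: String): CallDataRecord = {
  val p = line.split("\t", -1)               // -1 keeps trailing empty fields
  def toD(s: String) = if (s.isEmpty) 0.0 else s.toDouble
  CallDataRecord(p(0).toInt, p(1).toLong, p(2).toInt,
    toD(p(3)), toD(p(4)), toD(p(5)), toD(p(6)), toD(p(7)))
}
```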
These are the basic steps for the Spark Streaming Consumer Producer code:
We will go through each of these steps with the example application code.
The first step is to set the KafkaConsumer and KafkaProducer configuration properties, which will be used later to create a DStream for receiving messages from, and sending messages to, topics. You need to set the following parameters:
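A sketch of the consumer and producer configuration follows. The broker address, group id, and poll timeout values are illustrative placeholders; MapR Event Store does not use the bootstrap servers value, but the Kafka API requires it:

```scala
val brokers = "maprdemo:9092"      // illustrative; ignored by MapR Event Store
val groupId = "sparkApplication"
val pollTimeout = "1000"

// Consumer configuration for the direct stream
val kafkaParams = Map[String, String](
  ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> brokers,
  ConsumerConfig.GROUP_ID_CONFIG -> groupId,
  ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG ->
    "org.apache.kafka.common.serialization.StringDeserializer",
  ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG ->
    "org.apache.kafka.common.serialization.StringDeserializer",
  ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -> "earliest",
  "spark.kafka.poll.time" -> pollTimeout
)

// Producer configuration for writing results back to a topic
val producerConf = new ProducerConf(bootstrapServers = brokers.split(",").toList)
```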
For more information on the configuration parameters, see the MapR Event Store documentation.
The second step is to initialize a Spark StreamingContext object.
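For example, with a 2-second batch interval (the interval is a tuning choice for this example, not a requirement):

```scala
val sparkConf = new SparkConf().setAppName("CallDataRecordStream")
// Each batch (and thus each RDD in a DStream) covers 2 seconds of messages
val ssc = new StreamingContext(sparkConf, Seconds(2))
```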
We use the KafkaUtils createDirectStream method with a StreamingContext object, the Kafka configuration parameters, and a list of topics to create an input stream from a MapR Event Store topic. This creates a DStream that represents the stream of incoming data, where each message is a key-value pair. We use the DStream map transformation to create a DStream of the message values.
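A sketch of creating the direct stream; the stream:topic path is an illustrative placeholder, and kafkaParams is the consumer configuration built earlier:

```scala
val topics = "/user/user01/cdrstream:cdrs"   // illustrative MapR stream:topic path
val topicsSet = topics.split(",").toSet

// Direct approach: one RDD partition per topic partition, no receiver
val messagesDStream = KafkaUtils.createDirectStream[String, String](
  ssc, kafkaParams, topicsSet)

// Keep only the message values; the keys are not needed here
val valuesDStream = messagesDStream.map(_._2)
```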
Next we use the DStream foreachRDD method to apply processing to each RDD in this DStream. We parse the message values into CallDataRecord objects with a map operation on the RDD, then convert the RDD to a DataFrame, which allows you to use DataFrame and SQL operations on the streaming data.
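A sketch of that processing, assuming the parseCallDataRecord function described above:

```scala
valuesDStream.foreachRDD { rdd =>
  if (!rdd.isEmpty) {
    // Reuse one SQLContext across batches
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    import sqlContext.implicits._

    // Parse each tab-separated value into a CallDataRecord case class,
    // then convert the RDD to a DataFrame for SQL-style operations
    val cdrDF = rdd.map(parseCallDataRecord).toDF()
    cdrDF.show()
  }
}
```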
Here is example output from cdrDF.show:
The CallDataRecord RDD objects are grouped and counted by squareId. Then the sendToKafka method is used to send messages with the squareId and count to a topic.
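A sketch of the grouping and publishing step; the output topic path is an illustrative placeholder, and the StringSerializer type parameter on sendToKafka is an assumption based on the MapR Spark Streaming producer API:

```scala
import org.apache.kafka.common.serialization.StringSerializer

val outTopic = "/user/user01/cdrstream:cdrscount"  // illustrative output topic

valuesDStream.foreachRDD { rdd =>
  if (!rdd.isEmpty) {
    val cdrRDD = rdd.map(parseCallDataRecord)
    // Count call data records per grid square
    val countsRDD = cdrRDD.map(cdr => (cdr.squareId, 1)).reduceByKey(_ + _)
    // Publish "squareId count" messages to the output topic
    countsRDD
      .map { case (squareId, count) => s"$squareId $count" }
      .sendToKafka[StringSerializer](outTopic, producerConf)
  }
}
```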
Example output for the squareId and count is shown below:
To start receiving data, we must explicitly call start() on the StreamingContext, then call awaitTermination to wait for the streaming computation to finish.
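The last lines of the application typically look like this:

```scala
ssc.start()
// Block until the streaming computation is stopped or fails
ssc.awaitTermination()
```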
This tutorial will run on the MapR v5.2 Sandbox, which includes MapR Event Store and Spark 1.6.1. You can download the code, data, and instructions to run this example here: https://github.com/caroljmcdonald/mapr-streams-spark
In this blog post, you learned how to integrate Spark Streaming with MapR Event Store to consume and produce messages using the Kafka API.
This blog was originally published on September 6, 2016.