In this post, we will discuss how to use the MapR Control System (MCS) to monitor MRv1 jobs. We will also see how to manage and display jobs, history, and logs using the command line interface.
In part 1 of this post, we focused on how to work with built-in and custom counters, a vital part of monitoring Hadoop job progress. If you missed part 1, you may wish to refer to it before continuing. (Note: The material from this blog post is from one of our free on-demand training courses, Developing Hadoop Applications.)
Using the MCS to Monitor MRv1 Jobs
You can use the MCS to show granular job and task information in a cluster.
To display this level of information in the MCS, you must configure the metrics database. The MCS displays metrics only for MRv1 jobs; to view metrics for MRv2 (YARN) jobs, use the YARN ResourceManager web UI instead.
The first time you log in to the MCS, you’ll need to specify the URL for the metrics database (database-server:3306), username, password, and name of database (metrics).
The MCS job pane displays summary details about each job executed on the cluster.
You can dig deeper into a job by clicking its name or ID.
From the job screen, you can dig into the details of a task by clicking the task ID or primary attempt; the MCS then displays the details of that task.
You can dig further into the task details by clicking the task attempt ID.
You can display the log file associated with a given task by clicking its log link. The log file contains the task's standard out, standard error, and syslog messages.
Note that debug scripts are optional and must be configured to run.
Using the Command Line Interface
You can use the command line interface to manage and display jobs, history, and logs.
Use the hadoop job command to list running MapReduce jobs and get their status. Note that if you have both MRv1 and MRv2 in your environment, the hadoop command points to MRv2 by default, so in that case you are looking at MRv2 jobs.
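For example (the job ID below is hypothetical; yours will differ), listing jobs and checking the status of one job from the command line looks like this:

```shell
# List currently running MapReduce jobs (use "-list all" to include completed jobs)
hadoop job -list

# Show the map/reduce completion percentage and counters for a single job
# (job_201404071758_0001 is a hypothetical job ID)
hadoop job -status job_201404071758_0001
```

The -status output includes the same built-in and custom counters discussed in part 1 of this post.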
You can also use the JobTracker and TaskTracker web UIs to track the status of a launched job or to check the history of previously run jobs.
To view the history of a job, run the hadoop job -history command, passing the job's output directory.
To stop a job that has already been launched, use the hadoop job -kill command rather than the operating system's kill command; killing the client process directly does not stop the job running on the cluster.
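As a sketch (the output directory and job ID below are placeholders), the history and kill commands look like this:

```shell
# Display job history (status, counters, and task summaries);
# the argument is the job's output directory (hypothetical path)
hadoop job -history /user/alice/wordcount-output

# Kill a launched job cleanly (hypothetical job ID)
hadoop job -kill job_201404071758_0002
```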
Hadoop logs messages through Log4j by default. The available levels are trace, debug, info, warn, error, and fatal (in increasing order of severity). You configure the logging preferences for your Hadoop jobs in the commons-logging.properties file, which you place on the CLASSPATH.
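As a minimal sketch, a commons-logging.properties that routes logging through Log4j contains a single line naming the standard commons-logging Log4j adapter:

```
# commons-logging.properties (placed on the CLASSPATH)
org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
```

The levels themselves are then typically set in log4j.properties, for example with a per-class line such as log4j.logger.com.example.MyMapper=DEBUG (the class name here is hypothetical).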
You can write to the configured log system from your map and reduce code. The messages are syslog-style messages, so you specify the level of each message by calling the corresponding logger method (trace, debug, info, warn, error, or fatal).
You can configure a log level in your code so that only messages at that level and higher are reported to the logging subsystem. Under normal operating circumstances, you should not need more detail than “info” from your jobs, but when you are debugging your code, you can enable “debug”-level output from your tasks via the mapred.map.child.log.level and mapred.reduce.child.log.level properties.
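Assuming your driver uses ToolRunner/GenericOptionsParser so that -D options are picked up (the jar name, class name, and paths below are hypothetical), you could raise the task log level at submission time like this:

```shell
# Submit a job with debug-level logging in both map and reduce tasks
hadoop jar myjob.jar com.example.MyJob \
    -Dmapred.map.child.log.level=DEBUG \
    -Dmapred.reduce.child.log.level=DEBUG \
    /user/alice/input /user/alice/output
```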
In this post, we have seen how to use the MCS to view metrics for MRv1 jobs. We have also seen how to monitor and manage jobs and history using the Command Line Interface. Note that the hadoop command by default will point to MRv2 jobs (if both MRv1 and MRv2 are in your environment).