hRaven

hRaven collects runtime data and statistics from map reduce jobs running on Hadoop clusters and stores the collected job history in an easily queryable format. For jobs run through frameworks (Pig or Scalding/Cascading) that decompose a script or application into a DAG of map reduce jobs for actual execution, hRaven groups the job history data together by an application construct. This makes it easier to visualize the execution of all of an application's component jobs and supports more comprehensive trending and analysis over time.


Quick start

Clone the GitHub repo or download the latest release:

git clone git://github.com/twitter/hraven.git

If you cloned the repository, build the full tarball:

mvn clean package assembly:single

Extract the assembly tarball on a machine with HBase client access.
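For example, assuming a standard Maven build, the tarball is written under target/ and can be unpacked in place (the artifact name below is illustrative; the actual name depends on the hRaven version you built):

# Copy the assembly tarball to the target machine, then unpack it.
# The artifact name is a placeholder for whatever the build produced.
tar -xzf hraven-*.tar.gz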

Create the initial schema

hRaven requires several HBase tables in order to store data for map reduce jobs: job_history and job_history_task for job- and task-level data, job_history_raw for the raw job history and configuration file contents, job_history_process for data-loading bookkeeping, and related index tables.

The initial table schema can be created by running the create_schema.rb script:

hbase [--config /path/to/hbase/conf] shell bin/create_schema.rb
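To confirm that the schema was created, you can list the tables from the HBase shell; the hRaven tables should appear in the output:

echo "list" | hbase [--config /path/to/hbase/conf] shell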

Data Loading

Currently, hRaven loads data for completed map reduce jobs by reading and parsing the job history and job configuration files from HDFS. As a prerequisite, the Hadoop Job Tracker must be configured to archive job history files in HDFS by adding the following setting to your mapred-site.xml file:

  <property>
    <name>mapred.job.tracker.history.completed.location</name>
    <value>hdfs://<namenode>:8020/hadoop/mapred/history/done</value>
    <description>Store history and conf files for completed jobs in HDFS.
    </description>
  </property>

Once your Job Tracker is running with this setting in place, you can load data into hRaven with a series of map reduce jobs:

  1. JobFilePreprocessor - scans the HDFS job history archive location for newly completed jobs; writes the new filenames to a sequence file for processing in the next stage; records the sequence file name in a new row in the job_history_process table
  2. JobFileRawLoader - scans the processing table for new records from JobFilePreprocessor; reads the associated sequence files; writes the associated job history files for each sequence file entry into the HBase job_history_raw table
  3. JobFileProcessor - reads new records from the raw table; parses the stored job history contents into individual puts for the job_history, job_history_task, and related index tables

Each job has an associated shell script under the bin/ directory. See these scripts for more details on the job parameters.
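As a rough sketch, a periodic loading pass runs the three stages in pipeline order. The script names below are illustrative placeholders and required arguments are omitted; consult the actual scripts under bin/ for the exact names and parameters:

# Hypothetical ETL pass; run the stages in pipeline order.
bin/jobFilePreprocessor.sh    # 1. find newly completed job history files
bin/jobFileRawLoader.sh       # 2. load raw file contents into job_history_raw
bin/jobFileProcessor.sh       # 3. parse raw contents into the job_history tables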


REST API

Once data has been loaded into hRaven tables, a REST API provides access to job data for common query patterns. hRaven ships with a simple REST server, which can be started or stopped with the command:

./bin/hraven-daemon.sh (start|stop) rest

The following endpoints are currently supported:

Get Job

Path: /job/<cluster>[/jobId]
Returns: single job
Optional QS Params: n/a
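For example, where the host, port, cluster name, and job ID below are placeholders for your own deployment:

curl http://resthost:8080/job/mycluster/job_201306271100_0001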

Get Flow By JobId

Path: /jobFlow/<cluster>[/jobId]
Returns: the flow for the jobId
Optional QS Params - v1:

Get Flows

Path: /flow/<cluster>/<user>/<appId>[/version]
Returns: list of flows
Optional QS Params - v1:
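For example (again with placeholder host, port, cluster, user, and appId values):

curl http://resthost:8080/flow/mycluster/edgar/wordcount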

Get Flow Timeseries

Path: /flowStats/<cluster>/<user>/<app>
Returns: list of flows with only minimal stats
Optional QS params:

Note: This endpoint duplicates functionality from the "/flow/" endpoint and may be combined back into it in the future.

Get Tasks

Path: /tasks/<cluster>/[jobId]

Returns: task details for a single job
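For example (placeholder values as before):

curl http://resthost:8080/tasks/mycluster/job_201306271100_0001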

Get App Versions

Path: /appVersion/<cluster>/<user>/<app>
Returns: list of distinct app versions
Optional QS params:

Get New Jobs

Path: /newJobs/<cluster>/

Returns: list of apps with only minimal stats

Optional params:
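For example (placeholder host, port, and cluster):

curl http://resthost:8080/newJobs/mycluster/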

Project Resources

Bug tracker

Have a bug? Please create an issue on GitHub

Mailing list

Have a question? Ask on our mailing list!

hRaven Users:

hraven-user@googlegroups.com

hRaven Developers:

hraven-dev@googlegroups.com

Contributing to hRaven

For more details on how to contribute to hRaven, see the contribution guidelines in the repository.

Known Issues

  1. While hRaven stores the full data available from job history logs, the rolled-up statistics in the Flow class only represent data from successful task attempts. We plan to extend this so that the Flow class also reflects resources used by failed and killed task attempts.

Copyright and License

Copyright 2016 Twitter, Inc. and other contributors

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0