Access data stored in Amazon DynamoDB with Apache Hadoop, Apache Hive, and Apache Spark


You can use this connector to access data in Amazon DynamoDB using Apache Hadoop, Apache Hive, and Apache Spark in Amazon EMR. You can process data directly in DynamoDB using these frameworks, or join data in DynamoDB with data in Amazon S3, Amazon RDS, or other storage layers that can be accessed by Amazon EMR.

Currently, the connector supports the following data types:

Hive type           Default DynamoDB type  Alternate DynamoDB type(s)
string              string (S)
bigint or double    number (N)
binary              binary (B)
boolean             boolean (BOOL)
array               list (L)               number set (NS), string set (SS), binary set (BS)
map<string,string>  item (ITEM)            map (M)
map<string,?>       map (M)
struct              map (M)

The connector can serialize null values as DynamoDB null type (NULL).

Hive StorageHandler Implementation

For more information, see Hive Commands Examples for Exporting, Importing, and Querying Data in DynamoDB in the Amazon DynamoDB Developer Guide.

Hadoop InputFormat and OutputFormat Implementation

Implementations of the Apache Hadoop InputFormat and OutputFormat interfaces are included, which allow DynamoDB AttributeValues to be ingested directly by MapReduce jobs. For an example of how to use these classes, see Set Up a Hive Table to Run Hive Commands in the Amazon EMR Release Guide, as well as their usage in the Import/Export tool classes DynamoDBExport.java and DynamoDBImport.java.

Import/Export Tool

This simple tool, built on the InputFormat and OutputFormat implementations, provides an easy way to import data into and export data from DynamoDB.

Supported Versions

Currently the project builds against Hive 2.3.0, 1.2.1, and 1.0.0. Set these versions by using the hive2.version, hive1.2.version, and hive1.version properties in the root Maven pom.xml, respectively.

How to Build

After cloning, run mvn clean install.

Example: Hive StorageHandler

Syntax to create a table using the DynamoDBStorageHandler class:

CREATE EXTERNAL TABLE hive_tablename (
    hive_column1_name column1_datatype,
    hive_column2_name column2_datatype
)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
    "dynamodb.table.name" = "dynamodb_tablename",
    "dynamodb.column.mapping" = "hive_column1_name:dynamodb_attribute1_name,hive_column2_name:dynamodb_attribute2_name",
    "dynamodb.type.mapping" = "hive_column1_name:dynamodb_attribute1_datatype",
    "dynamodb.null.serialization" = "true"
);

dynamodb.type.mapping and dynamodb.null.serialization are optional parameters.
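As a concrete illustration, a table that overrides the default type for one column might look like the following sketch. The table name, attribute names, and the "SS" (string set) type code are illustrative assumptions, not taken from this document:

```sql
CREATE EXTERNAL TABLE hive_orders (
    order_id string,
    total double,
    tags array<string>
)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
    "dynamodb.table.name" = "Orders",
    "dynamodb.column.mapping" = "order_id:OrderId,total:Total,tags:Tags",
    -- store the Tags attribute as a string set (SS) instead of the default list (L)
    "dynamodb.type.mapping" = "tags:SS"
);
```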

Example: Input/Output Formats with Spark

Using the DynamoDBInputFormat and DynamoDBOutputFormat classes with spark-shell:

$ spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar
import org.apache.hadoop.io.Text
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.io.LongWritable

var jobConf = new JobConf(sc.hadoopConfiguration)
jobConf.set("dynamodb.input.tableName", "myDynamoDBTable")

jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")
jobConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")

var orders = sc.hadoopRDD(jobConf, classOf[DynamoDBInputFormat], classOf[Text], classOf[DynamoDBItemWritable])
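Continuing in the same spark-shell session, the resulting RDD can be consumed like any other. This is a sketch of what inspecting the scanned items might look like; the exact attribute contents depend on your table:

```scala
// Count the items scanned from the DynamoDB table
orders.count()

// Each value is a DynamoDBItemWritable wrapping a java.util.Map
// of attribute names to DynamoDB AttributeValues
orders.take(5).foreach { case (_, item) => println(item.getItem) }
```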


Example: Import/Export Tool

Export usage
java -cp target/emr-dynamodb-tools-4.2.0-SNAPSHOT.jar org.apache.hadoop.dynamodb.tools.DynamoDBExport /where/output/should/go my-dynamo-table-name
Import usage
java -cp target/emr-dynamodb-tools-4.2.0-SNAPSHOT.jar org.apache.hadoop.dynamodb.tools.DynamoDBImport /where/input/data/is my-dynamo-table-name

Additional options

export <path> <table-name> [<read-ratio>] [<total-segment-count>]

    read-ratio: maximum percent of the specified DynamoDB table's read capacity to use for the export

    total-segment-count: number of desired MapReduce splits to use for the export

import <path> <table-name> [<write-ratio>]

    write-ratio: maximum percent of the specified DynamoDB table's write capacity to use for the import
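For example, an export capped at 50% of the table's read capacity and split across 10 segments might look like this; the S3 path and table name are illustrative:

```shell
java -cp target/emr-dynamodb-tools-4.2.0-SNAPSHOT.jar \
    org.apache.hadoop.dynamodb.tools.DynamoDBExport \
    s3://my-bucket/dynamodb-backup my-dynamo-table-name 0.5 10
```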

Maven Dependency

To depend on specific components in your project, add one (or both) of the following to your pom.xml.

Hadoop InputFormat/OutputFormats & DynamoDBItemWritable


Hive SerDes & StorageHandler
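Similarly for the Hive components, assuming the artifact is published as emr-dynamodb-hive under the same group; check the repository for the current version number:

```xml
<dependency>
  <groupId>com.amazon.emr</groupId>
  <artifactId>emr-dynamodb-hive</artifactId>
  <version>4.2.0</version>
</dependency>
```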