Build and Push Jobs for Voldemort Read Only Stores


We have been using the Build and Push job at LinkedIn to create Voldemort Read-Only stores from data present in sequence files / Avro container files on HDFS. The Voldemort Build and Push job uses the fault tolerance and parallelism of Hadoop to build individual Voldemort node/partition-level data stores, which are then transferred to Voldemort for serving. A Hadoop job reads data from a source in HDFS, repartitions it on a per-node basis, and finally writes the data to individual Read-Only storage engines [1].

The VoldemortBuildAndPushJob will behave in the following way:

  1. Build an XML storeDef for your data (based on the key and value metadata in your JsonSequenceFile/Avro data on HDFS); an example storeDef is sketched after this list.
  2. Connect to push.cluster
  3. Get the storeDefs for all stores in the push.cluster
  4. Look through the storeDefs for a store with the same name as the store being pushed. If one is found, validate that the storeDef in the cluster matches the storeDef for your data; if it doesn't, fail. If no matching storeDef exists on the cluster, add your storeDef to the cluster.
  5. Build the Voldemort store in Hadoop
  6. Push the Voldemort store to push.cluster
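
For reference, a storeDef like the one generated in step 1 might look like the following sketch for a simple JSON store; the store name, serializer schemas, and replication settings here are illustrative and not taken from an actual push:

<store>
  <name>test</name>
  <persistence>read-only</persistence>
  <routing>client</routing>
  <replication-factor>1</replication-factor>
  <required-reads>1</required-reads>
  <required-writes>1</required-writes>
  <key-serializer>
    <type>json</type>
    <schema-info>"int32"</schema-info>
  </key-serializer>
  <value-serializer>
    <type>json</type>
    <schema-info>"string"</schema-info>
  </value-serializer>
</store>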

Azkaban Job

The Build and Push job is an Azkaban job. Azkaban is a workflow scheduler used at LinkedIn [2]. You provide Azkaban with a job file containing a set of properties, along with the required jars, and Azkaban executes the job.

You can download the tarball and untar it:

tar -xvf build-and-push.tar.gz

The extracted package contains separate directories for the following:

  1. All Azkaban .job and .properties files.
  2. All .jar files.
  3. All code, including Java as well as scripting languages and shell scripts.
  4. Any testing code.

PS: In case you are running the Voldemort server locally, ensure before you start the server that its configuration file has the following entry. This instructs the server to use that class while fetching files from Hadoop during the push phase.
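
Assuming the standard HDFS fetcher shipped with Voldemort, the entry in config/server.properties would be along the lines of:

file.fetcher.class=voldemort.store.readonly.fetcher.HdfsFetcher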

Pushing JSON Data - Job File

push.cluster=tcp://localhost:6666
push.store.description="test store"
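
For reference, a complete job file for pushing JSON data might look like the following sketch; the job type and class lines, HDFS paths, store name, and owner are assumptions for illustration rather than values taken from an actual job:

# Hypothetical Azkaban job file for a JSON / sequence-file push
type=java
job.class=voldemort.store.readonly.mr.azkaban.VoldemortBuildAndPushJob
push.cluster=tcp://localhost:6666
# HDFS directory containing the input data
build.input.path=/user/you/input-data
# Temporary HDFS directory where the store files are built
build.output.dir=/user/you/voldemort-build
push.store.name=test
push.store.owners=you@example.com
push.store.description="test store"
build.replication.factor=1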

Pushing AVRO Data - Job File

azkaban.should.proxy=true
user.to.proxy=anagpal
build.replication.factor=1
build.type.avro=true
avro.key.field=memberId
avro.value.field=localizedFirstNames
push.store.description="Testing avro build and push"

Notice the following properties:

  1. build.type.avro=true: This specifies that the input data is Avro.
  2. avro.key.field=memberId: This specifies the field to be used as the key.
  3. avro.value.field=localizedFirstNames: This specifies the field to be used as the value (an example input record schema is sketched after this list).
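
For context, these field names refer to top-level fields of the Avro records in the input files. A hypothetical Avro schema for such records could look like this (the record name and field types are assumptions for illustration):

{
  "type": "record",
  "name": "MemberName",
  "fields": [
    {"name": "memberId", "type": "long"},
    {"name": "localizedFirstNames", "type": "string"}
  ]
}

With avro.key.field=memberId and avro.value.field=localizedFirstNames, the job would build a store keyed on memberId with localizedFirstNames as the value.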

Running the Job


After compiling, copy the directory to your Hadoop gateway / local machine and execute the following command:

./run-job dist/build-and-push-1.00-all.jar -j dist/package/ -c dist/package/ --ignore-deps <job name>

You need to change the input/output paths along with the ugi name, store name, and server location. You can then query the Voldemort server to see the new store entries.
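
As a quick check, you can use the Voldemort client shell against the bootstrap URL from the job file; the store name and key below are assumptions matching the examples above:

./bin/voldemort-shell.sh test tcp://localhost:6666
> get 1

The key passed to get must match the store's key schema (for example, a plain integer for an "int32" key or a quoted string for a "string" key).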

File Format

We use a custom data and index format for the Read-Only store.

On every node you will find a node directory containing one or multiple data and index files with the following naming convention:
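
Assuming the current read-only store format, where each file is named by partition id, replica id, and chunk id, a node's store directory would look roughly like this (the store and version names are illustrative):

test/version-1/
  0_0_0.data    <- data file for partition 0, replica 0, chunk 0
  0_0_0.index   <- corresponding index file
  0_0_1.data
  0_0_1.index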



Chunk size issues:

Symptom (possible): Caused by: Job failed!
Check the number of mappers and/or reducers (limit = 10000). If they are over the limit, use the num.chunks parameter to reduce the number of chunks and hence the number of mappers and reducers.

Symptom (possible): Chunk overflow exception (a chunk exceeded the size limit).
Each chunk data file is capped at 2 GB, and hence you may want to increase num.chunks to break the data down into multiple chunks.
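
For example, adding a line like the following to the job file overrides the number of chunks; the value is purely illustrative and should be chosen so that each chunk's data file stays under the 2 GB cap:

num.chunks=512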


References

  1. Serving Large-scale Batch Computed Data with Project Voldemort
  2. Azkaban