brew install sbt
rm -rf .git
git init
sbt console

It could take a few minutes the first time. It will create the directories project/target and target (which are .gitignored). The result is a Scala 2.10.4 console with all the project dependencies loaded.
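Since the project dependencies are on the console classpath, a quick smoke test is to spin up a throwaway local SparkContext (an illustrative session; names and numbers are arbitrary):

```scala
// Illustrative console session: create a local SparkContext and exercise an RDD.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // RDD implicits (needed on Spark < 1.3)

val sc = new SparkContext(new SparkConf().setAppName("console").setMaster("local[2]"))
val distData = sc.parallelize(1 to 10)
println(distData.sum())  // 1 + 2 + ... + 10 = 55.0
sc.stop()
```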
Rename the Main and Schema classes to match your package name. Then open the Main class and click Run.
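For reference, a minimal Main might look like the following word-count sketch. This is hypothetical: the package and class names are taken from the spark-submit example below, and everything else is illustrative.

```scala
package io.basilic  // package assumed from the spark-submit example below

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // RDD implicits (needed on Spark < 1.3)

// Hypothetical minimal job (word count); only the object name mirrors
// the skeleton's Main class.
object Main {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MySparkJob")
    // Only set a local master when none was provided (e.g. when clicking Run
    // in the IDE); under spark-submit, --master is already present in the conf.
    if (!conf.contains("spark.master")) conf.setMaster("local[*]")
    val sc = new SparkContext(conf)
    try {
      val counts = sc.parallelize(Seq("a b", "b c"))
        .flatMap(_.split("\\s+"))
        .map(word => (word, 1))
        .reduceByKey(_ + _)
      counts.collect().sorted.foreach(println)
    } finally {
      sc.stop()
    }
  }
}
```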
We use sbt-assembly to bundle the application in a fat JAR, ready to be submitted to a Spark cluster. The JAR must not include the Spark components (spark-core, spark-sql, hadoop-client, etc.) or their dependencies, since the cluster already provides them.
sbt assembly
The generated JAR is in target/scala-2.10/{projectname}-assembly-{version}.jar.

TODO: try to remove the manual part of editing build.sbt.
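One common way to keep Spark out of the fat JAR is to mark those artifacts as "provided" in build.sbt, so sbt-assembly excludes them while they stay on the compile classpath for the IDE and sbt console. A sketch; the version numbers are assumptions and should match your cluster:

```scala
// build.sbt fragment (illustrative versions; match them to your cluster)
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % "1.1.0" % "provided",
  "org.apache.spark"  %% "spark-sql"     % "1.1.0" % "provided",
  "org.apache.hadoop" %  "hadoop-client" % "1.0.4" % "provided"
)
```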
Start a screen session so the job survives an SSH disconnect:

screen
~/spark/bin/spark-submit --master spark://ec2-w-x-y-z.eu-west-1.compute.amazonaws.com:7077 --class io.basilic.MySparkJob ~/MyProject-assembly-1.0.jar > /mnt/job.out 2> /mnt/job.err
tail -f /mnt/job.{out,err}
sbt dependency-graph
TODO: write this paragraph.
TODO: By default, spark-ec2 runs with hadoop-client 1.0.4.
One can also run the cluster on 2.0.x (an alpha version) with `--hadoop-major-version=2`.
@see http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client
spark-ec2 does not provide a way to use the stable hadoop-client 2.4.x; it would be nice to find one.
@see https://groups.google.com/d/msg/spark-users/pHaF01sPwBo/faHr-fEAFbYJ