Apache Spark for Java Developers

This is the code repository for Apache Spark for Java Developers, published by Packt. It contains all the supporting project files necessary to work through the book from start to finish.

About the Book

Apache Spark is one of the most talked-about frameworks in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is written in Scala, the Spark Java API exposes all of the features available in the Scala version to Java developers. This book will show you how to implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone.

The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by instructions on installing and configuring Spark, and a refresher on the Java concepts that will be useful when consuming Apache Spark's APIs. You will explore RDDs and their associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, machine learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages.

By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications.

Instructions and Navigation

All of the code is organized into folders, each named after its chapter number. For example, Chapter02.

Chapter-wise code files are placed inside the following folder:

\src\main\java\com\packt\sfjd

The code will look like the following:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Create a Spark context that runs locally on a single thread.
SparkConf conf = new SparkConf().setMaster("local").setAppName("Local File system Example");
JavaSparkContext jsc = new JavaSparkContext(conf);
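
If you are new to the RDD API that the book covers, here is a minimal, self-contained sketch along the same lines, showing one transformation (flatMap, which is lazy) and one action (count, which triggers the computation). The class name and the input file path are placeholders for illustration, not files from this repository:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalFileExample {
    public static void main(String[] args) {
        // Run Spark locally on a single thread; no cluster is required.
        SparkConf conf = new SparkConf().setMaster("local").setAppName("Local File system Example");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // textFile is lazy: it describes an RDD of lines without reading the file yet.
        JavaRDD<String> lines = jsc.textFile("src/main/resources/sample.txt"); // placeholder path

        // Transformation (lazy): split every line into individual words.
        JavaRDD<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());

        // Action (eager): count forces the file to be read and the pipeline to run.
        System.out.println("Word count: " + words.count());

        jsc.close();
    }
}

Transformations only describe the computation; nothing is executed until an action such as count is invoked.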

If you want to set up Spark on your local machine, then you can follow the instructions mentioned in Chapter 3, Let Us Spark.

Related Products