Apache Spark Machine Learning Cookbook

This is the code repository for Apache Spark Machine Learning Cookbook, published by Packt. It contains all the supporting project files necessary to work through the book from start to finish.

About the Book

Machine learning aims to extract knowledge from data, relying on fundamental concepts from computer science, statistics, probability, and optimization. Learning these algorithms enables a wide range of applications, from everyday tasks such as product recommendations and spam filtering to cutting-edge applications such as self-driving cars and personalized medicine. You will gain hands-on experience applying these principles using Apache Spark, a resilient cluster computing system well suited for large-scale machine learning tasks.

Instructions and Navigation

All of the code is organized into folders. Each folder starts with a number followed by the application name. For example, Chapter02.

We've also provided a video covering the setup required for the Chapter 1 code.

The code will look like the following:

object HelloWorld extends App {
  println("Hello World!")
}

Please use the details from the software list document. To execute the recipes in this book, you need a system running Windows 7 or above, or Mac OS X 10.x, with the following software installed:

- Apache Spark 2.x
- Oracle JDK SE 1.8.x
- JetBrains IntelliJ IDEA Community Edition 2016.2.x or later
- Scala plug-in for IntelliJ 2016.2.x
- JFreeChart 1.0.19
- breeze-core 0.12
- Cloud9 1.5.0 JAR
- Bliki-core 3.0.19
- hadoop-streaming 2.2.0
- JCommon 1.0.23
- Lucene-analyzers-common 6.0.0
- Lucene-core 6.0.0
- Spark-streaming-flume-assembly 2.0.0
- Spark-streaming-kafka-assembly 2.0.0

The hardware requirements for this software are mentioned in the software list provided with the code bundle of this book.
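If you manage dependencies with sbt rather than adding JARs to IntelliJ manually, a build file along these lines can pull in several of the libraries above. This is only a sketch: the Scala version and the exact artifact names/versions are assumptions here — check the software list document for the versions the recipes were tested against.

```scala
// build.sbt — illustrative sketch, not the book's official build file.
// Scala 2.11 is assumed as a Spark-2.x-compatible version.
name := "spark-ml-cookbook"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  // Core Spark plus the MLlib machine learning library
  "org.apache.spark" %% "spark-core"  % "2.0.0",
  "org.apache.spark" %% "spark-mllib" % "2.0.0",
  // Numerical computing (the software list refers to breeze-core 0.12;
  // the published Maven artifact is "breeze")
  "org.scalanlp"     %% "breeze"      % "0.12",
  // Charting libraries used by the plotting recipes
  "org.jfree"        %  "jfreechart"  % "1.0.19",
  "org.jfree"        %  "jcommon"     % "1.0.23"
)
```

With this in place, `sbt compile` resolves the dependencies, and the project can also be imported directly into IntelliJ via the Scala plug-in.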

Related Products