The Web is getting faster, and the data it delivers is getting bigger. How can you handle everything efficiently? This book introduces Spark, an open source cluster computing system that makes data analytics fast to run and fast to write. You’ll learn how to run programs faster, using primitives for in-memory cluster computing. With Spark, your job can load data into memory and query it repeatedly, much more quickly than with disk-based systems such as Hadoop MapReduce.
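To make that concrete, here is a minimal sketch, assuming the Spark shell (where a SparkContext named sc is predefined) and a hypothetical events.log input file: caching the data lets repeated queries run against memory instead of rereading from disk.

```scala
// In the Spark shell, sc (a SparkContext) is already defined.
val events = sc.textFile("events.log").cache()  // load once, keep in memory

// Both queries below reuse the cached data instead of rereading it from disk.
val errors   = events.filter(_.contains("ERROR")).count()
val warnings = events.filter(_.contains("WARN")).count()
println("errors=" + errors + ", warnings=" + warnings)
```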
Written by the developers of Spark, this book will have you up and running in no time. You’ll learn how to express MapReduce jobs with just a few simple lines of Spark code, instead of spending extra time and effort working with Hadoop’s raw Java API.
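For a sense of how compact that can be, here is the canonical word count, again as a sketch for the Spark shell with hypothetical input and output paths (roughly the program that takes dozens of lines against Hadoop’s raw Java API):

```scala
val lines  = sc.textFile("input.txt")               // hypothetical input path
val words  = lines.flatMap(line => line.split(" ")) // break each line into words
val pairs  = words.map(word => (word, 1))           // pair each word with a count of 1
val counts = pairs.reduceByKey(_ + _)               // sum the counts for each word
counts.saveAsTextFile("counts")                     // hypothetical output path
```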
- Quickly dive into Spark capabilities such as collect, count, reduce, and save (walked through in the first sketch after this list)
- Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
- Learn how to run interactive, iterative, and incremental analyses
- Integrate with Scala to manipulate distributed datasets as if they were local collections
- Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization (a partitioner sketch follows this list)
- Use other languages via pipe() to achieve the equivalent of Hadoop Streaming (see the pipe() sketch below)
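As a taste of those capabilities, a minimal sketch of the core actions in the Spark shell (the output path is hypothetical):

```scala
val nums = sc.parallelize(1 to 100)   // distribute a local Scala collection

nums.count()                          // 100: number of elements
nums.reduce(_ + _)                    // 5050: sum of all elements

val tens = nums.filter(_ % 10 == 0).collect()  // bring a small result back to the driver
nums.saveAsTextFile("nums-out")       // save the dataset (hypothetical path)
```

Note how parallelize turns an ordinary Scala range into a distributed dataset that is then manipulated with the same higher-order functions you would apply to a local collection.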
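User-defined partitioning might look like the following sketch; FirstLetterPartitioner is an invented illustration, not an API from the book:

```scala
import org.apache.spark.Partitioner

// Route each key by its first character so that records sharing a prefix
// land in the same partition (illustrative only).
class FirstLetterPartitioner(parts: Int) extends Partitioner {
  override def numPartitions: Int = parts
  override def getPartition(key: Any): Int = {
    val s = key.toString
    if (s.isEmpty) 0 else s.head.toInt % parts
  }
}

val pairs    = sc.parallelize(Seq(("apple", 1), ("avocado", 2), ("banana", 3)))
val byLetter = pairs.partitionBy(new FirstLetterPartitioner(4))
```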
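Finally, pipe() streams each partition of a dataset through an external command, line by line, which is how Spark approximates Hadoop Streaming. A sketch, with a placeholder command and input path:

```scala
val lines = sc.textFile("events.log")  // hypothetical input path
val upper = lines.pipe("tr a-z A-Z")   // stream every partition through a shell command
upper.take(5).foreach(println)         // peek at a few transformed lines
```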