Big Data Processing with Apache Spark

Apache Spark is an open-source big-data processing framework built around speed, ease of use, and sophisticated analytics. Spark has several advantages compared to other big-data and MapReduce technologies like Hadoop and Storm.

It provides a comprehensive, unified framework for managing big-data processing requirements across datasets that are diverse in nature (text data, graph data, etc.) and in origin (batch versus real-time streaming data).

Spark enables applications in HDFS clusters to run up to a hundred times faster in memory and ten times faster even when running on disk.

In this mini-book, the reader will learn about the Apache Spark framework and will develop Spark programs for use cases in big-data analysis.

The book covers all the libraries that are part of the Spark ecosystem: Spark Core, Spark SQL, Spark Streaming, Spark MLlib, and Spark GraphX.
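
To give a flavor of the kind of program developed in the book, below is a minimal word-count sketch using the Spark Core RDD API in Scala. The application name, the local master setting, and the input path are illustrative placeholders, not anything prescribed by the book.

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode for experimentation; point the master at a cluster URL in production.
    val conf = new SparkConf().setAppName("WordCountExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Build an RDD from a text file, split lines into words, and count occurrences.
    val counts = sc.textFile("input.txt")   // "input.txt" is a placeholder path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Print a small sample of the results.
    counts.take(10).foreach(println)

    sc.stop()
  }
}

The same pattern of building an RDD, transforming it, and triggering an action recurs throughout the Spark Core material; the other libraries (Spark SQL, Streaming, MLlib, GraphX) layer their own abstractions on top of it.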