Spark and Hadoop Are Friends, Not Foes

June was an exciting month for Spark. At the Hadoop Summit in San Jose, Spark was a frequent topic of conversation and the subject of many session presentations. On June 15, IBM also announced plans to make a massive investment in Spark-related technology.

That announcement helped kick off the Spark Summit in San Francisco, where one could see a growing number of engineers learning about Spark and a growing number of companies experimenting with and adopting it.

This virtuous cycle of investment in and adoption of Spark is rapidly driving the maturity and capabilities of an important technology, to the benefit of the entire big data community. However, the growing attention directed toward Spark has also given rise to a strange and stubborn misconception: that Spark is an alternative to Hadoop rather than a complement to it. The misconception shows up in headlines such as "Companies Move On From Big Data Technology Hadoop."

As a long-time big data practitioner and now the CEO of a company that provides big data as a service, I would like to offer some perspective and clarity on this misconception.

Spark and Hadoop work well together.

Hadoop is increasingly the enterprise platform of choice for big data. Spark is an in-memory processing solution that runs on top of Hadoop. The largest Hadoop users, including eBay and Yahoo!, run Spark inside their Hadoop clusters. Cloudera and Hortonworks ship Spark as part of their Hadoop distributions. And our own customers at Altiscale have been running Spark on Hadoop since the day we launched.

Positioning Spark against Hadoop is like saying your new electric car is so cool that you won't need electricity anymore. If anything, electric cars will drive demand for more electricity.

Why the confusion? Today's Hadoop consists of two main components. The first is a large-scale storage system called the Hadoop Distributed File System (HDFS), which stores data efficiently and at low cost, optimized for the volume, variety, and velocity of big data. The second is a computation engine called YARN, which can run massively parallel programs over the data stored in HDFS.

YARN can host any number of programming frameworks. The original such framework was MapReduce, invented at Google to help process its massive web crawls. Spark is another such framework, and so is a newer one called Tez. When people talk about Spark "versus" Hadoop, what they really mean is that programmers now prefer Spark to the older MapReduce framework.
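To make the relationship concrete, here is a minimal sketch of how the pieces fit together: a small PySpark job that YARN schedules on the cluster and that reads its input from, and writes its output back to, HDFS. The application name and HDFS paths are hypothetical, and the script would typically be launched with spark-submit --master yarn.

    # wordcount.py -- illustrative only; application name and paths are hypothetical.
    # Submit to the cluster with, e.g.:  spark-submit --master yarn wordcount.py
    from pyspark import SparkContext

    sc = SparkContext(appName="wordcount-on-yarn")

    # Read raw text from HDFS, the storage layer of the Hadoop cluster.
    lines = sc.textFile("hdfs:///data/crawl/*.txt")

    # Count words with Spark's RDD operations, running as a YARN application.
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Write the result back to HDFS.
    counts.saveAsTextFile("hdfs:///data/wordcounts")
    sc.stop()

The point of the sketch is simply that Spark does not bypass Hadoop: HDFS holds the data and YARN schedules the work, exactly as it would for a MapReduce job.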

However, MapReduce should not be equated with Hadoop. MapReduce is just one of the many ways a Hadoop cluster can process data, and Spark can be used in its place. Business analysts, meanwhile, avoid both of these frameworks, which are low-level toolkits meant for programmers; instead, they use high-level languages such as SQL that make Hadoop easier to work with.
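As a rough illustration of that higher-level path, the sketch below queries data on the same cluster with Spark SQL rather than a low-level framework; it assumes a reasonably recent Spark with the SparkSession API, and the dataset, column names, and HDFS path are hypothetical.

    # Illustrative sketch; dataset, columns, and paths are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-on-hadoop").getOrCreate()

    # Load data stored in HDFS and expose it to SQL as a temporary view.
    orders = spark.read.json("hdfs:///data/orders.json")
    orders.createOrReplaceTempView("orders")

    # An analyst can now work in SQL rather than writing MapReduce or RDD code.
    top_products = spark.sql("""
        SELECT product, SUM(amount) AS revenue
        FROM orders
        GROUP BY product
        ORDER BY revenue DESC
        LIMIT 10
    """)
    top_products.show()
    spark.stop()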

Over the past four years, Hadoop-based big data technology has seen a dizzying level of innovation. Hadoop has gone from batch SQL to interactive queries, and from a single framework (MapReduce) to multiple frameworks (e.g., MapReduce, Spark, and others).

HDFS has also seen enormous improvements in performance and security, and a wealth of tools, such as Datameer, H2O, and Tableau, has emerged on top of all of this. These tools greatly broaden the range of people who can use big data infrastructure, putting it within reach of data scientists and business users.

Spark is not going to replace Hadoop. Rather, Hadoop is the foundation that makes Spark possible. Expect adoption of both to keep growing as organizations seek the broadest and most robust platform for turning their data assets into actionable business insight.

Translation: 1thinc0 | Source: TechCrunch

End.
