How will the rise of stream processing affect the Big Data paradigm?

Even if you’ve never heard of stream processing before, it’s a safe guess you’ll run into the term in the coming weeks around Hadoop World, given how nicely it illustrates what YARN is all about and, of course, given the Hortonworks announcement endorsing Apache Storm (disclaimer – I love Hortonworks).

Now, before I start fantasizing about the potential popularity of various architectures in the years to come, some quick background…

“Traditional” big data streaming / messaging is just about moving events from place to place. For example, collecting log records from dozens or hundreds of servers in near real-time and reliably delivering each record to the relevant target(s) – HDFS files, HBase, a database or another application – with specific delivery semantics. It is a re-incarnation of message queuing concepts in a high-throughput, distributed fashion, extended both to actively collect events directly from various sources and to persist them on the other side into various targets.

It seems to me the most popular system is Apache Kafka (originally from LinkedIn), but there is also Apache Flume (originally from Cloudera) which is bundled in most Hadoop distributions.
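
To make “moving events from place to place” concrete, here is a minimal sketch of shipping a single log record to a Kafka topic using the Kafka 0.8 Java producer API. The broker addresses, topic name, key and log line are made up for illustration:

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class LogShipper {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical broker list and basic settings for a Kafka 0.8 cluster
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "1"); // wait for the partition leader to ack

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

            // One log record = one message on the "weblogs" topic; keying by server name
            // keeps records from the same server in the same partition (and thus in order).
            producer.send(new KeyedMessage<String, String>(
                "weblogs", "web-07", "2013-10-20 12:00:01 GET /index.html 200"));

            producer.close();
        }
    }

On the other side, a custom consumer process or a Flume agent reads the topic and reliably persists each record into HDFS, HBase or any other target.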

Stream processing, on the other hand, focuses on doing complex parsing, filtering, transformations and computations on the stream as the data flows – for example, maintaining up-to-date counters and aggregated statistics per user, or a rolling-window “top n” (continuous computation). This is as opposed to batch processing frameworks, like MapReduce and Tez, that are based on one-time bulk processing. In this space, Apache Storm (originally from Backtype / Twitter) is the main player, but LinkedIn’s release of Apache Samza last month might lead to some healthy competition. There are other initiatives as well – for example, Cloudera contributed a library for building real-time analytics on Flume, which might be good enough for simpler cases.
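
As a rough illustration of the “up-to-date counters per user” example, here is a hypothetical Storm bolt (written against the pre-Apache backtype.storm Java API of the Storm 0.8/0.9 era) that maintains a running event count per user and emits the updated value downstream. The class and field names are mine, and the state is kept in memory only, so a real topology would back it with a fault-tolerant store:

    import java.util.HashMap;
    import java.util.Map;

    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    // Hypothetical bolt: continuously maintains an event count per user.
    public class UserCountBolt extends BaseBasicBolt {
        // Per-task, in-memory counters; lost on worker restart (illustration only).
        private final Map<String, Long> counts = new HashMap<String, Long>();

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String user = tuple.getStringByField("user");
            long updated = counts.containsKey(user) ? counts.get(user) + 1 : 1L;
            counts.put(user, updated);
            // Emit the fresh counter so downstream bolts / stores always see the latest value.
            collector.emit(new Values(user, updated));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("user", "count"));
        }
    }

In a topology, you would wire such a bolt after the source with a fields grouping on “user” (so all events of a given user land in the same task), along the lines of builder.setBolt("counts", new UserCountBolt(), 4).fieldsGrouping("events", new Fields("user")).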

By the way, these two types of systems are complementary. Specifically, the combination of Kafka and Storm seems to be gaining traction – here is one success story from LivePerson.

Now, stay with me for a minute while I hype… What if you could write most common batch jobs as stream processing? What if you could give your business users data that is only seconds, or tens of seconds, behind the present? What if the answers to many common questions were always updated and ready? Could stream processing significantly de-emphasize or even replace batch processing on Hadoop in the future?

Now, that is of course mostly crazy talk today. In reality, stream processing may not be a good fit for many computation types, and it is rather early to fully understand its limits. Also, the functionality, optimizations, integrations, methodologies and best practices around stream processing will likely take a few years to really mature.

Having said that, it seems to me that at least in the database / ETL space, stream processing has great potential to evolve into a powerful alternative paradigm for how we design systems that add analytics to operational systems.

But instead of hyping any further, I’ll ask – how crazy is that? How far are we from such capabilities? What I already see looks promising in theory. Here are a couple of examples:

  • Storm already has its own official abstraction layer to simplify writing complex computations. It is called Trident, was contributed by Twitter and has been part of Storm since version 0.8. It provides higher-level micro-batch primitives that allow you, for example, to build parallel aggregations and joins (see the sketch right after this list). Interestingly, when Twitter released it a year ago, they specifically described it as similar to Pig or Cascading, but for stream processing.
    It seems to me that declarative, SQL-like continuous computation is only one step away from that.
  • Even more fascinating is Twitter’s Summingbird (open-sourced last month). It is a programmer-friendly abstraction for stream processing that can be executed either in real time (on Storm) or as a batch (using Scalding, translated to MapReduce, and in the future maybe to Tez). More interestingly, it was designed to allow a hybrid mode, transparently mixing historical data (batch) from Hadoop with last-minute updates from Storm.
    So, batch and streaming could co-exist and even transparently mix, even today.
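
To give a feel for those Trident primitives, here is the classic word-count style example, closely following the Trident tutorial: a continuously-updated count per word, with the same pattern applying to per-user counters or other aggregations. The MemoryMapState below is an in-memory test state; a real topology would plug in a persistent state implementation (Memcached, HBase and so on):

    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Values;
    import storm.trident.TridentState;
    import storm.trident.TridentTopology;
    import storm.trident.operation.BaseFunction;
    import storm.trident.operation.TridentCollector;
    import storm.trident.operation.builtin.Count;
    import storm.trident.testing.FixedBatchSpout;
    import storm.trident.testing.MemoryMapState;
    import storm.trident.tuple.TridentTuple;

    public class WordCountTopology {

        // Splits each sentence into individual words.
        public static class Split extends BaseFunction {
            @Override
            public void execute(TridentTuple tuple, TridentCollector collector) {
                for (String word : tuple.getString(0).split(" ")) {
                    collector.emit(new Values(word));
                }
            }
        }

        public static TridentTopology buildTopology() {
            // Test spout that cycles over a few hard-coded sentences.
            FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
                new Values("the cow jumped over the moon"),
                new Values("how many apples can you eat"));
            spout.setCycle(true);

            TridentTopology topology = new TridentTopology();
            // Micro-batch pipeline: split into words, group by word, keep a running count.
            TridentState wordCounts = topology
                .newStream("sentences", spout)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .groupBy(new Fields("word"))
                .persistentAggregate(new MemoryMapState.Factory(),
                                     new Count(), new Fields("count"));
            // wordCounts can later be queried on demand, e.g. via Trident's DRPC support.
            return topology;
        }
    }

Squint a little, and a declarative SQL-like layer on top of exactly these groupBy / aggregate primitives really does look like just one more step.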

Now, it of course remains to be seen how these technologies will evolve, but the current state of affairs is already not too bad. With the adoption of stream processing by mainstream Hadoop distributions (surely Hortonworks won’t be the last), usability, functionality and innovation will likely all accelerate. So, if you are starting to look into “next-gen” Big Data infrastructure and you require low latency from event to results, it might be a good time to start looking into stream processing.

Update 30/10/13: I can’t believe I hadn’t spent time reading about the Lambda Architecture before. Nathan Marz (who led Twitter’s Storm) came up with a coherent, end-to-end architecture for a data system that aims to make relational databases obsolete, combining a batch processing component with a stream processing component. Find some time to watch his fascinating presentation from Strange Loop 2012 – you won’t regret it.

The end of the classical MPP databases era

Over the years, enterprises realized that their many isolated systems generate a vast amount of data. What if they could put all that massive data into one centralized platform, correlate it and analyze it? Surely that would uncover a wealth of relevant, hidden business insights. Of course, those were the eighties (actually earlier), so this new platform was called the Data Warehouse. And it was good. So, over the years, it became very popular, and by the end of the nineties nearly every large organization had one – or a dozen – of them.

However, with all the goodness, there were some challenges. Data warehousing required a lot of specialized skills and tools. For example, regular databases couldn’t really support very large data warehouses, so beyond a certain scale it required specialized and expensive products – like those from Teradata. As a response to the quick adoption of the data warehouse, the “classical” MPP databases arrived in the market about a decade ago – first Netezza and later the rest of the gang (Vertica, Greenplum etc). They were all relatively low cost and built parallel processing on top of a shared-nothing cluster architecture. They were the one place where you could throw “petabytes of data” and analyze it over a large, mostly-commodity computing cluster.

So, what happened?

Simply, over time the market gained experience, and its requirements and expectations evolved. Some examples of challenges not solved by the existing MPP databases include:

  • The main challenge of a data warehouse is making sense of the data. The classical DW solution is to clean up and standardize the data as it is being loaded (ETL). This is required because the typical MPP database schema is rigid, and schema evolution is hard and painful (just like in all relational databases). That standard method is, however, complex and very time consuming, which makes it very hard for the DW to adapt to the constant and frequent changes in its source systems.
    Nowadays, a common requirement is to support dynamic schemas / schema-on-read, so that at the very least the frequent schema and data source changes won’t block the ingestion of data.
    This also supports the ongoing shift of power from DBAs to developers.
  • The classical MPP databases have relatively rigid HA and scalability – in real life, it is unlikely you could add a couple of nodes every week or every month, or survive a “once an hour” node failure rate. In other words, MPP databases provided scalability but not elasticity, plus HA that is focused on surviving a single node failure.
    Today elasticity is a requirement – inspired by the “failure is a common, normal event” mentality of Hadoop and of relying on lower-end commodity servers.
    The next step for this mentality will be to also replace the DR concept with native multi-data-center active-active support as part of the core architecture, which some of the NoSQL and NewSQL players are already advocating.
  • Speaking of rigidity, the existing players usually assumed isolation and uniformity – uniform physical nodes, a dedicated high-performance cluster interconnect and a relatively homogeneous workload.
    Nowadays, with cloud deployments or deployments on a shared Hadoop cluster (maybe virtualized in the future), those assumptions need to be revisited, and products need to take into account the never-ending fluctuation of CPU, I/O and network on shared resources.
  • Finally, MPP databases are proprietary, closed-source beasts. That doesn’t fit well with the collaborative open-source ecosystem led by the major web giants, which are the main drivers behind the current big data boom, innovation and fast pace. This represents a huge mentality shift – just think why MapR doesn’t own the Hadoop market even though it came out with a superior offering early on.

Speaking of mentality shift – one more thing to add. Most organizations that are starting to implement a solution increasingly ask for a single “big data” platform – one that must support complex SQL but also online / OLTP lookups, batch processing of unstructured data, free-text search, graph analysis and whatever else comes up next (future-proofing). For that, the industry is no doubt standardizing on Hadoop as the unifying infrastructure – and will converge around a YARN + HDFS core. Now, no one wants to start their big data journey with two big data platforms, so even if it sometimes could have made sense, it will be very hard to promote such a solution at the current industry buzz level. The only exception might be SAP HANA, which built a unique offering that merges OLTP and DW into a single MPP system – skipping the ETL headache altogether (while, of course, also supporting integration with Hadoop).

So, what will happen?

Well, looking at my crystal ball, it seems that popularity will continue to quickly shift to modern MPP databases on top of Hadoop. They will have flexible schemas (which sucks, but it’s the only way to keep up with the schema changes and the proliferation of data sources). They will be significantly less monolithic and will leverage the Hadoop ecosystem instead – for example, HDFS for storage management, YARN for resource management, HCatalog for metadata management, and maybe even support for various open on-disk file formats. So, they will be mostly trimmed down to parallel query optimization and runtime engines.

As for the old MPP players, they will sooner or later try to adapt to this world, but for most of them it is simply too late already. The only player that has committed to such a change is Greenplum (now: Pivotal), which started migrating its MPP database to HDFS (and renamed it HAWQ). Even for them, even though their first release is now out, handling the challenges mentioned above will not be easy. But it is still early in the game, and their lead (in my opinion) in query optimization over the rest of the “SQL on/in Hadoop” players may buy them enough time to evolve into a truly leading “next-gen” MPP-on-Hadoop player.