Highlights from writing a data sheet for JethroData

A couple of weeks ago, my friends at JethroData asked me to help them out by writing a technical data sheet.

Now, JethroData’s product is a SQL engine for Hadoop, currently in private beta, that has a very different design and architecture from the rest of the crowd. Specifically, it always indexes all the columns of your tables, and it uses HDFS as shared, remote storage instead of running locally on the data nodes. It has some very good reasons to work this way, but being radically different has made it harder to explain.

So, my task was to write a technical data sheet that would help technical people switch from “this is insane nonsense” to “that actually makes sense in these cases, but how do you handle this or that…”. In other words, tell a consistent technical story that makes sense, highlights their strengths and encourages readers to engage further.

I think of a data sheet, in general, as that piece of paper you get from a vendor at their booth and quickly browse through later while you are stuck in a boring presentation (or is it just me?). So, it has to be interesting and appealing. Also, due to the short format, it should stay very focused on the main value points, neither too high-level nor too low-level. Quite a challenge!

To overcome the challenge, I first needed deeper knowledge of the product. For that, I spent some time with the JethroData team, especially with Boaz Raufman, their co-founder and CTO. I grilled him with questions on architecture and functionality, which led to great discussions. While some functionality is of course still missing at this early phase, I was impressed to see that the core architecture is designed to handle quite a lot.

Then came the harder part – focus. There are many ways to use JethroData, its implementation has various advantages in different scenarios, and of course the implementation has many details, each with some logic behind it. It is impossible to cram it all into a two-page data sheet, even if most of it is very important. So, I had several lively discussions with Eli Singer, JethroData’s CEO, about everything from positioning to many specific strengths and supporting arguments and their relative importance.

At the end of the day, I think we got something pretty sweet. You can read it on the technology page of their web site. If you are attending Strata/Hadoop World, the JethroData team will be at the Startup Showcase on Monday and at booth #75 on Tuesday and Wednesday – bring questions and arguments 🙂

How will the rise of stream processing affect the Big Data paradigm?

Even if you’ve never heard about stream processing so far, it is an easy guess that you’ll run into this term in the coming weeks around Hadoop World, given how nicely it illustrates what YARN is all about, and of course given the Hortonworks announcement endorsing Apache Storm (disclaimer – I love Hortonworks).

Now, before I start fantasizing about the potential popularity of various architectures in the years to come, some quick background…

“Traditional” big data streaming / messaging is just about moving events from place to place. For example, collecting log records from dozens or hundreds of servers in near real-time and delivering each to the relevant target(s) – HDFS files, HBase, a database or another application – in a reliable fashion (with specific semantics). It is a reincarnation of message queuing concepts in a high-throughput, distributed fashion, extended both to actively collect events directly from various sources and to persist the events in various targets on the other side.

It seems to me the most popular system is Apache Kafka (originally from LinkedIn), but there is also Apache Flume (originally from Cloudera), which is bundled with most Hadoop distributions.
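
To make the collection side concrete, here is a minimal, hedged sketch of shipping a log line into Kafka using the 0.8-era Java producer API – the broker addresses, topic name and payload are made up for illustration:

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class LogShipper {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Brokers used to bootstrap topic metadata (hypothetical hosts)
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1"); // wait for the leader's ack, one of the delivery-semantics knobs

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));

        // A real agent would tail a log file in a loop; here we just send a single record.
        String hostname = "web-42";             // used as the partition key
        String logLine = "GET /index.html 200"; // the event payload
        producer.send(new KeyedMessage<String, String>("weblogs", hostname, logLine));

        producer.close();
    }
}
```

On the other side, a consumer process (or a dedicated tool such as LinkedIn’s Camus) would then drain the topic into HDFS, HBase or whatever target you need.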

Stream processing, on the other hand, focuses on doing complex parsing, filtering, transformations and computations on the data stream as the data flows – for example, maintaining up-to-date counters and aggregated statistics per user, or a rolling-window “top n” (continuous computation). This is opposed to batch processing frameworks, like MapReduce and Tez, which are based on one-time bulk processing. In this space, Apache Storm (originally from BackType / Twitter) is the main player, but as LinkedIn released Apache Samza last month, it might lead to some healthy competition. There are other initiatives – for example, Cloudera contributed a library for building real-time analytics on Flume, which might be good enough for simpler cases.
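
To give a feel of what continuous computation means in practice, here is a minimal, hypothetical Storm bolt (0.8-era API) that maintains an always-up-to-date event count per user – essentially a GROUP BY that never finishes. It assumes an upstream spout emitting tuples with a “user” field:

```java
import java.util.HashMap;
import java.util.Map;

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class UserCounterBolt extends BaseBasicBolt {
    // In-memory state: lost on worker failure, which is exactly why higher-level
    // abstractions (see Trident below) manage state for you.
    private final Map<String, Long> counts = new HashMap<String, Long>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String user = tuple.getStringByField("user");
        Long count = counts.get(user);
        count = (count == null) ? 1L : count + 1;
        counts.put(user, count);
        // Emit the updated counter downstream, e.g. to a bolt that writes to HBase.
        collector.emit(new Values(user, count));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("user", "count"));
    }
}
```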

By the way, these two types of systems are complementary. Specifically, the combination of Kafka and Storm seems to be gaining traction – here is one success story from LivePerson.

Now, stay with me for a minute while I hype… What if you could write most of the common batch jobs as stream processing? What if you could provide your business users with data that is only seconds or tens of seconds behind the present? What if you could just have the answers to many common questions always updated and ready? Could stream processing significantly de-emphasize or even replace batch processing on Hadoop in the future?

Now, that is of course mostly crazy talk today. In reality, stream processing may not be a good fit for many computation types, and it is rather early to fully understand its limits. Also, the functionality, optimizations, integrations, methodologies and best practices around stream processing will likely take a few years to really mature.

Having said that, it seems to me that at least in the database / ETL space, stream processing has great potential to evolve into a powerful alternative paradigm for designing systems that add analytics to operational systems.

But instead of hyping any further, I’ll ask – how crazy is that? How far are we from such capabilities? What I already see looks promising in theory. Here are a couple of examples:

  • Storm already has its own official abstraction layer to simplify writing complex computations. It is called Trident, was contributed by Twitter, and is part of Storm 0.8. It provides higher-level micro-batch primitives that allow you, for example, to build parallel aggregations and joins. Interestingly, when Twitter released it a year ago, they specifically described it as similar to Pig or Cascading, but for stream processing.
    It seems to me that declarative, SQL-like continuous computation is only one step away from that (see the sketch after this list).
  • Even more fascinating is Twitter’s Summingbird (open-sourced last month). It is a programmer-friendly abstraction of stream processing that can be executed either in real time (on Storm) or as batch (using Scalding, translated to MapReduce, and in the future maybe to Tez). More interestingly, it was designed to allow a hybrid mode, transparently mixing historical data (batch) from Hadoop with last-minute updates from Storm.
    So, batch and streaming can co-exist, and even transparently mix, today.
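
To give a taste of those Trident primitives, here is essentially the canonical Trident streaming word count, lightly trimmed – a continuously updated, grouped aggregation expressed declaratively. The spout is a test fixture and the state is in-memory; in real life you would plug in a spout reading from Kafka and a state backed by Memcached, HBase or similar:

```java
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.TridentTopology;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.operation.builtin.Count;
import storm.trident.testing.FixedBatchSpout;
import storm.trident.testing.MemoryMapState;
import storm.trident.tuple.TridentTuple;

public class StreamingWordCount {
    // Splits each sentence tuple into one tuple per word.
    public static class Split extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            for (String word : tuple.getString(0).split(" ")) {
                collector.emit(new Values(word));
            }
        }
    }

    public static TridentTopology buildTopology() {
        // Test spout that replays a few fixed sentences as micro-batches.
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3,
                new Values("the cow jumped over the moon"),
                new Values("four score and seven years ago"));
        spout.setCycle(true);

        TridentTopology topology = new TridentTopology();
        topology.newStream("sentences", spout)
                .each(new Fields("sentence"), new Split(), new Fields("word"))
                .groupBy(new Fields("word"))
                // Continuously maintained word counts, persisted in (here, in-memory) state.
                .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));
        return topology;
    }
}
```

From here, exposing that continuously updated state to low-latency queries (Trident also offers DRPC for that) is what makes the “answers always updated and ready” idea above start to feel realistic.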

Now, it remains to be seen of course how those technologies will evolve, but it seems that the current state of affairs is already not too bad. With the adoption of stream processing by mainstream Hadoop distributions (surely Hortonworks won’t be the last), usability, functionality and innovation will likely all accelerate. So, if you are starting to look into “next-gen” Big Data infrastructure and require low latency from event to results, it might be a good time to start looking into stream processing.

Update 30/10/13: I can’t believe I didn’t take the time to read about the Lambda Architecture before. Nathan Marz (who led Storm at Twitter) came up with a coherent, end-to-end architecture for a data system that aims to make relational databases obsolete, with both a batch processing component and a stream processing component. Find some time to watch his fascinating presentation from Strange Loop 2012, you won’t regret it.

Oracle In-Memory Option: the good, the bad, the ugly

Last week, Oracle announced the new Oracle Database In-Memory Option. While there is already a great discussion at Rob’s blog and further analysis at Curt’s, I thought I could add a bit.

The Good

If Oracle does deliver what Larry promises, it will be Oracle’s biggest advance for analytics since Exadata V2 in 2009, which introduced Hybrid Columnar Compression and Storage Indexes. I mean, especially for mixed workloads – getting 100x faster analytical queries while OLTP also gets faster… Quite a bold promise.

The Bad

I’ll be brief:

  • How much – in Oracle-speak, “option” means an extra-cost option. Both the pricing model (per core / per GB) and the actual price haven’t been announced yet. Since this is Oracle, both will be decided by Larry a month before GA – so the TCO analysis will have to wait…
  • When – it seems this is a very early pre-announcement of pre-beta code. Since it missed 12c Release 1 (which came out this July), I assume it will have to wait for 12c Release 2, so it will likely arrive at the end of next year. Why? I would assume that a feature this intrusive is too much for a patchset (unless they are desperate).

Andy Mendelsohn says In-Memory option is pre-beta and will be released “some time next year.” #oow13

— Doug Henschen (@DHenschen) September 23, 2013

  • Why now – Oracle is obviously playing catch-up… Even if we put HANA aside, it is lagging behind both DB2 (BLU) and SQL Server (especially 2014 – mostly updatable columnstore indexes, plus in-memory OLTP). Also, there might be other potential competitors rising in the analytics space (Impala, for starters?). So, this announcement is aimed at delaying customer attrition to faster platforms while Oracle scrambles to deliver something.

The Ugly

So, my DB will have 100x faster analytics and 2x faster OLTP? Just by flipping an option? Sounds much better (and cheaper) than buying Exadata… Or did Larry mean 100x faster than Exadata? Hard to tell.
For some use cases, there will surely be cannibalization – for example, apps (EBS, Siebel etc.) with up to a few TBs of hot data (which is almost every enterprise deployment) should seriously reconsider Exadata – replace smart scan with in-memory scan and get flash from their storage.

BTW – is this the reason why Oracle didn’t introduce a new Exadata model? Still thinking of how to squeeze in the RAM? That would be interesting to watch.

Update: Is Oracle suggesting In-Memory is 10x faster than Exadata? Check the pic:

 

Big Data product/project types – from proprietary to industry standard

I recently read Merv’s excellent post on proprietary vs. open-source Hadoop – suggesting that the term distribution-specific is likely more appropriate. It reminded me that I wanted to write about the subject – I had a somewhat more detailed classification in mind.
I’m focusing on Hadoop, but this is also very relevant to the broader Big Data ecosystem, including the NoSQL/NewSQL solutions etc.

But first, why bother? Well, the classification typically hints at a few things decision makers should know about that piece of software. For enterprises, which typically rely on outside help, some open questions might be:

  • Skills – How can we get confidence in using this software (people, training etc)? Can we get help from our partners / consultants / someone new if needed? In our region / country?
  • Support – Can we get commercial support? Global or local? Is there more than one vendor, so we can choose or switch if the relationship goes bad?
  • Finally, how much of a “safe bet” is it? Will it be a niche, abandoned code or mainstream in a few years? Will it integrate well with the ecosystem over time (e.g. YARN-friendly) or not? And what are the odds that someone will still be around supporting it in several years, when we still rely on it in production?

Of course, the state of a piece of software may change over time, but still, I think each of the following categories has some immediate implications regarding these questions. For example, proprietary (closed-source) products should come with decent paid support, optional professional services and training, average-plus documentation, and a decent chance of long-term support (though small startups are still a potential risk).

Having said that, here is the way I classify projects and products:

  • A proprietary offering of a specific Hadoop distribution – a closed-source offering bundled with a vendor’s Hadoop distribution. Some examples are MapR (with a proprietary implementation of HDFS, MapReduce etc.), Pivotal (with HAWQ as the central attraction of their distribution) and also Cloudera (with Cloudera Manager, although proprietary installation and management is almost universal).
    The proprietary bits are usually considered by the vendor to be a significant part of their secret sauce.
  • Proprietary offering on Hadoop – a closed-source offering that runs on Hadoop. Some are distribution-agnostic (a.k.a. Bring-Your-Own-Hadoop), for example Platfora, which runs on Cloudera, Hortonworks, MapR and Amazon. Some are certified on only a specific distribution (for now?), like Teradata-Hortonworks.
    Also, while some are Hadoop-only, many are an extension of an existing vendor offering to Hadoop, for example – all ETL vendors, some analytics vendors, some security vendors etc (the BI vendors typically just interact with Hadoop using SQL).
  • Open source projects, not bundled with a distribution – many great tools have been released as open source and have started building a community around them. While they are not yet bundled with an existing distribution, this doesn’t stop many from using them.
    Some of the projects complement existing functionality – like salesforce.com’s Phoenix, a smart, thin SQL layer over HBase, or Netflix’s Lipstick for great Pig workflow visualization. Other projects are alternatives to existing projects, like LinkedIn’s (now Apache) Kafka for distributed messaging as a popular alternative to Apache Flume, or Apache Accumulo as a security-enhanced alternative to Apache HBase. Yet others are just new types of stuff on Hadoop, like Twitter’s Storm for distributed, real-time stream computations, or any of the distributed graph processing alternatives (Apache Giraph and others).
  • Open source projects, bundled with a single distribution – a few of them are existing open source projects that got picked up by a distribution, like LinkedIn’s DataFu, which adds useful UDFs to Apache Pig and is now bundled with Cloudera. However, most of them are projects led by the distribution vendor, with varying communities around them, for example Hortonworks Ambari (Apache Incubator), Cloudera Impala and Cloudera Hue, plus various security initiatives all over the place (Cloudera Sentry, Hortonworks Knox, Intel Rhino etc.).
    As an interesting anecdote of the market dynamics – it seems Cloudera Hue is now also bundled in the Hortonworks 2.0 beta under “third-party components”, which might hint it could evolve into the last category:
  • Industry-standard open source projects – these are projects that you can expect to find on most leading distributions, meaning several vendors provide support.
    While some are obvious (Pig, Hive etc.), some are new or just arriving. For example, HCatalog was created by Hortonworks but is now included in Cloudera and MapR. I would guess that merging it into Hive helped smooth things out (it is likely easier to deliver it than to rip it out…). It will be interesting to see if Tez follows the same adoption path, as Hortonworks plans to eventually build support for it directly into Hive, Pig and Cascading. A similar fate likely awaits most of the security projects I mentioned, as well as ORC and Parquet support (ORC is already in Hive).

So there you have it. I believe knowing the type of a project can help an organization gauge the risk involved (vs. the benefit) and make an informed decision about its use.

Oh, and on a side note – “bundled in a distribution” and “the distribution provides decent support, bug fixes and enhancements” are of course two different things… That’s why the “brag about the number of your committers” argument resonates well with me, but that’s a different story.

The end of the classical MPP databases era

Over the years, enterprises realized that their many isolated systems generate a vast amount of data. What if they could put all that massive data into one centralized platform, correlate it and analyze it? Surely that would uncover a wealth of relevant, hidden business insights. Of course, those were the eighties (actually earlier), so this new platform was called the Data Warehouse. And it was good. So, over the years, it became very popular, and by the end of the nineties nearly every large organization had one – or a dozen – of them.

However, with all the goodness, there were some challenges. Data warehousing required a lot of specialized skills and tools. For example, regular databases couldn’t really support very large data warehouses, so beyond a certain scale it required specialized and expensive products – like those from Teradata. As a response to the quick adoption of the data warehouse, the “classical” MPP databases arrived on the market about a decade ago, first Netezza and later the rest of the gang (Vertica, Greenplum etc.). They were all relatively low cost, and built parallel processing on top of a shared-nothing cluster architecture. They were the one place where you could throw “petabytes of data” and analyze it over a large, mostly-commodity computing cluster.

So, what happened?

Simply, over time the market gained experience and its requirements and expectations evolved. Some examples of challenges not solved by the existing MPP databases include:

  • The main challenge of a data warehouse is making sense of the data. The classical DW solution is to clean up and standardize the data as it is being loaded (ETL). This is required because the regular MPP database schema is rigid, and schema evolution is hard and painful (just like in all relational databases). That standard method is, however, complex and very time consuming, which makes it very hard for the DW to adapt to the constant and frequent changes in its source systems.
    Nowadays, a common requirement is to support dynamic schemas / schema-on-read, so that at the very least frequent schema and data source changes won’t block the ingestion of data (see the sketch after this list).
    This also supports the ongoing shift of power from DBAs to developers.
  • The classical MPP databases have relatively rigid HA and scalability – it is unlikely that you could add a couple of nodes every week or month in real life, or survive a “once an hour” node failure rate, etc. In other words, MPP databases provided scalability but not elasticity, plus HA that is focused on surviving a single node failure.
    Today elasticity is a requirement – inspired by the “failure is a common, normal event” mentality of Hadoop and of relying on lower-end commodity servers.
    The next step for this mentality will be to also replace the DR concept with native multi-data-center active-active support as part of the core architecture, which some of the NoSQL and NewSQL players are advocating.
  • Speaking of rigidity, the existing players usually assumed isolation and uniformity – uniform physical nodes, a dedicated high-performance cluster interconnect and a relatively homogeneous workload.
    Nowadays, with cloud deployments or deployments on a shared Hadoop cluster (maybe virtualized in the future), those assumptions need to be revisited, and products need to take into account the never-ending fluctuation of CPU, I/O and network on shared resources.
  • Finally, MPP databases are proprietary, closed-source beasts. That doesn’t fit well with the collaborative open-source ecosystem led by the major web giants, which are the main drivers behind the current big data boom, innovation and fast pace. This represents a huge mentality shift – just think about why MapR doesn’t own the Hadoop market even though it came out with a superior offering early on.
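
To illustrate the schema-on-read point from the first bullet, here is a small, hypothetical sketch (using Jackson, with made-up field names): events are stored raw, structure is applied at read time, and a field that a source system added last week simply shows up as missing in older records instead of breaking the load:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SchemaOnReadExample {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Two raw records from the same source, before and after it added a "campaign" field.
        String oldRecord = "{\"user\":\"alice\",\"amount\":10.5}";
        String newRecord = "{\"user\":\"bob\",\"amount\":7.0,\"campaign\":\"summer-sale\"}";

        for (String raw : new String[]{oldRecord, newRecord}) {
            JsonNode event = mapper.readTree(raw);  // structure is interpreted at read time
            String user = event.path("user").asText();
            double amount = event.path("amount").asDouble();
            // A missing field doesn't block ingestion - it just falls back to a default.
            String campaign = event.path("campaign").asText("unknown");
            System.out.printf("%s spent %.2f (campaign=%s)%n", user, amount, campaign);
        }
    }
}
```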

Speaking of mentality shift – one more thing to add. Most organizations that start implementing a solution increasingly ask for a single “big data” platform – one that must support complex SQL, but also online/OLTP lookups, batch processing of unstructured data, free-text search, graph analysis and whatever else comes up next (future-proofing). For that, the industry is no doubt standardizing on Hadoop as the unifying infrastructure – and will converge around the YARN + HDFS infrastructure. Now, no one wants to start their big data journey with two big data platforms, so even if it sometimes could have made sense, it will be very hard to promote such a solution at the current industry buzz level. The only exception might be SAP HANA, which built a unique offering merging OLTP and DW into a single MPP system – skipping the ETL headache altogether (while of course also supporting integration with Hadoop).

So, what will happen?

Well, looking at my crystal ball, it seems that popularity will continue to quickly shift to modern MPP databases on top of Hadoop. They will have flexible schemas (which sucks, but it’s the only way to keep up with schema changes and data source proliferation). They will be significantly less monolithic and will leverage the Hadoop ecosystem instead – for example, HDFS for storage management, YARN for resource management, HCatalog for metadata management, maybe even support for various open on-disk file formats etc. So, they will mostly be trimmed down to parallel query optimization and runtime engines.

As for the old MPP players, they will sooner or later try to adapt to this world, but for most of them it is simply too late already. The only player that has committed to such a change is Greenplum (now Pivotal), which started migrating its MPP database to HDFS (and renamed it HAWQ). Even for them, although their first release is now out, handling the challenges mentioned above will not be easy. But it is still early in the game, and their lead (in my opinion) in query optimization over the rest of the “SQL on/in Hadoop” players may buy them enough time to evolve into a truly leading “next-gen” MPP-on-Hadoop player.

Designing for database multitenancy and Oracle 12c

I’ve been keeping a low profile over the last few months as I’ve been busy working at a bio-informatics startup, but I wanted to join the Oracle 12c release celebration by discussing database multitenancy and adding a reality check on Oracle’s new option.

So, you are building a database app and want it to be multitenant – having different users and companies trusting you and your shared infrastructure with their data. What are your design options?

Single Database, Shared Schema – I guess that’s the most common one (think of most webapps). There is a big security burden on the app here – a SQL injection vulnerability means game over. There are some positive side effects – it is easy to do internal reports across all users and there are no duplicated objects.

If you use Oracle Enterprise Edition you could potentially use VPD to enforce row-level security (isolation) – safeguarding against standard SQL injections. However, SQL injection bugs in your app could be escalated using database-level zero-days / unpatched vulnerabilities.
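
With or without VPD, the application side of the shared-schema design boils down to scoping every statement by tenant and using bind variables religiously. A minimal JDBC sketch, with hypothetical table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SharedSchemaDao {
    // Every query is filtered by the tenant_id of the authenticated customer.
    public void listInvoices(Connection conn, long tenantId, String status) throws SQLException {
        String sql = "SELECT invoice_id, amount FROM invoices WHERE tenant_id = ? AND status = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, tenantId);  // taken from the authenticated session, never from user input
            ps.setString(2, status);  // bind variable - no string concatenation
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("invoice_id") + " " + rs.getDouble("amount"));
                }
            }
        }
    }
}
```

One forgotten tenant_id predicate, or one piece of string-concatenated SQL, and the isolation is gone – which is exactly the “big security burden” mentioned above.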

Single Database, Multiple Schemas – here you can leverage standard database access control features to isolate different users. As a nice bonus, you can have an additional shared schema for your app metadata tables (so you can do regular joins of your metadata tables with customer tables). SQL injection vulnerabilities could still exploit database vulnerabilities for privilege escalation and access to other customers’ data.
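
In this variant, the app typically routes each session to the customer’s own schema instead of filtering rows – for example in Oracle (the schema name is illustrative and must come from an internal whitelist, never from user input):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaPerTenantRouter {
    // Point the session at the tenant's schema; unqualified table names then resolve there,
    // while the shared metadata schema can still be referenced explicitly for joins.
    public void switchToTenant(Connection conn, String tenantSchema) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("ALTER SESSION SET CURRENT_SCHEMA = " + tenantSchema);
        }
    }
}
```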

Single Instance, Multiple Databases – here you have multiple isolated databases sharing the same runtime (memory, background processes, logs etc.). Having worked with the Greenplum database, I got used to that, as this has been a basic feature of PostgreSQL, SQL Server and many others for a decade or two. In a sense, Oracle’s CDB/PDB is just belatedly catching up. Anyway, the databases are isolated (so no joins or access to another database are possible), but since they share the runtime, SQL injection vulnerabilities could still exploit database vulnerabilities for privilege escalation.

Different Instances, Shared O/S – if you want more secure isolation, the minimum is to have each customer on a separate instance of the database software, running under a different O/S user. To break the isolation, an attacker needs to find a SQL injection vulnerability, run an O/S command and exploit an unpatched O/S vulnerability to get root. I’m not sure this type of isolation is really being used, as it is quite cumbersome to manage the environment this way, and it is inferior to the last option:

Different Instances, Virtualized – each customer is in its own VM (cloned from some base image), fully isolated. The VMs can grow, shrink and be moved around to manage the changing workload and different customer sizes. This is the only option, in my opinion, that gives real database-level isolation between customers (you still have to worry about shared storage and network, although those could also be virtualized, and of course about middleware vulnerabilities).

So, what about 12c CDB/PDB?

Well, outside the multitenancy use case, the CDB/PDB combo should really help simplify Oracle patching (and it is included at no extra cost with Enterprise Edition for a single PDB per CDB). That by itself is a good reason to adopt it eventually, once it proves to be stable.

However, for multitenancy use cases, it is harder to justify. First, business-wise, this is an extra-cost option on top of Enterprise Edition. It is priced at $17.5K per license, so for a modern 16-core server (8 licenses, given Oracle’s 0.5 core factor for x86) it costs an additional $140K per node! Even with hypothetical 50% or even 75% discounts, it is very hard to find an economic justification given the alternatives (especially as it can’t guarantee real isolation). For me, this is where I lost interest… Now, there might be some theoretical savings in database management, but if you are really designing a new multitenancy operation, you will surely invest the effort to fully automate everything across all layers, and you could still use the free CDB/PDB features to help you get there.

On the technical side, there are some mysteries. The biggest one is that a single CDB can only hold about 252 PDBs. So, if I have 10,000 medium-sized customers, should I install 40 CDBs? After thinking about it for a while, it seems to me that maybe this feature was originally designed to throw together a relatively small number of heavy databases on a shared cluster, to ease resource contention in a RAC environment – not as a new paradigm for database multitenancy. But your guess is as good as mine.

So, for real multitenant isolation, I believe virtualization will make a lot more sense to most customers. It is a lightweight and proven technology, can be used across all layers of the stack (with minimal vendor lock-in), provides resource management and the best isolation, and is (mostly) free. For less isolated multitenancy supporting many thousands of users, separate schemas might do it, and if there will be millions of customers, a shared schema might be the only thing that works.

The only case where I think CDB/PDB could be useful for Oracle multitenancy might be if a customer is already heavily invested in a large RAC cluster that is not virtualized, and is now running into the challenges of putting many databases on that shared cluster. In other words, it solves an Exadata-specific challenge – if you are married to Exadata and can’t effectively consolidate, it might be worth paying for once you are convinced 12c is stable enough, so your Exadata environment won’t be under-utilized.

Bottom line – CDB/PDB is a great catch-up that will ease future patching, but I believe the fancier use cases around multitenancy don’t make economic or technical sense. What do you think?

Hortonworks brings out the heavy artillery

Just a few hours ago, Hortonworks published several new posts on their blog about stealth projects they have developed and (as can be expected from Hortonworks) are working to open-source and contribute to the community.

The hottest one seems to me to be Tez, which is being donated to the ASF as an incubator project. Its main aim is to accelerate the runtime of Hive, Pig and Cascading jobs. It defines the concept of a task, which comprises a set (a Directed Acyclic Graph) of MapReduce jobs and treats it as a whole, streaming data between them without spilling (spooling) back to HDFS after each MapReduce pair.

Why is that good?

  • The actual writing and reading of temporary results to and from HDFS (reducer output from the middle of the chain) can be very slow and resource-intensive.
  • This is a blocking operation – so the whole parallel processing may stall due to a few slow tasks in one MapReduce stage (speculative execution helps here, but it doesn’t eliminate the problem). The sketch after this list shows what such a chain looks like with plain MapReduce.
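
To see why that matters, here is a hedged sketch of a two-stage chain in plain MapReduce (mapper/reducer classes and paths are placeholders): the only way to connect the stages is a temporary HDFS directory that is fully written and then fully re-read, which is exactly the step Tez streams past:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoStageChain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path("/data/events");
        Path intermediate = new Path("/tmp/stage1-output"); // spilled to HDFS between the stages
        Path output = new Path("/data/report");

        // Stage 1: e.g. the "group by" part of a Hive/Pig query.
        Job stage1 = Job.getInstance(conf, "stage-1");
        // stage1.setMapperClass(...); stage1.setReducerClass(...); etc. omitted for brevity
        FileInputFormat.addInputPath(stage1, input);
        FileOutputFormat.setOutputPath(stage1, intermediate);
        stage1.waitForCompletion(true); // blocks until the slowest reducer is done

        // Stage 2: e.g. the "top n" part - it can only start after stage 1 fully completes,
        // and it must re-read from HDFS everything stage 1 just wrote there.
        Job stage2 = Job.getInstance(conf, "stage-2");
        FileInputFormat.addInputPath(stage2, intermediate);
        FileOutputFormat.setOutputPath(stage2, output);
        stage2.waitForCompletion(true);
    }
}
```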

It is very interesting that Hortonworks chose to publish this as a new infrastructure project. It seems they claim it to be a sort of next-gen, generalized MapReduce. I guess it started as a way to accelerate Hive, but it’s great that it will also help Pig (which also generates MapReduce chains) and Cascading (which helps developers easily create MapReduce chains).

This is part of Hortonworks’ effort to make Hive 100x faster – the “Stinger Initiative”.