Oracle In-Memory Option: the good, the bad, the ugly

Last week, Oracle announced the new Oracle Database In-Memory Option. While there is already a great discussion at Rob's blog and further analysis at Curt's, I thought I could add a bit.

The Good

If Oracle does deliver what Larry promises, it will be Oracle's biggest advance for analytics since Exadata V2 in 2009, which introduced Hybrid Columnar Compression and Storage Indexes. I mean, especially for mixed workloads – 100x faster analytical queries while OLTP gets faster too… quite a bold promise.
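
For readers who haven't worked with Exadata, here is what Hybrid Columnar Compression looks like from the client side – it is just a declarative attribute in the DDL, no application change. A minimal sketch in Python via cx_Oracle; the credentials, DSN and the sales_archive / sales tables are made up for illustration.

```python
# Minimal sketch: enabling Hybrid Columnar Compression on an Exadata-backed table.
# Credentials, DSN and tables are hypothetical. HCC is only supported on Exadata
# (and ZFS / Pillar) storage, and compression kicks in on direct-path loads such
# as the CTAS below.
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "exadata-scan/orcl")
cur = conn.cursor()

# Warehouse-style compression: rows are stored column-major inside compression units.
cur.execute("""
    CREATE TABLE sales_archive
    COMPRESS FOR QUERY HIGH
    AS SELECT * FROM sales WHERE sale_date < DATE '2012-01-01'
""")

conn.close()
```

Presumably the In-Memory option will be exposed the same way – a declarative table or tablespace attribute rather than an application change – but that is speculation until the syntax is public.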

The Bad

I’ll be brief:

  • How much – in Oracle-speak, "Option" means extra-cost option. Neither the pricing model (per core / per GB) nor the actual price has been announced. Since this is Oracle, both will be decided by Larry a month before GA – so the TCO analysis will have to wait…
  • When – it seems this is a very early pre-announcement of pre-beta code. Since it missed 12c Release 1 (which came out this July), I assume it will have to wait for 12c Release 2, so it will likely arrive at the end of next year. Why? A feature this intrusive seems like too much for a patchset (unless they are desperate).

Andy Mendelsohn says In-Memory option is pre-beta and will be released “some time next year.” #oow13

— Doug Henschen (@DHenschen) September 23, 2013

  • Why now – Oracle is obviously playing catch-up… Even if we put HANA aside, it is lagging behind both DB2 (BLU) and SQL Server (especially 2014 – mostly the updatable columnstore indexes, but also in-memory OLTP); see the sketch below this list. There might also be other potential competitors rising in the analytics space (Impala, for starters?). So this announcement is aimed at delaying customer attrition to faster platforms while Oracle scrambles to deliver something.
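
To make the catch-up point concrete, here is roughly what the competition already ships: in SQL Server 2014 an existing fact table can be switched to updatable column-major storage with a single DDL statement. A minimal sketch in Python via pyodbc; the DSN, credentials and the dbo.sales table are hypothetical.

```python
# Minimal sketch: SQL Server 2014's "flip a switch" column store.
# DSN, credentials and table name are hypothetical.
import pyodbc

conn = pyodbc.connect("DSN=sql2014;UID=app;PWD=secret", autocommit=True)
cur = conn.cursor()

# Convert the table's row store into an updatable clustered columnstore index
# (SQL Server 2014+). Any existing indexes on the table must be dropped first.
cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_sales ON dbo.sales")

# In-memory OLTP (Hekaton) is a separate 2014 feature: tables are declared with
#   WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
# and live in a memory-optimized filegroup.

conn.close()
```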

The Ugly

So, my DB will have 100x faster analytics and 2x faster OLTP? Just by flipping an option? Sounds much better (and cheaper) than buying Exadata… Or did Larry mean 100x faster than Exadata? Hard to tell.
For some use cases there will be cannibalization, for sure – for example, apps (EBS, Siebel, etc.) with up to a few TBs of hot data (which is almost every enterprise deployment) should seriously reconsider Exadata: replace smart scan with in-memory scan and get the flash from their storage array.

BTW – is this why Oracle didn't introduce a new Exadata model? Are they still thinking about how to squeeze in the RAM? That would be interesting to watch.

Update: Is Oracle suggesting In-Memory is 10x faster than Exadata? Check the pic:



