Oracle In-Memory Option: the good, the bad, the ugly

Last week, Oracle announced the new Oracle Database In-Memory Option. There is already a great discussion at Rob’s blog and further analysis at Curt’s, but I thought I could add a bit.

The Good

If Oracle does deliver what Larry promises, it will be Oracle’s biggest advance for analytics since Exadata V2 in 2009, which introduced Hybrid Columnar Compression and Storage Indexes. Especially for mixed workloads – getting 100x faster analytical queries while OLTP gets faster too… quite a bold promise.

The Bad

I’ll be brief:

  • How much – in Oracle-speak, Option means an extra-cost option. Neither the pricing model (per core / per GB) nor the actual price has been announced. Since this is Oracle, both will be decided by Larry a month before GA – so the TCO analysis will have to wait…
  • When – it seems this is a very early pre-announcement of pre-beta code. Since it missed 12c Release 1 (which came out this July), I assume it will have to wait for 12c Release 2, so it will likely arrive at the end of next year. Why? I would assume that a feature this intrusive is too much for a patchset (unless they are desperate).

Andy Mendelsohn says In-Memory option is pre-beta and will be released “some time next year.” #oow13

— Doug Henschen (@DHenschen) September 23, 2013

  • Why now – Oracle is obviously playing catch-up… Even if we put HANA aside, it is lagging behind both DB2 (BLU) and SQL Server (especially 2014 – mostly updatable columnstore indexes, plus in-memory OLTP). There might also be other potential competitors rising in the analytics space (Impala, for starters?). So, this announcement is aimed at delaying customer attrition to faster platforms while Oracle scrambles to deliver something.

The Ugly

So, my DB will have 100x faster analytics and 2x faster OLTP? Just by flipping an option? Sounds much better (and cheaper) than buying Exadata… Or did Larry mean 100x faster than Exadata? Hard to tell.
For some use cases, there will surely be cannibalization – for example, apps (EBS, Siebel, etc.) with up to a few TBs of hot data (which is almost every enterprise deployment) should seriously reconsider Exadata – replace smart scan with in-memory scan and get flash from their storage.

BTW – is this the reason why Oracle didn’t introduce a new Exadata model? Still thinking of how to squeeze in the RAM? That would be interesting to watch.

Update: Is Oracle suggesting In-Memory is 10x faster than Exadata? Check the pic:



Designing for database multitenancy and Oracle 12c

I’ve been keeping a low profile over the last few months, as I’ve been busy working at a bioinformatics startup, but I wanted to join the Oracle 12c release celebration by discussing database multitenancy and adding a reality check on Oracle’s new option.

So, you are building a database app and want it to be multitenant – having different users and companies trusting you and your shared infrastructure with their data. What are your design options?

Single Database, Shared Schema – I guess that’s the most common one (think of most webapps). There is a big security burden on the app here – a SQL injection vulnerability means game over. There are some positive side effects – it’s easy to do internal reports across all users, and there are no duplicated objects.

If you use Oracle Enterprise Edition you could potentially use VPD to enforce row-level security (isolation) – safeguarding against standard SQL injections. However, SQL injection bugs in your app could be escalated using database-level zero-days / unpatched vulnerabilities.

Single Database, Multiple Schemas – here you can leverage standard database access control features to isolate different users. As a nice bonus, you can have an additional shared schema for your app metadata tables (so you can do regular joins of your metadata tables with customer tables). SQL injection vulnerabilities could still exploit database vulnerabilities to escalate privileges and access other customers’ data.

Single Instance, Multiple Databases – here you have multiple isolated databases sharing the same runtime (memory, background processes, logs, etc.). Having worked with Greenplum database, I got used to that, as this has been a basic feature of PostgreSQL, SQL Server and many others for a decade or two. In a sense, Oracle’s CDB/PDB is catching up very late. Anyway, the databases are isolated (no join or access to another database is possible), but since they share the runtime, SQL injection vulnerabilities could still exploit database vulnerabilities for privilege escalation.

Different Instances, Shared O/S – if you want more secure isolation, the minimum is to have each customer on a separate instance of the database software, running under a different O/S user. To break the isolation, an attacker needs to find a SQL injection vulnerability, run an O/S command, and exploit an unpatched O/S vulnerability to get root. I’m not sure this type of isolation is really being used, as it is quite cumbersome to manage an environment this way, and it is inferior to the last option:

Different Instances, Virtualized – each customer is in its own VM (cloned from some base image), fully isolated. The VMs can grow, shrink and be moved around to manage the changing workload and different customer sizes. This is the only option, in my opinion, that gives real database-level isolation between customers (you still have to worry about shared storage and network, although those could also be virtualized, and of course about middleware vulnerabilities).

So, what about 12c CDB/PDB?

Well, outside the multitenancy use case, the CDB/PDB combo should really help simplify Oracle patching (and is included at no extra cost with Enterprise Edition for a single PDB per CDB). That by itself is a good reason to adopt it eventually, once it proves to be stable.

However, for multitenancy use cases, it is harder to justify. First, business-wise, this is an extra-cost option on top of Enterprise Edition. It is priced at $17.5K per license, so for a modern 16-core server (8 licenses) it costs an additional $140K per node! Even with hypothetical 50% or even 75% discounts, it is very hard to find economic justification for that given the alternatives (especially as it can’t guarantee real isolation). For me, this is where I lost interest… Now, there might be some theoretical savings in database management, but if you really design a new multitenancy operation, you will surely invest effort to fully automate everything across all layers, and you could still use the free CDB/PDB features to help you get there.
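For the curious, here is a back-of-the-envelope sketch of that licensing math. It assumes Oracle’s usual 0.5 core factor for x86 processors and the $17.5K list price mentioned above – both figures should be re-checked against the current price list before relying on them:

```python
import math

def multitenant_option_cost(cores, core_factor=0.5, price_per_license=17_500):
    """Rough list-price cost of an extra-cost database option for one server.

    Oracle prices Enterprise Edition options per processor license:
    physical cores times a per-architecture core factor (0.5 for most x86).
    Figures here are illustrative, not a quote.
    """
    licenses = math.ceil(cores * core_factor)
    return licenses, licenses * price_per_license

licenses, cost = multitenant_option_cost(16)
print(licenses, cost)  # 8 licenses, $140,000 per node at list price
```

Scale that across a multi-node RAC cluster and the numbers get painful quickly, which is the heart of the economic argument above.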

On the technical side, there are some mysteries. The biggest one is that a single CDB can only hold about 252 PDBs. So, if I have 10,000 medium-sized customers, should I install 40 CDBs? After thinking about it, it seems to me that maybe this feature was originally designed to throw together a relatively small number of heavy databases on a shared cluster, to ease resource contention in a RAC environment – not as a new paradigm for database multitenancy. But your guess is as good as mine.
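The capacity math behind that question is simple enough to sketch (252 is the 12c per-CDB limit discussed above; the 10,000-customer figure is my hypothetical):

```python
import math

MAX_PDBS_PER_CDB = 252  # 12c limit on pluggable databases per container database

def cdbs_needed(tenants, pdbs_per_cdb=MAX_PDBS_PER_CDB):
    """Minimum number of container databases to host one PDB per tenant."""
    return math.ceil(tenants / pdbs_per_cdb)

print(cdbs_needed(10_000))  # 40 separate CDBs for 10,000 single-PDB tenants
```

Forty container databases to manage, patch and monitor is hardly the single-pane-of-glass multitenancy the marketing suggests.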

So, for real multitenant isolation, I believe virtualization will make a lot more sense to most customers. It is a lightweight and proven technology, can be used across all the layers in the stack (with minimal vendor lock-in), provides resource management and the best isolation, and is (mostly) free. For less isolated multitenancy supporting many thousands of users, separate schemas might do it, and if there are millions of customers, a shared schema might be the only thing that works.

The only case where I think CDB/PDB could be useful for Oracle multitenancy is if a customer is already heavily invested in a large RAC cluster that is not virtualized, and is now running into the challenges of putting many databases on that shared cluster. In other words, it solves an Exadata-specific challenge – if you are married to Exadata and can’t effectively consolidate, it might be worth paying for once you are convinced 12c is stable enough, so your Exadata environment won’t be under-utilized.

Bottom line – CDB/PDB is a great catch-up that will ease future patching, but I believe the fancier use cases around multitenancy don’t make economic or technical sense. What do you think?

Exadata HCC – storage savings revisited

This is the third post in a series on Exadata HCC (Hybrid Columnar Compression) and the storage savings it brings to Oracle customers. In Part 1 I showed the real-world compression ratios of Oracle’s best DW references, in Part 2 I investigated why that is so, and in this part I’ll question the whole savings accounting.

So, we saw in Part 1 that most Exadata DW references don’t mention the storage savings of HCC, but those that do show an average 3.4x “storage savings”. Now let’s see what savings, if any, this brings. It all has to do with the compromises involved in giving up modern storage capabilities, and the price to pay when fulfilling these requirements with Exadata.

Let me start with a somewhat weaker but valid point. Modern storage software allows online storage software upgrades. A mission-critical database (or any database) shouldn’t be down or at risk when upgrading storage firmware or software. To achieve similar results with Exadata, the storage layer has to be configured as a three-way mirror (ASM high redundancy). This is actually Oracle’s best practice – see, for example, the bottom of page 5 of the Exadata MAA HA paper. This configuration uses significantly more storage than any other solution on the market. So while the total size of all the data files might be smaller on Exadata thanks to HCC, you still need a surprisingly large raw volume of storage to support it – or you’ll have to compromise and always use offline storage software upgrades, likely for the critical quarterly patch bundle, which could take at least an hour of downtime to apply, from what I read in the blogosphere.
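To make the raw-capacity point concrete, here is a sketch with illustrative numbers: a hypothetical 100 TB database (before compression) and the 3.4x average “storage savings” from Part 1. Both inputs are assumptions for the sake of the example:

```python
def raw_storage_needed(uncompressed_tb, compression_ratio, mirror_copies):
    """Raw disk needed: compress the data files, then multiply by the
    number of ASM mirror copies (2 = normal redundancy, 3 = high)."""
    return uncompressed_tb / compression_ratio * mirror_copies

db_tb = 100        # hypothetical database size before compression
hcc_ratio = 3.4    # average "storage savings" observed in Part 1

print(round(raw_storage_needed(db_tb, hcc_ratio, 2), 1))  # ~58.8 TB raw, normal redundancy
print(round(raw_storage_needed(db_tb, hcc_ratio, 3), 1))  # ~88.2 TB raw, high redundancy
```

In other words, following Oracle’s own high-redundancy best practice claws back a large chunk of whatever HCC saved.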

To make it a bit more confusing, the Exadata X3 datasheet only mentions (on page 6) the usable data capacity with a 2-way mirror (ASM normal redundancy), even though the recommended configuration is a 3-way mirror. I wonder if that has anything to do with providing less net storage?

Exadata HCC – where are the savings?

This is the second in a series of posts on Oracle’s Exadata Hybrid Columnar Compression (HCC), which is actually a great feature of the Oracle database. It is currently locked to Oracle-only storage (Exadata, ZFS appliance, etc.) and Oracle marketing pushes it hard, claiming it provides “10x” compression to Oracle customers.

In the previous post, I showed that Oracle’s best data warehouse reference customers get an average “storage saving” of at most 3.4x. In this post, I’ll investigate why they don’t get the promised “10x-15x savings” that Oracle marketing occasionally mentions. In the next post, I plan to explain why I use double quotes around storage savings – why even that number is highly inflated.

10x compared to what? I remember that at one of the recent Oracle OpenWorlds (was it this year?), Oracle had a marketing slide claiming Exadata provides 10x compression and non-Exadata provides 0x compression… (BTW – please post a link in the comments if you can share a copy.) But leaving the funny / sad ExaMath aside, do non-Exadata customers enjoy any compression?

Well, as most Oracle DBAs know, Oracle introduced in 9i Release 2 (around 2002) a feature called Data Segment Compression, which was renamed in 10g to Table Compression, in 11g Release 1 to the catchy “compress for direct_load operations”, and as of 11g Release 2 is called Basic Compression. This feature is included in Enterprise Edition at no extra cost. It provides dictionary-based compression at the level of Oracle’s table data blocks. It is best suited for data warehousing, as the compression kicks in only during bulk loads or table reorganization – updates and small inserts leave data uncompressed.

What is the expected (and real-world) average compression ratio of tables using this feature? The consensus is around 3x. Yes, for data warehousing on non-Exadata over the last decade, Oracle has provided 3x compression with Enterprise Edition!
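So the interesting number is not HCC’s absolute ratio, but its advantage over what Enterprise Edition customers already get for free. A quick sketch, using the 3.4x HCC average from the previous post and the ~3x basic-compression consensus (both are rough averages, not guarantees):

```python
def relative_advantage(hcc_ratio, baseline_ratio):
    """How much smaller HCC makes the data compared to a baseline
    compression the customer could have used anyway."""
    return hcc_ratio / baseline_ratio

print(round(relative_advantage(3.4, 3.0), 2))  # ~1.13x - far from the marketed 10x
```

A ~13% improvement over free, decade-old Basic Compression is a rather different headline than “10x”.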

Exadata HCC – Real-World Storage Savings

This is the first in a series of posts on Oracle’s Exadata Hybrid Columnar Compression (HCC), which is actually a great feature of the Oracle database. It is currently locked to Oracle-only storage (Exadata, ZFS appliance, etc.) and Oracle marketing pushes it hard, claiming it provides “10x” compression to Oracle customers.

Oracle makes bold claims regarding HCC all over. For example, in this whitepaper from November 2012, the first paragraph claims “average storage savings can range from 10x to 15x” and the second paragraph illustrates it with a 100TB database going down to 10TB, for 90TB of storage savings. After that, the paper switches to a real technical discussion of HCC.

So, what does HCC “10x” compression look like in real life? How much storage savings will Oracle customers see if they move to Exadata and start using HCC?
It is very hard to find unbiased analysis. So, to find out and start a hype-free discussion, I decided to get some real-world data from Oracle customers. Here are my findings.

To start, I needed access to an undisputed data source. Luckily, one can be found on Oracle’s web site – an impressive 76-page Exadata customer reference booklet from September 2012, containing a sample of 33 customer stories. Obviously, it is not very “representative” – reference customers tend to be more successful than average – but I think there is still a lot of value in analyzing it. Hey, maybe we’ll find that their storage savings are even larger than 10x-15x, who knows!

So, once I had the data…