Following my previous post on the various meanings of a customer concurrency requirement, I will try to explain why database (SQL) concurrency is usually the wrong target to set.
My main point is that database SQL concurrency is the result of both the SQL workload’s throughput (like “queries per hour”) and the database-specific latency (SQL response time). For example, I’ll demonstrate how, for a fixed workload, making a query go faster (tuning it) automatically reduces the database concurrency. This is a generic point: it is not specific to any particular database technology, and it applies beyond the database domain.
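In queueing-theory terms, this relationship is essentially Little’s Law. As a quick sketch of the idea (my own notation, not from the post):

$$ \text{average concurrency} = \text{arrival rate} \times \text{average response time} $$

where the arrival rate is how many queries are submitted per unit of time, and the response time is how long each query runs on average.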
Here is a simple example. Let us assume that a database is required to support 1800 similar queries per hour, arriving randomly. That means, on average, one new query every two seconds. Let us also assume that, for now, each query runs on average 60 seconds, regardless of the database load (just for simplicity’s sake). So, given this combination of query throughput and latency, the database will have about 30 concurrent SQLs running on average.
Continuing the example, let’s assume we somehow tune the database so that this type of SQL runs faster, and now each query execution takes only 10 seconds. If the workload is still 1800 SQLs per hour, suddenly we will only have about five concurrent SQLs! If we further tune the SQLs to execute in half a second, we will see less than one concurrent SQL on average, since each SQL finishes well before the next one is likely to arrive (a new query arrives roughly every two seconds, while each one runs for only half a second).
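To make the arithmetic explicit, here is a minimal sketch (in Python, purely illustrative; the numbers simply replay the example above):

```python
# Little's Law: average concurrency = arrival rate * average latency
queries_per_hour = 1800
arrival_rate = queries_per_hour / 3600.0   # queries per second (~0.5/sec)

# Average query run time in seconds, for each of the three scenarios
for latency_sec in (60, 10, 0.5):
    concurrency = arrival_rate * latency_sec
    print(f"latency {latency_sec:>5}s -> ~{concurrency:g} concurrent SQLs")
```

Running it prints roughly 30, 5, and 0.25 concurrent SQLs, matching the three scenarios above.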
What this thought exercise nicely demonstrates is that SQL concurrency is a derived metric