Separating compute and storage

What it means, and why it’s important for databases

Adam Storm
Jan 14, 2019

If you follow trends in the database industry, you will have noticed that the separation of compute and storage has become a hot topic over the last several years. Separating compute and storage gives database software increased availability and scalability, and has the potential to dramatically reduce costs. These benefits are driving the movement’s momentum. Before diving into the benefits, however, it’s best to first level-set on what is meant by separating compute and storage.

“Separating compute and storage” involves designing database systems such that all persistent data is stored on remote, network-attached storage. In this architecture, local storage is used only for transient data, which can always be rebuilt (if necessary) from the data on persistent storage.
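As a minimal sketch of that definition (in Python, with made-up RemoteStore and ComputeNode classes that don’t correspond to any real product’s API), durable writes always land on the remote tier, while anything held locally is just a cache that can be rebuilt:

```python
# Hypothetical sketch of the architecture described above; not a real API.

class RemoteStore:
    """Stands in for network-attached storage (e.g. an object store)."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, value: bytes) -> None:
        self._objects[key] = value          # the durable copy lives here

    def get(self, key: str) -> bytes:
        return self._objects[key]


class ComputeNode:
    """A compute node whose only local state is a transient cache."""
    def __init__(self, remote: RemoteStore):
        self.remote = remote
        self.cache = {}                     # transient, safe to lose

    def write(self, key: str, value: bytes) -> None:
        self.remote.put(key, value)         # persist remotely first
        self.cache[key] = value             # then cache locally

    def read(self, key: str) -> bytes:
        if key not in self.cache:           # cache miss: rebuild from remote
            self.cache[key] = self.remote.get(key)
        return self.cache[key]

# If a ComputeNode dies, only its cache is lost; a replacement node pointed
# at the same RemoteStore can serve every key.
```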

Historically, database systems have principally been architected with “tightly coupled compute and storage”, where there is a dependence on locally attached storage for some, if not all, of the persistent data. This tight coupling is clearly the opposite of separating compute and storage. Why were systems designed this way? Primarily for performance reasons.

Database systems were originally designed in the 1950s for transaction processing. One of the best examples of such a system is SABRE, which was designed to automate airline bookings, launched in 1960, and is still in use by the travel industry today, processing more than 57,000 transactions every second. If you’ve booked travel at some point in your life, that booking more than likely went through the SABRE system.

In transactional systems, latency is king, and ensuring that transactions complete in less than a second (often in just a few milliseconds) is key. As a result, these original database systems had to keep storage as close as possible to the compute so that persisting each transaction (something required to ensure the ACID properties) could complete as quickly as possible. If storage had been network attached, especially at the network speeds of the 1960s, latency would have suffered.

Fast forward a few decades and a lot has changed. First, databases are no longer used just for transaction processing. Once it was observed that transactional data was a treasure trove of business information, companies built data warehouses, which stored processed transactional data, often aggregated across many systems, and supplemented it with additional data. With these data warehouses, businesses began running complex analytical queries to determine how to drive smarter business practices. To this day, analytical queries are typically long-running (seconds, minutes, hours, sometimes even days), so storage latency is less of a concern.

Additionally, the dynamics of hardware changed. While transistor counts have continued to grow in line with Moore’s law, single-threaded performance has leveled off, and memory bandwidth has not kept up with network and storage bandwidth. Couple that with the fact that human perception of transaction completion time has physical limits, and it became possible in the 90s to build transactional systems that were predominantly dependent on network-attached storage, albeit only for some workloads, and often only with specialized hardware. More recently these constraints have been lifted (largely due to faster networks), and shared-storage transactional systems are becoming mainstream.

With the reduced requirements for tightly coupled storage, database software architects observed that there were additional benefits to separating compute and storage:

  1. Scalability: When all persistent data is stored on locally attached disks and a cluster of database nodes is required to meet performance requirements, databases typically partition the data such that each node “owns” some portion of it (typically using hash partitioning). This ownership model makes scaling the database difficult. When database nodes are added, the data in the cluster must be “rebalanced” so that the ratio of data to compute is equal across the nodes of the cluster. Rebalancing requires physically shipping data between nodes, and if it isn’t performed, the newly added hardware owns no data and therefore cannot be leveraged. Conversely, when all persistent storage is network attached, systems are easier to scale. If the cluster must increase its compute capacity, a new node can be added and the data can be “remastered”, a process in which node ownership is modified without moving any data (since all nodes can see all of the data). As a result, scaling out a cluster with separated compute and storage is typically very fast (a few seconds), compared with hours or days for systems where compute and storage are tightly coupled (see the sketch after this list).
  2. Availability: With all persistent data stored remotely, the failure of a compute node loses no data, and doesn’t even make data temporarily unavailable. Because the remaining nodes in the cluster can access all of the data (through the network), recovery from a node failure can be performed quickly by one (or more) of the surviving compute nodes.
  3. Cost: The rise of cloud computing dramatically changed the pricing dynamics of local versus remote storage. In the cloud, remote storage is dramatically cheaper than local storage (object storage, for example, is typically 5x less expensive than local or block storage) and is effectively limitless (at least from the end user’s perspective). Furthermore, cloud data centers have robust networking infrastructure that delivers low-latency, high-bandwidth access to remote storage.
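To make the scalability point concrete, here is a toy Python sketch contrasting the two approaches. The node names, partition count, and helper functions are invented for illustration and don’t reflect any vendor’s implementation; the point is that with tightly coupled storage, adding a node means physically shipping rows, while with shared storage it only rewrites the partition-ownership map.

```python
# Toy sketch: scaling a hash-partitioned cluster from 2 to 3 nodes.
# All names (node-1, NUM_PARTITIONS, etc.) are hypothetical.
import hashlib

NUM_PARTITIONS = 16  # fixed number of logical partitions

def partition_of(key: str) -> int:
    """Hash-partition a key into one of NUM_PARTITIONS buckets."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PARTITIONS

# --- Tightly coupled compute and storage ---------------------------------
# Each node keeps the rows of its partitions on local disk, so adding a
# node requires "rebalancing": physically copying rows between nodes.
local_data = {
    "node-1": {p: [] for p in range(0, 8)},    # rows live on node-1's disks
    "node-2": {p: [] for p in range(8, 16)},   # rows live on node-2's disks
}

def rebalance_to_three_nodes(data):
    """Redistribute partitions AND their rows across three nodes."""
    new_layout = {"node-1": {}, "node-2": {}, "node-3": {}}
    for parts in data.values():
        for p, rows in parts.items():
            new_layout[f"node-{p % 3 + 1}"][p] = rows  # rows shipped over the network
    return new_layout

# --- Separated compute and storage ----------------------------------------
# Rows live on shared remote storage; nodes hold only an ownership map, so
# adding a node is a "remaster": update the map, move no data.
ownership = {p: f"node-{p % 2 + 1}" for p in range(NUM_PARTITIONS)}

def remaster_to_three_nodes(owners):
    """Reassign partition ownership across three nodes; no rows move."""
    return {p: f"node-{p % 3 + 1}" for p in owners}

if __name__ == "__main__":
    key = "booking:42"
    print("owner before:", ownership[partition_of(key)])
    ownership = remaster_to_three_nodes(ownership)
    print("owner after: ", ownership[partition_of(key)])
```

In the first model the cost of scaling grows with the volume of data that has to move; in the second it is essentially a metadata update, which is why elapsed times drop from hours or days to seconds. The same ownership map is also what lets a surviving node quickly take over a failed node’s partitions in the availability case.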

Unfortunately, software advancements often lag hardware advancements, and even though the technical barriers to separating compute and storage have vanished, retrofitting existing database architectures to leverage remote storage is difficult and time consuming. That being said, several database vendors have devised novel architectures over the last decade which leverage separated compute and storage (IBM Db2 pureScale and Db2 Warehouse on Cloud, AWS Aurora, Snowflake, etc.). I’m of the opinion that in the coming years it will be commonplace for new database systems to embrace the separation of compute and storage, and that the migration of existing systems to this paradigm will accelerate.


Adam Storm

Adam has been developing and designing complex software systems for the last two decades. He is also a son, brother, husband and father.