Not All Clusters Are the Same

There are a wide variety of techniques out there for clustering storage appliances. The question is: what problem are you really trying to solve? If you look at Isilon’s clustered storage appliances (where I was chief architect), you’ll see that the clustering is done at the block level. Block addresses are generalized into a generic (node, drive, block_num) tuple, and the on-disk data structures simply use that generalized address everywhere a block address would normally be used (plus a bunch of details I’m glossing over). The communication on the back end of an Isilon cluster is block reads and writes, transaction messages, and lock messages (plus some other miscellaneous bits). Each read or write operation is controlled by the initiator, and the smallest granularity of locking is the block. Cache lives both at the disk and at the initiator. If you were to put it into an architecture category, you’d call it an InfiniBand SAN (Storage Area Network). This is perfect for a file system. This architecture lends itself to zero-copy, extremely high-performance access for streaming files, very low CPU utilization on the nodes holding the disks (which allows the addition of accelerator nodes for high-speed Fibre Channel and 10GbE), excellent scalability, and extremely low latency for operations on cached data.
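To make the addressing idea concrete, here is a minimal sketch of what a generalized block address and a block list that uses it might look like. The names (`BlockAddr`, `read_file`, the in-memory `storage` dict) are hypothetical illustrations, not Isilon’s actual data structures; the point is only that once every block reference is a (node, drive, block_num) tuple, a single file’s blocks can live anywhere in the cluster.

```python
from dataclasses import dataclass

# Hypothetical generalized block address: wherever an on-disk structure
# would normally store a local block number, it stores one of these.
@dataclass(frozen=True)
class BlockAddr:
    node: int       # which cluster node holds the drive
    drive: int      # which drive on that node
    block_num: int  # block offset on that drive

# In-memory stand-in for cluster storage: storage[node][drive][block_num]
storage = {
    0: {0: {7: b"hello "}},
    1: {2: {3: b"world"}},
}

# A file's block list holds generalized addresses, so its blocks
# can span nodes; reading just dispatches on the address.
file_blocks = [BlockAddr(0, 0, 7), BlockAddr(1, 2, 3)]

def read_file(blocks):
    return b"".join(storage[a.node][a.drive][a.block_num] for a in blocks)
```

In the real system, a read whose address points at a remote node becomes a block-read message on the back-end network rather than a dictionary lookup, but the addressing scheme is the same.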

However, it doesn’t support high read/write concurrency on a single file. Imagine running an OLTP database with a heavy write load on an architecture like that. With locking done at the block level, you can never get high concurrency for items smaller than a block: every node that wants to write to a block must take an exclusive lock on it, which invalidates every other node’s cached copy of that block. Put an active table with a heavy read/write load on top of a cluster like this and performance tanks, dominated by lock contention. So why do some databases take this approach to scale? How can you build a shared-backend cluster resembling a SAN and expect it to scale under a database workload, as some have done? How can you bolt on an expandable storage engine plug-in and expect the entire database to scale? What works extremely well for a file system does not work at all for a database. We need a new approach.
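A little arithmetic shows why block-level locking hurts OLTP workloads. The block and row sizes below are illustrative assumptions (4 KiB blocks, 64-byte rows, so 64 rows share one block), not numbers from any particular product: writes to a hot table concentrate on a handful of blocks, so logically independent row writes end up fighting over the same exclusive block locks.

```python
from collections import Counter

BLOCK_SIZE = 4096
ROW_SIZE = 64  # assumption: 64 rows packed into each block

def block_of(row_id):
    """Which block a given row lands in."""
    return (row_id * ROW_SIZE) // BLOCK_SIZE

# Hot table: 1000 writes hammering the first 64 rows. All of them land
# in block 0, so every write needs the same exclusive block lock and
# invalidates that block in every other node's cache.
hot = Counter(block_of(i % 64) for i in range(1000))

# Same 1000 writes spread across 64,000 rows: each write lands in its
# own block, so the locks rarely collide.
cold = Counter(block_of(r) for r in range(0, 64_000, 64))
```

For the hot table, all 1000 writes serialize behind one lock; for the spread-out case, the same writes touch 1000 distinct blocks. Databases concentrate writes (indexes, counters, recent rows), so the hot case is the common one.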

Clustrix has a new approach. Rather than shipping data blocks on the back end, we ship the queries. That may sound like an innocuous statement, but it has far-reaching consequences for the architecture: moving the work to the data, instead of the data to the work, is what lets the resulting database system sustain high concurrency at any scale.
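The contrast can be sketched in a few lines. This is a conceptual illustration with made-up names (`lookup_block_shipping`, `lookup_query_shipping`, the toy partitioning), not Clustrix’s actual protocol: in the block-shipping model, the initiator pulls data across the wire and evaluates the query itself; in the query-shipping model, the query fragment travels to the node that owns the data, and only the result comes back.

```python
# Toy cluster: rows range-partitioned across two nodes.
nodes = {
    0: {1: "alice", 2: "bob"},    # node 0 owns ids 1-2
    1: {3: "carol", 4: "dave"},   # node 1 owns ids 3-4
}

def owner(row_id):
    return 0 if row_id <= 2 else 1

def lookup_block_shipping(row_id):
    # Block shipping: the owning node's data crosses the wire to the
    # initiator, which evaluates the lookup locally.
    shipped = dict(nodes[owner(row_id)])  # whole partition moves
    return shipped[row_id]

def lookup_query_shipping(row_id):
    # Query shipping: the lookup runs "on" the owning node; only the
    # one-row result crosses the wire.
    def fragment(local_data):
        return local_data[row_id]
    return fragment(nodes[owner(row_id)])
```

Both return the same answer, but under query shipping the data stays put, so there is no shared block to lock and no peer cache to invalidate on every write.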

