Tech Talk Series: Part 3 Wrap Up, Flex Up and Flex Down


Last week we held our Tech Talk on scaling in and down.

We explored scaling up and out in Tech Talk #1: MySQL is a very popular RDBMS, but challenges arise when your workload hits the limits of the largest node you can provision. This third Tech Talk isn’t about adding scale, though; it’s about removing it. From the technical/DevOps perspective, it’s hard enough to get sufficient resources deployed in the first place, so why would you ever want to remove them?

Everyone’s seen the “wall of shame” of tweets about major e-commerce sites suffering site slowdowns and outages during Black Friday and Cyber Monday. Here’s a brief recap just from the last decade:

2011: PC Mall, Newegg, Toys “R” Us, Avon: 30+ minute outages. Walmart: 3-hour outage

2012: Kohl’s: repeated multi-hour outages

2013: Urban Outfitters, Motorola: offline most of Cyber Monday

2014: Best Buy: 2+ hours of total outages. HP, Nike: site crashes

2015: Neiman Marcus: 4+ hour outage

2016: Old Navy, Macy’s: multi-hour outages

And there are similar “flash sales” (short duration, limited items, deep discount) all over the world, including China’s Singles Day and Flipkart’s Big Billion Day.

This is the very reason scale is needed… to avoid these kinds of high-impact outages. But hidden here is a big reason why this keeps happening:

Workloads with peaks waste a lot of resources during non-peaks

Ideally, capacity should scale elastically: deploy capacity when you need it, and scale it back when you don’t. However, most RDBMSs cannot elastically shrink once they’re running at scale.

Most RDBMS deployments don’t scale-in/down easily

Single-node MySQL deployments can scale up or down on DBaaS solutions like AWS RDS, Azure Database for MySQL, or Google Cloud SQL. But if your deployment leverages master/master replication (including certification-based replication solutions like MariaDB Galera Cluster or Percona XtraDB Cluster) or sharding, scaling the workload is tricky.
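For contrast, resizing that single node really is simple. Here’s a minimal sketch using the AWS CLI, where the instance identifier and instance class are placeholder values:

    # Resize a single-node RDS MySQL instance to a smaller (or larger) class.
    # "mydb" and "db.r4.large" are placeholders for your own values.
    aws rds modify-db-instance \
        --db-instance-identifier mydb \
        --db-instance-class db.r4.large \
        --apply-immediately   # otherwise the change waits for the next maintenance window

Clustered and sharded deployments have no equivalent one-liner, which is the crux of the problem.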

Each additional node in a master/master setup doesn’t give linear write scale; it gives additional HA. So removing nodes doesn’t deliver the same scale-in as actually shrinking each node, i.e., swapping each node for a smaller instance. And that kind of swap requires bringing up separate nodes from backup, using replication to catch up, and then cutting over, which is a lot of effort.
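In outline, each swap looks something like the following sketch, using standard MySQL replication commands; the hostnames, credentials, and binlog coordinates are placeholders:

    # 1. Restore a recent backup onto the smaller replacement instance, then
    #    point it at the live node as a replica (MySQL 5.x syntax shown).
    mysql -h new-smaller-node -e "
      CHANGE MASTER TO
        MASTER_HOST='current-node',
        MASTER_USER='repl',
        MASTER_PASSWORD='********',
        MASTER_LOG_FILE='mysql-bin.000123',
        MASTER_LOG_POS=4;
      START SLAVE;"

    # 2. Wait for the replica to catch up (Seconds_Behind_Master reaches 0).
    mysql -h new-smaller-node -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

    # 3. Cut over: pause writes, let replication drain, repoint the app or VIP
    #    at new-smaller-node, then retire the old, larger node.

Multiply that by every node in the cluster, and “a lot of effort” is an understatement.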

Scaling in a sharded array is similarly complex. Partitions have to be consolidated between shards, application queries often have to be modified, and the shard-to-data lookup table (LUT) routing has to be updated. Nearly everyone I’ve talked to who has deployed and/or supported sharded installations has confirmed: “We never try to scale back in.”
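To see why, consider a minimal, purely illustrative shard lookup table: every query is routed by it, so consolidating shards means rewriting it while traffic is live:

    -- Illustrative routing LUT: each key range maps to a shard host.
    CREATE TABLE shard_map (
      range_start BIGINT NOT NULL,
      range_end   BIGINT NOT NULL,
      shard_host  VARCHAR(64) NOT NULL,
      PRIMARY KEY (range_start)
    );

    -- Scaling in from four shards to three means first migrating shard4's
    -- partitions onto shard3, then repointing every affected routing row,
    -- all while production traffic keeps flowing:
    UPDATE shard_map
       SET shard_host = 'shard3.example.com'
     WHERE shard_host = 'shard4.example.com';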

Result: It’s difficult to provision sufficient headroom for future peaks when scaling is one-way

If it were up to DBAs and DevOps, every system would have enough headroom for the “unexpected.” That avoids service downtime, frustrated users and stakeholders, and blown-up ticket queues. Unfortunately, DBAs and DevOps often don’t get to set their own budgets, and finance departments view “headroom” as excess capacity, i.e., wasted resources. The result is the perennial “estimation game,” along the lines of:

DevOps: “We expect 30% more traffic than last year, so we should provision 50% more.”

Finance: “That sounds excessive. You already have half your servers underutilized. I’ll give you 35% more.”

So when the spike comes in at 40% more, the site craters.

It’s important to design in the ability to scale-in/down when architecting scale for your MySQL deployment

Depending on your method of adding scale, your MySQL deployment will be able to scale in/down with varying degrees of difficulty. Determining how exposed your workload is to seasonal peaks is key to budgeting sufficient hardware for those peaks without leaving significant numbers of servers underutilized.

ClustrixDB Can Scale-Out and Scale-In Easily

ClustrixDB is a shared-nothing, fully distributed, MySQL-compatible clustered database that can simply add or drop nodes to scale out or scale in.

Here’s how you scale it:

  1. Flex up: add nodes via IP, then add those IPs to the load balancer. No app changes.
  2. Flex down: remove nodes via IP, then remove those IPs from the load balancer. No app changes.
  3. Only a minor “database pause” for the multi-node “group change” (see the sketch below).
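As a rough sketch of what that looks like in practice (the ALTER CLUSTER statements follow the shape of ClustrixDB’s documented syntax; the IP address and node id here are placeholders):

    -- Flex up: join a new node by IP, then add that IP to the load balancer.
    ALTER CLUSTER ADD '10.0.0.14';

    -- Flex down: softfail the node so the rebalancer drains its data onto the
    -- remaining nodes, remove its IP from the load balancer, then finalize
    -- with the brief multi-node "group change" mentioned in step 3.
    ALTER CLUSTER SOFTFAIL 4;   -- node id, e.g. from system.nodeinfo
    ALTER CLUSTER REFORM;

No backup/restore, no replication catch-up, no LUT rewrites.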

Here’s how you flex up and flex down.