Application-Level Sharding

There are many ways to scale databases. One I was less familiar with before my last job is application-level sharding. While there are tools on some platforms that help you achieve application-level sharding (like Vitess with MySQL), the core work of this approach does not happen at the database layer. Because of that, this post is mostly independent of the RDBMS.

A friend asked me about application-level sharding last week, and I realized I had some knowledge here that I can share. Though I think I ask more questions in this article than I answer! In the vended database world (IBM Db2 LUW), I had never even encountered the idea of application-level sharding. When I spent time working with MySQL it was pretty much a given for larger companies, though many places created their own ways of doing it.

Traditional Database Scaling Options

Traditionally, scaling a relational database meant essentially buying a bigger server. While that has become much easier to do in the cloud, there are still limits on what kinds of workloads can be supported. I don't really have the patience to read through the EC2 instance specifications, but they seem to top out around 64 cores. Going beyond that may require specialized or dedicated hardware; I saw database servers with more than double that a decade ago.

When the hardware approach instead targets commodity hardware and scaling out rather than scaling up, the option of choice for particularly large environments often falls to application-level sharding.

What is Application-Level Sharding?

At its most basic, application-level sharding involves identifying a sharding key and sending queries that involve different values of that key to different databases. It makes the most sense when all queries and all tables include some column that can be used as a sharding key.

For example, a large retailer might shard on store ID – sending queries for one store ID to one database server while sending queries for another store ID to a different database server.
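
To make that concrete, here is a minimal sketch of what that routing decision might look like. The connection strings and the split rule are invented purely for illustration; a real deployment would usually drive this from a shard map.

```python
# Minimal sketch of routing by store ID. The DSNs and the split rule are
# purely illustrative; real deployments usually drive this from a shard map.

SHARD_DSNS = {
    "shard_1": "db-shard-1.example.com/retail",
    "shard_2": "db-shard-2.example.com/retail",
}

def dsn_for_store(store_id):
    """Return the connection string for the shard that owns this store."""
    # Toy rule: stores 1-500 live on shard 1, everything else on shard 2.
    return SHARD_DSNS["shard_1"] if store_id <= 500 else SHARD_DSNS["shard_2"]

print(dsn_for_store(42))    # queries for store 42 go to shard 1
print(dsn_for_store(1201))  # queries for store 1201 go to shard 2
```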

Let’s walk through what this looks like in terms of a traditional three-tier architecture:

In this architecture, we have multiple servers (or containers, or VMs) at the web and application layers. Any of these can talk with any of the machines in the next layer. Often there are processes for sharing a bit of persistent configuration data between the machines in each layer. The entire purpose of the database layer is to store nearly all of the persistent data and have it readily accessible. The problem with adding servers at this layer has always been how to manage that persistent data. If we try to share the data among multiple database servers, we can often have problems with data replication, and traditional RDBMSes can generally have only one node that accepts writes at a time. There are architectures like Oracle RAC or IBM Db2 pureScale that do the sharing at the storage layer, with multiple compute nodes accessing the same data, and that can be one scaling solution.

The problem with these is that they are expensive both in licensing and in terms of the hardware needed for them, not to mention the specialized expertise in architecting them, building them, and supporting them.

Scaling the database layer using commodity hardware requires something different. Even if not using commodity hardware, scaling the database by sharding different data to different servers can be an option. The problem then becomes directing any given query to the place where the data it needs resides. There has to be some intermediary routing queries to the right location. That intermediary can exist between the app and the database layer or even between the web and the app layer, depending on other requirements.

In this layout, the DB Selector becomes essentially a query router, which means adding a bit of overhead to every single query. We generally don’t like to add overhead to every query, particularly for e-commerce or OLTP applications.

In this layout, each app server is only aware of one database server, and the overhead of selecting where things go is incurred at the session level in some way. This also has the advantage of allowing you to put pods in different geographic locations that are likely to be closer to certain end users.
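
As a rough sketch of what selection at the session level might look like, imagine the pod being chosen once when a session is created, with every subsequent query reusing that connection. The connect() call and the pod names below are placeholders, not any real driver API.

```python
# Session-scoped routing sketch: the pod is chosen once when the session is
# created, and every query in that session reuses the same connection.
# connect() and the pod names are placeholders, not a real driver API.

POD_DSNS = {
    "pod_east": "db-east.example.com/retail",
    "pod_west": "db-west.example.com/retail",
}

def connect(dsn):
    """Stand-in for a real driver call such as your DB client's connect()."""
    return {"dsn": dsn}

class StoreSession:
    """Holds one connection for the lifetime of a user session."""

    def __init__(self, store_id):
        pod = "pod_east" if store_id % 2 == 0 else "pod_west"  # toy rule
        self.conn = connect(POD_DSNS[pod])  # routing cost paid once, here

    def query(self, sql, params=()):
        # Every query in the session reuses the already-selected connection.
        return (self.conn["dsn"], sql, params)

session = StoreSession(store_id=42)
print(session.query("SELECT * FROM orders WHERE store_id = ?", (42,)))
```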

The Selector Itself

Let’s think for a moment about the selector – in whichever position it resides. The method of selecting which pod or database a query session goes to will rely on the value of the sharding key you’ve selected. There are many methods of doing this. Maybe your key ranges from 1 to 10, so 1-5 go on pod 1 while 6-10 go on pod 2. What happens when value 11 comes in? What happens if value 2 is just the 500-pound gorilla and needs to be split out to its own pod/db?

Any deterministic function would work for deciding where each sharding key value lives, but you need to think about what happens during re-sharding – when you need to add or remove shards, or change where a specific key value routes. For something you plan to scale out, a process for easily spinning up a new database server with all the proper configuration, and even the proper schema, is important.
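
One possible pattern, sketched here with invented names, is a deterministic default plus an explicit override table, so a single hot key can be moved to its own pod without remapping everything else:

```python
# Sketch of a lookup-with-override router: a stable hash gives every key a
# deterministic default home, and an explicit override table lets you move
# a hot key (or a re-sharded value) without re-mapping everything else.
# Shard names and the override are illustrative.

import hashlib

DEFAULT_SHARDS = ["shard_1", "shard_2"]

OVERRIDES = {
    2: "shard_big_store",  # the 500-pound gorilla gets its own pod
}

def shard_for_key(store_id):
    if store_id in OVERRIDES:
        return OVERRIDES[store_id]
    # Stable hash: the same key always maps to the same shard, no matter
    # which app server computes it. Note that adding a shard to
    # DEFAULT_SHARDS changes the modulo result for many keys, which is
    # exactly the re-sharding problem described above.
    digest = hashlib.sha256(str(store_id).encode()).hexdigest()
    return DEFAULT_SHARDS[int(digest, 16) % len(DEFAULT_SHARDS)]

print(shard_for_key(2))   # shard_big_store
print(shard_for_key(11))  # a brand-new value still gets a deterministic home
```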

This selector may need a database of its own just to track where every sharding key value goes, and that database may end up being the busiest in your organization – it needs super-fast lookup and probably enough memory to keep your shard map in memory, along with the best HA/DR you can come up with.
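
If the shard map does live in its own small database, it might be little more than a tiny table plus an aggressive in-memory cache in the selector. Here is a hypothetical sketch; the table and helper names are invented:

```python
# Hypothetical shard-map store: one tiny table, read constantly, cached in
# the selector's memory so most lookups never hit the map database at all.

SHARD_MAP_DDL = """
CREATE TABLE shard_map (
    store_id    INTEGER     NOT NULL PRIMARY KEY,
    shard_name  VARCHAR(64) NOT NULL,
    updated_at  TIMESTAMP   NOT NULL
)
"""

_cache = {}  # store_id -> shard_name, kept hot in memory

def lookup_shard(store_id, fetch_from_db):
    """Return the shard for a store, hitting the map DB only on a cache miss.

    fetch_from_db is a caller-supplied function that would really run
    something like: SELECT shard_name FROM shard_map WHERE store_id = ?
    """
    if store_id not in _cache:
        _cache[store_id] = fetch_from_db(store_id)
    return _cache[store_id]

# Usage with a stand-in lookup; a real one would query the map database.
print(lookup_shard(42, lambda sid: "shard_1"))
```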

Database Concerns

Everything we’ve talked about so far is beyond the database layer. What concerns might a DBA have when supporting databases in this kind of sharding?

Cross-Shard Queries

One of the problems this layout introduces is the challenge of getting queries to run across all of your shards. You probably don’t actually want to do this, and may instead want to copy data into a data warehouse before querying it. Querying two databases and mashing the result sets together may not seem bad, but what about when you have 50 of them? There is no built-in way of doing this kind of query because the sharding is not happening at the database layer.
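
If you do have to fan a query out, the application ends up doing work the database would normally do. Roughly, and with run_on_shard() as a made-up stand-in for a real driver call:

```python
# Scatter-gather sketch: run the same query on every shard and finish the
# aggregation in the application. run_on_shard() is a hypothetical stand-in
# for executing SQL against one shard and returning rows.

SHARDS = ["shard_1", "shard_2", "shard_3"]

def run_on_shard(shard, sql):
    """Placeholder: a real version would use your DB driver here."""
    return []

def total_sales_across_shards():
    sql = "SELECT SUM(order_total) FROM orders"
    partials = [run_on_shard(shard, sql) for shard in SHARDS]
    # The application has to complete the aggregation itself; with 50 shards,
    # sorting, grouping, and paging across partial results gets painful fast.
    return sum(row[0] for rows in partials for row in rows if row[0] is not None)

print(total_sales_across_shards())
```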

Shared Data

OK, forget a query that spans all the shards – what about tables that cannot include your sharding key, or that hold the same data across all shards? How do you manage these? Do you use replication to keep the same tables in sync across shards? If so, how do you handle updates to this data, since bidirectional replication is nearly always a bad idea on any RDBMS?

Maybe this data lives in an entirely separate database, and you use technology like IBM Db2’s federation to query it alongside sharded data in the same query, accepting the performance penalty that implies.
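
One hedged sketch of the replication approach: treat a single database as the source of truth for shared tables and push copies out to every shard, so reads stay local but writes only ever happen in one place. The helper names below are invented, standing in for real replication tooling or a batch job.

```python
# Sketch of one-way replication for shared reference data: a single source
# of truth accepts all writes and pushes copies to every shard, so reads
# stay local and there is never a bidirectional conflict to resolve.
# apply_to_shard() is a placeholder for real replication tooling.

SHARDS = ["shard_1", "shard_2", "shard_3"]

def apply_to_shard(shard, table, rows):
    """Placeholder: upsert the given rows into `table` on one shard."""
    print(f"applying {len(rows)} rows of {table} to {shard}")

def push_reference_table(table, rows):
    # Writes only ever happen against the source of truth; every shard just
    # receives the same copy on whatever schedule you choose.
    for shard in SHARDS:
        apply_to_shard(shard, table, rows)

push_reference_table("currencies", [("USD", 1.0), ("EUR", 0.92)])
```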

Ongoing Maintenance

There are a lot of advantages to this layout. Suddenly if you need to do offline database maintenance, only one portion of your database is down at any one time instead of the whole thing. If a database is getting too busy, you can likely split it again to spread the load out.

But there are disadvantages, too. Suddenly keeping your schema in sync across multiple databases is critically important to uptime, even more so if your app servers are not all identical. Online schema change becomes an absolute requirement, and you’re likely to have to write the app in such a way that it can truly tolerate schema change. If your application is written correctly, it shouldn’t even notice if you add a column. But what about if you want to drop a column? Can you even be sure your application isn’t using that column somewhere?

What happens when you have one big, bad value of the sharding key that is too big even for its own dedicated shard? Further sharding beyond the sharding key is often complicated to implement.

Depending on how you set this up, some of this can work to your advantage. You could have a canary pod where you deploy application changes first and validate them in production before moving on to pods with more critical data or stores on them.
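
Here is a rough sketch of what a canary-first rollout of a schema change could look like, with pod names and helpers invented for illustration; in practice you would use an online schema change tool and real health checks.

```python
# Sketch of a canary-first schema rollout across pods. run_ddl() and
# validate() are placeholders for an online schema change tool and real
# health checks. Pod names are invented.

CANARY = "pod_canary"
OTHER_PODS = ["pod_east", "pod_west", "pod_eu"]

def run_ddl(pod, ddl):
    """Placeholder: apply the DDL to one pod's database."""
    print(f"{pod}: {ddl}")

def validate(pod):
    """Placeholder for smoke tests and error-rate checks on the canary."""
    return True

def roll_out(ddl):
    run_ddl(CANARY, ddl)
    if not validate(CANARY):
        raise RuntimeError("canary validation failed; halting the rollout")
    for pod in OTHER_PODS:  # continue only once the canary looks healthy
        run_ddl(pod, ddl)

roll_out("ALTER TABLE orders ADD COLUMN loyalty_tier VARCHAR(16)")
```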

HA and DR

The main HA and DR plan is on a shard-by-shard basis, and you use whatever is available for your RDBMS. For Db2, each shard would probably be a four-server HADR cluster spanning two regions. You’d also still need good old-fashioned database backups.

Out There Ideas

There are all kinds of crazy directions this could take. Can you have different RDBMSes on different shards? Would it work to use Oracle and Db2 side by side? Your DBA would hate you, but maybe you could.

Summary

For another perspective, check out a Medium article on this topic by Tripti Shastri. Application-level sharding can be a tool in your toolbox; just think through the implications before you move forward!

Ember Crooks

Ember is always curious and thrives on change. She has built internationally recognized expertise in IBM Db2, spent a year working with high-volume MySQL, and is now learning Snowflake. Ember shares posts about both her core skill sets and her journey learning Snowflake.

Ember lives in Denver and works from home.
