Introduction to the regional level

We call a data center a region, and a region is a collection of multiple clusters. At the cluster level, the dominant concern is sharding the key space (using consistent hashing) and grouping keys into appropriate buckets (for example, viral keys vs. dormant keys and high-churn keys vs. low-churn keys). At the regional level, our main concern is replicating keys to meet the overall load.
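
As a quick refresher on the cluster-level concern mentioned above, here is a minimal sketch of consistent hashing in Python. The server names and virtual-node count are illustrative assumptions, not part of any real deployment.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring that maps keys to Memcached servers."""

    def __init__(self, servers, vnodes=100):
        # Each server gets several virtual nodes to even out the key distribution.
        self._ring = []  # sorted list of (hash_point, server) pairs
        for server in servers:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{server}#{i}"), server))
        self._ring.sort()

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def server_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect_left(self._ring, (self._hash(key),))
        return self._ring[idx % len(self._ring)][1]

# Hypothetical server names, only for demonstration.
ring = ConsistentHashRing(["mc-0", "mc-1", "mc-2"])
print(ring.server_for("user:42:profile"))
```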

Replication brings consistency concerns with it. At the regional level, we must maintain consistency between the Memcached and storage clusters (we will provide read-your-writes consistency at this level). How can we invalidate cached data that has since been updated in the storage cluster? We discuss these cross-cluster problems in this lesson.
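
To make this consistency goal concrete, the sketch below shows the look-aside caching pattern the design builds on: reads populate the cache on a miss, and writes go to the storage cluster first and then delete the cached copy so that the next read fetches fresh data. The `cache` and `db` objects are in-memory stand-ins for illustration, not real client APIs.

```python
# Look-aside caching with delete-on-write (sketch; cache and db are stand-ins).
cache = {}  # stands in for a Memcached cluster
db = {}     # stands in for the storage cluster

def read(key):
    value = cache.get(key)
    if value is None:           # cache miss
        value = db.get(key)     # fall back to the storage cluster
        if value is not None:
            cache[key] = value  # repopulate the cache
    return value

def write(key, value):
    db[key] = value             # update the storage cluster first
    cache.pop(key, None)        # invalidate so the next read sees fresh data

write("user:42:name", "Alice")
print(read("user:42:name"))     # miss, fetched from db, then cached
```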

Overview of design problems at the regional level

To manage the high workload, we add multiple front-end clusters that share the same storage cluster, but to do this, we need to manage replication and data consistency.

  • If we scale a single cluster naively, the network starts to suffer from incast congestion. Instead, we can replicate clusters when the load becomes too high.

  • Replication and consistency: when multiple Memcached servers cache data from the same storage cluster, how can we ensure that they all stay up to date?

    • Should we send invalidations through web servers?

    • Should we send invalidations from the storage clusters to the Memcached servers for items that are no longer up to date?

    • Does batching invalidations help reduce the packet rate? (See the batching sketch below.)

  • Sometimes we require a substantially high hit rate on a set of keys, but tightly coupling memory and throughput (such as packing the web server and Memcached into the same front-end server) can be memory-inefficient.

  • A new cluster doesn't have any key-value items cached in it; how can we bring it online without degrading the hit rate?

We will discuss the above concerns in detail in the rest of this lesson.
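
The following sketch illustrates the batching idea raised in the list above: instead of sending one packet per invalidated key, pending deletes are grouped per destination and flushed together. The batch size, flush delay, and destination names are illustrative assumptions.

```python
import time
from collections import defaultdict

class InvalidationBatcher:
    """Group invalidation keys per destination so many deletes share one message."""

    def __init__(self, send_fn, max_batch=64, max_delay=0.05):
        self.send_fn = send_fn            # callable(destination, list_of_keys)
        self.max_batch = max_batch        # flush when a destination has this many keys
        self.max_delay = max_delay        # or when this many seconds have passed
        self.pending = defaultdict(list)  # destination -> keys awaiting delivery
        self.last_flush = time.monotonic()

    def invalidate(self, destination, key):
        self.pending[destination].append(key)
        if (len(self.pending[destination]) >= self.max_batch
                or time.monotonic() - self.last_flush >= self.max_delay):
            self.flush()

    def flush(self):
        for destination, keys in self.pending.items():
            if keys:
                self.send_fn(destination, keys)  # one message carries many deletes
        self.pending.clear()
        self.last_flush = time.monotonic()

# Ten invalidations for the same destination are delivered as a single batch.
batcher = InvalidationBatcher(lambda dest, keys: print(dest, keys))
for i in range(10):
    batcher.invalidate("mc-0", f"user:{i}:profile")
batcher.flush()
```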

Invalidation from web servers

The following reasons explain why we split a region into front-end clusters and storage clusters, and then disproportionately scale out the front-end clusters that share a storage cluster:

  1. We attain a system with smaller failure domains.

  2. Network configurations become easier to control and manage.

  3. Incast congestion is reduced.

However, with a distributed cache that has replicas, we need an efficient way to flag items that are no longer up to date. Let's first consider broadcasting cache invalidations from the web servers (a sketch of this approach follows the list below). Even though this might seem like the obvious solution, it comes with two problems:

  1. The packet overhead is larger because each invalidation is broadcast to every front-end cluster.

  2. If web servers face systemic invalidation problems (for example, deletes misrouted because of a configuration error), broadcasting them might exacerbate the problem. In such cases, a rolling restart of the whole system might be required, which affects availability.
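
A minimal sketch of this web-server-driven broadcast is shown below; the cluster names are illustrative. It makes the first problem visible: every write produces one delete per front-end cluster, so the packet count grows with the number of clusters.

```python
# Sketch: after every write, the web server broadcasts a delete to all
# front-end clusters (hypothetical names; the fan-out is the point).
FRONTEND_CLUSTERS = ["fe-1", "fe-2", "fe-3", "fe-4"]

def send_delete(cluster, key):
    print(f"DELETE {key} -> {cluster}")  # stands in for a Memcached delete RPC

def write_and_broadcast(db, key, value):
    db[key] = value                      # update the storage cluster
    for cluster in FRONTEND_CLUSTERS:    # one delete per cluster, per write
        send_delete(cluster, key)

write_and_broadcast({}, "user:42:profile", {"name": "Alice"})
```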

Invalidations from storage servers

The second option is to use the storage layer to inform Memcache about any updates. For this to work, we can look for the specific keys that change in the SQL statements.

When a web server updates data on a storage cluster, it sends an invalidation to the Memcached servers in its own cluster. This preserves read-after-write semantics for the client that made the change, because the updated item must be re-fetched from the database. To invalidate the other (replica) front-end clusters in the region, we can run an invalidation daemon at the database that inspects committed SQL statements and broadcasts the corresponding invalidations from the storage cluster to those Memcached clusters. To make this possible, SQL statements that modify data in the primary database are amended to include the Memcached keys that need to be invalidated.
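
The sketch below shows one way such a daemon could work, assuming (purely for illustration) that write statements embed the affected Memcached keys in a SQL comment. The comment format, function names, and cluster names are assumptions, not the actual implementation.

```python
import re

# Assumed convention: writes carry their cache keys in a trailing comment, e.g.
#   UPDATE users SET name='Alice' WHERE id=42 /* mc-keys: user:42:profile */
KEY_PATTERN = re.compile(r"/\*\s*mc-keys:\s*(?P<keys>[^*]+)\*/")

def extract_keys(sql_statement):
    match = KEY_PATTERN.search(sql_statement)
    if not match:
        return []
    return [k.strip() for k in match.group("keys").split(",") if k.strip()]

def invalidation_daemon(committed_statements, frontend_clusters, send_delete):
    # Tail the stream of committed statements and fan the deletes out to every
    # front-end cluster in the region (batching, as sketched earlier, would
    # further reduce the packet rate).
    for statement in committed_statements:
        for key in extract_keys(statement):
            for cluster in frontend_clusters:
                send_delete(cluster, key)

log = ["UPDATE users SET name='Alice' WHERE id=42 /* mc-keys: user:42:profile */"]
invalidation_daemon(log, ["fe-1", "fe-2"],
                    lambda cluster, key: print(f"DELETE {key} -> {cluster}"))
```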
