In this lesson, we will recap how well our system addresses the requirements we set in the first lesson.

Accessing and updating highly viewed content

Highly viewed content means that the load, and thus the latency, can peak unpredictably. To make the service more elastic, we allowed the two layers (the Memcached server layer and the Memcached client layer) to scale independently. As a result, many front-end clients could end up communicating with only a few Memcached servers because of a non-uniform key access distribution. Having so many clients caused items in Memcached to face two problems: stale sets and thundering herds. We used leases to fix both issues. According to the study by Rajesh Nishtala et al. ("Scaling Memcache at Facebook," Proceedings of the 10th USENIX Conference on Networked Systems Design and Implementation, NSDI '13, USENIX Association, 385-398), peak database query rates dropped from 17,000 queries per second to 1,300 queries per second when leases were used, which reduced the load on the storage layer. Moreover, not all items have the same characteristics. To deal with the interference caused by these different requirements, we segmented our key-value items into different pools. The breakdown is given below:
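To make the lease mechanism concrete, here is a minimal, thread-safe sketch of how a cache could hand out leases. This is a toy in-memory model, not Memcached's actual implementation: the class name `LeaseCache` and its methods are hypothetical, but the behavior mirrors the two guarantees leases provide. On a miss, only the first caller receives a lease token (taming the thundering herd, since other callers get nothing back and simply retry); on a write, the cache accepts the value only if the lease is still valid, so a delete that raced in between invalidates the token and the stale set is rejected.

```python
import threading
import uuid


class LeaseCache:
    """Toy in-memory cache illustrating Memcached-style leases (hypothetical API)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}    # key -> cached value
        self._leases = {}  # key -> lease token for an outstanding miss

    def get(self, key):
        """Return (value, lease_token).

        On a hit, return the value and no token. On a miss, hand a lease
        token to the first caller only; concurrent callers get (None, None)
        and are expected to back off and retry, instead of all hitting the
        database at once (thundering herd mitigation).
        """
        with self._lock:
            if key in self._data:
                return self._data[key], None
            if key in self._leases:
                return None, None  # someone else is already refilling this key
            token = uuid.uuid4().hex
            self._leases[key] = token
            return None, token

    def set(self, key, value, lease_token):
        """Accept the write only if the lease is still current.

        If the key was deleted (invalidated) after the lease was issued,
        the token no longer matches and the write is rejected, preventing
        a stale set.
        """
        with self._lock:
            if self._leases.get(key) != lease_token:
                return False
            del self._leases[key]
            self._data[key] = value
            return True

    def delete(self, key):
        """Invalidate the cached value and any outstanding lease for the key."""
        with self._lock:
            self._data.pop(key, None)
            self._leases.pop(key, None)
```

A client that misses and receives a token refills the key from the database and calls `set` with that token; a client that misses without a token retries `get` shortly afterward, by which time the value is usually present.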
