Caching Basics—Theory

Learn the basic theory of caching to implement caches more sensibly.

Understanding caching

Caching is a common technique used by servers to improve the performance and responsiveness of applications. It involves temporarily storing frequently accessed data in a readily accessible location, such as memory, disk, or even another server, so that the data can be quickly retrieved without repeating the original computation or database query.
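To make this concrete, here is a minimal sketch of that lookup pattern in Python. The `cache` dictionary and `fetch_user_from_db` function are hypothetical stand-ins for a real cache store and an expensive database query, not part of any particular library.

```python
cache = {}

def fetch_user_from_db(user_id):
    # Hypothetical stand-in for a slow database query or computation.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id in cache:                 # Cache hit: return the stored copy.
        return cache[user_id]
    user = fetch_user_from_db(user_id)   # Cache miss: do the real work...
    cache[user_id] = user                # ...and remember the result.
    return user
```

The first call for a given `user_id` pays the full cost of the query; every later call is served from the cache.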

Servers can use several types of caching mechanisms, including in-memory caching, external caching, and distributed caching. Each has its own advantages and disadvantages, and the right choice depends on the specific requirements of the application. Let's briefly examine and evaluate each of them.

In-memory caching

In-memory caching is the simplest approach: data is stored in the server's RAM. It is typically used for small, frequently accessed datasets, such as session data or user information. This type of caching is fast, but it has limited scalability because the amount of memory available on a single server is finite.
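As one illustration, Python's standard-library `functools.lru_cache` keeps results in the process's RAM and evicts the least recently used entries once a size limit is reached, which reflects the bounded-memory trade-off noted above. The session-lookup function below is a hypothetical example.

```python
from functools import lru_cache

def load_session_from_db(session_id):
    # Hypothetical stand-in for an expensive database read.
    return {"session_id": session_id, "user": f"user-{session_id}"}

@lru_cache(maxsize=1024)  # Bounded, because RAM on a single server is finite.
def get_session(session_id):
    return load_session_from_db(session_id)
```

Capping `maxsize` keeps memory use predictable; once the cache is full, the least recently used session is evicted to make room for new ones.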

One interesting point to note here is that because the cached data lives locally on one machine, it does not stay in sync with the data on other machines. Suppose we have a back-end service deployed on five virtual machines, with load distributed equally among them, and suppose the service caches the last response it read from the database in memory. If three of the five machines have cached that response for the next two minutes and the underlying record changes in the meantime, requests routed to those three machines will return stale data until their cached copies expire.
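The sketch below simulates this divergence with two instances and a time-to-live (TTL) of two minutes, matching the scenario above. The shared dictionary stands in for the database, and all names are illustrative.

```python
import time

DB = {"answer": "v1"}  # Simulated shared database.
TTL_SECONDS = 120      # Two minutes, as in the scenario above.

class InstanceCache:
    """Per-machine cache: each service instance would own one of these."""
    def __init__(self):
        self.value = None
        self.expires_at = 0.0

    def read(self):
        now = time.time()
        if now < self.expires_at:   # Still within the TTL window:
            return self.value       # serve the (possibly stale) local copy.
        self.value = DB["answer"]   # Expired or empty: re-read the database
        self.expires_at = now + TTL_SECONDS
        return self.value

machine_a, machine_b = InstanceCache(), InstanceCache()
machine_a.read()           # Machine A caches "v1".
DB["answer"] = "v2"        # The database record is updated.
print(machine_a.read())    # "v1" -- stale until A's TTL expires.
print(machine_b.read())    # "v2" -- B had no cached copy yet.
```

Because each instance expires its copy independently, two clients hitting the same service can observe different values during the TTL window.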
