Distributed Caching

Learn about distributed caching in Python.

Caching systems such as cachetools or functools.lru_cache (discussed in the previous lesson) have a major flaw when it comes to distributed systems: their data store is not distributed. Because they keep their data in an in-process Python dictionary, they cannot offer the scalable, shared cache data store that large applications need.
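To illustrate the point, here is a minimal sketch (the function name and values are made up for this example): a function decorated with functools.lru_cache keeps its cache inside the current process, so another process running the same code sees nothing and recomputes everything.

import functools
import os

@functools.lru_cache(maxsize=128)
def square(x):
    # Pretend this is expensive; the result is cached only in this process.
    print(f"computing {x} in process {os.getpid()}")
    return x * x

square(4)  # computed, the message is printed once
square(4)  # served from the local, in-process cache; nothing printed
# A second Python process running the same code starts with its own empty
# cache and would recompute square(4) from scratch.

A per-process cache with functools.lru_cache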

When a system is distributed across a network, it also needs a cache that is distributed across a network. Nowadays, plenty of network servers offer caching capabilities, such as memcached and Redis, among many others.

pymemcache

The simplest one to use is probably memcached, which, once installed, can be launched simply by running the memcached command. My preferred Python library for interacting with memcached is pymemcache, and I recommend using it. The following example shows how you can connect to memcached and use it as a network-distributed cache across your applications.

To run the following application, click Run and open another terminal to start memcached with the command memcached -u memcache -p 11211. Then, enter the command python pymemcache-simple.py in the first terminal.

from pymemcache.client import base

# Don't forget to start `memcached` before running this script
client = base.Client(('localhost', 11211))

# Store a key/value pair on the memcached server
client.set('some_key', 'some_value')

# Retrieve it; pymemcache returns stored values as bytes by default
result = client.get('some_key')
print(result)  # b'some_value'
Connecting to memcached

While straightforward enough, this example allows storing key/value tuples across the network and accessing them from multiple distributed nodes. It is simple yet powerful, and it is a first step that might be a cheap enough means ...
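In practice, a network cache like this is usually wrapped in a get-or-compute (cache-aside) pattern: look the value up in memcached first and only run the expensive operation on a miss. The following sketch assumes a memcached instance on localhost:11211; compute_profile and the key format are hypothetical names used only for illustration.

from pymemcache.client import base

client = base.Client(('localhost', 11211))

def compute_profile(user_id):
    # Hypothetical expensive operation (e.g., a database query).
    return f"profile-for-{user_id}"

def get_profile(user_id):
    key = f'profile:{user_id}'
    cached = client.get(key)           # returns None on a cache miss
    if cached is not None:
        return cached.decode('utf-8')  # stored values come back as bytes
    value = compute_profile(user_id)
    client.set(key, value, expire=300)  # let the entry expire after five minutes
    return value

print(get_profile(42))  # computed on the first call, served from the cache afterwards

A get-or-compute pattern on top of memcached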