What do Google’s web search, multiplayer online games, and your banking app have in common? They are all powered by distributed systems.
By the textbook definition, a distributed system is simply a collection of independent computers that work together toward a common goal; the computation they perform together is called distributed computing. To an outside observer, the distributed system appears as a single entity. For example, when we enter some search terms on Google, we get a response (almost) instantaneously. To us, it looks as if a single server handled our request. In reality, the request went to hundreds of servers that collaborated to serve us. Once you realize that Google indexes and searches billions of web pages to return millions of results in less than a second, you’ll see that a single machine is simply not up to the job. A better approach is to split the task into multiple subtasks and assign them to separate machines that work independently of each other.
When a search request comes in, it’s forwarded to multiple servers, and each server returns a subset of the results. These responses are then aggregated and returned to the user. This approach is much faster than carrying out the search on a single machine. Google Search is a good example of a distributed system: multiple nodes work together to achieve a performance that wouldn’t otherwise be possible.
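To make the idea concrete, here is a minimal scatter-gather sketch in Python. The shard names and the search_shard helper are hypothetical stand-ins for real index servers; the point is only to show how partial results from several workers are collected and merged.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical shard servers, each holding a slice of the search index.
SHARDS = ["shard-1", "shard-2", "shard-3"]

def search_shard(shard, query):
    """Pretend to query one shard; a real system would make a network call."""
    # Placeholder: each shard returns its own (result, score) pairs.
    return [(f"{shard}: document matching '{query}'", hash((shard, query)) % 100)]

def distributed_search(query):
    # Scatter: send the query to every shard in parallel.
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partial_results = list(pool.map(lambda s: search_shard(s, query), SHARDS))
    # Gather: merge the partial results and rank them by score.
    merged = [hit for hits in partial_results for hit in hits]
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

print(distributed_search("distributed systems"))
```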
You might be thinking, why do we need multiple machines to collaborate? Why can’t all this be done by a single powerful machine? For that, we need to understand scaling up and parallel computing.
Unlike distributed computing, parallel computing splits a task into subtasks and assigns each to a different CPU (or a different core of the same CPU) on a single machine rather than on separate machines. For small tasks, this is doable. But what if the task is large? We would need to scale up.
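As a rough illustration of the difference, the sketch below splits one task across the cores of a single machine using Python’s multiprocessing module; the subtask function is just a stand-in for a CPU-bound piece of work.

```python
from multiprocessing import Pool

def subtask(chunk):
    """A stand-in for a CPU-bound subtask on one slice of the input."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the task into four subtasks and run them on separate cores.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(subtask, chunks)
    print(sum(partial_sums))
```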
Scaling up, also called vertical scaling, means adding more resources to the same machine. For example, we could increase the number (or processing power) of CPUs and cores, add more RAM, use faster and higher-capacity storage media, and increase network bandwidth. Three issues make this approach difficult to adopt. First, the state of the art imposes limits beyond which we can’t make a single machine more powerful. For instance, increasing a CPU’s processing power means packing more circuitry onto the chip, which generates heat that becomes increasingly difficult to dissipate. Second, unlike distributed systems, where each machine has its own resources, multiple processors competing for the same system resources create a bottleneck at the point of access. Third, cost doesn’t scale linearly when we scale up. For example, moving from a 16-core CPU to a 32-core one can more than double the price.
If we want to play a computer game that our hardware doesn’t support, we can simply add a more powerful graphics card and the required RAM. But what if we want to search exabytes of data in milliseconds? Can we add more RAM, more disk storage, and a CPU having 100+ cores? No. This is where distributed systems help us by scaling out instead of scaling up.
The web search example covered previously is a perfect example of scaling out, also called horizontal scaling. Here, as the task’s requirements increase, instead of making the machines more powerful, we simply add more machines. These are independent nodes with their own resources.
For instance, suppose a web server can serve, on average, 1,000 requests per second. Suppose the website gets popular, and the traffic increases to 2,000 requests per second. We can easily scale up and add more RAM, faster storage media, and higher bandwidth links to cater to these requests. But what if the traffic sees a massive increase, and now the website needs to serve one million requests per second? It might not be possible (or feasible) to enhance the computational capabilities of the computer system to such an extent. In this scenario, we would need to scale out. We would need to add multiple servers, each serving a subset of the incoming requests. As the traffic increases, we would incrementally add more servers to this distributed system and split the incoming requests between them.
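A sketch of the idea: a very simplified round-robin dispatcher that spreads incoming requests across a pool of servers. The server names and the dispatch function are hypothetical placeholders for real machines and a real load balancer; scaling out amounts to adding entries to the pool.

```python
import itertools

# Hypothetical pool of identical web servers; scaling out means adding entries here.
SERVERS = ["server-a", "server-b", "server-c"]
next_server = itertools.cycle(SERVERS)

def dispatch(request):
    """Round-robin: each incoming request goes to the next server in the pool."""
    server = next(next_server)
    return f"{server} handled {request}"

for i in range(6):
    print(dispatch(f"request-{i}"))
```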
So far, we have only looked at the need for distributed systems from the perspective of scalability. Distributed systems are much more than just that. We’ll now look at the other benefits of distributing tasks and the challenges that come with it.
Let’s dig in.
We have already seen one major benefit of distributed systems: improving performance and achieving scalability. Sharing resources is another important benefit: if a machine is sitting idle, it’s better to let other users and applications use its resources than to let them go to waste. One of the most critical benefits is resilience to failure. If a machine fails, replicas of the content or service on other machines ensure that no data is lost and that the service remains available.
There are several applications that rely on distributed systems. Let’s get a feel for the different types of distributed applications.
Searches and scientific computing: These involve searching huge amounts of data, which would otherwise take several hours if done on a single machine. Search engines that crawl the internet and then allow searches over petabytes of indexed data are one example. Other search applications include searching scientific data for information, compiling insights, and identifying recurring patterns. For example, producing the first image of a black hole required combining and processing petabytes of telescope data, a job far too large for any single machine.
Training models: Training machine learning and deep learning models involves huge datasets. Processing these on a single machine would be time-consuming. Distributing the processing over multiple machines helps save time. More recently, large language models have appeared that are trained on enormous text corpora, typically measured in petabytes of raw data gathered from books, web articles, forums, and so on, and in many different languages. Maintaining and processing all this data would not be possible on a single machine. To understand the scale involved in a real-world training project, consider that training a state-of-the-art model can occupy thousands of GPUs working in parallel for weeks.
Cloud computing: This was one of the biggest applications of distributed systems in the last decade and will continue to gain importance. Major players, such as Amazon Web Services, Microsoft Azure, and Google Cloud, built large networks of data centers around the world from which they rent out computing, storage, and networking resources on demand, so customers can scale without owning or managing their own hardware.
Web services: Various web services that handle thousands of requests per second also rely on distribution to achieve massive scale. Weather APIs, news feeds, etc., that are used by individual users and other websites benefit from a distributed architecture. By distributing traffic over different machines, they are able to scale. If the machines are spread over different geographical areas, the incoming requests can be routed toward the nearest server, thereby reducing delays and improving performance. The system also becomes more fault tolerant because if a single server fails, the remaining servers can take over.
Content distribution networks (CDNs): Before cloud computing took off, we had CDNs like Akamai. Various websites and multimedia streaming services benefit from them. CDNs help improve performance by mirroring content on servers that are placed all over the world, and end user requests are directed toward the nearest server. This reduces the load on a single machine and also helps reduce delays by serving the request from the nearest location. Also, even if a server fails, the user would be able to get content from an alternate server nearest to the user’s location.
Data storage and sharing: Various distributed system applications are targeted toward file sharing. For instance, various torrent clients, such as BitTorrent, create a peer-to-peer network between the active users, and users simultaneously download small chunks of the file from multiple peers who can transmit it the fastest. A distributed approach ensures that files are available even if some of the peers are offline, and by downloading in parallel, the system is able to ensure that the aggregate system bandwidth is utilized optimally.
Critical operations: Banking, for instance, is a service that needs very high fault tolerance. Banks can afford neither to lose customer data nor to have the service become unavailable to their customers when a server goes offline; the consequences would be catastrophic. Distributing and mirroring the service over multiple machines ensures the availability of the service and the security of the data.
Decentralization: In recent years, efforts to decentralize and democratize the internet have picked up pace, and various applications have been created in this regard. Blockchains, such as Bitcoin and Ethereum, and anonymity networks, such as Tor (accessed through the Tor Browser), are fully distributed; they aim to ensure privacy and give users independence by reducing centralized control.
Massively multiplayer online games: Gaming needs no further explanation. Games are fun. We want to play with friends all over the world and enjoy without interruptions, something that a distributed architecture ensures.
Distributed computing makes system design way more complex. There are assumptions and simplifications that hold true in a single-machine system but don’t work in a distributed environment. Let’s look at some challenges now.
Coordination: Coordination is an important challenge that needs to be addressed. Multiple computers are collaborating toward a common goal, so who decides which task is assigned to which computer? Assigning tasks in a way that optimally utilizes the system’s overall resources is nontrivial. Adding to the complexity, the nodes have varying capacities, and an assignment that is optimal in terms of resource utilization might not be optimal for the end user. Users want quick response times and might need to be served by the machine closest to them to reduce latency, which is not necessarily the most efficient allocation of resources. If one of the machines is to be assigned a leadership role so that it’s responsible for this coordination, we need consensus algorithms that the participating machines use to elect a leader. This process incurs delays and might become a performance bottleneck if there is a large number of participating nodes or if leaders fail often and repeated elections are needed.
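As an illustration of the kind of election step described above, here is a toy sketch in the spirit of the bully algorithm: among the nodes that can still be reached, the one with the highest ID becomes the leader. Production algorithms such as Raft, Paxos, or ZooKeeper’s leader election also handle message loss, partial failures, and concurrent elections, all of which this sketch ignores.

```python
def elect_leader(node_ids, reachable):
    """Toy leader election: among the nodes that can still be reached,
    the one with the highest ID wins (the core idea of the bully algorithm)."""
    candidates = [n for n in node_ids if reachable(n)]
    if not candidates:
        raise RuntimeError("no reachable nodes to elect")
    return max(candidates)

# Example: node 5 has crashed, so node 4 becomes the new leader.
nodes = [1, 2, 3, 4, 5]
print(elect_leader(nodes, reachable=lambda n: n != 5))
```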
Connectivity: A related challenge is maintaining a communication network between the nodes. There are two broad approaches. At one extreme is the client-server architecture, where all nodes communicate only with a centralized server. The server has a complete picture of the network and can make optimal decisions. However, the server becomes a bottleneck and a single point of failure, especially when the number of clients is large.
At the other extreme is the peer-to-peer architecture, where there is no centralized server and each node communicates with all other nodes (or a subset of them) to make decisions and convey them to others. Here, the control traffic might become a bottleneck if the number of peers grows very large, but unlike the client-server approach, no single machine is overloaded. On the other hand, finding the machine that holds the required data or can work on a given task is harder than in the client-server approach, where the server answers such requests. There is always a trade-off that the system designer needs to be aware of, and sometimes a hybrid approach that tries to bring in the best of both worlds is the right choice.
Consistency: Maintaining a consistent state across the system is another key challenge. For fault tolerance, distributed systems usually mirror processes and data across multiple servers that may be spread over a wide geographical area. This mirroring is not as easy as it sounds. For example, if a machine crashes, that’s easy to detect because it stops responding to messages. But what if the machine keeps working while its data is corrupted? This is called a Byzantine fault, and a majority vote among multiple machines is needed to decide which version of the data is correct. This, of course, incurs an overhead, and if multiple replicas are corrupted simultaneously, we need to include even more nodes in the vote to determine the correct value.
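A minimal sketch of the voting idea: given the values reported by several replicas, accept the value that a strict majority agrees on. Real Byzantine fault tolerant protocols are far more involved, but the core intuition is just this.

```python
from collections import Counter

def majority_value(replica_values):
    """Return the value reported by a strict majority of replicas, if any."""
    value, count = Counter(replica_values).most_common(1)[0]
    if count > len(replica_values) // 2:
        return value
    raise ValueError("no majority; more replicas are needed to decide")

# Two replicas agree and one is corrupted, so the vote still succeeds.
print(majority_value(["balance=100", "balance=100", "balance=999"]))
```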
Updates to data also become complex when data is mirrored across multiple machines. Because of network limitations, updates can never be instantaneous. A problem arises if one user reads from a machine that has not yet received an update while another user reads from a machine that has. The two see different data, and the system state becomes inconsistent. Consistency models and algorithms are therefore an important area of system design. Note also that, because of communication delays, maintaining a common notion of time across all the machines is difficult. There are algorithms that help with this, but if clocks drift out of sync, applications that rely on timestamps can fail. For example, if multiple users are making financial transactions, we need to know the order in which the transactions occurred.
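One classic tool for ordering events without perfectly synchronized physical clocks is a logical clock. The sketch below shows a Lamport clock: each node increments a counter on local events and, when it receives a message, fast-forwards its counter past the sender’s timestamp, producing an ordering that is consistent with causality. Real systems layer more machinery on top of this, but the idea is the foundation.

```python
class LamportClock:
    """Logical clock: orders events without relying on synchronized wall clocks."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event (e.g., a transaction is issued on this node).
        self.time += 1
        return self.time

    def update(self, received_time):
        # On receiving a message, jump past the sender's timestamp.
        self.time = max(self.time, received_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.tick()        # node A issues a transaction at logical time 1
print(b.update(t))  # node B receives it and moves to logical time 2
```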
Security: No discussion on challenges is complete without mention of security issues. Security is especially important in distributed systems because in many cases, the participants might be independent users not known to each other. For example, in torrents, the data might be fetched from unknown people who might be malicious users. Similarly, in cloud computing, a machine might be shared between multiple users, and separating and securing each user’s data would be critical.
When studying these design challenges, keep in mind that on one hand, the goal is to have a system that consists of nodes collaborating toward a common goal, and on the other, we need to ensure they appear to be a single entity. The problems discussed above make it difficult to achieve the latter. The challenges seem daunting. But the good thing is that there are tools that do most of the work for us. This brings us to the last point: middleware for building distributed applications.
Middleware technologies help save development time and simplify system management by abstracting various aspects of interaction between different components. Standardized interfaces further simplify complexities associated with distributed computing.
Message-oriented middleware (MOM) and message queue middleware, including Apache Kafka and Apache ActiveMQ, allow components of a distributed system to communicate asynchronously by exchanging messages through queues or topics, decoupling senders from receivers and smoothing out spikes in traffic.
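For a feel of what this looks like in practice, here is a small sketch using the kafka-python client (one of several Kafka clients). The broker address and the topic name are assumptions for illustration, and the sketch requires a running Kafka broker; the same decoupled producer/consumer pattern applies to ActiveMQ and other MOM products.

```python
from kafka import KafkaProducer, KafkaConsumer  # assumes the kafka-python package

# Producer: publishes events to a topic without knowing who will consume them.
producer = KafkaProducer(bootstrap_servers="localhost:9092")  # broker address is an assumption
producer.send("orders", b'{"order_id": 42, "amount": 9.99}')
producer.flush()

# Consumer: reads the same topic, possibly on a different machine, at its own pace.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)
    break  # stop after one message for this sketch
```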
Database middleware, including ODBC and JDBC, provides standardized interfaces for efficient database access. Together with transactional middleware like the Java Transaction API, it ensures the integrity of operations that span multiple databases or services, so that a group of related updates either all succeed or all fail.
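Python’s built-in DB-API plays a role analogous to ODBC/JDBC: code written against the standard connect/cursor/commit interface works across different database drivers. The sketch below uses sqlite3 with an in-memory database purely for illustration, showing a transfer as a single transaction that either commits fully or is rolled back.

```python
import sqlite3  # any DB-API 2.0 driver exposes the same connect/execute/commit interface

conn = sqlite3.connect(":memory:")  # in-memory database just for this sketch
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # A transfer is one transaction: both updates succeed or neither does.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()
except sqlite3.Error:
    conn.rollback()

print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
```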
Each middleware type addresses specific aspects of distributed system challenges. There is a vibrant ecosystem of middleware out there. Looking at the popular ones in each of the middleware types mentioned above would be a good starting point in developing distributed systems.
This blog introduced you to the benefits of distributed system design, the popular applications in the market, and the challenges developers have to address. The next time you play a multiplayer online game, use a banking app, or look at the next image of a black hole, you’ll know how distributed systems helped make it possible.
To learn more about distributed systems and the middleware ecosystems, check out our other courses.