Introduction
This lesson gives a gentle introduction to asynchronous programming in C#.
In this part of the course, we'll explain the motivation for and the basics of asynchronous programming in C#. Treat it as a summary of the subject rather than an in-depth tutorial. After completing this section, you should have a fair grasp of how asynchronous programming works in C#, minus the minutiae.
Concurrency can be defined as dealing with multiple things at once. You can concurrently run several processes or threads on a machine with a single CPU, but they won't run in parallel. Concurrency creates an illusion of parallel execution even though the single-CPU machine runs only one thread or process at a time.
Parallelism, in contrast, is executing multiple things at the same time. True parallelism can only be achieved with multiple CPUs (or multiple cores).
So far we've delved into the threading API of C#. But there's another paradigm for achieving concurrency: asynchronous programming. What is it? Let's work through an analogy:
Consider a restaurant with a single waiter. Suddenly, three customers, Kohli, Amir, and John, show up. Each of them takes a different amount of time to decide what to eat once he receives the menu: say Kohli takes 5 minutes, Amir 10 minutes, and John 1 minute. If the single waiter starts with Amir and spends 10 minutes taking his order, then spends 5 minutes noting down Kohli's order, and finally 1 minute on John's, he spends 10 + 5 + 1 = 16 minutes in total. However, notice that in this sequence of events, John ends up waiting 15 minutes before the waiter gets to him, Kohli waits 10 minutes, and Amir waits 0 minutes.
Now consider if the waiter knew in advance how long each customer would take to decide. He could start with John, then get to Kohli, and finally to Amir. This way, each customer experiences a 0-minute wait; an illusion of three waiters, one dedicated to each customer, is created even though there's only one. Lastly, the total time it takes the waiter to collect all three orders is 10 minutes, much less than the 16 minutes of the first scenario.
The single-waiter scenario maps to a single thread juggling multiple work items: whenever an item blocks on an operation that takes time to produce a result externally, the thread switches to another item that is ready to make progress. This is an example of asynchrony using a single thread.
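To make this concrete, here's a minimal C# sketch of the second restaurant scenario using async/await. Task.Delay stands in for each customer's decision time (with seconds substituted for minutes); the TakeOrderAsync method is our own illustrative name:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Restaurant
{
    // Simulates waiting for a customer to decide, without blocking the thread.
    static async Task TakeOrderAsync(string customer, int minutes)
    {
        // Task.Delay models an external wait (a deciding customer); the
        // thread is free to serve other customers in the meantime.
        await Task.Delay(TimeSpan.FromSeconds(minutes));
        Console.WriteLine($"Took {customer}'s order.");
    }

    static async Task Main()
    {
        var watch = Stopwatch.StartNew();

        // Hand all three customers a menu at once.
        Task kohli = TakeOrderAsync("Kohli", 5);
        Task amir = TakeOrderAsync("Amir", 10);
        Task john = TakeOrderAsync("John", 1);

        await Task.WhenAll(kohli, amir, john);

        // Prints roughly 10 seconds (the longest decision time), not 16.
        Console.WriteLine($"All orders taken in {watch.Elapsed.TotalSeconds:F0} seconds.");
    }
}
```

The orders complete in increasing order of decision time - John, then Kohli, then Amir - without a dedicated waiter (thread) per customer.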
Going back to our restaurant analogy, we can increase the number of waiters to serve an increasing number of patrons at the restaurant. This would constitute an example of asynchrony with multiple threads.
Finally, it should be obvious that dedicating one waiter to each customer would be an example of multithreading with no asynchrony, and a wasteful one. Tying up one thread for each work item that blocks while waiting for results from an external source limits the number of work items a system can service.
Also, note that when we say a thread waits for a result to arrive from an external source, we mean that hardware components such as the disk controller or the network card, operating far below the level of OS threads, produce the result on the waiting thread's behalf. The waiting thread sits completely idle, consuming memory and other resources, while the external source works on the result. Remember, there's no other hidden thread working to produce a result for the blocked thread.
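The difference shows up directly in C#'s I/O APIs. Here's a sketch contrasting a blocking read with an asynchronous one (assuming .NET Core/.NET 5+, where File.ReadAllTextAsync is available; example.txt is a hypothetical file):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class BlockingVsAsync
{
    // Blocking: the calling thread sits idle, holding its resources,
    // while the disk controller produces the result.
    static string ReadBlocking(string path) => File.ReadAllText(path);

    // Asynchronous: the thread is released during the I/O. No hidden
    // thread waits on our behalf; the hardware completes the operation
    // and the method resumes afterward.
    static Task<string> ReadAsync(string path) => File.ReadAllTextAsync(path);

    static async Task Main()
    {
        string contents = await ReadAsync("example.txt");
        Console.WriteLine($"Read {contents.Length} characters.");
    }
}
```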
We can achieve asynchrony using a single thread or multiple threads but multithreading doesn't imply asynchrony. We can have a multithreaded system with no asynchrony at all.
Incorporating asynchrony improves the throughput of a system - more work gets done with the same resources. The same number of waiters can serve more patrons. But asynchrony doesn't speed up the computation itself: a waiter can't take an order faster than the time the customer takes to decide on it.
Web Servers
A common application of the asynchronous paradigm is in webservers, which spend most of their time waiting for network I/O. A more recent approach to implementing webservers uses event loops. An event loop is a programming construct that waits for events to happen and then dispatches them to event handlers. Languages such as JavaScript, Ruby (the MRI implementation), and Python (the standard CPython implementation) enable the asynchronous programming model using an event loop. The idea is that a single thread runs in a loop waiting for an event to occur. Once an event arrives, it is dispatched to the appropriate event handler, and the event loop's thread immediately goes back to listening for another event. The event handler may run on another thread if the language supports multiple threads. This design achieves very high concurrency if an application frequently spends time on either of the following (a minimal event-loop sketch appears after this list):
Network I/O
Disk I/O
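To illustrate the construct itself, here's a minimal event-loop sketch in C#. The EventLoop class and its Post/Stop methods are our own illustrative names, not a standard library API:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class EventLoop
{
    // Queue of pending events; Post is the "event arrives" side.
    private readonly BlockingCollection<Action> _events = new BlockingCollection<Action>();

    public void Post(Action handler) => _events.Add(handler);

    public void Stop() => _events.CompleteAdding();

    // A single thread runs this loop: it waits for an event, dispatches
    // it to its handler, then immediately goes back to waiting.
    public void Run()
    {
        foreach (Action handler in _events.GetConsumingEnumerable())
            handler();
    }
}

class Program
{
    static void Main()
    {
        var loop = new EventLoop();
        var loopThread = new Thread(loop.Run);
        loopThread.Start();

        // In a real server these would be I/O completions, not manual posts.
        loop.Post(() => Console.WriteLine("handling request 1"));
        loop.Post(() => Console.WriteLine("handling request 2"));

        loop.Stop();       // no more events; the loop exits after draining
        loopThread.Join();
    }
}
```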
If an application spends most of its time waiting for I/O, it can benefit from an asynchronous design. One of the most common use cases you'll find in the wild is webservers implemented with an asynchronous design. A webserver waits for an HTTP request to arrive and returns the matching resource. Folks familiar with JavaScript will recall that NodeJS works on the same principle: it is a webserver that runs an event loop in a single thread to receive web requests. Contrast that with webservers that create a new thread, or worse, fork a new process, to handle each web request. In some benchmarks, asynchronous event-loop-based webservers have outperformed multithreaded ones, which may seem counterintuitive.
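As a concrete C# sketch, the framework's HttpListener can await incoming requests so that no thread is parked per connection. The address and response text below are arbitrary choices for the example:

```csharp
using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;

class AsyncWebServer
{
    static async Task Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/"); // arbitrary address
        listener.Start();

        while (true)
        {
            // Asynchronously wait for the next request; the thread is
            // free while no request is in flight.
            HttpListenerContext context = await listener.GetContextAsync();

            // Hand the request off as its own asynchronous work item
            // instead of dedicating a thread to it.
            _ = HandleRequestAsync(context);
        }
    }

    static async Task HandleRequestAsync(HttpListenerContext context)
    {
        byte[] body = Encoding.UTF8.GetBytes("Hello from an asynchronous server!");
        context.Response.ContentLength64 = body.Length;
        await context.Response.OutputStream.WriteAsync(body, 0, body.Length);
        context.Response.Close();
    }
}
```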
In order to truly appreciate the asynchronous programming model, we'll present Ryan Dahl's motivation for creating NodeJS, which also runs an event loop. Ryan classifies disk and network I/O operations as blocking operations and presents the following table to put the latency of various operations in perspective.
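For reference, the figures from Dahl's original presentation, with costs measured in CPU cycles, were roughly:

Operation      Cost in CPU cycles
L1 cache       3
L2 cache       14
RAM            250
Disk           41,000,000
Network        240,000,000

The takeaway is that a thread blocked on disk or network I/O wastes tens or hundreds of millions of cycles that could have been spent doing useful work.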