Cooperative Multitasking

You will be introduced to cooperative multitasking in this lesson.

Cooperative multitasking

Unlike operating system threads, which are paused (suspended) and resumed by the operating system at unknown points in time, a fiber pauses itself explicitly and is resumed by its caller explicitly. According to this distinction, the kind of multitasking that the operating system provides is called preemptive multitasking and the kind that fibers provide is called cooperative multitasking.
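To make this concrete, here is a minimal sketch of that hand-off (the function name `fiberFunc` and the log messages are ours, not from the text; the `Fiber` type and its `call` and `yield` members are from D's `core.thread` module):

```d
import core.thread;
import std.stdio;

string[] log;    // records the order in which the caller and the fiber run

void fiberFunc() {
    log ~= "fiber: step 1";
    Fiber.yield();          // the fiber pauses itself explicitly
    log ~= "fiber: step 2";
}

void main() {
    auto fiber = new Fiber(&fiberFunc);

    fiber.call();           // the caller resumes the fiber; it runs until the yield
    log ~= "caller: between the two calls";
    fiber.call();           // the caller resumes it again; it runs to completion

    writeln(log);
}
```

Each `call()` resumes the fiber exactly where it last yielded, and each `Fiber.yield()` hands control back to the caller, so the log interleaves deterministically.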

Preemptive multitasking

In preemptive multitasking, the operating system allots a certain amount of time to a thread when it starts or resumes its execution. When that time is up, the thread is paused and another one is resumed in its place. Moving from one thread to another is called context switching. Context switching takes a relatively large amount of time, time that the threads could otherwise spend doing actual work.

Benefits of cooperative multitasking

Considering that a system is usually busy with a high number of threads, context switching is unavoidable and, in fact, desirable. However, sometimes threads need to pause themselves voluntarily before using up the entire time allotted to them, for example when a thread needs information from another thread or from a device. When a thread pauses itself, the operating system must again spend time switching to another thread. As a result, time that could have been used for actual work ends up being spent on context switching.

With fibers, the caller and the fiber execute as parts of the same thread. (That is why the caller and the fiber cannot execute at the same time.) As a benefit, there is no context-switching overhead between the caller and the fiber. (There is still some slight overhead, comparable to that of a regular function call.)

Another benefit of cooperative multitasking is that the data that the caller and the fiber exchange is more likely to be in the CPU’s data cache. Because data in the CPU cache can be accessed hundreds of times faster than data that must be read back from system memory, this further improves the performance of fibers.

Additionally, because the caller and the fiber are never executed at the same time, there is no possibility of race conditions, obviating the need for data synchronization. However, the programmer must still ensure that a fiber yields at the intended time (e.g., when data is actually ready). For example, the func() call below must not execute a Fiber.yield() call, even indirectly: yielding there would be premature because the value of sharedData has not yet been doubled:
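The original listing is not reproduced here; the following is a sketch of such a fiber under our own assumptions, keeping only the names func(), sharedData, and Fiber.yield() from the text (the bodies of func and fiberFunc are hypothetical):

```d
import core.thread;
import std.stdio;

int sharedData = 1;

// Hypothetical stand-in for the func() mentioned in the text; whatever
// it does, it must not call Fiber.yield(), even indirectly.
void func() {
    // ... some work that never yields ...
}

void fiberFunc() {
    func();              // yielding inside func() would be premature here
    sharedData *= 2;     // only now is the data ready for the caller
    Fiber.yield();       // the intended yield point: sharedData is doubled
}

void main() {
    auto fiber = new Fiber(&fiberFunc);
    fiber.call();             // runs func(), doubles sharedData, then yields
    writeln(sharedData);      // the caller observes the doubled value
}
```

If func() yielded, control would return to the caller while sharedData still held its old value, exactly the premature hand-off the text warns against.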
