The Wish For Atomicity
In this lesson, you will see how race conditions can be eliminated by the atomicity of instructions.
In the last lesson, we discussed the problem of race conditions that arise when threads access shared data. Let’s look at a possible solution to this problem.
TIP: USE ATOMIC OPERATIONS
Atomic operations are one of the most powerful underlying techniques in building computer systems, from the computer architecture, to concurrent code (what you are studying here), to file systems (which you’ll study soon enough), database management systems, and even distributed systems.

(Reference: “Atomic Transactions” by Nancy Lynch, Michael Merritt, William Weihl, and Alan Fekete. Morgan Kaufmann, August 1993. A nice text on some of the theory and practice of atomic transactions for distributed systems. Perhaps a bit formal for some, but lots of good material is found herein.)

The idea behind making a series of actions atomic is simply expressed with the phrase “all or nothing”: it should either appear as if all of the actions you wish to group together occurred, or that none of them occurred, with no in-between state visible. Sometimes, the grouping of many actions into a single atomic action is called a transaction, an idea developed in great detail in the world of databases and transaction processing.

(Reference: “Transaction Processing: Concepts and Techniques” by Jim Gray and Andreas Reuter. Morgan Kaufmann, September 1992. This book is the bible of transaction processing, written by one of the legends of the field, Jim Gray. It is, for this reason, also considered Jim Gray’s “brain dump,” in which he wrote down everything he knows about how database management systems work. Sadly, Gray passed away tragically a few years back, and many of us lost a friend and great mentor, including the co-authors of said book, who were lucky enough to interact with Gray during their graduate school years.)

In the chapters exploring concurrency, you’ll be using synchronization primitives to turn short sequences of instructions into atomic blocks of execution, but the idea of atomicity is much bigger than that, as you will see. For example, file systems use techniques such as journaling or copy-on-write in order to atomically transition their on-disk state, which is critical for operating correctly in the face of system failures. If that doesn’t make sense, don’t worry; it will, in some future chapter.
Atomicity
One way to solve the problem of the race condition would be to have more powerful instructions that, in a single step, did exactly whatever you needed done and thus removed the possibility of an untimely interrupt. For example, what if you had a super instruction that looked like this:
memory-add 0x8049a1c, $0x1
Assume this instruction adds a value to a memory location, and the hardware guarantees that it executes atomically; when the instruction executed, it would perform the update as desired. It could not be interrupted mid-instruction, because that is precisely the guarantee you receive from the hardware: when an interrupt occurs, either the instruction has not run at all, or it has run to completion; there is no in-between state. Hardware can be a beautiful thing, no?
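The memory-add instruction above is hypothetical, but most modern hardware and compilers do expose atomic read-modify-write operations for simple updates like this one. Here is a minimal sketch using C11’s <stdatomic.h>; the variable name counter is just for illustration:

#include <stdatomic.h>
#include <stdio.h>

// A shared counter whose updates are guaranteed to be atomic.
atomic_int counter = 0;

void increment(void) {
    // Compiles to an atomic read-modify-write (for example, a "lock"-prefixed
    // add on x86): the increment either happens completely or not at all,
    // and no other thread can ever observe a half-finished update.
    atomic_fetch_add(&counter, 1);
}

int main(void) {
    increment();
    printf("counter = %d\n", atomic_load(&counter));
    return 0;
}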
Atomically, in this context, means “as a unit,” which is sometimes taken as “all or none.” What you’d like is to execute this three-instruction sequence atomically:
mov 0x8049a1c, %eax
add $0x1, %eax
mov %eax, 0x8049a1c
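For context, this load/add/store sequence is what a compiler typically emits for a plain, unsynchronized increment of a shared global in C. A minimal sketch follows; the name counter is illustrative, and 0x8049a1c is simply where the compiler happened to place the variable in this example:

int counter = 0;  // shared by multiple threads

void increment(void) {
    // The compiler turns this one C statement into the three instructions
    // above: load counter into a register, add 1, store the register back.
    // A timer interrupt between the load and the store can switch to another
    // thread, whose own update is then overwritten when this thread resumes,
    // losing an increment.
    counter = counter + 1;
}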
As mentioned, if you had a single instruction to do this, you could just issue that instruction and be done. But in the general case, you won’t have such an instruction. Imagine you were building a concurrent B-tree and wished to update it; would you really want the hardware to support an “atomic update of B-tree” instruction? Probably not, at least in a sane instruction set.
Thus, what you will instead do is ask the hardware for a few useful instructions upon which you can build a general set of what we call synchronization primitives. By using this hardware support, in combination with some help from the operating system, you will be able to build multi-threaded code that accesses critical sections in a synchronized and controlled manner, and thus reliably produces the correct result despite the challenging nature of concurrent execution. Pretty awesome, right?
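As a preview of what such hardware support makes possible, here is a minimal sketch of a spinlock built on C11’s atomic_flag, which maps to a hardware test-and-set style instruction. The names lock and unlock and the busy-waiting strategy are illustrative only; the coming chapters build these primitives carefully and discuss better designs:

#include <stdatomic.h>

// A cleared flag means the lock is free.
atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void lock(void) {
    // atomic_flag_test_and_set atomically sets the flag and returns its old
    // value; keep spinning until we are the thread that flipped it from
    // clear to set.
    while (atomic_flag_test_and_set(&lock_flag))
        ;  // spin: another thread holds the lock
}

void unlock(void) {
    atomic_flag_clear(&lock_flag);
}

// Usage: wrap the critical section so only one thread at a time runs it.
//   lock();
//   counter = counter + 1;
//   unlock();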
This is the problem you will study in the next few chapters of this course. It is a wonderful and hard problem and should make your mind hurt (a bit). If it doesn’t, then you don’t understand! Keep working until your head hurts; you then know you’re headed in the right direction. At that point, take a break; you don’t want your head hurting too much.
THE CRUX: HOW TO SUPPORT SYNCHRONIZATION
What support do we need from the hardware in order to build useful synchronization primitives? What support do we need from the OS? How can we build these primitives correctly and efficiently? How can programs use them to get the desired results?