Lamport’s Bakery Algorithm addresses the mutual exclusion problem by ensuring that processes can safely access shared resources without conflicts or data corruption.
Key takeaways:
Lamport’s Bakery Algorithm ensures fair access to shared resources using ticket-based ordering.
Only one process can access the critical section at a time, avoiding conflicts.
The algorithm operates in a distributed manner, with processes managing their own ticket assignment and turn-waiting logic.
Every process is guaranteed to access the critical section eventually, preventing indefinite waiting.
The algorithm may become less efficient as the number of processes increases due to constant checks.
Lamport’s bakery algorithm, introduced in 1974, ensures mutual exclusion by assigning each process a ticket number, much like customers waiting their turn in a bakery.
The algorithm aims to solve the mutual exclusion problem, wherein several processes wish to access a shared resource. It ensures that they don’t do so simultaneously, which can lead to issues like data corruption.
A few key concepts included in the algorithm are:
| Key concept | Explanation |
| --- | --- |
| Mutual exclusion | Ensures only one process accesses the critical section at a time. The algorithm achieves this by assigning “tickets” and ordering access based on these numbers. |
| Bounded waiting | Guarantees that no process waits indefinitely. Each process knows its position in the queue, ensuring fairness. |
| Critical section | A segment of code where shared resources are accessed. Protecting this section prevents data corruption or inconsistencies. |
| Freedom from deadlock | Deadlock occurs when processes indefinitely wait for each other. The algorithm avoids this by assigning unique identifiers and guaranteeing progress. |
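To see why the critical section needs protection, consider a minimal sketch (illustrative only, not part of the bakery algorithm) of an unprotected read-modify-write on a shared counter:

```python
import threading
import time

counter = 0  # shared resource with no mutual exclusion

def unsafe_increment():
    # Read-modify-write with a deliberate pause to widen the race window
    global counter
    tmp = counter
    time.sleep(0.001)
    counter = tmp + 1

threads = [threading.Thread(target=unsafe_increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Most threads read the same stale value, so the final count is
# typically far less than 5: these are lost updates in action.
print(counter)
```

Guarding the read-modify-write with a mutual exclusion mechanism is exactly what makes the final count reliably equal the number of increments.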
When several processes run concurrently and want to use a shared resource (like a printer, CPU, or any other utility), they might conflict or interfere with each other. We need a way to ensure that only one process can use the resource at any given time, and others must wait for their turn.
Taking a number: Just like in a bakery, every process that wants to access the shared resource takes a number. The process with the smallest number gets to use the resource.
Number assignment: Processes decide their number based on the maximum number already chosen and add one to it. If a process sees that no other process has a number, it takes the number one.
Waiting: If a process sees that another process with a lower number (or an earlier position in the queue) is also interested in the shared resource, it waits.
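The number-assignment rule above can be sketched directly; the ticket values in the `number` array here are hypothetical:

```python
# Hypothetical tickets currently held by four processes (0 = not interested)
number = [0, 4, 2, 0]

# A newcomer takes one more than the maximum ticket it sees
my_ticket = 1 + max(number)
print(my_ticket)  # 5

# If no process holds a ticket, the newcomer takes the number one
assert 1 + max([0, 0, 0, 0]) == 1
```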
Entering the critical section:
When a process wants to enter its critical section, it first looks at all other processes to see what numbers they have.
It then chooses a number that is one more than the maximum number it has seen.
If two processes pick the same number at the same time, the process with the lower process ID gets priority.
Exiting the critical section:
After completing its task in the critical section, the process sets its number to zero, indicating that it no longer wishes to access the shared resource.
Dealing with starvation:
The bakery algorithm ensures that every process gets a fair chance (or turns) to access the resource. It guarantees that if a process wants to access the shared resource, it will eventually get its turn.
Imagine three processes: A, B, and C, competing for access to a shared resource.
A decides it wants to access the shared resource. It looks around and sees no other process has a number.
It takes the number one.
B also wants to access the resource. It sees A has number one, so it takes number two.
C comes in and takes number three after seeing A and B’s numbers.
A has the smallest number, so it accesses the resource first. B and C wait.
A finishes and sets its number to zero. B, having the next smallest number, accesses the resource.
C goes next after B finishes and sets its number to zero.
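The scenario above can be replayed as a short, sequential sketch; the process names and the `take_ticket` helper are illustrative, not part of the algorithm’s standard presentation:

```python
tickets = {}  # process name -> ticket number

def take_ticket(name):
    # Take one more than the largest ticket currently held
    tickets[name] = 1 + max(tickets.values(), default=0)

for process in ["A", "B", "C"]:
    take_ticket(process)

# The smallest ticket goes first: A, then B, then C
service_order = sorted(tickets, key=tickets.get)
print(tickets)        # {'A': 1, 'B': 2, 'C': 3}
print(service_order)  # ['A', 'B', 'C']
```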
Let’s look at a simplified Python implementation of the algorithm:
```python
import threading
import time

# Number of processes
N = 5
# Flag to indicate if a process is in the entry section
entering = [False] * N
# Number for each process
number = [0] * N

def max_number():
    return max(number)

def bakery_algorithm(process_id):
    global entering, number
    # Entry Section
    entering[process_id] = True
    number[process_id] = 1 + max_number()
    entering[process_id] = False
    for i in range(N):
        if i != process_id:
            # Wait until process 'i' receives its number
            while entering[i]:
                pass
            # Wait while process 'i' holds a smaller number, or the same
            # number with a lower process ID, i.e., while 'i' has priority
            while (number[i] != 0) and (number[process_id], process_id) > (number[i], i):
                pass
    # Critical Section
    print(f"Process {process_id} is in the critical section.")
    time.sleep(1)  # Simulating some work in the critical section
    # Exit Section
    number[process_id] = 0

# Creating and starting threads for each process
threads = []
for i in range(N):
    thread = threading.Thread(target=bakery_algorithm, args=(i,))
    threads.append(thread)
    thread.start()

# Waiting for all threads to complete
for thread in threads:
    thread.join()

print("All processes have completed their execution.")
```
This code demonstrates Lamport’s Bakery Algorithm for ensuring mutual exclusion in a multithreaded environment. Here’s how it works:
Initialization:
`N` represents the number of processes (threads) competing for the critical section.
`entering` is a flag array indicating whether a process is currently selecting its ticket number.
`number` stores the ticket numbers assigned to processes, determining their order.
Entry section:
A process sets `entering[process_id]` to `True` while choosing its ticket number.
It selects a ticket number of `1 + max(number)` to ensure it’s greater than any active ticket.
After assigning the number, it sets `entering[process_id]` back to `False`.
Waiting for turn:
Each process checks:
Whether another process is still selecting its number (`while entering[i]`).
Whether another process with a smaller number, or the same number but a lower ID, is waiting (`while (number[i] != 0) and (number[process_id], process_id) > (number[i], i)`).
It waits until it is its turn to enter the critical section.
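The pair comparison in that waiting condition relies on Python’s lexicographic tuple ordering, which encodes the tie-breaking rule directly:

```python
# (ticket, process_id) pairs compare lexicographically:
# tickets are compared first; process IDs break ties.
assert (2, 4) < (3, 0)  # a smaller ticket always wins, regardless of ID
assert (3, 1) < (3, 2)  # same ticket: the lower process ID has priority
print("tie-breaking works as expected")
```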
Critical section:
When both conditions clear, the process enters the critical section and simulates work using `time.sleep(1)`.
Exit section:
After completing its task, the process resets its ticket number to `0`, signaling that it’s done.
Thread management:
Threads are created for each process, and each runs the `bakery_algorithm` function.
The main thread waits for all threads to finish using `join()`.
Impact of key parameters
Number of processes: As the process count grows, each entry into the critical section requires scanning every other process’s ticket, which becomes computationally expensive, especially if processes frequently enter and exit the critical section.
Nature of the critical section:
Short tasks: If tasks in the critical section are brief, the overhead of the algorithm may dominate, reducing its efficiency.
Long tasks: For lengthy critical sections, the algorithm’s fairness and simplicity make it an attractive choice.
Fairness: Every process gets a turn.
Distributed: Processes don’t need to coordinate with a central authority to decide who goes next.
Overhead of continuous checks: The algorithm requires all processes to continuously check others' ticket numbers. As the number of processes grows, this becomes resource-intensive.
Mitigation: Alternatives such as Peterson’s algorithm (for two processes) or hardware-supported mutex locks can reduce this overhead.
Scalability issues:
Delayed communication: In distributed systems, processes may receive outdated ticket information due to network latency.
Large process count: As the number of processes increases, comparisons of ticket numbers become a bottleneck.
Alternative strategies: Algorithms like Ricart–Agrawala for distributed systems, or semaphore-based locking for simpler environments, might be better suited in such scenarios.
Critical section workload:
For I/O-bound tasks, the algorithm’s simplicity might suffice.
For CPU-bound tasks, hardware-based synchronization may perform better.
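For comparison, here is a minimal sketch of the hardware-supported alternative mentioned above: Python’s `threading.Lock` wraps an OS-level mutex, replacing the bakery’s ticket scan with a single atomic acquire:

```python
import threading

lock = threading.Lock()  # OS-backed mutex: no ticket arrays, no busy-waiting
counter = 0

def worker():
    global counter
    with lock:  # acquire/release replaces the bakery entry/exit sections
        counter += 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 5 -- every increment survives
```

Unlike the bakery algorithm’s busy-wait loops, a blocked thread here sleeps until the lock is released, which is usually far cheaper for CPU-bound workloads.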
Distributed systems: Lamport’s Bakery Algorithm can manage access to shared resources like distributed databases, ensuring no two nodes update the same record simultaneously.
Multithreaded applications: Threads accessing shared memory (e.g., in operating systems) can use this algorithm to prevent race conditions.
Cloud resource allocation: Allocating limited cloud resources (e.g., virtual machine instances) among multiple clients while avoiding conflicts or resource starvation.
Industrial automation: Synchronizing robotic arms in an assembly line to ensure they do not interfere with one another while accessing shared tools or spaces.
Financial transactions: Regulating access to accounts in banking systems to maintain consistency and integrity when multiple transactions occur concurrently.
Lamport’s bakery algorithm offers a straightforward and fair method to ensure mutual exclusion in systems where multiple processes seek access to a shared resource. By mimicking customers waiting their turn in a bakery, the algorithm provides an understandable mechanism to prevent simultaneous access and the conflicts it can cause.