A deadlock occurs when a set of processes are blocked because each process is waiting for a resource that another process in the set is holding.
All four of the following conditions must hold at the same time for a deadlock to occur!
Mutual Exclusion: At least one resource must be held in a non-sharable mode. (only one process can use the resource at a time)
Hold and Wait: A process is holding at least one resource and waiting to acquire additional resources held by others.
No Preemption: Resources cannot be forcibly taken from a process, the process must release them on its own.
Circular Wait: A set of processes are waiting in such an order that a circular chain is formed, i.e. a cycle of waiting processes.
Resource Allocation Graph: A directed graph that shows the relationship between processes and resources.
General Convention for RAGs:
If there is a single resource instance, a cycle in the graph always indicates a deadlock. If there are multiple resource instances, cycles may not imply a deadlock.
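With a single instance per resource, the cycle check above is a plain depth-first search over the directed graph. A minimal sketch (the graph encoding and node names are illustrative, not from any particular OS API):

```python
def has_cycle(graph):
    """DFS with a recursion stack; True iff the directed graph has a cycle."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt not in visited:
                if dfs(nxt):
                    return True
            elif nxt in on_stack:   # back edge -> cycle
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in list(graph) if n not in visited)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(deadlocked))   # True
```

Processes and resources are both just nodes here; request edges point process → resource and assignment edges point resource → process, exactly as in a RAG.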
Our goal is to prevent at least one of the four conditions from occurring.
Breaking Mutual Exclusion: Use spooling, a process where data is temporarily held to be used and executed by a device, program, or the system (e.g. a printer spooler), so the resource itself need not be held exclusively.
Breaking Hold and Wait: Require processes to request all resources at once. But this may lead to resource wastage / under utilization.
Breaking No Preemption: Allow the OS to preempt resources and reassign them to others. For example, preempting CPU cycles from a low-priority process and giving them to a high-priority process.
Breaking Circular Wait: Assign a unique number to each resource and require that processes request resources in increasing order.
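The increasing-order rule can be sketched with, e.g., Python locks (the lock numbering and helper names are illustrative):

```python
import threading

# Each lock gets a unique number; threads must acquire in ascending order,
# so no circular chain of waiters can ever form.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire the requested locks in ascending numeric order."""
    order = sorted(needed)
    for num in order:
        locks[num].acquire()
    return order               # the order actually used

def release_all(held):
    for num in reversed(held):
        locks[num].release()

held = acquire_in_order({3, 1})   # always takes 1 before 3
release_all(held)
print(held)                       # [1, 3]
```

Even if a thread *wants* lock 3 first, sorting forces it to take 1 first, which is what rules out the circular wait.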
Here our goal is to ensure the system never enters an unsafe state.
A deadlock avoidance algorithm that checks whether a resource allocation will leave the system in a safe state. Works for both single and multiple resource instances.
Steps of Banker’s:
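A sketch of the safety check at the core of Banker's Algorithm, shown for a single resource type (the numbers are made up for illustration; multiple types mean comparing vectors element-wise):

```python
def is_safe(available, allocation, maximum):
    """Return True iff some execution order lets every process finish."""
    n = len(allocation)
    # Step 1: Need = Max - Allocation for each process.
    need = [maximum[i] - allocation[i] for i in range(n)]
    work, finished = available, [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 2: find an unfinished process whose need fits in work.
            if not finished[i] and need[i] <= work:
                # Step 3: assume it runs to completion and releases
                # everything it holds back into the pool.
                work += allocation[i]
                finished[i] = True
                progress = True
    # Step 4: safe iff every process could finish in some order.
    return all(finished)

# 1 free unit; allocations [1, 2]; maxima [4, 3] -> needs [3, 1].
# P1's need fits first; finishing it frees enough for P0.
print(is_safe(1, [1, 2], [4, 3]))   # True
```

With `available = 0` the same request mix is unsafe: neither need fits, so no completion order exists.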
For detection of deadlocks we have Cycles in RAGs and Matrix based detection algorithms.
Method | Pro | Con | Example |
---|---|---|---|
Prevention | Eliminates deadlocks entirely by breaking one of the four necessary conditions. | May lead to under-utilization of resources or system inefficiency. | Imposing resource ordering. |
Avoidance | Ensures safe states through algorithms like Banker’s Algorithm. | Requires prior knowledge of resource needs and may not be feasible in dynamic systems. | Banker’s Algorithm |
Detection | Efficiently detects deadlocks in systems with frequent resource contention. | Requires extra processing to detect and recover from deadlocks. | Cycles in RAGs, Matrix detection |
Use prevention for critical resources (e.g. mutexes) and detection for general resources.
Distributed systems pose different challenges than single systems: there is no global knowledge of the system state (each node knows only partial information), communication overhead is higher, algorithms must be fault-tolerant, and there is no global clock. Even time delays and latency can lead to inconsistencies!
Centralized: One node collects all the information and checks for cycles to detect deadlocks. But this is a single point of failure!
Hierarchical: Multiple nodes work in a hierarchy to detect. Reduces the communication overhead.
Fully Distributed Algorithm: Each node collects information about its neighbors and exchanges it with them; each node then checks for cycles. This is the most fault-tolerant approach but has the highest communication overhead. Examples are path-pushing algorithms (processes exchange dependency information) and edge-chasing algorithms (probes are sent along the dependency graph to detect cycles).
Strategies to prevent starvation:
Synchronization is the coordination of multiple threads or processes to ensure that they do not interfere with each other. It is used to ensure that only one thread or process accesses a critical section at a time. It is important in environments where multiple threads or processes share resources.
Locks are synchronization mechanisms used to ensure that only one thread or process accesses a critical section at a time.
Types of Locks:
A specific type of lock that ensures that only one thread can execute a critical section at a time.
Features of Mutexes
Mutex Operations
A synchronization primitive (i.e. a programming construct that allows threads to coordinate their actions) that can be used to control access to a shared resource by maintaining a counter.
Types of Semaphores
Semaphore Operations
Wait (P): Decrement the semaphore value. If the value is less than 0, block the thread.
Signal (V): Increment the semaphore value. If there are blocked threads, unblock one of them.
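These P/V operations map directly onto, e.g., Python's `threading.Semaphore`, where `acquire` is wait and `release` is signal (a small illustration; the permit count is arbitrary):

```python
import threading

# A counting semaphore with 2 permits: at most two threads may hold
# the resource at once. acquire() is wait (P), release() is signal (V).
sem = threading.Semaphore(2)

sem.acquire()                             # P: counter 2 -> 1
sem.acquire()                             # P: counter 1 -> 0
got_third = sem.acquire(blocking=False)   # counter is 0, so this fails
sem.release()                             # V: counter 0 -> 1
sem.release()                             # V: counter 1 -> 2
print(got_third)                          # False
```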
Aspect | Mutex | Semaphore |
---|---|---|
Type | Locking mechanism | Signaling mechanism |
Values | Binary (locked/unlocked) | Integer value (can be binary or counting) |
Ownership | Owned by the thread that locks it; only the owning thread can unlock it | Not owned by any thread; any thread can signal (V) or wait (P) on it |
Use Case | Ensures mutual exclusion, allowing only one thread to access a resource at a time | Controls access to a resource pool with multiple instances |
Blocking | If a thread tries to lock an already locked mutex, it will block until it’s released | Can be used to manage simultaneous access without necessarily blocking |
Sharing | Typically used within a single process | Can be used across multiple processes |
Complexity | Simpler, used for mutual exclusion | More versatile, can be used for signaling and controlling resource counts |
The producer-consumer problem is a classic synchronization problem where two threads share a common buffer. The producer thread produces data and puts it into the buffer, while the consumer thread consumes the data from the buffer.
Requirements
A possible solution
// Shared variables
Semaphore full = 0; // Counts filled slots
Semaphore empty = BUFFER_SIZE; // Counts empty slots
Mutex mutex; // Ensures mutual exclusion
Producer:
while (true)
{
item = produce_item();
wait(empty); // Decrement empty count
lock(mutex); // Enter critical section
add_item_to_buffer(item);
unlock(mutex); // Exit critical section
signal(full); // Increment full count
}
Consumer:
while (true)
{
wait(full); // Decrement full count
lock(mutex); // Enter critical section
item = remove_item_from_buffer();
unlock(mutex); // Exit critical section
signal(empty); // Increment empty count
consume_item(item);
}
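The pseudocode above translates almost line for line into, e.g., Python (a runnable sketch; buffer size and item count are arbitrary):

```python
import threading
from collections import deque

BUFFER_SIZE, N_ITEMS = 3, 10
buffer = deque()
full = threading.Semaphore(0)             # counts filled slots
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots
mutex = threading.Lock()                  # protects the buffer
consumed = []

def producer():
    for item in range(N_ITEMS):           # item = produce_item()
        empty.acquire()                   # wait(empty)
        with mutex:                       # lock/unlock(mutex)
            buffer.append(item)
        full.release()                    # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                    # wait(full)
        with mutex:
            item = buffer.popleft()
        empty.release()                   # signal(empty)
        consumed.append(item)             # consume_item(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)                           # [0, 1, ..., 9], in order
```

With one producer and one consumer the buffer behaves as a FIFO, so the items come out in the order they were produced.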
Problem where multiple threads read from a shared resource, while a single thread writes to it. The goal is to allow multiple readers to access the resource simultaneously, but only one writer can access it at a time.
A possible solution
Using Mutex and Semaphore: Use a mutex to protect the shared resource and two semaphores to control access for readers and writers.
Readers Priority: Allow readers to access the resource if no writer is waiting, but block writers if there are readers.
Writers Priority: Allow a writer to access the resource only if there are no readers or writers.
Fairness: Ensure that readers and writers are treated fairly and that no thread starves. This can be achieved by using a queue to manage access.
// Shared variables
Semaphore mutex = 1; // Protects read_count
Semaphore write_block = 1; // Ensures mutual exclusion for writers
int read_count = 0;
Reader:
wait(mutex);
read_count++;
if (read_count == 1)
wait(write_block); // First reader locks writer
signal(mutex);
read_data();
wait(mutex);
read_count--;
if (read_count == 0)
signal(write_block); // Last reader releases writer
signal(mutex);
Writer:
wait(write_block); // Ensure exclusive access
write_data(); // Perform writing
signal(write_block); // Release exclusive access
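A runnable sketch of the same readers-priority scheme in, e.g., Python (the reader count and shared data are arbitrary):

```python
import threading

mutex = threading.Semaphore(1)        # protects read_count
write_block = threading.Semaphore(1)  # mutual exclusion for writers
read_count = 0
shared = {"value": 0}
log = []

def reader(name):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        write_block.acquire()         # first reader locks out writers
    mutex.release()
    log.append((name, shared["value"]))   # read_data()
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        write_block.release()         # last reader lets writers in
    mutex.release()

def writer():
    write_block.acquire()             # exclusive access
    shared["value"] += 1              # write_data()
    write_block.release()

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])                # 1
```

All three readers may overlap with each other, but the writer's increment is never interleaved with a read in progress.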
Problem where five philosophers sit at a table with a bowl of spaghetti. Each philosopher needs two forks to eat, but there are only five forks on the table (one between each pair of neighbors). The philosophers alternate between thinking and eating.
Requirements
A possible solution
Using Mutex and Semaphore: Use a mutex to protect the forks and a semaphore to control access to the forks.
Resource Hierarchy: Assign a unique number to each fork and require that philosophers pick up the lower-numbered fork first.
Timeouts: Implement a timeout mechanism to prevent deadlocks if a philosopher cannot acquire both forks.
For philosopher i (0 to N-1):
while (true)
{
think();
if (i % 2 == 0) {
// Even-numbered philosopher
pick_up_fork(i); // Left fork
pick_up_fork((i + 1) % N); // Right fork
} else {
// Odd-numbered philosopher
pick_up_fork((i + 1) % N); // Right fork
pick_up_fork(i); // Left fork
}
eat();
put_down_fork(i); // Put down left fork
put_down_fork((i + 1) % N); // Put down right fork
}
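The even/odd ordering above can be sketched in, e.g., Python (philosopher and meal counts are arbitrary):

```python
import threading

N, MEALS = 5, 3
forks = [threading.Lock() for _ in range(N)]
eaten = [0] * N

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    # Even philosophers take the left fork first, odd take the right
    # first, so the pick-up directions can never all line up into a
    # circular wait around the table.
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(MEALS):
        with first:
            with second:
                eaten[i] += 1     # eat()
        # think() happens outside the critical section

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(eaten)                      # [3, 3, 3, 3, 3]
```

Every philosopher finishes all meals: breaking the uniform pick-up order is enough to rule out the deadlock cycle.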
Condition variables allow threads to wait for a certain condition to become true before proceeding. They are mostly used with mutexes to manage access to shared data.
Example of Condition Variables in Producer Consumer:
// Shared variables
Mutex mutex;
Condition not_full, not_empty;
Buffer buffer;
Producer:
lock(mutex);
while (buffer.is_full())
{
wait(not_full, mutex); // Wait until buffer is not full
}
buffer.add(item);
signal(not_empty); // Signal that buffer is not empty
unlock(mutex);
Consumer:
lock(mutex);
while (buffer.is_empty())
{
wait(not_empty, mutex); // Wait until buffer is not empty
}
item = buffer.remove();
signal(not_full); // Signal that buffer has space
unlock(mutex);
consume_item(item);
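The same producer-consumer logic is runnable with, e.g., Python's `threading.Condition` (a sketch; capacity and item count are arbitrary):

```python
import threading
from collections import deque

CAPACITY, N_ITEMS = 2, 8
buffer = deque()
mutex = threading.Lock()
not_full = threading.Condition(mutex)   # signaled when space appears
not_empty = threading.Condition(mutex)  # signaled when data appears
consumed = []

def producer():
    for item in range(N_ITEMS):
        with mutex:
            # while, not if: re-check the condition after every wakeup,
            # which also guards against spurious wakeups.
            while len(buffer) >= CAPACITY:
                not_full.wait()
            buffer.append(item)
            not_empty.notify()

def consumer():
    for _ in range(N_ITEMS):
        with mutex:
            while not buffer:
                not_empty.wait()
            item = buffer.popleft()
            not_full.notify()
        consumed.append(item)           # consume_item(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)                         # [0, 1, ..., 7]
```

Both condition variables share the one mutex, so `wait()` atomically releases it while sleeping and reacquires it before returning.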
Mutexes are the locks used with condition variables to protect the shared data the condition is about.
Occurs when a thread waiting on a condition variable wakes up without being notified or without the condition being true. This can happen due to the underlying implementation of condition variables. To handle spurious wakeups, always check the condition after waking up. They are like the nightmares of synchronization!
A high-level synchronization concept that combines a mutex, condition variable and the shared data into a single structure. They ensure that only one thread can access the shared data at a time and provide a way for threads to wait for a condition to become true.
General concept where a boss thread assigns tasks to worker threads. The boss creates the worker threads and then manages them too. Just like real life.
Used in load balancing or distributing tasks across threads.
Pseudo code:
bossThread:
for (task in taskQueue)
{
worker = getAvailableWorker();
worker.assignTask(task);
}
workerThread:
while (true)
{
task = waitForTask();
processTask(task);
}
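A runnable sketch of the boss-worker pattern in, e.g., Python, using a task queue and sentinel values to shut workers down (all names and counts are illustrative):

```python
import threading, queue

tasks = queue.Queue()
results = []
results_lock = threading.Lock()
N_WORKERS, N_TASKS = 3, 9

def worker():
    while True:
        task = tasks.get()        # waitForTask()
        if task is None:          # sentinel: no more work
            break
        with results_lock:
            results.append(task * task)   # processTask(task)

workers = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for w in workers: w.start()

# Boss: hand out the tasks, then one sentinel per worker.
for t in range(N_TASKS):
    tasks.put(t)
for _ in workers:
    tasks.put(None)
for w in workers: w.join()
print(sorted(results))    # [0, 1, 4, 9, 16, 25, 36, 49, 64]
```

The queue does the load balancing for free: whichever worker is idle picks up the next task.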
Data flows through a sequence of stages, each one handled by a thread. Each stage processes the data and passes it to the next stage. Used in data processing, compilers, image rendering, etc.
Pseudo code:
stage1Thread:
while (true)
{
data = readData();
sendToStage2(data);
}
stage2Thread:
while (true)
{
data = receiveFromStage1();
processedData = process(data);
sendToStage3(processedData);
}
stage3Thread:
while (true)
{
data = receiveFromStage2();
writeData(data);
}
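A runnable sketch of the three-stage pipeline in, e.g., Python, with queues connecting the stages (the data and processing step are made up):

```python
import threading, queue

q12, q23 = queue.Queue(), queue.Queue()
output = []
DONE = object()    # sentinel marking the end of the stream

def stage1():
    for data in range(5):          # readData()
        q12.put(data)              # sendToStage2(data)
    q12.put(DONE)

def stage2():
    while (data := q12.get()) is not DONE:
        q23.put(data * 10)         # process(data), sendToStage3(...)
    q23.put(DONE)                  # pass the shutdown signal along

def stage3():
    while (data := q23.get()) is not DONE:
        output.append(data)        # writeData(data)

threads = [threading.Thread(target=s) for s in (stage1, stage2, stage3)]
for t in threads: t.start()
for t in threads: t.join()
print(output)      # [0, 10, 20, 30, 40]
```

Each stage runs concurrently with the others, so while stage 3 writes item n, stage 2 can already be processing item n+1.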
Organizes threads into layers, where each layer performs a specific function. Used in networking, operating systems, etc.
Pseudo code:
layer1Thread:
while (true)
{
data = receiveData();
process(data);
sendData(data);
}
layer2Thread:
while (true)
{
data = receiveData();
process(data);
sendData(data);
}