Introduction to Concurrency
[!NOTE] This module builds the core principles of concurrency up from hardware constraints: why race conditions happen at the instruction level, and what a correct solution to the critical-section problem must guarantee.
1. The Illusion of Multitasking
Modern operating systems give the illusion that they are doing hundreds of things at once: you can listen to music, browse the web, and compile code simultaneously. Yet on a single-core CPU, the hardware can only execute one instruction at a time.
The OS achieves this illusion through Concurrency: rapid context switching between tasks.
Concurrency vs Parallelism
These terms are often used interchangeably, but they are distinct:
- Concurrency: Dealing with multiple things at once (Structure). The OS manages multiple tasks, switching between them.
- Parallelism: Doing multiple things at once (Execution). Requires multi-core hardware.
[!NOTE] Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once. — Rob Pike (Co-creator of Go)
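The distinction can be sketched in Java with thread pools (a sketch; class and pool sizes here are illustrative, not part of any specific API design):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyVsParallelism {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(Thread.currentThread().getName());

        // Concurrency: one worker thread, so the two tasks take turns on it.
        // The program is *structured* as multiple tasks, but execution is serial.
        ExecutorService single = Executors.newSingleThreadExecutor();
        single.submit(task);
        single.submit(task);
        single.shutdown();
        single.awaitTermination(1, TimeUnit.SECONDS);

        // Parallelism: two worker threads, so the tasks can truly run at the
        // same time -- provided the hardware has at least two cores.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(task);
        pool.submit(task);
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

Note that the same code structure supports both: concurrency is a property of the program, parallelism a property of the execution.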
2. The Race Condition
When multiple threads access shared memory without coordination, chaos ensues. The final state of the program depends on the precise timing of the context switches. This is called a Race Condition.
The “Bank Account” Example
Imagine a joint bank account with a balance of $100.
- Alice checks the balance ($100), sees she has enough, and withdraws $80.
- Bob checks the balance ($100), sees he has enough, and withdraws $80.
If these operations interleave (both checks happen before either withdrawal), both succeed, leaving the account at -$60.
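This "check-then-act" race can be sketched in Java as follows (the class and method names are illustrative; the sleep only widens the race window so the bug shows up reliably):

```java
class Account {
    private int balance = 100;

    // Broken: the check and the withdrawal are separate steps, so two
    // threads can both pass the check before either one subtracts.
    void withdrawUnsafe(int amount) {
        if (balance >= amount) {                      // 1. check
            try { Thread.sleep(1); } catch (InterruptedException e) {}
            balance -= amount;                        // 2. act (balance may have changed)
        }
    }

    int getBalance() { return balance; }
}

public class BankRace {
    public static void main(String[] args) throws InterruptedException {
        Account joint = new Account();
        Thread alice = new Thread(() -> joint.withdrawUnsafe(80));
        Thread bob   = new Thread(() -> joint.withdrawUnsafe(80));
        alice.start(); bob.start();
        alice.join(); bob.join();
        // Typically prints -60: both withdrawals passed the balance check.
        System.out.println("Balance: " + joint.getBalance());
    }
}
```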
3. Hardware Reality: The Read-Modify-Write Cycle
Why does this happen? High-level languages like Java or Go hide the hardware reality. A simple operation like count++ is NOT atomic.
It compiles down to at least three machine instructions:
- LOAD: Read value from Memory into a CPU Register.
- ADD: Increment the value in the Register.
- STORE: Write the Register value back to Memory.
If a context switch happens between the LOAD and the STORE, the resuming thread writes back a stale value, and the other thread's update is lost.
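The lost update can be replayed deterministically by writing the three steps out by hand (a sketch; a real scheduler interleaves unpredictably, this just fixes one bad interleaving):

```java
public class LostUpdateTrace {
    // Replays the lost-update interleaving: both "threads" LOAD before
    // either one STOREs, so one of the two increments vanishes.
    static int replay() {
        int memory = 0;      // shared counter in memory
        int regA = memory;   // Thread A: LOAD  (reads 0)
        int regB = memory;   // Thread B: LOAD  (also reads 0 -- context switch)
        regA = regA + 1;     // Thread A: ADD   (private copy becomes 1)
        regB = regB + 1;     // Thread B: ADD   (private copy becomes 1)
        memory = regA;       // Thread A: STORE (memory = 1)
        memory = regB;       // Thread B: STORE (memory = 1; A's update is lost)
        return memory;
    }

    public static void main(String[] args) {
        // Two increments happened, but only one survived.
        System.out.println("memory = " + replay()); // prints: memory = 1
    }
}
```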
4. Interactive: The Race Simulator
Witness a race condition in action. Two threads each try to increment Counter once.
If their read-modify-write cycles do not overlap, the result is 2. If they race, the result might be 1.
5. Critical Section & Mutual Exclusion
To prevent race conditions, we must first identify the parts of the code that access shared state.
- Critical Section: The segment of code where the process accesses shared resources (variables, files).
- Mutual Exclusion (Mutex): The requirement that only one thread can execute in the Critical Section at any given time.
The Solution Requirements
A valid solution to the Critical Section problem must satisfy three conditions:
- Mutual Exclusion: No two processes may be inside their Critical Sections simultaneously.
- Progress: If no process is executing in its Critical Section and some processes wish to enter, the decision of which one enters next cannot be postponed indefinitely. (No Deadlock).
- Bounded Waiting: There must be a bound on the number of times other processes are allowed to enter their Critical Sections after a process has made a request to enter. (No Starvation).
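In Java, one common way to enforce mutual exclusion is the built-in synchronized block, which wraps the critical section in an acquire/release protocol (a minimal sketch; the class name is illustrative):

```java
public class SafeCounter {
    private int count = 0;
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {   // entry protocol: acquire the lock
            count++;            // critical section: only one thread at a time
        }                       // exit protocol: release the lock
    }

    int get() {
        synchronized (lock) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000: no lost updates
    }
}
```

Java's intrinsic locks give mutual exclusion and progress; strict bounded waiting is not guaranteed, which is why fairness-aware locks (e.g. a fair ReentrantLock) exist.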
6. Code Example: The Race
Here is how a race condition looks in code. If you run this multiple times, you will see different results (e.g., 19998, 20000, 15432).
```java
public class RaceCondition {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) {
                count++; // NOT ATOMIC: Read, Add, Write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Final Count: " + count);
        // Expected: 20000
        // Actual: ??? (e.g. 15492)
    }
}
```
The same race in Go:

```go
package main

import (
	"fmt"
	"sync"
)

var count = 0

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	task := func() {
		defer wg.Done()
		for i := 0; i < 10000; i++ {
			count++ // Race condition here
		}
	}
	go task()
	go task()
	wg.Wait()
	fmt.Println("Final Count:", count)
}
```
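One way to eliminate the race in the Java version is java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet() performs the entire read-modify-write cycle as a single atomic step (a sketch of the same program, fixed):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class NoRace {
    static AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count.incrementAndGet(); // atomic: LOAD, ADD, STORE cannot be split
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Final Count: " + count.get()); // always 20000
    }
}
```

Unlike a lock, this relies on the hardware's atomic instructions, so no thread ever blocks; we will return to this distinction when discussing lock-free techniques.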