#### Basic definitions
- A **sequential algorithm** is the formal description of the behavior of an *abstract* state machine.
- A **program** is a sequential algorithm written in a programming language.
- A **process** is a program executed on a *concrete* machine, characterized by its *state* (e.g. values of the registers).
- If the process follows a single control flow (i.e. one program counter), then it is a **sequential process**, or **thread**.
- A set of sequential state machines that run simultaneously and interact through a *shared medium* (e.g. shared memory) is called **concurrency**.
>[!note]
>*"In a concurrent system there must be something shared."*
#### Advantages of concurrent systems
- **Efficiency**: different activities can run in parallel.
- **Simplification of the logic**: a task can be divided into simpler sub-tasks that run in different processes, and their results are then combined together.
#### Features of a concurrent system
A concurrent system can be characterized by several features:
- Reliable vs Unreliable
- Synchronous vs Asynchronous
- Shared memory vs Channel-based communication
- **Reliable** system: every process correctly executes its program.
- **Asynchronous:** no timing assumptions (every process has its own clock, and the clocks are independent of one another).
- **Shared medium:** communication can take place either through a shared memory area or through message passing.
>[!info]
>For this part of the course we assume that every process has a local memory but can also access a shared part of the memory. We will assume that the shared memory is split into registers (more on this later).
**For now, we will assume that processes won't fail.**
We also assume that there is one processor per process; in reality, the processor itself can be a shared resource (there may be more processes than processors).
### Synchronization: cooperation vs competition
Definition of **synchronization:** the behavior of one process depends on the behavior of others.
This involves two fundamental interactions:
- **Cooperation**
- **Competition**
#### Cooperation
Different processes work so that all of them succeed in their task; two typical patterns follow (both are sketched in code right after this list).
1. **Rendezvous:** every involved process has a control point that can be passed only when all processes have reached their control points; the set of all control points is called a *barrier*.
2. **Producer-consumer:** two kinds of processes, one that produces data and one that consumes them, such that
	- only produced data can be consumed
	- every datum is consumed at most once
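As an illustration (not from the original notes), both cooperation patterns can be sketched with Python's standard library: `threading.Barrier` realizes a rendezvous, and `queue.Queue` a producer-consumer buffer. The constants and function names below are made up for the example.
```python
import threading
import queue

NUM_WORKERS = 3      # illustrative parameter
N_ITEMS = 5          # illustrative parameter

barrier = threading.Barrier(NUM_WORKERS)   # rendezvous: the set of control points
buffer = queue.Queue()                     # producer-consumer buffer

def worker(i: int) -> None:
    print(f"worker {i} before the barrier")
    barrier.wait()                         # passes only when all workers reached their control point
    print(f"worker {i} after the barrier")

def producer() -> None:
    for datum in range(N_ITEMS):
        buffer.put(datum)                  # only produced data can be consumed

def consumer() -> None:
    for _ in range(N_ITEMS):
        datum = buffer.get()               # every datum is consumed at most once
        print("consumed", datum)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
threads += [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```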
#### Competition
Different processes aim at executing some action, but only one of them succeeds.
Usually, this is related to the access of the same shared resource.
Example: two processes want to withdraw from a bank account.
```js
// account is a shared object exposing atomic read() and write()
function withdraw() {
  const x = account.read();        // atomic read of the balance
  if (x >= 1_000_000) {            // 1M€
    account.write(x - 1_000_000);  // atomic write; read+write together is NOT atomic
  }
}
```
While `read()` and `write()` may each be considered atomic, their sequential composition **is not**.
![[Pasted image 20250303090135.png]]
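A runnable Python sketch of the same race (the account dictionary, the amounts and the artificial `sleep` are illustrative): with an unlucky interleaving, both threads read the balance before either writes it back, so both withdrawals succeed even though there are funds for only one.
```python
import threading
import time

account = {"balance": 1_000_000}          # shared resource: 1M€ available

def withdraw() -> None:
    x = account["balance"]                # read()  -- atomic on its own
    time.sleep(0.01)                      # widen the window to make the race likely
    if x >= 1_000_000:
        account["balance"] = x - 1_000_000    # write() -- atomic on its own

t1 = threading.Thread(target=withdraw)
t2 = threading.Thread(target=withdraw)
t1.start(); t2.start()
t1.join(); t2.join()

# Both checks passed on the stale value: 2M€ were dispensed out of 1M€,
# and the final balance is 0 instead of rejecting the second withdrawal.
print(account["balance"])
```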
### Mutual Exclusion (MUTEX)
Ensure that some parts of the code are executed *atomically*.
This is needed in competition but also in cooperation.
Of course, MUTEX is required only when accessing shared data.
###### Critical section
A set of code parts that must be run without interference, i.e. when a process is in a critical section (C.S.) for a given shared object, then no other process is in a C.S. for that same object.
###### MUTEX problem
Design an entry protocol (*lock*) and an exit protocol (*unlock*) such that, when used to encapsulate a C.S. (for a given shared object), they ensure that at most one process at a time is in a C.S. (for that shared object).
We assume that all C.S.s terminate and that the code is well-formed (*lock ; critical section ; unlock*).
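The shape of the problem can be sketched with the following (hypothetical) Python interface: any MUTEX algorithm must provide `lock`/`unlock` so that the well-formed pattern below keeps at most one process at a time inside the C.S. of a given shared object. The names `MutexProtocol` and `well_formed_use` are illustrative.
```python
from abc import ABC, abstractmethod

class MutexProtocol(ABC):
    """Hypothetical interface for a MUTEX algorithm over n processes."""

    @abstractmethod
    def lock(self, i: int) -> None:
        """Entry protocol for process i: returns only when i may enter the C.S."""

    @abstractmethod
    def unlock(self, i: int) -> None:
        """Exit protocol for process i: releases the C.S."""

def well_formed_use(mutex: MutexProtocol, i: int) -> None:
    mutex.lock(i)            # entry protocol
    # ... critical section (must terminate) ...
    mutex.unlock(i)          # exit protocol
```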
#### Safety and liveness
Every solution to a problem should satisfy at least:
- **Safety**: *nothing bad ever happens*
- **Liveness:** *something good eventually happens*
*Liveness without safety:* allow anything; something good may eventually happen, but so may something terrible!
*Safety without liveness:* forbid everything (no activity in the system).
So safety is necessary for correctness, liveness for meaningfulness.
###### In the context of MUTEX:
- **Safety:** there is at most one process at a time in a C.S.
- **Liveness:**
	- **Deadlock freedom:** if at least one process invokes lock, then eventually some process enters a C.S.
- **Starvation freedom:** every invocation of lock eventually grants access to the associated C.S.
	- **Bounded bypass:** let $n$ be the number of processes; then there exists $f: \mathbb{N} \to \mathbb{N}$ such that every invocation of lock enters the C.S. after being bypassed by at most $f(n)$ other critical sections (i.e. a process loses at most $f(n)$ times before winning).
Bounded bypass $\implies$ starvation freedom $\implies$ deadlock freedom, but both implications are strict:
Deadlock freedom $\not\implies$ Starvation freedom
![[Pasted image 20250303093116.png]]
Starvation freedom $\not\implies$ Bounded bypass
Assume such an $f$ exists and consider the scheduling above, where p2 wins $f(3)$ times and so does p3: then p1 loses (at least) $2f(3) > f(3)$ times before winning, contradicting the bound.
### Atomic R/W registers
We will consider different computational models according to the available level of atomicity of the operations provided.
Atomic R/W registers are storage units that can be accessed through two operations (READ and WRITE) such that
1. Each invocation of an operation:
	- looks instantaneous (it can be depicted as a single point on the timeline: there exists $t : OpInv \to \mathbb{R}^+$)
	- may be located at any point between its starting and ending time
	- does not happen together with any other operation ($t$ is injective)
2. Every READ returns the value written by the closest preceding WRITE on that register, or the initial value (if no WRITE has occurred).
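In the model, atomicity of registers is simply assumed; for simulations one can emulate it with a hypothetical `AtomicRegister` class as below, where the internal `threading.Lock` only serves to make each operation appear instantaneous (it is a simulation device, not part of the model).
```python
import threading

class AtomicRegister:
    """Simulation of an atomic R/W register (a primitive in the model)."""

    def __init__(self, initial):
        self._value = initial
        self._guard = threading.Lock()    # serializes operations: t becomes injective

    def read(self):
        with self._guard:                 # the whole operation maps to a single point in time
            return self._value

    def write(self, value) -> None:
        with self._guard:
            self._value = value
```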
### Peterson algorithm (for two processes)
Let's try to enforce MUTEX with just 2 processes.
##### 1st attempt:
```
lock(i) :=
   AFTER_YOU <- i
   wait AFTER_YOU != i
   return

unlock(i) :=
   return
```
This protocol satisfies MUTEX, but suffers from deadlock: if only one process invokes lock, it waits forever for the other.
##### 2nd attempt:
```
Initialize FLAG[0] and FLAG[1] to down

lock(i) :=
   FLAG[i] <- up
   wait FLAG[1-i] = down
   return

unlock(i) :=
   FLAG[i] <- down
   return
```
Still suffers from deadlock if both processes simultaneously raise their flag.
##### Correct solution:
```
Initialize FLAG[0] and FLAG[1] to down

lock(i) :=
   FLAG[i] <- up
   AFTER_YOU <- i
   wait FLAG[1-i] = down OR AFTER_YOU != i
   return

unlock(i) :=
   FLAG[i] <- down
   return
```
**Features:**
- it satisfies MUTEX
- it satisfies bounded bypass, with bound = 1
- it requires 2 one-bit SRSW registers (the flags) and 1 one-bit MRMW register (AFTER_YOU)
- Each lock-unlock requires 5 accesses to the registers (4 for lock and 1 for unlock)
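A minimal runnable sketch of the two-process algorithm in Python; it relies on CPython's GIL making single list-cell reads and writes behave like atomic registers (an assumption made only for this illustration). `counter` and `process` are illustrative additions used to exercise the critical section.
```python
import threading

FLAG = [False, False]     # FLAG[i]: up (True) / down (False)
AFTER_YOU = 0             # who asked last
counter = 0               # shared data protected by the critical section

def lock(i: int) -> None:
    global AFTER_YOU
    FLAG[i] = True                            # FLAG[i] <- up
    AFTER_YOU = i                             # AFTER_YOU <- i
    while FLAG[1 - i] and AFTER_YOU == i:     # wait FLAG[1-i] = down OR AFTER_YOU != i
        pass                                  # busy wait (inefficient, but faithful to the protocol)

def unlock(i: int) -> None:
    FLAG[i] = False                           # FLAG[i] <- down

def process(i: int) -> None:
    global counter
    for _ in range(10_000):
        lock(i)
        counter += 1                          # critical section
        unlock(i)

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)    # 20000 if MUTEX held: no increment was lost
```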
##### MUTEX proof
Assume, by contradiction, that p0 and p1 are simultaneously in the C.S.
How did p0 enter its C.S.? There are two possibilities:
a) `FLAG[1] = down`: this is possible only with the following interleaving:
![[Pasted image 20250303100721.png]]
b) `AFTER_YOU = 1`: this is possible only with the following interleaving:
![[Pasted image 20250303100953.png]]
In both cases the interleaving is incompatible with p1 being in its C.S. at the same time, contradicting the assumption.
##### Bounded Bypass proof (with bound = 1)
- If the wait condition is true when p0 invokes lock, then p0 wins immediately (waiting 0).
- Otherwise, it must be that `FLAG[1] = up` **AND** `AFTER_YOU = 0` (so p1 has invoked lock and p0 has to wait).
	- If p1 never invokes lock again, then after p1 unlocks p0 will eventually read `FLAG[1] = down` and win (waiting 1).
	- It is possible, though, that p1 invokes lock again:
		- if p0 reads `FLAG[1] = down` before p1 locks again, then p0 wins (waiting 1)
		- otherwise p1 sets `AFTER_YOU` to 1 and suspends in its wait (since `FLAG[0] = up` AND `AFTER_YOU = 1`); p0 will then eventually read `AFTER_YOU = 1` and win (waiting 1).
### Peterson algorithm ($n$ processes)
- FLAG now has $n$ levels (from 0 to n-1)
- level 0 means down
- level >0 means involved in the lock
- Every level has its own AFTER_YOU
```
Initialize FLAG[i] to 0, for all i

lock(i) :=
   for lev = 1 to n-1 do
      FLAG[i] <- lev
      AFTER_YOU[lev] <- i
      wait (∀k != i. FLAG[k] < lev) OR (AFTER_YOU[lev] != i)
   return

unlock(i) :=
   FLAG[i] <- 0
   return
```
We say that $p_i$ is at level $h$ when it exits from its $h$-th wait $\to$ a process at level $h$ is also at every level $\leq h$.
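As before, a hedged Python sketch of the generalized algorithm; the number of processes `N`, `counter` and `process` are illustrative, and plain list cells play the role of the registers under CPython's GIL (an assumption for illustration only).
```python
import threading

N = 3                                  # illustrative number of processes
FLAG = [0] * N                         # FLAG[i] = current level of process i (0 = down)
AFTER_YOU = [0] * N                    # AFTER_YOU[lev] for lev = 1..N-1 (index 0 unused)
counter = 0                            # shared data protected by the critical section

def lock(i: int) -> None:
    for lev in range(1, N):
        FLAG[i] = lev                                  # FLAG[i] <- lev
        AFTER_YOU[lev] = i                             # AFTER_YOU[lev] <- i
        # wait (∀k != i. FLAG[k] < lev) OR (AFTER_YOU[lev] != i)
        while (any(FLAG[k] >= lev for k in range(N) if k != i)
               and AFTER_YOU[lev] == i):
            pass                                       # busy wait

def unlock(i: int) -> None:
    FLAG[i] = 0                                        # FLAG[i] <- 0

def process(i: int) -> None:
    global counter
    for _ in range(5_000):
        lock(i)
        counter += 1                                   # critical section
        unlock(i)

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)    # N * 5000 = 15000 if MUTEX held
```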
##### MUTEX proof
**Lemma:** for every $\ell \in \{0,\dots,n-1\}$, at most $n-\ell$ processes are at level $\ell$; this implies MUTEX by taking $\ell = n-1$.
Proof by induction on $\ell$.
Base case ($\ell = 0$): trivial, since there are $n$ processes in total.
Inductive step (assume the claim for $\ell$, prove it for $\ell+1$):
- a process $p$ at level $\ell$ can increase its level only by writing $\ell+1$ into its FLAG and its index into $A\_Y[\ell+1]$
- let $p_x$ be the last process that writes $A\_Y[\ell+1]$, so $A\_Y[\ell+1] = x$
- if $p_x$ passes to level $\ell+1$, it must be that $\forall k \neq x.\ F[k] < \ell+1$; then $p_x$ is the only process at level $\ell+1$ and the thesis holds, since $1 \leq n-\ell-1$
- otherwise, $p_x$ is blocked in its wait, so at most $n-\ell-1$ processes are at level $\ell+1$: those at level $\ell$ (at most $n-\ell$, by induction) minus $p_x$, which is blocked in its wait.
##### Starvation freedom proof
**Lemma:** every process at level $\ell$ (with $\ell \leq n-1$) eventually wins $\to$ starvation freedom follows by taking $\ell = 0$.
Proof by reverse induction on $\ell$.
Base case ($\ell = n-1$): trivial, since a process at level $n-1$ has completed the loop and won.
Inductive step (assume the claim for $\ell+1$, prove it for $\ell$):
- Assume $p_x$ is blocked at level $\ell$ (i.e. blocked in its $(\ell+1)$-th wait) $\to \exists k \neq x.\ F[k] \geq \ell+1 \land A\_Y[\ell+1] = x$
- If some $p_y$ eventually sets $A\_Y[\ell+1]$ to $y$, then $p_x$ eventually exits its wait and passes to level $\ell+1$ (and wins, by induction)
- Otherwise, let $G = \{p_i : F[i] \geq \ell+1\}$ and $L = \{p_i : F[i] < \ell+1\}$
	- any $p \in L$ will never enter its $(\ell+1)$-th loop iteration (it would write $A\_Y[\ell+1]$ and unblock $p_x$, but we are assuming this never happens)
	- all $p \in G$ will eventually win (by induction) and move to $L$
	- $\to$ eventually, $p_x$ is the only process in its $(\ell+1)$-th loop iteration, with all the other processes at level $< \ell+1$
	- $\to$ $p_x$ will eventually pass to level $\ell+1$ and win (by induction)
##### Peterson algorithm cost
- $n$ MRSW registers of $\lceil \log_{2} n\rceil$ bits (FLAG)
- $n-1$ MRMW registers of $\lceil \log_{2}n \rceil$ bits (AFTER_YOU)
- $(n-1)\times(n+2)$ accesses for locking and 1 access for unlocking
It satisfies MUTEX and starvation freedom. It does not satisfy bounded bypass:
- consider 3 processes, one sleeping in its first wait, the others alternating in the CS
- when the first process wakes up, it can pass to level 2 and eventually win
- but the sleep can be arbitrarily long and, in the meanwhile, the other two processes may have entered an unbounded number of C.S.s
The algorithm is easy to generalize to k-MUTEX (at most $k$ processes in the C.S. at the same time).