This page lists synchronization invariants that hold in Chickadee. See also spinlocks.
## Run queue invariants

- A CPU's run queue is implemented by members in `cpustate` and `proc`: `cpustate::current_`, `cpustate::runq_`, `proc::runq_links_`, and `proc::runq_cpu_`.
- A `proc p` is on the run queue for CPU `c` if:
    - `p` is part of the linked list headed at `c->runq_`, or
    - `p` is running on CPU `c`.

  Normally a single `proc` is either running or scheduled, but a process may be both running and scheduled simultaneously. (So it is possible that `p == c->current_ && p->runq_links_.linked()`.)
- A `proc` may be on at most one CPU's run queue at a time.
- `cpustate::current_` holds the currently-running kernel task on a CPU. `c->current_` is protected by `c->runq_lock_`. `c->current_` may only be modified by `c->schedule()`, the `cpustate::schedule()` function for that same CPU.
    - Because modifications to `c->current_` are limited, a kernel task may safely examine `%gs:(8)`, which holds `c->current_` for the CPU currently running that task. (The kernel task is running on CPU `c`, so `c->schedule()` is not, guaranteeing that `c->current_` will not be written concurrently.) However, kernel tasks must not examine other CPUs' `current_` members without the relevant locks.
- `cpustate::runq_` and `proc::runq_links_` form the linked list of runnable tasks. These members are protected by `c->runq_lock_`. `cpustate::enqueue()` modifies these members after acquiring that lock.
- `proc::runq_cpu_` is the index of the CPU on which a `proc` should be scheduled. It is initialized the first time `cpustate::enqueue` is called for that `proc`. In Chickadee handout code, `proc::runq_cpu_` never changes after initialization. `proc::unblock()` uses `proc::runq_cpu_` to find a task's home CPU. It reads `runq_cpu_` before acquiring any lock (in fact, it uses `runq_cpu_` to find the relevant `cpustate::runq_lock_` to lock). When an object is accessed from multiple CPUs without locks, the First Law of Synchronization says all such accesses must be reads.
    - Enabling process migration among CPUs would require modifying `proc::runq_cpu_`, and therefore additional synchronization objects to protect accesses to it.
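The `proc::unblock()` lock-ordering pattern above (read `runq_cpu_` with no locks held, then use it to find and acquire the home CPU's `runq_lock_`) can be sketched in a simplified, single-file model. Names mirror Chickadee's, but `std::mutex` stands in for the spinlock and `std::deque` for the intrusive run-queue list; this is a sketch of the idea, not Chickadee's actual implementation.

```cpp
#include <atomic>
#include <cassert>
#include <deque>
#include <mutex>

struct proc;

struct cpustate {
    std::mutex runq_lock_;      // protects runq_ (and current_, not shown)
    std::deque<proc*> runq_;
};

inline cpustate cpus[2];

struct proc {
    int runq_cpu_ = 0;          // write-once home CPU index

    void unblock() {
        // runq_cpu_ is read with no lock held: it is needed to *find*
        // which lock to acquire. This lock-free read is safe only
        // because runq_cpu_ never changes after initialization, so all
        // unlocked cross-CPU accesses to it are reads.
        cpustate* c = &cpus[runq_cpu_];
        std::lock_guard<std::mutex> guard(c->runq_lock_);
        c->runq_.push_back(this);
    }
};
```

If migration were added, `runq_cpu_` could change between the unlocked read and the lock acquisition, which is exactly why migration would require new synchronization for this member.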
## Suspension invariants

- The `proc::regs_` and `proc::yields_` members are used to store resumption state. At most one of these members can be nonnull at a time.
- A nonnull `proc::regs_` or `proc::yields_` must point into the kernel task stack containing the `proc` (so their address must be on the same memory page as the `proc`).
- The `proc::regs_` member can't be accessed by kernel code, except:
    - A running kernel task may set and modify its own `proc::regs_` if interrupts are disabled and remain disabled until the next call to `proc::yield_noreturn()`.
    - After `proc::init_user()` is called, the new `proc` has a nonnull `proc::regs_` pointer. It is safe to access that member and modify its contents, but only up to the point that the new `proc` is scheduled.
    - The `proc::resume()` function may access and clear `proc::regs_`.
- The `proc::yields_` member can't be accessed by kernel code, except:
    - A running kernel task may set and modify its own `proc::yields_` if interrupts are disabled and remain disabled until the next call to `proc::yield_noreturn()`. (The `proc::yield()` function does this.)
    - The `proc::resume()` function may access and clear `proc::yields_`.
- The `proc::resume()` function can only be called by `cpustate::schedule()`.
- The `proc::yield()` function must be called with no spinlocks held.
- The `proc::yield_noreturn()` function must be called with no spinlocks held and with interrupts disabled.
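The first two invariants (at most one nonnull member, and a nonnull pointer lies on the `proc`'s own page) can be checked mechanically. Here is a minimal sketch of such a check; `regstate` and `yieldstate` are stubs, the page size is assumed to be 4 KiB, and `suspension_invariants_hold` is a hypothetical helper, not part of Chickadee.

```cpp
#include <cassert>
#include <cstdint>
#include <new>

struct regstate {};
struct yieldstate {};

struct proc {
    regstate* regs_ = nullptr;
    yieldstate* yields_ = nullptr;
};

constexpr uintptr_t PAGEMASK = ~uintptr_t(0xfff);   // 4 KiB pages assumed

bool suspension_invariants_hold(const proc* p) {
    if (p->regs_ && p->yields_) {
        return false;           // at most one member may be nonnull
    }
    uintptr_t page = reinterpret_cast<uintptr_t>(p) & PAGEMASK;
    uintptr_t ptrs[] = { reinterpret_cast<uintptr_t>(p->regs_),
                         reinterpret_cast<uintptr_t>(p->yields_) };
    for (uintptr_t ptr : ptrs) {
        if (ptr != 0 && (ptr & PAGEMASK) != page) {
            return false;       // must lie on the proc's own page
        }
    }
    return true;
}

// A fake 4 KiB kernel task stack page for experimentation.
alignas(4096) inline char stack_page[4096];
```

Because the `proc` sits at the base of its kernel task stack page, the page-identity check is what ties `regs_`/`yields_` to "points into the kernel task stack containing the `proc`".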
## Scheduling invariants

- The `cpustate::schedule()` function must be called with no spinlocks held and with interrupts disabled.
- The `cpustate::schedule()` function must be called from the current CPU's CPU stack. (The `proc::yield_noreturn()` function switches to that stack.)
- A task (that is, a `proc`) can run on at most one CPU at a time.

  Note that for user threads, the "task" encompasses both privileged and unprivileged modes of execution: it is illegal for CPU 0 to execute a particular thread in unprivileged mode while CPU 1 executes on the corresponding kernel task stack in privileged mode. This is because the unprivileged code could trap at any time, which would cause two CPUs to simultaneously execute the same kernel task.
## Process state invariants

- `proc::pstate_` indicates whether a task is runnable. In the handout code, `proc::ps_runnable` represents runnable tasks. The value `proc::ps_blocked` is used for tasks that may run in the future, but are currently blocked. You may add other values.
- Many contexts, including the memviewer and `cpustate::schedule()`, may examine the `proc::pstate_` member for any `proc`, to see whether that `proc` is runnable, faulted, or otherwise. This member has type `std::atomic<int>` to prevent undefined behavior.
- `proc::pstate_` may be changed to and from `proc::ps_runnable` in the following ways.
    - Any context may change any task's `p->pstate_` from `proc::ps_blocked` to `proc::ps_runnable` via `p->unblock()`.
    - A kernel task `p` may change its own `p->pstate_` from `proc::ps_runnable` to any other value, such as `proc::ps_blocked`.

  `proc::unblock()`, which changes from `proc::ps_blocked` to `proc::ps_runnable`, uses an atomic compare-and-swap operation. This is because the task itself might be running on another CPU, and changing `p->pstate_` to another value, such as `proc::ps_faulted`. In such cases, the task's own choice of state should win; `proc::unblock()` only transitions from blocked to runnable.

  A task marks itself as non-runnable (that is, changes `p->pstate_` from `proc::ps_runnable` to `proc::ps_blocked`, or another value) as it prepares to block until some condition holds. Generally, a call to `proc::yield()` or `proc::yield_noreturn()` quickly follows the change of `pstate_`.

  A task may change its `pstate_` while interrupts are enabled, but this requires care. When a task's state is non-runnable at interrupt time, the `proc::exception` handler will not return to that task after handling the interrupt; rather, `cpustate::schedule` will choose another task to run. Thus, a task preparing to block must place itself on a wait queue, or otherwise make itself available for later waking, either (1) while interrupts are disabled, or (2) before changing its `pstate_`.

  If you introduce new states, you can introduce new rules for those states. For instance, our handling of `exit` involves new states, and some state transitions are performed by contexts other than the `proc` itself.

  Note that a task may be unblocked without knowing which wait queues it is on, if any. (For instance, a process might be interrupted by a signal.) Thus, your wait queue implementation must handle tasks that unblock without being removed from the corresponding wait queue.
- A task may be resumed only if the associated `pstate_` is `proc::ps_runnable`. A task that is not runnable must not be resumed. (The `proc::resume()` method resumes a task by loading either its `yieldstate` or its `regstate`. After `proc::resume()`, either the kernel task starts running on its kernel task stack, or the corresponding user context starts running on its stack. `proc::resume()` is called in exactly one place in the kernel, namely `cpustate::schedule()`.)
- A task that is actively running may have a `pstate_` other than `proc::ps_runnable`. This happens, for example, when a process takes a page fault, or as a kernel task decides whether or not to block. However, `cpustate::schedule()` must not resume any task whose `pstate_` does not equal `proc::ps_runnable`.

  Here's why the handout code obeys this invariant.
    - While `cpustate::schedule()` chooses a task to resume, it holds its `runq_lock_`. This protects tasks in its run queue from being removed from its queue or rescheduled on other CPUs.
    - `cpustate::schedule()` chooses a task with `p->pstate_ == proc::ps_runnable`. Since this task is on the current CPU's run queue, which is locked, we know the task is not currently running on any other CPU. It is not running on this CPU either (because `cpustate::schedule()` is running). Therefore, `p` is not running anywhere. Only `p` itself can change `p->pstate_` from `proc::ps_runnable`, so we know `p->pstate_` will remain `proc::ps_runnable` until the `runq_lock_` is released.
    - `cpustate::schedule()` releases its `runq_lock_` before resuming the chosen process. However, it sets `cpustate::current_` to the chosen process before releasing the lock. This prevents any other CPU from running `p`, and ensures that `p->pstate_` will not change until after `p` is resumed.
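The compare-and-swap rule for `proc::unblock()` can be isolated into a few lines. This sketch models only the `pstate_` transition with `std::atomic<int>::compare_exchange_strong`; the state constants are local stand-ins for Chickadee's `proc` constants, and `try_unblock` is a hypothetical helper, not the handout function.

```cpp
#include <atomic>
#include <cassert>

enum pstate : int { ps_blocked, ps_runnable, ps_faulted };

bool try_unblock(std::atomic<int>& pstate_) {
    int expected = ps_blocked;
    // Succeeds only if pstate_ still equals ps_blocked. If the task is
    // concurrently setting another state (e.g., ps_faulted) on another
    // CPU, the CAS fails and the task's own choice of state wins.
    return pstate_.compare_exchange_strong(expected, ps_runnable);
}
```

A plain store (`pstate_ = ps_runnable`) would be wrong here: it could overwrite a `ps_faulted` that the running task just set for itself.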
## Process table invariants

- Read or write access to the process table `ptable` requires the `ptable_lock` spinlock.
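The rule amounts to making every read-modify-write of the table one critical section. Here is a hedged sketch of the pattern: `std::mutex`/`std::lock_guard` stand in for Chickadee's spinlock and guard, and the `NPROC` size, pid scheme, and `ptable_install` helper are assumptions of this sketch, not the handout's exact layout.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

constexpr std::size_t NPROC = 16;   // assumed table size

struct proc { int id_ = 0; };

inline std::mutex ptable_lock;      // stands in for the spinlock
inline proc* ptable[NPROC];

// Install `p` in the first free slot (pid 0 reserved); return the pid,
// or -1 if the table is full. The scan and the store happen under one
// acquisition of ptable_lock, so no other context can claim the same
// slot between the check and the write.
int ptable_install(proc* p) {
    std::lock_guard<std::mutex> guard(ptable_lock);
    for (std::size_t i = 1; i < NPROC; ++i) {
        if (!ptable[i]) {
            ptable[i] = p;
            p->id_ = int(i);
            return int(i);
        }
    }
    return -1;
}
```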
## Page table invariants

- You may free a process's page table pages only if (1) the process is not present in `ptable`, or (2) the process's page table lock is locked for writing (forcing `proc::lock_pagetable_read()` to spin).

  Note that in our handout code, `proc::lock_pagetable_read()` does nothing. We provide it as a hook for you to complete, in case you want to implement finer-grained page table management.
## Buffer cache invariants (Pset 4)

The buffer cache, `bufcache`, comprises a set of slots. The bufcache has a lock, `lock_`. Each slot has type `bcslot`, and contains a state `state_`, a lock `lock_`, a disk block number `bn_`, a reference count `ref_`, a write owner `write_owner_` (initially `nullptr`), and a pointer to the cached data `buf_`. These data structures have nontrivial synchronization invariants because disks are slow, making fine-grained synchronization important for performance. As always, you can change these invariants, but understand the current invariants first.

- The `bcslot::ref_` and `bcslot::state_` members are atomic and can be read without locks.
- Changing a `bcslot::state_` member requires the slot's `bcslot::lock_`.
- Allocating a slot requires both the global buffer cache lock, `bufcache::lock_`, and the relevant slot's `lock_`.
    - Thus, if a task holds `bufcache::lock_`, it can be sure that no other task will cause a slot to transition from `ref_ == 0` to `ref_ > 0`.
- Dereferencing a slot (`--slot->ref_`) can happen at any time and requires no locks.
- Loaded slots have read-only `bcslot::bn_` and `bcslot::buf_` members. If a slot is referenced, its `bn_` and `buf_` can be read without locking.
    - Once a slot has been allocated, its `bn_` does not change.
    - Once a slot is loaded, its `buf_` does not change.
    - All nonempty slots have distinct `bn_`s.
    - All loaded slots have distinct, nonnull `buf_`s, because each block is cached in different memory.
    - Allocating or freeing a slot may change its `bn_` and `buf_`. These operations require the slot's `lock_`, and require that `ref_ == 0`.
- `bufcache::lock_` precedes all `bcslot::lock_`s in the lock order. For example, the `bufcache::load` function initially acquires the `bufcache::lock_`, then acquires a slot's `bcslot::lock_` and releases the global `bufcache::lock_`.
- The `bufcache::load` function, which is the only way to obtain a new slot reference, yields until the relevant slot is fully loaded. Thus, if a kernel task holds a slot reference, that slot has `buf_ != nullptr` and `state_ == s_clean` or `s_dirty`.
- The contents of a `bcslot::buf_` are protected by `write_owner_`, which is a mutual-exclusion lock implemented by `bcslot::lock_buffer()` and `bcslot::unlock_buffer()`.