This page lists synchronization invariants that hold in Chickadee. See also spinlocks.
## Run queue invariants

- A CPU's run queue is implemented by members in `cpustate` and `proc`: `cpustate::current_`, `cpustate::runq_`, `proc::runq_links_`, and `proc::runq_cpu_`.
- A `proc` `p` is on the run queue for CPU `c` if `p` is part of the linked list headed at `c->runq_`, or `p` is running on CPU `c`. Normally a single `proc` is either running or scheduled, but a process may be both running and scheduled simultaneously. (So it is possible that `p == c->current_ && p->runq_links_.linked()`.)
- A `proc` may be on at most one CPU's run queue at a time.
- `cpustate::current_` holds the currently-running kernel task on a CPU. `c->current_` is protected by `c->runq_lock_`. `c->current_` may only be modified by `c->schedule()`, the `cpustate::schedule()` function for that same CPU.
    - Because modifications to `c->current_` are limited, a kernel task may safely examine `%gs:(8)`, which holds `c->current_` for the CPU currently running that task. (The kernel task is running on CPU `c`, so `c->schedule()` is not, guaranteeing that `c->current_` will not be written concurrently.) However, kernel tasks must not examine other CPUs' `current_` members without the relevant locks.
- `cpustate::runq_` and `proc::runq_links_` form the linked list of runnable tasks. These members are protected by `c->runq_lock_`. `cpustate::enqueue()` modifies these members after acquiring that lock.
- `proc::runq_cpu_` is the index of the CPU on which a `proc` should be scheduled. It is initialized the first time `cpustate::enqueue()` is called for that `proc`. In Chickadee handout code, `proc::runq_cpu_` never changes after initialization. `proc::unblock()` uses `proc::runq_cpu_` to find a task's home CPU. It reads `runq_cpu_` before acquiring any lock (in fact, it uses `runq_cpu_` to find the relevant `cpustate::runq_lock_` to lock). When an object is accessed from multiple CPUs without locks, the First Law of Synchronization says all such accesses must be reads.
    - Enabling process migration among CPUs would require modifying `proc::runq_cpu_`, and therefore additional synchronization objects to protect accesses to it.
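The locking discipline above can be sketched in userspace C++. This is a simplified illustration, not Chickadee's actual code: `std::mutex` stands in for the kernel spinlock, and a plain `bool` stands in for `proc::runq_links_.linked()`; the member names mirror the ones described above.

```cpp
#include <cassert>
#include <list>
#include <mutex>

// Simplified sketch of the run-queue invariants: current_ and runq_ are
// protected by runq_lock_, and a task is linked into a run queue at most once
// (though it may be both running and scheduled simultaneously).
struct proc {
    int id;
    bool linked = false;        // stands in for proc::runq_links_.linked()
};

struct cpustate {
    std::mutex runq_lock_;      // protects current_ and runq_
    proc* current_ = nullptr;   // modified only by this CPU's schedule()
    std::list<proc*> runq_;

    void enqueue(proc* p) {
        std::lock_guard<std::mutex> guard(runq_lock_);
        if (!p->linked) {       // a running task may already be scheduled
            p->linked = true;
            runq_.push_back(p);
        }
    }
};
```

Note that after `enqueue`, the state `p == c->current_ && p->linked` is legal, matching the invariant that a task may be both running and scheduled at once.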
## Suspension invariants

- The `proc::regs_` and `proc::yields_` members are used to store resumption state. At most one of these members can be nonnull at a time.
- A nonnull `proc::regs_` or `proc::yields_` must point into the kernel task stack containing the `proc` (so its address must be on the same memory page as the `proc`).
- The `proc::regs_` member can't be accessed by kernel code, except:
    - A running kernel task may set and modify its own `proc::regs_` if interrupts are disabled and remain disabled until the next call to `proc::yield_noreturn()`.
    - After `proc::init_user()` is called, the new `proc` has a nonnull `proc::regs_` pointer. It is safe to access that member and modify its contents, but only up to the point that the new `proc` is scheduled.
    - The `proc::resume()` function may access and clear `proc::regs_`.
- The `proc::yields_` member can't be accessed by kernel code, except:
    - A running kernel task may set and modify its own `proc::yields_` if interrupts are disabled and remain disabled until the next call to `proc::yield_noreturn()`. (The `proc::yield()` function does this.)
    - The `proc::resume()` function may access and clear `proc::yields_`.
- The `proc::resume()` function can only be called by `cpustate::schedule()`.
- The `proc::yield()` function must be called with no spinlocks held.
- The `proc::yield_noreturn()` function must be called with no spinlocks held and with interrupts disabled.
## Scheduling invariants

- The `cpustate::schedule()` function must be called with no spinlocks held and with interrupts disabled.
- The `cpustate::schedule()` function must be called from the current CPU's CPU stack. (The `proc::yield_noreturn()` function switches to that stack.)
- A task (that is, a `proc`) can run on at most one CPU at a time. Note that for user threads, the "task" encompasses both privileged and unprivileged modes of execution: it is illegal for CPU 0 to execute a particular thread in unprivileged mode while CPU 1 executes on the corresponding kernel task stack in privileged mode. This is because the unprivileged code could trap at any time, which would cause two CPUs to execute the same kernel task simultaneously.
## Process state invariants

- `proc::pstate_` indicates whether a task is runnable. In the handout code, `proc::ps_runnable` represents runnable tasks. The value `proc::ps_blocked` is used for tasks that may run in the future, but are currently blocked. You may add other values.
- Many contexts, including the memviewer and `cpustate::schedule()`, may examine the `proc::pstate_` member for any `proc`, to see whether that `proc` is runnable, faulted, or otherwise. This member has type `std::atomic<int>` to prevent undefined behavior.
- `proc::pstate_` may be changed to and from `proc::ps_runnable` in the following ways.
    - Any context may change any task's `p->pstate_` from `proc::ps_blocked` to `proc::ps_runnable` via `p->unblock()`.
    - A kernel task `p` may change its own `p->pstate_` from `proc::ps_runnable` to any other value, such as `proc::ps_blocked`.

  `proc::unblock()`, which changes `p->pstate_` from `proc::ps_blocked` to `proc::ps_runnable`, uses an atomic compare-and-swap operation. This is because the task itself might be running on another CPU, concurrently changing `p->pstate_` to another value, such as `proc::ps_faulted`. In such cases, the task's own choice of state should win; `proc::unblock()` only transitions from blocked to runnable.

  A task marks itself as non-runnable (that is, changes `p->pstate_` from `proc::ps_runnable` to `proc::ps_blocked` or another value) as it prepares to block until some condition holds. Generally, a call to `proc::yield()` or `proc::yield_noreturn()` quickly follows the change of `pstate_`.

  A task may change its `pstate_` while interrupts are enabled, but this requires care. When a task's state is non-runnable at interrupt time, the `proc::exception` handler will not return to that task after handling the interrupt; rather, `cpustate::schedule` will choose another task to run. Thus, a task preparing to block must place itself on a wait queue, or otherwise make itself available for later waking, either (1) while interrupts are disabled, or (2) before changing its `pstate_`.

  If you introduce new states, you can introduce new rules for those states. For instance, our handling of `exit` involves new states, and some state transitions are performed by contexts other than the `proc` itself.

  Note that a task may be unblocked without knowing which wait queues it is on, if any. (For instance, a process might be interrupted by a signal.) Thus, your wait queue implementation must handle tasks that unblock without being removed from the corresponding wait queue.
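The blocked-to-runnable transition can be demonstrated with `std::atomic` compare-and-swap. This is a userspace sketch rather than Chickadee's actual `proc::unblock()`, but it shows why CAS is needed: the unblocking context must lose if the task itself has already chosen a different state.

```cpp
#include <atomic>
#include <cassert>

// Sketch: unblock() may only perform the transition ps_blocked -> ps_runnable.
// If the task concurrently set another state (e.g. ps_faulted), the CAS fails
// and the task's own choice wins.
enum pstate { ps_blocked, ps_runnable, ps_faulted };

struct proc {
    std::atomic<int> pstate_{ps_blocked};

    void unblock() {
        int expected = ps_blocked;
        // Succeeds only if pstate_ still equals ps_blocked; otherwise it is
        // left unchanged, so unblock() never stomps on ps_faulted or similar.
        pstate_.compare_exchange_strong(expected, ps_runnable);
    }
};
```

A plain store (`pstate_ = ps_runnable`) would be wrong here: it could overwrite a concurrent transition to `ps_faulted` made by the running task itself.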
A task may be resumed only if the associated
pstate_isproc::ps_runnable. A task that is not runnable must not be resumed. (Theproc::resume()method resumes a task by loading either itsyieldstateor itsregstate. Afterproc::resume(), either the kernel task starts running on its kernel task stack, or the corresponding user context starts running on its stack.proc::resume()is called in exactly one place in the kernel, namelycpustate::schedule().)- A task that is actively running may have a
pstate_other thanproc::ps_runnable. This happens, for example, when a process takes a page fault, or as a kernel task decides whether or not to block. However,cpustate::schedule()must not resume any task whosepstate_does not equalproc::ps_runnable.
Here’s why the handout code obeys this invariant.
-
While
cpustate::schedule()chooses a task to resume, it holds itsrunq_lock_. This protects tasks in its run queue from being removed from its queue or rescheduled on other CPUs. -
cpustate::schedule()chooses a task withp->pstate_ == proc::ps_runnable. Since this task is on the current CPU’s run queue, which is locked, we know the task is not currently running on any other CPU. It is not running on this CPU either (becausecpustate::schedule()is running). Therefore,pis not running anywhere. Onlypitself can changep->pstate_fromproc::ps_runnable, so we knowp->pstate_will remainproc::ps_runnableuntil therunq_lock_is released. -
cpustate::schedule()releases itsrunq_lock_before resuming the chosen process. However, it setscpustate::current_to the chosen process before releasing the lock. This prevents any other CPU from runningp, and ensures thatp->pstate_will not change until afterpis resumed.
- A task that is actively running may have a
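The ordering this argument relies on can be sketched in simplified userspace C++ (again not Chickadee's code: `std::mutex` replaces the spinlock, returning the chosen `proc*` stands in for `p->resume()`, and the run queue is a plain `std::list`):

```cpp
#include <atomic>
#include <cassert>
#include <list>
#include <mutex>

// Sketch of the schedule() ordering: pick a runnable task while holding
// runq_lock_, publish it as current_ *before* releasing the lock, and only
// then "resume" it.
enum pstate { ps_blocked, ps_runnable };

struct proc {
    std::atomic<int> pstate_{ps_runnable};
};

struct cpustate {
    std::mutex runq_lock_;      // protects current_ and runq_
    proc* current_ = nullptr;
    std::list<proc*> runq_;

    proc* schedule() {
        runq_lock_.lock();
        for (proc* p : runq_) {
            if (p->pstate_.load() == ps_runnable) {
                current_ = p;           // published while the lock is held
                runq_lock_.unlock();    // released before resuming p
                return p;               // stands in for p->resume()
            }
        }
        runq_lock_.unlock();
        return nullptr;                 // no runnable task found
    }
};
```

Setting `current_` before the unlock is the key step: once the lock is dropped, no other CPU may claim `p`, because `p` is already published as this CPU's current task.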
## Process table invariants

- Read or write access to the process table `ptable` requires the `ptable_lock` spinlock.
## Page table invariants

- You may free a process's page table pages only if (1) the process is not present in `ptable`, or (2) the process's page table lock is locked for writing (forcing `proc::lock_pagetable_read()` to spin).

  Note that in our handout code, `proc::lock_pagetable_read()` does nothing. We provide it as a hook for you to complete, in case you want to implement finer-grained page table management.
## Buffer cache invariants (Pset 4)

The buffer cache, `bufcache`, comprises a set of slots. The `bufcache` has a lock, `lock_`. Each slot has type `bcslot`, and contains a state `state_`, a lock `lock_`, a disk block number `bn_`, a reference count `ref_`, a write owner `write_owner_` (initially `nullptr`), and a pointer to the cached data `buf_`. These data structures have nontrivial synchronization invariants because disks are slow, making fine-grained synchronization important for performance. As always, you can change these invariants, but understand the current invariants first.
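For orientation, the members just described might be declared roughly as follows. These are hypothetical declarations, not Chickadee's actual headers: the slot count and the concrete state values are made up, and the real types differ in detail.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <mutex>

// Hypothetical declarations mirroring the members described above.
struct bcslot {
    enum state_t { s_empty, s_allocated, s_clean, s_dirty };  // made-up values
    std::atomic<int> state_{s_empty};   // atomic: readable without locks
    std::mutex lock_;                   // protects state_ changes, allocation
    uint64_t bn_ = 0;                   // disk block number; fixed once allocated
    std::atomic<int> ref_{0};           // reference count; readable without locks
    void* write_owner_ = nullptr;       // buffer ownership (see below)
    unsigned char* buf_ = nullptr;      // cached data; fixed once loaded
};

struct bufcache {
    static constexpr int nslots = 10;   // made-up size
    std::mutex lock_;                   // global lock; precedes all slot locks
    bcslot slots_[nslots];
};
```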
- The `bcslot::ref_` and `bcslot::state_` members are atomic and can be read without locks.
- Changing a `bcslot::state_` member requires the slot's `bcslot::lock_`.
- Allocating a slot requires both the global buffer cache lock, `bufcache::lock_`, and the relevant slot's `lock_`.
    - Thus, if a task holds `bufcache::lock_`, it can be sure that no other task will cause a slot to transition from `ref_ == 0` to `ref_ > 0`.
- Dereferencing a slot (`--slot->ref_`) can happen at any time and requires no locks.
- Loaded slots have read-only `bcslot::bn_` and `bcslot::buf_` members. If a slot is referenced, its `bn_` and `buf_` can be read without locking.
    - Once a slot has been allocated, its `bn_` does not change.
    - Once a slot is loaded, its `buf_` does not change.
    - All nonempty slots have distinct `bn_`s.
    - All loaded slots have distinct, nonnull `buf_`s, because each block is cached in different memory.
    - Allocating or freeing a slot may change its `bn_` and `buf_`. These operations require the slot's `lock_`, and require that `ref_ == 0`.
- `bufcache::lock_` precedes all `bcslot::lock_`s in the lock order. For example, the `bufcache::load` function initially acquires the `bufcache::lock_`, then acquires a slot's `bcslot::lock_` and releases the global `bufcache::lock_`.
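This hand-over-hand acquisition can be sketched as follows. The sketch is not `bufcache::load` itself; it only demonstrates the lock order (global lock, then slot lock, then release the global lock), with a made-up slot count and no disk I/O.

```cpp
#include <cassert>
#include <mutex>

// Sketch of the lock order: bufcache::lock_ is always acquired before any
// bcslot::lock_, and dropped once the slot lock is held.
struct bcslot {
    std::mutex lock_;
};

struct bufcache {
    static constexpr int nslots = 10;   // made-up size
    std::mutex lock_;                   // global lock: always acquired first
    bcslot slots_[nslots];

    bcslot* load(int i) {
        lock_.lock();                   // 1. global bufcache::lock_
        bcslot* slot = &slots_[i];
        slot->lock_.lock();             // 2. the slot's bcslot::lock_
        lock_.unlock();                 // 3. drop global lock, keep slot lock
        // ... examine or initialize the slot under slot->lock_ ...
        slot->lock_.unlock();
        return slot;
    }
};
```

Because every task acquires the locks in this global-then-slot order, no cycle between `bufcache::lock_` and a `bcslot::lock_` can form, so this pattern cannot deadlock.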
- The `bufcache::load` function, which is the only way to obtain a new slot reference, yields until the relevant slot is fully loaded. Thus, if a kernel task holds a slot reference, that slot has `buf_ != nullptr` and `state_ == s_clean` or `s_dirty`.
- The contents of a `bcslot::buf_` are protected by `write_owner_`, which is a mutual-exclusion lock implemented by `bcslot::lock_buffer()` and `bcslot::unlock_buffer()`.
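A buffer ownership lock of this shape might look like the following. This is a hypothetical userspace sketch, not Chickadee's implementation: the kernel version blocks by yielding on a wait queue, while this sketch blocks on a `std::condition_variable`; only the `write_owner_` bookkeeping under the slot lock is the point.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Sketch: write_owner_ records which task owns the buffer contents;
// lock_buffer()/unlock_buffer() guard it with the slot's lock_.
struct bcslot {
    std::mutex lock_;                // protects write_owner_
    std::condition_variable wake_;   // stands in for a kernel wait queue
    void* write_owner_ = nullptr;    // nullptr means the buffer is free

    void lock_buffer(void* task) {
        std::unique_lock<std::mutex> guard(lock_);
        wake_.wait(guard, [&] { return write_owner_ == nullptr; });
        write_owner_ = task;         // buffer contents now owned by `task`
    }

    void unlock_buffer() {
        std::unique_lock<std::mutex> guard(lock_);
        write_owner_ = nullptr;
        wake_.notify_one();          // let one waiter claim ownership
    }
};
```

The design point is that `write_owner_` is a separate, longer-held mutual-exclusion lock layered over the short-held `lock_`: a task may hold the buffer across a slow disk write without holding the slot lock the whole time.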