
Lecture 8: System call implications

In this lecture, we discussed the relationship between system call semantics and internal operating system structures, using process-related system calls as a jumping-off point. System call design choices can fundamentally impact the way kernels are implemented. We looked at three system calls: getppid, waitpid, and clone/texit (system calls for threading).

getppid

The getppid system call behaves as follows:

- It returns the process ID of the calling process's parent.
- If the parent has already exited, the calling process has been reparented to process 1, so getppid returns 1.
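
As a user-level illustration of these semantics (this uses the standard POSIX unistd.h interface, not this course's kernel):

#include <unistd.h>
#include <cstdio>

int main() {
    // Print this process's ID and its parent's ID.
    std::printf("pid %d, parent pid %d\n", (int) getpid(), (int) getppid());
    return 0;
}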

Some implications for kernel implementations:

For getppid:

- The kernel must store enough per-process state to report each process's parent ID.
- exit must keep that state up to date: when a parent exits before its child, the child's recorded parent becomes process 1.
- Accesses to that state must be synchronized, since a child's getppid can race with its parent's exit.

ppid_

Minimally, we could add the parent process ID to struct proc:

struct proc { ...
    pid_t ppid_;    // process ID of this process's parent (1 once the parent has exited)
};

This field is initialized in fork and changed to 1 if the parent exits before the child does.
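
A minimal sketch of that initialization, assuming Chickadee-style names (child for the newly allocated process and id_ for the parent's own process ID):

    // In fork, after allocating the child's struct proc:
    child->ppid_ = this->id_;    // the forking process becomes the child's parent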

How much work is required to initialize, maintain, and use ppid_? Let's reason in terms of computational complexity, given a few variables:

- P, the number of slots in the process table (NPROC);
- C, the number of children of the exiting process.

Given that, performance is:

- Initialization, in fork, is O(1): a single assignment to the child's ppid_.
- Maintenance, in exit, is O(P): exit must scan the whole process table to find its children, even though only C entries change.
- Use, in getppid, is O(1): a single field read.

What about synchronization? A naive programmer might simply access ppid_ without synchronization, but that would violate the Fundamental Law of Synchronization: “If two threads (or cores) simultaneously access an object in memory, then both accesses must be reads.” If a child on core 0 calls getppid while its parent on core 1 calls exit, then core 1 might write the child’s ppid_ at the same time core 0 reads it! So some synchronization plan is required.

One solution is to protect proc::ppid_ using ptable_lock. getppid’s implementation in proc::syscall would look like:

    auto irqs = ptable_lock.lock();    // acquire global lock, saving interrupt state
    auto ppid = this->ppid_;           // read ppid_ while the lock is held
    ptable_lock.unlock(irqs);          // release lock, restoring interrupt state
    return ppid;

(spinlock_guard would simplify this a bit.) exit would hold the same lock while reparenting. However, note that this lock is not required to protect ppid_ until the fork creating the child completes (though depending on your implementation, fork might hold the lock for other reasons). This is because the child will not run and the parent will not exit until fork returns, so there will never be conflicting accesses from different cores. Initialization often requires less synchronization than active use.
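
Under this coarse-grained plan, exit's reparenting pass might look like the following sketch, a single scan of the process table with ptable_lock held (the exact placement within exit depends on your implementation):

    auto irqs = ptable_lock.lock();
    for (int i = 0; i != NPROC; ++i) {
        if (ptable[i] && ptable[i]->ppid_ == current()->id_) {
            ptable[i]->ppid_ = 1;    // surviving children are reparented to process 1
        }
    }
    ptable_lock.unlock(irqs);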

But there are other, finer-grained designs. For instance, we could add a special ppid_lock_ to each proc:

struct proc { ...
    pid_t ppid_;            // process ID of this process's parent (1 once the parent has exited)
    spinlock ppid_lock_;    // protects ppid_
};

Then getppid would just obtain the ppid_lock_ for its own process (a sketch appears after the exit code below). This lock is far less likely to be a source of contention than the global ptable_lock. But as a consequence, exit would have to obtain many locks during its execution, rather than just one! For example:

    auto irqs = ptable_lock.lock();    // still needed to protect the ptable array itself
    for (int i = 0; i != NPROC; ++i) {
        if (ptable[i]) {
            // per-process lock protects that process's ppid_
            spinlock_guard guard(ptable[i]->ppid_lock_);
            if (ptable[i]->ppid_ == current()->id_) {
                ptable[i]->ppid_ = 1;    // reparent surviving child to process 1
            }
        }
    }
    ptable_lock.unlock(irqs);
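
For comparison, getppid under this finer-grained plan needs only its own process's lock. A minimal sketch using spinlock_guard:

    // In proc::syscall, handling getppid:
    spinlock_guard guard(ppid_lock_);    // no global lock needed
    return ppid_;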

How should you choose between these synchronization plans? To some degree, it’s up to you. One of the most important things is that the plan makes sense to you. But it’s often good to start with a simple, coarse-grained lock design. As actual (as opposed to hypothetical) performance problems crop up, such a design can be broken down into finer-grained locks. “Premature optimization is the root of all evil.” However, the most performant multithreaded software usually has a fine-grained locking plan that was carefully planned in advance. Bad interface and implementation choices can box you into a coarser-grained lock strategy than would be optimal.