In this section, we’ll talk about two aspects of OS design that arguably need to be revisited: the virtual memory subsystem, and the fork() primitive.
“Towards O(1) Memory.” Michael Swift. In Proc. HotOS 2017.
Since the dawn of computing, memory capacity has been a primary limitation in system design. Forthcoming memory technology such as Intel and Micron’s 3D XPoint memory and other technologies may provide far larger memory capacity than ever before. Furthermore, these new memory technologies are inherently persistent and save data across system crashes or power failures. We conjecture that current operating systems are ill-equipped for an environment where there is ample memory. For example, operating systems do substantial work for every page allocated, which adds unnecessary overhead when dealing with terabytes of memory.
We suggest that now is the time for a complete rethinking of memory management for both operating systems and language runtimes considering excess memory capacity. We propose a new guiding principle: Order(1) operation, so that memory operations have low constant time independent of size. We describe a concrete proposal of this principle with the idea of file-only memory, in which most dynamic memory allocation is managed with file-system mechanisms rather than common virtual memory mechanisms.
“A fork() in the road.” Andrew Baumann, Jonathan Appavoo,
Orran Krieger, and Timothy Roscoe. In Proc. HotOS 2019.
The received wisdom suggests that Unix’s unusual combination of fork() and exec() for process creation was an inspired design. In this paper, we argue that fork was a clever hack for machines and programs of the 1970s that has long outlived its usefulness and is now a liability. We catalog the ways in which fork is a terrible abstraction for the modern programmer to use, describe how it compromises OS implementations, and propose alternatives.
As the designers and implementers of operating systems, we should acknowledge that fork’s continued existence as a first-class OS primitive holds back systems research, and deprecate it. As educators, we should teach fork as a historical artifact, and not the first process creation mechanism students encounter.
As you read these papers, consider the following questions, and post a follow-up with your responses to the Piazza announcement for this section at least one hour before section.
"Towards O(1) Memory": The paper claims that by "adopt[ing] file-system techniques towards memory management [we] enable most operations that currently operate on individual pages to instead operate on large extents or a whole file, and hence provide Order(1) performance." However, a processor that supports huge pages allows a single page table entry to cover a large (e.g., 2 MB or 1 GB) region of memory. The paper claims in Section 3 that huge pages don't solve many important problems. Do you buy the paper's arguments? Why or why not?
"A fork() in the road": The paper argues that a system call like posix_spawn() is better than fork(). Take a look at the man page for posix_spawn(), and think about the semantics of the function. Do you think that, in the long term, posix_spawn() will be able to avoid the complexity and efficiency problems that have beset fork()? Why or why not?