Caching and Buffering
This lesson discusses the mechanisms file systems use to speed up reads and writes in vsfs.
As the examples in the previous lessons show, reading and writing files can be expensive, incurring many I/Os to the (slow) disk. To remedy what would otherwise be a huge performance problem, most file systems aggressively use system memory (DRAM) to cache important blocks.
Speeding up reads
Imagine the open example from the previous lesson: without caching, every file open would require at least two reads for every level in the directory hierarchy (one to read the inode of the directory in question, and at least one to read its data). With a long pathname (e.g., /1/2/3/ … /100/file.txt), the file system would literally perform hundreds of reads just to open the file!
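To make that cost concrete, here is a tiny back-of-the-envelope sketch in C (our own illustration, not code from the text; the function name count_open_reads and its path-parsing shortcut are assumptions). It counts the reads a cache-less open would issue, charging each directory level one inode read and one data read, plus one final read for the file's inode:

```c
#include <stdio.h>

/* Illustrative sketch: count the disk reads a cache-less open()
 * would issue while walking an absolute pathname. Assumes a
 * well-formed path (no trailing or doubled slashes), so each '/'
 * marks one directory to traverse, starting with the root. */
int count_open_reads(const char *path) {
    int dirs = 0;
    for (const char *p = path; *p; p++)
        if (*p == '/')
            dirs++;             /* root plus each intermediate dir */
    return 2 * dirs + 1;        /* inode + data read per directory,
                                   plus the file's own inode */
}

int main(void) {
    printf("%d reads\n", count_open_reads("/1/2/file.txt"));  /* 7 */
    return 0;
}
```

For the 100-level pathname above, this comes to 2 × 101 + 1 = 203 reads (the root and each of the 100 directories cost two reads apiece), which is exactly why caching matters here.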
Early file systems thus introduced a fixed-size cache to hold popular blocks. As in our discussion of virtual memory, strategies such as LRU and its variants would decide which blocks to keep in cache. This fixed-size cache would usually be allocated at boot time to be roughly 10% of total memory.
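To show what such a policy looks like in code, here is a minimal LRU sketch over a tiny fixed-size block cache (purely our illustration, not code from any real file system; a per-slot timestamp stands in for the linked list a real implementation would maintain):

```c
#include <stdio.h>

/* Minimal LRU block-cache sketch: a fixed-size array of slots,
 * searched linearly; the slot with the oldest timestamp (or any
 * empty slot) is the eviction victim on a miss. */
#define CACHE_SLOTS 4               /* tiny, for demonstration */

struct slot { int block; long stamp; int valid; };
static struct slot cache[CACHE_SLOTS];
static long clock_ticks = 0;

/* Returns 1 on a hit, 0 on a miss (after evicting the LRU slot). */
int cache_access(int block) {
    int victim = 0;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].block == block) {
            cache[i].stamp = ++clock_ticks;     /* refresh on hit */
            return 1;
        }
        if (!cache[i].valid || cache[i].stamp < cache[victim].stamp)
            victim = i;             /* track empty or LRU slot */
    }
    /* miss: a real system would read the block from disk here */
    cache[victim] = (struct slot){ block, ++clock_ticks, 1 };
    return 0;
}

int main(void) {
    int trace[] = { 0, 1, 2, 0, 3, 4, 0 };     /* reference string */
    for (int i = 0; i < (int)(sizeof trace / sizeof trace[0]); i++)
        printf("block %d: %s\n", trace[i],
               cache_access(trace[i]) ? "hit" : "miss");
    return 0;
}
```

A real buffer cache avoids the linear scan with a hash table and keeps recency order with a list (or an approximation such as clock), but the eviction decision is the same: discard the block used least recently.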
This static partitioning of memory, however, can be wasteful; what if the file system doesn’t need 10% of memory at a given point in time? With the fixed-size approach described above, unused pages in the file cache cannot be re-purposed for some other use, and thus go to waste.
Modern systems, in contrast, employ a dynamic partitioning approach. Specifically, many modern operating systems integrate virtual memory pages and file system pages into a unified page cache. In this way, memory can be allocated more flexibly across virtual memory and the file system, depending on which needs more memory at a given time.
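The following sketch captures the unified-cache idea in miniature (entirely hypothetical: the names alloc_page, FILE_PAGE, and ANON_PAGE are ours, not any kernel's API). The point is simply that file pages and virtual memory pages draw from one shared pool rather than from separate, statically sized partitions:

```c
#include <stdio.h>

/* Conceptual sketch of a unified page cache: one pool of pages,
 * each tagged with its owner, serving both file I/O and virtual
 * memory. No 10% is reserved for either side up front. */
enum owner_kind { FILE_PAGE, ANON_PAGE };

struct page {
    enum owner_kind kind;   /* which subsystem owns this page */
    int owner_id;           /* e.g., inode number or address-space id */
    long offset;            /* block/page offset within the owner */
    int in_use;
};

#define POOL_PAGES 8
static struct page pool[POOL_PAGES];    /* the single shared pool */

/* Grab any free page from the shared pool, for either use. */
struct page *alloc_page(enum owner_kind kind, int id, long off) {
    for (int i = 0; i < POOL_PAGES; i++) {
        if (!pool[i].in_use) {
            pool[i] = (struct page){ kind, id, off, 1 };
            return &pool[i];
        }
    }
    return NULL;    /* a real system would evict/reclaim here */
}

int main(void) {
    /* the same pool serves a file read and an anonymous mapping */
    alloc_page(FILE_PAGE, 42, 0);   /* cache block 0 of inode 42 */
    alloc_page(ANON_PAGE, 7, 0);    /* back a heap page of process 7 */
    printf("both allocations drew from one shared pool\n");
    return 0;
}
```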