The Problem: Poor Performance
This lesson discusses why the performance of UNIX's old file system was so poor.
The problem with the file system mentioned in the last lesson was that performance was terrible. As measured by Kirk McKusick and his colleagues at Berkeley, performance started off bad and got worse over time, to the point where the file system was delivering only 2% of overall disk bandwidth!
The main issue was that the old UNIX file system treated the disk as if it were a random-access memory. Data was spread all over the place without regard to the fact that the medium holding it was a disk, and thus had real and expensive positioning costs. For example, the data blocks of a file were often very far away from its inode, thus inducing an expensive seek whenever one first read the inode and then the data blocks of a file (a pretty common operation).
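To make the positioning cost concrete, here is a toy model (not from the lesson; the per-track seek and per-block transfer costs are made-up constants) of reading an inode and then the file's first data block, once when the data block sits far from the inode and once when it sits right next to it:

```c
#include <stdio.h>
#include <stdlib.h>

/* assumed, illustrative costs -- not real drive numbers */
#define SEEK_MS_PER_TRACK 0.01   /* cost to move the arm one track      */
#define TRANSFER_MS        0.1   /* cost to read one block off a track  */

/* cost of reading the inode, seeking to the data block, and reading it */
double inode_then_data_ms(long inode_track, long data_track) {
    long distance = labs(data_track - inode_track);
    return TRANSFER_MS                      /* read the inode           */
         + distance * SEEK_MS_PER_TRACK     /* seek to the data block   */
         + TRANSFER_MS;                     /* read the data block      */
}

int main(void) {
    printf("data far from inode: %.2f ms\n", inode_then_data_ms(10, 5000));
    printf("data next to inode:  %.2f ms\n", inode_then_data_ms(10, 12));
    return 0;
}
```

Even with made-up numbers, the far-away placement is dominated by the seek, which is the cost the old file system kept paying on common operations.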
Worse, the file system would end up getting quite fragmented, as the free space was not carefully managed. The free list would end up pointing to a bunch of blocks spread across the disk, and as files got allocated, they would simply take the next free block. The result was that a logically contiguous file would be accessed by going back and forth across the disk, thus reducing performance dramatically.
For example, imagine the following data block region, which contains four files (A, B, C, and D), each of size 2 blocks, laid out one after the other: A1 A2 B1 B2 C1 C2 D1 D2.
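Here is a minimal simulation of what the old free list does to this layout (a sketch, not the lesson's code; the files chosen for deletion and the allocator's exact behavior are illustrative assumptions). It lays out A through D contiguously, frees two of them, and then allocates a new 4-block file E by always taking the next free block:

```c
#include <stdio.h>

#define NBLOCKS 8
static int freebit[NBLOCKS];            /* 1 = free, 0 = in use */

/* old-FS-style allocation: grab whatever free block comes first */
static int alloc_block(void) {
    for (int i = 0; i < NBLOCKS; i++)
        if (freebit[i]) { freebit[i] = 0; return i; }
    return -1;                          /* no space left */
}

int main(void) {
    for (int i = 0; i < NBLOCKS; i++) freebit[i] = 1;

    /* A, B, C, D each take 2 blocks: A1 A2 B1 B2 C1 C2 D1 D2 */
    int blk[8];
    for (int i = 0; i < 8; i++) blk[i] = alloc_block();

    /* suppose B and D are later deleted, leaving two holes */
    freebit[blk[2]] = freebit[blk[3]] = 1;
    freebit[blk[6]] = freebit[blk[7]] = 1;

    /* a new 4-block file E lands in whichever blocks are free next */
    printf("E occupies blocks:");
    for (int i = 0; i < 4; i++) printf(" %d", alloc_block());
    printf("\n");                       /* prints: 2 3 6 7 */
    return 0;
}
```

File E ends up split across the two holes, so reading it sequentially means seeking back and forth across the disk, exactly the access pattern that reduced performance so dramatically.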