Storing Files on Tape
Understand the techniques used to efficiently solve the problem of storing files on tape.
Introduction to storing files
Suppose we have a set of $n$ files that we want to store on magnetic tape. In the future, users will want to read those files from the tape. Reading a file from tape isn’t like reading a file from a disk; first, we have to fast-forward past all the other files, and that takes a significant amount of time. Let $L[1..n]$ be an array listing the lengths of each file; specifically, file $i$ has length $L[i]$. If the files are stored in order from $1$ to $n$, then the cost of accessing the $k$-th file is

$$\mathrm{cost}(k) = \sum_{i=1}^{k} L[i].$$
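As a quick illustration, the cost of accessing the $k$-th file is just a prefix sum of the lengths. The helper name `access_cost` below is ours, not from the text:

```python
def access_cost(L, k):
    """Cost of reading the k-th file (1-indexed): we must scan past
    files 1..k-1 and then read file k itself, so the cost is
    L[1] + L[2] + ... + L[k]."""
    return sum(L[:k])

# Example: three files of lengths 2, 5, 3.
lengths = [2, 5, 3]
print(access_cost(lengths, 2))  # scanning past file 1 and reading file 2 costs 2 + 5 = 7
```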
Expected cost of accessing a random file
The cost reflects the fact that before we read file $k$, we must first scan past all the earlier files on the tape. If we assume for the moment that each file is equally likely to be accessed, then the expected cost of searching for a random file is

$$E[\mathrm{cost}] = \sum_{k=1}^{n} \frac{\mathrm{cost}(k)}{n} = \frac{1}{n} \sum_{k=1}^{n} \sum_{i=1}^{k} L[i].$$
If we change the order of the files on the tape, we change the cost of accessing the files; some files become more expensive to read, but others become cheaper. Different file orders are likely to result in different expected costs. Specifically, let $\pi(i)$ denote the index of the file stored at position $i$ on the tape. Then, the expected cost of the permutation $\pi$ is

$$E[\mathrm{cost}(\pi)] = \frac{1}{n} \sum_{k=1}^{n} \sum_{i=1}^{k} L[\pi(i)].$$
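This expected cost can be computed in a single pass with a running prefix sum. A small Python sketch (the function name is illustrative; positions are 0-indexed here for convenience):

```python
def expected_cost(L, pi):
    """Expected cost of a uniformly random access under permutation pi,
    where pi[k] is the index of the file stored at position k (0-indexed):
    E[cost] = (1/n) * sum over positions k of (L[pi[0]] + ... + L[pi[k]])."""
    n = len(L)
    total, prefix = 0, 0
    for k in range(n):
        prefix += L[pi[k]]   # cost of accessing the file at position k
        total += prefix
    return total / n

lengths = [2, 5, 3]
print(expected_cost(lengths, [0, 1, 2]))  # (2 + 7 + 10) / 3
```

Note that storing the shorter file 3 before the longer file 2, i.e. the order `[0, 2, 1]`, gives a strictly smaller expected cost, which is exactly the phenomenon the next section formalizes.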
Finding the best order to store files on tape
Which order should we use if we want this expected cost to be as small as possible? The answer seems intuitively clear: sort the files by increasing length. But intuition can be tricky.
Lemma 1: $E[\mathrm{cost}(\pi)]$ is minimized when $L[\pi(i)] \le L[\pi(i+1)]$ for all $i$.
Proof: Suppose $L[\pi(i)] > L[\pi(i+1)]$ for some index $i$. To simplify notation, let $a = \pi(i)$ and $b = \pi(i+1)$. If we swap files $a$ and $b$, then the cost of accessing $a$ increases by $L[b]$, and the cost of accessing $b$ decreases by $L[a]$. Overall, the swap changes the expected cost by $(L[b] - L[a])/n$. But this change is an improvement because $L[b] < L[a]$. Thus, if the files are out of order, we can decrease the expected cost by swapping some misordered pair of adjacent files. $\blacksquare$
This is our first example of a correct greedy algorithm. To minimize the total expected cost of accessing the files, we put the file that is cheapest to access first and then recursively write everything else; no backtracking, no dynamic programming, just make the best local choice and blindly plow ahead. If we use an efficient sorting algorithm, the running time is clearly $O(n \log n)$, plus the time required to actually write the files. To show that the greedy algorithm is actually correct, we proved that the output of any other algorithm can be improved by some sort of exchange.
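The whole greedy algorithm is essentially one sort. A minimal sketch (the name `best_order` is ours):

```python
def best_order(L):
    """Greedy algorithm: store files in order of increasing length.
    Returns the permutation pi (0-indexed) that minimizes the expected
    access cost. Running time: O(n log n) for the sort."""
    return sorted(range(len(L)), key=lambda i: L[i])

print(best_order([4, 1, 3, 2]))  # [1, 3, 2, 0]: shortest file first
```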
Let’s generalize this idea further. Suppose we’re also given an array $F[1..n]$ of access frequencies for each file; file $i$ will be accessed exactly $F[i]$ times over the lifetime of the tape. Now, the total cost of accessing all the files on the tape is

$$\Sigma\mathrm{cost}(\pi) = \sum_{k=1}^{n} \left( F[\pi(k)] \cdot \sum_{i=1}^{k} L[\pi(i)] \right).$$
As before, reordering the files can change this total cost. So, what order should we use if we want the total cost to be as small as possible? (This question is similar in spirit to the optimal binary search tree problem, but the target data structure and the cost function are both different, so the algorithm must be different too.)
We already proved that if all the frequencies are equal, we should sort the files by increasing size. If the frequencies are all different, but the file lengths are all equal, then intuitively, we should sort the files by decreasing access frequency, with the most-accessed file first. In fact, this isn’t hard to prove by modifying the proof of Lemma 1. But what if the sizes and the frequencies both vary? In this case, we should sort the files in order of increasing ratio $L[i]/F[i]$.
Lemma 2: $\Sigma\mathrm{cost}(\pi)$ is minimized when $\dfrac{L[\pi(i)]}{F[\pi(i)]} \le \dfrac{L[\pi(i+1)]}{F[\pi(i+1)]}$ for all $i$.
Proof: Suppose $\dfrac{L[\pi(i)]}{F[\pi(i)]} > \dfrac{L[\pi(i+1)]}{F[\pi(i+1)]}$ for some index $i$. To simplify notation, let $a = \pi(i)$ and $b = \pi(i+1)$. If we swap files $a$ and $b$, then the cost of accessing $a$ increases by $L[b]$, and the cost of accessing $b$ decreases by $L[a]$. Overall, the swap changes the total cost by $L[b]F[a] - L[a]F[b]$. But this change is an improvement because

$$\frac{L[a]}{F[a]} > \frac{L[b]}{F[b]} \implies L[b]F[a] < L[a]F[b].$$
Thus, if any two adjacent files are out of order, we can improve the total cost by swapping them.
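The generalized greedy rule is again a single sort, now keyed on the ratio $L[i]/F[i]$. A short sketch under our own naming, using exact `Fraction` keys to avoid floating-point comparison issues:

```python
from fractions import Fraction

def best_weighted_order(L, F):
    """Sort files by increasing ratio L[i]/F[i], as Lemma 2 prescribes.
    Exact fractions make the comparisons free of rounding error."""
    return sorted(range(len(L)), key=lambda i: Fraction(L[i], F[i]))

def total_cost(L, F, pi):
    """Total cost of the order pi: file pi[k] is read F[pi[k]] times,
    and each read scans the prefix L[pi[0]] + ... + L[pi[k]]."""
    total, prefix = 0, 0
    for k in pi:
        prefix += L[k]
        total += F[k] * prefix
    return total

L, F = [3, 1, 2], [1, 2, 1]
pi = best_weighted_order(L, F)   # ratios are 3/1, 1/2, 2/1, so pi = [1, 2, 0]
print(pi, total_cost(L, F, pi))
```

On this toy instance, checking all $3! = 6$ permutations by brute force confirms that the ratio order achieves the minimum total cost.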