Huffman Codes
Learn about the Huffman coding problem and its solution using a greedy algorithm.
Binary codes and prefix-free codes
A binary code assigns a string of 0s and 1s to each character in the alphabet. A binary code is prefix-free if no code is a prefix of any other. (Confusingly, prefix-free codes are also commonly called prefix codes.) 7-bit ASCII and Unicode’s UTF-8 are both prefix-free binary codes. Morse code is a binary code with symbols • and —, but it is not prefix-free because the code for E ( • ) is a prefix of the codes for I ( • • ), S ( • • • ), and H ( • • • • ).
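To make the definition concrete, here is a small sketch in Python that tests whether a given code is prefix-free; the function `is_prefix_free` and the sample dictionaries are our own illustration, not part of the original discussion.

```python
def is_prefix_free(code):
    """Return True if no code word in the mapping is a prefix of another."""
    words = sorted(code.values())
    # In lexicographically sorted order, if any word is a prefix of another,
    # it is a prefix of the word immediately after it, so checking adjacent
    # pairs suffices.
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))

# A prefix-free code: no code word begins another code word.
print(is_prefix_free({"a": "0", "b": "10", "c": "11"}))  # True

# Not prefix-free, mirroring the Morse code example: E's code word is a
# prefix of I's code word.
print(is_prefix_free({"E": "0", "I": "00"}))  # False
```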
Binary trees and code words
Any prefix-free binary code can be visualized as a binary tree with the encoded characters stored at the leaves. The code word for any symbol is given by the path from the root to the corresponding leaf; 0 for left, 1 for right. Thus, the length of any symbol’s codeword is the depth of the corresponding leaf in the code tree. Although they are superficially similar, binary code trees are not binary search trees; the order of symbols on the leaves is irrelevant.
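As a quick illustration of this correspondence, here is a sketch, assuming a hypothetical `Node` class of our own, that reads every leaf's code word off its root-to-leaf path, 0 for left and 1 for right:

```python
class Node:
    def __init__(self, char=None, left=None, right=None):
        self.char = char      # set only at leaves
        self.left = left
        self.right = right

def code_words(node, path=""):
    """Map each leaf character to its root-to-leaf path in the code tree."""
    if node.char is not None:            # leaf: the path is the code word
        return {node.char: path}
    codes = {}
    codes.update(code_words(node.left, path + "0"))
    codes.update(code_words(node.right, path + "1"))
    return codes

# The tree for the code {a: 0, b: 10, c: 11}. The order of the leaves is
# irrelevant, so this is not a binary search tree.
tree = Node(left=Node("a"), right=Node(left=Node("b"), right=Node("c")))
print(code_words(tree))   # {'a': '0', 'b': '10', 'c': '11'}
```

Note that the length of each code word is exactly the depth of the corresponding leaf, as described above.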
Optimizing prefix-free binary codes
Suppose we want to encode a message written in an $n$-character alphabet so that the encoded message is as short as possible. Specifically, given an array $f[1..n]$ of frequency counts, we want to compute a prefix-free binary code that minimizes the total encoded length of the message:

$$\sum_{i=1}^{n} f[i] \cdot \mathrm{depth}(i)$$

where $\mathrm{depth}(i)$ is the depth of the leaf storing the $i$th character.
This is exactly the same cost function we considered for optimizing binary search trees, but the optimization problem is different because code trees are not required to keep the keys in any particular order.
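Spelled out in code, the cost function is a one-line sum. The frequencies and the two candidate codes below are hypothetical, chosen only to show that different prefix-free codes for the same message can have different total encoded lengths:

```python
def total_length(freq, code):
    """Total encoded length: sum of freq[c] * len(code[c]) over all c."""
    return sum(freq[c] * len(code[c]) for c in freq)

freq = {"a": 3, "b": 2, "c": 1}
# A variable-length prefix-free code beats a fixed-length one here.
print(total_length(freq, {"a": "0", "b": "10", "c": "11"}))   # 3*1 + 2*2 + 1*2 = 9
print(total_length(freq, {"a": "00", "b": "01", "c": "10"}))  # fixed-length: 6*2 = 12
```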
In 1951, as a Ph.D. student at MIT, David Huffman developed the following greedy algorithm to produce such an optimal code:
Huffman’s algorithm
Huffman’s algorithm is best illustrated through an example. Suppose we want to encode the following helpfully self-descriptive sentence, discovered by Lee Sallows:
This sentence contains three a’s, three c’s, two d’s, twenty-six e’s, five f’s, three g’s, eight h’s, thirteen i’s, two l’s, sixteen n’s, nine o’s, six r’s, twenty-seven s’s, twenty-two t’s, two u’s, five v’s, eight w’s, four x’s, five y’s, and only one z.
To keep things simple, let’s ignore the forty-four spaces, nineteen apostrophes, nineteen commas, three hyphens, and only one period, and encode only the letters, as though the message were written in scriptio continua.
Here is the frequency table for Sallows’ sentence, with the counts read directly off the sentence itself:

char:   A   C   D   E   F   G   H   I   L   N   O   R   S   T   U   V   W   X   Y   Z
freq:   3   3   2  26   5   3   8  13   2  16   9   6  27  22   2   5   8   4   5   1
Huffman’s algorithm picks out the two least frequent letters, breaking ties arbitrarily (in this case, say, Z and D), and merges them into a single new character DZ with frequency 3. This new character becomes an internal node in the code tree we are constructing, with Z and D as its children; it doesn’t matter which child is which. The algorithm then recursively constructs a Huffman code for the new frequency table.
After 19 merges, all 20 letters have been merged together, and the record of merges gives us our code tree. The algorithm makes a number of arbitrary choices; as a result, there are actually several different Huffman codes for the same frequencies. One such Huffman code is shown below; the numbers in the non-leaf nodes are the frequencies of the merged characters. The code word for each letter, such as A or S, is read off the path from the root down to the corresponding leaf.
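Here is a minimal sketch of the whole algorithm using Python’s `heapq` module, run on the frequency table above. Because ties are broken arbitrarily (here, by insertion order), the code it produces is one valid Huffman code, not necessarily the one shown in the figure:

```python
import heapq

def huffman(freq):
    """Return a dict mapping each character to its Huffman code word."""
    # Heap entries: (frequency, tie-breaker, tree). A tree is either a
    # bare character (a leaf) or a (left, right) pair (an internal node).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent "characters", as described above.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    # Read code words off the final tree: 0 for left, 1 for right.
    codes = {}
    def walk(tree, path):
        if isinstance(tree, tuple):
            walk(tree[0], path + "0")
            walk(tree[1], path + "1")
        else:
            codes[tree] = path or "0"   # degenerate one-letter alphabet
    walk(heap[0][2], "")
    return codes

freq = {"A": 3, "C": 3, "D": 2, "E": 26, "F": 5, "G": 3, "H": 8,
        "I": 13, "L": 2, "N": 16, "O": 9, "R": 6, "S": 27, "T": 22,
        "U": 2, "V": 5, "W": 8, "X": 4, "Y": 5, "Z": 1}
codes = huffman(freq)
print(codes["E"], codes["Z"])   # frequent letters get short code words
```

As expected, the most frequent letters (S, E, T) end up with the shortest code words, and the rarest (Z) with one of the longest.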