Amortized Time Complexity
Get familiar with amortized time complexity and sequences of operations.
Introduction to amortized time complexity
Usually, an algorithm behaves differently with different inputs. Going back to our algorithm that linearly searched for an element in an array, we were analyzing a case where the key was not in the array at all. For that algorithm, that was the worst case, that is, the case that uses the most resources the algorithm will ever need. The best case refers to the least amount of resources the algorithm will need, whereas the average case specifies the amount of resources the algorithm will use on average with different inputs.
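As a refresher, here is a minimal sketch of such a linear search; the function name, the int element type, and the use of std::vector are illustrative choices and not taken from the earlier section:

#include <vector>

// Returns true if key is found in the array.
// Best case: key is the first element (one comparison).
// Worst case: key is not present, so every element is compared.
bool linear_search(const std::vector<int>& values, int key) {
  for (int value : values) {
    if (value == key) {
      return true;
    }
  }
  return false;
}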
The standard library usually refers to the amortized running time of functions that operate on containers. If an algorithm runs in constant amortized time, it means that it will run in O(1) in almost all cases, with only a few exceptional calls that take longer.
To understand amortized time complexity, we will spend some time thinking about std::vector::push_back(). Let's assume that the vector internally has a fixed-size array to store all its elements. If there is room for more elements in the fixed-size array when calling push_back(), the operation will run in constant time:
// There is still room in the internal array, so this is a constant-time write.
if (internal_array.size() > size) {
  internal_array[size] = new_element;
  ++size;
}
But what happens when the internal array is full? One way to handle the growing vector is to create a new, empty internal array with a bigger size and then move all the elements from the old array to the new one. This is obviously not constant time anymore, since we need one move per element in the array; that is, push_back() is O(n). However, if we call push_back() many times, we know that the expensive push_back() can't happen very often, so it would be pessimistic, and not very useful, to say that push_back() is O(n).
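To make the two cases concrete, here is a minimal sketch of a simplified vector-like class that grows by doubling; the class name, member names, and the int element type are assumptions made for illustration, and this is not how std::vector is actually implemented:

#include <cstddef>
#include <memory>
#include <utility>

// Simplified, illustrative vector of ints.
class SimpleVector {
  std::unique_ptr<int[]> internal_array_;
  std::size_t capacity_ = 0; // size of the internal array
  std::size_t size_ = 0;     // number of elements in use

public:
  void push_back(int new_element) {
    if (size_ == capacity_) {
      // The internal array is full: allocate a new array twice as big
      // and move all existing elements over. This step is O(n).
      std::size_t new_capacity = (capacity_ == 0) ? 1 : capacity_ * 2;
      auto new_array = std::make_unique<int[]>(new_capacity);
      for (std::size_t i = 0; i < size_; ++i) {
        new_array[i] = std::move(internal_array_[i]);
      }
      internal_array_ = std::move(new_array);
      capacity_ = new_capacity;
    }
    // Fast path: there is room, so this is a constant-time write.
    internal_array_[size_] = new_element;
    ++size_;
  }

  std::size_t size() const { return size_; }
};

With this doubling strategy, a sequence of n calls to push_back() performs roughly 2n element writes and moves in total, so the cost averaged over the whole sequence stays constant. This is exactly the kind of guarantee that constant amortized time expresses.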