Amortized Time Complexity

Get familiar with amortized time complexity and how it applies to sequences of operations.

Introduction to amortized time complexity

Usually, an algorithm behaves differently with different inputs. Going back to our algorithm that linearly searched for an element in an array, we analyzed the case where the key was not in the array at all. For that algorithm, this was the worst case, that is, the input on which it uses the most resources. The best case refers to the input on which the algorithm uses the least resources, whereas the average case specifies how many resources the algorithm uses on average over different inputs.
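To make these cases concrete, here is a minimal sketch of such a linear search; the function name and signature are illustrative, not taken from the original text:

#include <cstddef>
#include <vector>

// Returns the index of key in v, or -1 if key is not present.
// Best case: key sits at index 0, so one comparison suffices, O(1).
// Worst case: key is not in the array, so all n elements are compared, O(n).
int linear_search(const std::vector<int>& v, int key) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] == key) {
            return static_cast<int>(i);
        }
    }
    return -1;
}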

The standard library usually refers to the amortized running time of functions that operate on containers. If an algorithm runs in constant amortized time, it means that it will run in O(1) in almost all cases, except very few where it will perform worse. At first sight, the amortized running time can be confused with an average time, but as we will see, they are not the same.

To understand amortized time complexity, we will spend some time thinking about std::vector::push_back(). Let's assume that the vector internally has a fixed-size array to store all its elements. If there is room for more elements in the fixed-size array when calling push_back(), the operation will run in constant time, O(1)—that is, it's not dependent on how many elements are already in the vector as long as the internal array has room for one more:
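The sketch below is a simplified model of that behavior, not the actual standard library implementation; the class name SimpleVector and its growth policy (doubling the capacity) are assumptions made for illustration. It shows why an individual push_back() is usually O(1) but occasionally pays for a reallocation and copy:

#include <algorithm>
#include <cstddef>
#include <memory>

// A deliberately minimal, illustrative vector of ints (hypothetical,
// not how std::vector is actually implemented).
class SimpleVector {
public:
    void push_back(int value) {
        if (size_ == capacity_) {
            grow();                  // Rare case: reallocate and copy, O(n)
        }
        data_[size_++] = value;      // Common case: write into spare room, O(1)
    }
    std::size_t size() const { return size_; }

private:
    void grow() {
        // Assumed growth policy: double the capacity each time it runs out.
        std::size_t new_capacity = capacity_ == 0 ? 1 : capacity_ * 2;
        auto new_data = std::make_unique<int[]>(new_capacity);
        std::copy(data_.get(), data_.get() + size_, new_data.get());
        data_ = std::move(new_data);
        capacity_ = new_capacity;
    }

    std::unique_ptr<int[]> data_;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
};

Under this doubling assumption, inserting n elements triggers reallocations at sizes 1, 2, 4, 8, and so on, so the total number of element copies stays below 2n. Spread over the n calls, that is a constant amount of work per call, which is exactly what it means for push_back() to run in amortized O(1) time even though a single call can be O(n).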
