Amortized Analysis
In this lesson, we'll cover the amortized analysis of algorithms.
This technique is used for algorithms where only a few of the operations are slow. We show that the total cost of the entire sequence of operations is lower than the worst case of the single slow operation multiplied by the number of operations.
Consider this algorithm: we start with an array of size 2, and each operation adds one element to the array; we perform this operation N times. If the array is full, we take its current size, say sz, allocate 2 * sz memory, and copy the array to the new location, so that we have space for sz more elements.
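The doubling strategy described above can be sketched as follows (the class name and method names are illustrative; Python lists already implement a similar scheme internally):

```python
class DynamicArray:
    """A minimal sketch of a dynamic array that doubles when full."""

    def __init__(self):
        self.capacity = 2                  # start with an array of size 2
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:
            # Array is full: allocate 2 * sz slots and copy the old elements.
            sz = self.capacity
            new_data = [None] * (2 * sz)
            for i in range(sz):
                new_data[i] = self.data[i]
            self.data = new_data
            self.capacity = 2 * sz
        self.data[self.size] = value       # the O(1) insert itself
        self.size += 1
```

For example, after appending nine elements the array will have doubled three times (capacity 2 → 4 → 8 → 16), matching the steps listed later in this lesson.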
- Adding to the array if it's not full: O(1)
- Copying an array of size sz to a new location: O(sz)
The second operation is obviously slower. A naive bound would multiply the worst single-operation cost, O(N), by the N operations, giving O(N²). But it's easy to see that the second operation is not that frequent: it happens only about log₂ N times. That means we can arrive at a stricter upper bound than O(N²).
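The claim that the copy happens only about log₂ N times can be checked with a short simulation (the function name `count_copies` is just for illustration):

```python
def count_copies(n):
    """Count how often the 'copy everything' step fires over n appends,
    starting from capacity 2 and doubling whenever the array is full."""
    capacity, size, copies = 2, 0, 0
    for _ in range(n):
        if size == capacity:   # full: double the capacity (one copy)
            capacity *= 2
            copies += 1
        size += 1
    return copies

# Copies trigger at sizes 2, 4, 8, ..., so for 1024 appends there are
# only 9 copies — logarithmic, not linear, in the number of operations.
print(count_copies(1024))  # → 9
```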
Let’s list the steps:

| Step | Operations | Array Size / Max Size |
|---|---|---|
| insert e1 | 1 | 1 / 2 |
| insert e2 | 1 | 2 / 2 |
| copy array of size 2 to new location of size 4, then insert e3 | 2 + 1 | 3 / 4 |
| insert e4 | 1 | 4 / 4 |
| copy array of size 4 to new location of size 8, then insert e5 | 4 + 1 | 5 / 8 |
| insert e6 | 1 | 6 / 8 |
| insert e7 | 1 | 7 / 8 |
| insert e8 | 1 | 8 / 8 |
| copy array of size 8 to new location of size 16, then insert e9 | 8 + 1 | 9 / 16 |
Total number of operations:

(1 + 1 + ⋯ + 1), N times for the inserts, plus (2 + 4 + 8 + ⋯) for the copies, whose sizes double up to at most N. The copy terms form a geometric series that sums to less than 2N, so the total is at most N + 2N = 3N.

So, the complete algorithm runs in O(N) time, which is O(1) amortized per insertion.
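The 3N bound can be verified empirically by charging one unit per insert and sz units per copy of size sz (the helper name `total_operations` is illustrative):

```python
def total_operations(n):
    """Total work for n appends: 1 unit per insert, plus sz units
    each time an array of size sz is copied to a doubled allocation."""
    capacity, size, ops = 2, 0, 0
    for _ in range(n):
        if size == capacity:
            ops += capacity    # copying sz elements costs sz operations
            capacity *= 2
        ops += 1               # the insert itself
        size += 1
    return ops

# The total stays below 3 * N for every N, confirming the O(N) bound.
for n in (8, 1024, 10**6):
    print(n, total_operations(n), 3 * n)
```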
In the next lesson, we will compare different runtimes and visualize growth.