Given that the merge function runs in Θ(n) time when merging n elements, how do we show that mergeSort() runs in Θ(n log n) time? We start by thinking about the three parts of divide-and-conquer and how to account for their running times. We assume that we're sorting a total of n elements in the entire array.
The divide step takes constant time, regardless of the subarray size. After all, the divide step just computes the midpoint q of the indices p and r. Recall that in big-Θ notation, we indicate constant time by Θ(1).
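For instance, a minimal sketch of the divide step (in TypeScript; the helper name midpoint is ours, and we assume p and r are 0-based indices into the array) is a single arithmetic expression:

```typescript
// Divide step: compute the midpoint q of indices p and r.
// One arithmetic expression, so it runs in Θ(1) time
// no matter how large the subarray array[p..r] is.
function midpoint(p: number, r: number): number {
  return Math.floor((p + r) / 2);
}
```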
The conquer step, where we recursively sort two subarrays of approximately n/2 elements each, takes some amount of time, but we'll account for that time when we consider the subproblems.
The combine step merges a total of n elements, taking Θ(n) time.
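To make the Θ(n) bound concrete, here is one possible merge sketch in TypeScript. It follows the text's convention of merging the sorted subarrays array[p..q] and array[q+1..r]; the temporary-copy approach is one common implementation choice, not necessarily the exact code the text has in mind:

```typescript
// Merge the sorted subarrays array[p..q] and array[q+1..r] back into array[p..r].
// Every element is copied into a temporary array once and back once,
// so merging n = r - p + 1 elements takes Θ(n) time.
function merge(array: number[], p: number, q: number, r: number): void {
  const left = array.slice(p, q + 1);      // copy of array[p..q]
  const right = array.slice(q + 1, r + 1); // copy of array[q+1..r]
  let i = 0, j = 0, k = p;
  // Repeatedly take the smaller front element of the two copies.
  while (i < left.length && j < right.length) {
    array[k++] = left[i] <= right[j] ? left[i++] : right[j++];
  }
  // One side is exhausted; copy over whatever remains on the other.
  while (i < left.length) array[k++] = left[i++];
  while (j < right.length) array[k++] = right[j++];
}
```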
If we think about the divide and combine steps together, the Θ(1) running time for the divide step is a low-order term when compared with the Θ(n) running time of the combine step. So let's think of the divide and combine steps together as taking Θ(n) time. To make things more concrete, let's say that the divide and combine steps together take cn time for some constant c.
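Putting the steps side by side, a hypothetical mergeSort sketch (reusing the merge sketch above) shows where each part of the accounting lives:

```typescript
// Sort array[p..r]. The comments tally where the time goes.
function mergeSort(array: number[], p: number, r: number): void {
  if (p >= r) return;                 // base case: 0 or 1 element, Θ(1)
  const q = Math.floor((p + r) / 2);  // divide: Θ(1)
  mergeSort(array, p, q);             // conquer: sort the ~n/2 left elements
  mergeSort(array, q + 1, r);         // conquer: sort the ~n/2 right elements
  merge(array, p, q, r);              // combine: Θ(n)
}
```

Calling mergeSort(array, 0, array.length - 1) sorts the whole array. In each call, the Θ(1) divide plus the Θ(n) combine is exactly the cn cost we just agreed to charge to a subproblem of size n; the two recursive calls are the subproblems whose time we account for separately.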