Introduction

In asymptotic analysis, we want to estimate the running time of an algorithm $A$ when it runs on very large inputs of length $n$. The most common way to do this is to analyze the worst-case running time $f(n)$: we estimate a bound on the largest possible running time of the algorithm as a function of the input size $n$ and examine how $f(n)$ grows with $n$. According to Sipser (2012), this happens “by considering only the highest order term of the expression for the running time of the algorithm, disregarding both the coefficient of that term and any lower order terms, because the highest order term dominates the other terms on large inputs” (Michael Sipser, Introduction to the Theory of Computation, Cengage Learning, 2012).

Example 1

Let $A$ be an algorithm that takes at most $1.75 n^{2}+7.5 n+2$ steps. In asymptotic analysis, we say $A$ grows like $n^{2}$.
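To see why the lower-order terms can be disregarded, here is a small Python sketch (not part of the original text) that prints the ratio $f(n)/n^{2}$ for growing $n$. The ratio approaches the leading coefficient $1.75$, which shows that the $n^{2}$ term dominates on large inputs:

```python
def f(n):
    # Worst-case step count of algorithm A from Example 1
    return 1.75 * n**2 + 7.5 * n + 2

# The ratio f(n) / n^2 approaches the leading coefficient 1.75:
# the lower-order terms 7.5n + 2 become negligible for large n.
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(f"n = {n:>7}: f(n) / n^2 = {f(n) / n**2:.6f}")
```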

$\mathcal{O}$, $\Omega$, and $\Theta$ notations

Asymptotic upper bounds

When we want to describe the worst-case running time $f(n)$ of a certain algorithm $A$ on an input of size $n$, we determine its asymptotic upper bound. We use the big-$\mathcal{O}$ notation to give an upper bound on a function.

Big-$\mathcal{O}$ notation

Let $f(n)$ be a function and $n$ its input size. We say $f(n)=\mathcal{O}(g(n))$ if there exist constants $c>0$ and $n_{0} \geq 0$ such that $0 \leq f(n) \leq c \cdot g(n)$ for all $n \geq n_{0}$, or formally:

$$\exists c>0 \ \exists n_{0} \geq 0 \ \forall n \geq n_{0}: 0 \leq f(n) \leq c \cdot g(n)$$

When $f(n)=\mathcal{O}(g(n))$, we say that $g(n)$ is an asymptotic upper bound for $f(n)$ (Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to Algorithms, 3rd ed., MIT Press, 2009; Jon Kleinberg and Éva Tardos, Algorithm Design, Pearson Education, 2013; Michael Sipser, Introduction to the Theory of Computation, Cengage Learning, 2012).
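As a concrete illustration of the definition, the following Python sketch (not from the original text; the helper name `is_bounded` is hypothetical) checks a claimed pair of witnesses $(c, n_{0})$ over a finite sample of inputs. Such a check can refute witnesses, but being finite, it cannot prove the bound for all $n$:

```python
def is_bounded(f, g, c, n0, n_max=10_000):
    """Check 0 <= f(n) <= c * g(n) for all n0 <= n <= n_max.

    This samples only a finite range: it can refute a claimed
    pair of witnesses (c, n0), but not prove the bound for all n.
    """
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Example 1 revisited: f(n) = 1.75 n^2 + 7.5 n + 2 is O(n^2),
# e.g. with witnesses c = 12 and n0 = 1.
f = lambda n: 1.75 * n**2 + 7.5 * n + 2
g = lambda n: n**2
print(is_bounded(f, g, c=12, n0=1))  # True
print(is_bounded(f, g, c=1, n0=1))   # False: c = 1 is too small
```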

The idea behind the big-$\mathcal{O}$ notation is shown in Figure 1.

Example

We generalize Example 1 by considering an algorithm whose running time is $f(n)=p n^{2}+q n+r$ for positive constants $p$, $q$, and $r$. We claim that this function is $\mathcal{O}(n^{2})$. This is true because for all $n \geq 1$, it holds that $q n \leq q n^{2}$ and $r \leq r n^{2}$. Thus, we can conclude that

$$f(n)=p n^{2}+q n+r \leq p n^{2}+q n^{2}+r n^{2}=(p+q+r) n^{2}$$

for all $n \geq 1$. This satisfies the definition of big-$\mathcal{O}$ notation, because $0 \leq f(n) \leq c \cdot g(n)$ holds with $c=p+q+r$, $n_{0}=1$, and $g(n)=n^{2}$; hence $f(n)=\mathcal{O}(n^{2})$.
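The bound can be spot-checked numerically for any concrete choice of constants. The following Python sketch (an illustration under assumed values of $p$, $q$, and $r$, not a proof; the derivation above already proves the bound in general) verifies $f(n) \leq (p+q+r) \cdot n^{2}$ at a few sample points:

```python
# Numerical spot-check of f(n) <= (p + q + r) * n^2 for n >= 1,
# using one arbitrary choice of positive constants.
p, q, r = 1.75, 7.5, 2
c = p + q + r  # the witness constant from the derivation

for n in [1, 2, 10, 100, 1_000]:
    fn = p * n**2 + q * n + r
    assert 0 <= fn <= c * n**2
    print(f"n = {n:>5}: f(n) = {fn:>12.2f} <= {c * n**2:>12.2f} = c * n^2")
```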

Figure 1: The idea behind the big-$\mathcal{O}$ notation: for all $n \geq n_{0}$, $f(n)$ lies at or below $c \cdot g(n)$.
