The Algebra of Big-O

Steven J. Zeil

Old Dominion University, Dept. of Computer Science

Table of Contents

1. Basic Manipulation
2. Dropping Constant Multipliers
2.1. Intuitive Justification
2.2. Proof: `O(c*f(N)) = O(f(N))`
3. Larger Terms Dominate a Sum
3.1. Intuitive Justification
3.2. Proof: Larger Terms Dominate a Sum
4. Logarithms are Fast
5. Summary of big-O Algebra
6. Always Simplify!

In the previous lesson we saw the definition of "time proportional to" (big-O), and we saw how it could be applied to simple algorithms. I think you will agree that the approach we took was rather tedious, and you probably would not want to use it in practice.

We are going to start working towards a more usable approach for determining the complexity of an algorithm. We will start by looking at a rather peculiar set of algebraic rules that can be applied to manipulate big-O expressions. In the next lesson, we shall then look at the process of taking an algorithm the way you and I actually write it in typical programming languages and analyzing it to produce the initial big-O expressions.

For now, though, let us start by discussing how we might manipulate big-O expressions once we have actually got them. The first thing we have to do is to recognize that the algebra of big-O expressions is not the same good old-fashioned algebra you learned back in high school. The reason is that when we say something like `O(f(N)) = O(g(N))`, we are not comparing two numbers to one another with that '=', nor are we claiming that f(N) and g(N) are equal. We are instead comparing two sets of programs (or functions describing the speed of programs), and we are stating that any program in the set `O(f(N))` is also in the set `O(g(N))` and vice-versa.
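This set reading can be made precise. As a sketch, using the standard constants c and n0 (the previous lesson's definition may phrase these details slightly differently):

```latex
% Big-O as a set of functions: g belongs to O(f(N)) if some constant
% multiple of f eventually bounds g from above.
\[
  O(f(N)) \;=\; \bigl\{\, g \;\bigm|\; \exists\, c > 0,\ \exists\, n_0 \ge 0
     \text{ such that } g(N) \le c \cdot f(N) \text{ for all } N \ge n_0 \,\bigr\}
\]

% The '=' between two big-O expressions is therefore set equality:
\[
  O(f(N)) = O(g(N))
  \quad\Longleftrightarrow\quad
  O(f(N)) \subseteq O(g(N)) \ \text{ and } \ O(g(N)) \subseteq O(f(N))
\]
```

Read this way, a rule such as `O(2*N) = O(N)` is a claim that the two sets contain exactly the same functions, which is why the rules in the sections that follow look so different from ordinary high-school algebra.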

We will now explore the peculiar algebra of big-O, and, through that algebra, will see why big-O is appropriate for comparing algorithms.
