Algorithm complexity analysis (The Big O notation)

Note: These examples are done in Java, but the ideas apply to other languages as well.

I'd like to take a moment to talk about analyzing algorithm complexity, or the “Big O notation”, in software development.

Many of you, just like myself once, don't really understand this concept or don't know how to actually apply it in real life while developing software, so I thought it would be helpful to explain it as simply as possible for others to understand it as well.

So what is the “Big O notation”?

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann-Landau notation or asymptotic notation.

OK, I admit that explanation was a little too much, even for me. The simple version is that the Big O notation:

  • is used in computer science to describe the performance or complexity of an algorithm.
  • can be used to determine how much space an algorithm uses in memory or on disk.
  • describes how fast an algorithm runs relative to its input as the input grows, i.e. how the algorithm scales.
  • typically describes the worst-case scenario for the algorithm's time or space.

Phew! With me so far? If not, please take a few more seconds to re-read the definitions above before moving along.

So you may be asking: how do I do this? How do I analyze the performance of an algorithm?

What is an algorithm?

First of all, let's start with the basics: what is an algorithm?

An algorithm is a process or set of rules to be followed in calculations; a set of steps to accomplish a task.

A real-life example of an algorithm is getting to work:

  1. Walk to train station
  2. Get inside train
  3. Get out of train
  4. Walk to office
  5. Enter office

That's it! Those 5 simple steps are called an algorithm.

Now let's see a code sample of what an algorithm looks like.
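The five commute steps above can be sketched as a tiny Java method (the class and method names here are just illustrative, not from any library):

```java
public class CommuteAlgorithm {
    // Returns the steps of the "getting to work" algorithm, in order.
    public static String[] steps() {
        return new String[] {
            "Walk to train station",
            "Get inside train",
            "Get out of train",
            "Walk to office",
            "Enter office"
        };
    }

    public static void main(String[] args) {
        // Print each step of the algorithm, one per line.
        for (String step : steps()) {
            System.out.println(step);
        }
    }
}
```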

Pretty simple, right?

O(1) – Constant

That's cool and all, but what about in code? Well, here is an example of a very basic and simple algorithm in code:
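A minimal sketch of such a method, assuming an array named listOfNumbers (the class and method names are illustrative):

```java
public class ConstantTime {
    // O(1): a single step, no matter how big listOfNumbers is.
    public static int getFirstNumber(int[] listOfNumbers) {
        return listOfNumbers[0];
    }

    public static void main(String[] args) {
        int[] listOfNumbers = {7, 42, 13};
        System.out.println(getFirstNumber(listOfNumbers));
    }
}
```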

Pretty simple, right?
OK, so we have this method, or algorithm; how fast is it? The method runs in O(1). This is the speed of the method.

So why is this method O(1) fast? Because no matter what the size of listOfNumbers is, it only requires 1 step to do what is being asked. If listOfNumbers has only 1 item this is O(1); if it has 10,000 items it is still O(1). O(1) is called constant time. It never changes.

O(n) – Linear

That wasn't too bad. Let's try another one using the example from before.
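A hedged sketch of what such a method might look like, looping over the same kind of array and printing each item:

```java
public class LinearTime {
    // O(n): one print per item, so the work grows linearly with the array's size.
    public static void printNumbers(int[] listOfNumbers) {
        for (int number : listOfNumbers) {
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        // 3 items in the array means 3 prints.
        printNumbers(new int[] {3, 1, 4});
    }
}
```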

OK, so this method runs in O(n) time, which is also called “linear time”, where n is the number of items (in this case, numbers) in the array. So basically, if the array has 3 items inside of it, we would have to print 3 times. If the array has 51 items, then we would have to print 51 times. Makes sense, right?

NOTE: The n can be either the SIZE of a list or an actual number that is passed into a method. Keep this in mind.
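To illustrate the note above, here is a sketch of two hypothetical methods that both run in O(n): in the first, n is the size of the array; in the second, n is the number itself (both methods are my own illustrative examples):

```java
public class LinearExamples {
    // O(n), where n is the SIZE of listOfNumbers: one addition per item.
    public static int sum(int[] listOfNumbers) {
        int total = 0;
        for (int number : listOfNumbers) {
            total += number;
        }
        return total;
    }

    // O(n), where n is the actual NUMBER passed in: the loop body runs n times.
    public static int countTo(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[] {1, 2, 3}));
        System.out.println(countTo(5));
    }
}
```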

Both of these methods run in O(n). The input type of the method does not matter. This is very important.

O(n^2) – Quadratic

Let's go one step further.
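A sketch of the nested loops described below, with a counter added so the number of prints is easy to check (the counter is my addition, not part of the original example):

```java
public class QuadraticTime {
    // Nested loops: for EVERY item in outer, we walk ALL of inner.
    // Returns the number of prints, which is outer.length * inner.length.
    public static int printPairs(int[] outer, int[] inner) {
        int prints = 0;
        for (int a : outer) {
            for (int b : inner) {
                System.out.println(a + ", " + b);
                prints++;
            }
        }
        return prints;
    }

    public static void main(String[] args) {
        // 2 outer items * 5 inner items = 10 prints.
        int count = printPairs(new int[] {1, 2}, new int[] {1, 2, 3, 4, 5});
        System.out.println("printed " + count + " times");
    }
}
```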

This is an extremely simple piece of code. All it does is loop through 2 nested loops and print out the items in each. Simple as that. EVERY time the outer loop iterates, the inner loop loops through ALL of its items. So if the inner loop has 10 items, and the outer loop has 2 items, this will print 20 times. Because of this, the method runs in O(n^2) speed, or “quadratic time”. If the outer array has 10 items and the inner array has 5 items, we have to print 50 times. If the outer array has 20 items and the inner array has 20 items as well, then we have to print 400 times. The ‘O‘ of this method is just the sizes of the 2 arrays multiplied together (strictly that is O(n × m); it becomes O(n^2) when both arrays have the same size n). Not so complicated, right?

O(n^3) – Cubic

Same as in O(n^2), each inner loop iterates every time its outer loop iterates. Every time the first (outermost) loop iterates, the second loop iterates through all of its items, and for every iteration of the second loop, the third/innermost loop iterates through all of its items.
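A sketch of three nested loops, again with a counter added (my addition) so the total number of prints is easy to verify:

```java
public class CubicTime {
    // Three nested loops: total prints = first.length * second.length * third.length.
    public static int printTriples(int[] first, int[] second, int[] third) {
        int prints = 0;
        for (int a : first) {
            for (int b : second) {
                for (int c : third) {
                    System.out.println(a + ", " + b + ", " + c);
                    prints++;
                }
            }
        }
        return prints;
    }

    public static void main(String[] args) {
        // 2 * 3 * 4 = 24 prints.
        int count = printTriples(new int[] {1, 2}, new int[] {1, 2, 3}, new int[] {1, 2, 3, 4});
        System.out.println("printed " + count + " times");
    }
}
```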

Let's try some math examples to help us see how this can be calculated using just numbers. Say we have the following equation: 25n^3 + 10n^2 + 16


n = 1

25(1^3) + 10(1^2) + 16 = 51

n = 2

25(2^3) + 10(2^2) + 16 = 256

n = 5

25(5^3) + 10(5^2) + 16 = 3,391

n = 10

25(10^3) + 10(10^2) + 16 = 26,016

What can we see here? The larger n gets, the less relevant the 10(n^2) and the 16 become.

Let's remove the 16 and see what we get when n = 10.

25(10^3) + 10(10^2) = 26,000

Now let's remove the 10(n^2) term as well, when n = 10.

25(10^3) = 25,000

The difference is not that big, and the larger n gets, the less and less this part of the equation will matter.

So as this algorithm scales, the part that really determines the final answer is the n^3. Yes, not even the 25 matters, because constant factors are dropped. With that said, we can determine that this algorithm is O(n^3).
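The dominance of the n^3 term is easy to check in code; this small sketch evaluates the same equation for the values of n used above:

```java
public class DominantTerm {
    // Evaluates 25n^3 + 10n^2 + 16, the equation from the text.
    public static long f(long n) {
        return 25 * n * n * n + 10 * n * n + 16;
    }

    public static void main(String[] args) {
        for (long n : new long[] {1, 2, 5, 10}) {
            // As n grows, f(n) gets closer and closer to just the 25n^3 term.
            long cubicTerm = 25 * n * n * n;
            System.out.println("n = " + n + ": f(n) = " + f(n)
                    + ", 25n^3 alone = " + cubicTerm);
        }
    }
}
```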


Final words

That is all for now. Please stay tuned for the following posts, where I talk about more complex Big O notations such as O(log n) and O(n log n), and how to analyze the space complexity of an algorithm using the Big O notation. I highly recommend re-reading this until it is fully understood; try some examples of your own, or read more online.