### Introduction

There are usually multiple ways to solve a problem with a computer program. For instance, there are several ways to sort the items in an array - you can use merge sort, bubble sort, insertion sort, and so on. All of these algorithms have their own pros and cons, and the developer's job is to weigh them to choose the best algorithm for a given use case. In other words, the main question is *which algorithm to use to solve a specific problem* when multiple solutions exist.

*Algorithm analysis* refers to analyzing the complexity of different algorithms and finding the most efficient algorithm to solve the problem at hand. *Big-O notation* is a *mathematical notation used to describe the complexity of an algorithm*.

In this guide, we'll first take a brief review of algorithm analysis and then take a deeper look at the Big-O notation. We will see how Big-O notation can be used to find algorithm complexity with the help of different Python functions.

**Note:** Big-O notation is one of several measures used for algorithmic complexity; others include Big-Theta and Big-Omega. Intuitively, Big-Omega, Big-Theta, and Big-O correspond to the **best**, **average**, and **worst** time complexity an algorithm can achieve. We typically use Big-O as a measure, instead of the other two, because if we can guarantee that an algorithm runs in an acceptable complexity in its *worst* case, it will work in the average and best cases as well - but not vice versa.

### Why is Algorithm Analysis Important?

To understand why algorithm analysis is important, we will take the help of a simple example. Suppose a manager gives a task to two of his employees to design an algorithm in Python that calculates *the factorial* of a number entered by the user. The algorithm developed by the first employee looks like this:

```
def fact(n):
    product = 1
    for i in range(n):
        product = product * (i+1)
    return product

print(fact(5))
```

Notice that the algorithm simply takes an integer as an argument. Inside the `fact()` function, a variable named `product` is initialized to `1`. A loop executes from `1` to `n`, and during each iteration, the value in `product` is multiplied by the number being iterated by the loop, and the result is stored back in the `product` variable. After the loop finishes, the `product` variable contains the factorial.

Similarly, the second employee also developed an algorithm that calculates the factorial of a number. The second employee used a recursive function to calculate the factorial of the number `n`:

```
def fact2(n):
    if n == 0:
        return 1
    else:
        return n * fact2(n-1)

print(fact2(5))
```
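Before comparing speed, it's worth checking that the two approaches actually agree. A quick sanity check (repeating both definitions so the snippet is self-contained):

```python
def fact(n):
    product = 1
    for i in range(n):
        product = product * (i + 1)
    return product

def fact2(n):
    if n == 0:
        return 1
    else:
        return n * fact2(n - 1)

# Both implementations should produce identical results for all small inputs.
for n in range(10):
    assert fact(n) == fact2(n)

print(fact(5))   # 120
```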

The manager has to decide which algorithm to use. To do so, they've decided to find out which algorithm runs faster. One way to do that is by measuring the time required to execute the code on the same input.

In a Jupyter Notebook, you can use the `%timeit` magic command followed by the function call to measure how long the function takes to execute:

```
%timeit fact(50)
```

This will give us:

```
9 µs ± 405 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```

The output says that the algorithm takes *9 microseconds* (plus/minus 405 nanoseconds) per loop.

Similarly, we can calculate how much time the second approach takes to execute:

```
%timeit fact2(50)
```

This will result in:

```
15.7 µs ± 427 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```

The second algorithm involving recursion takes *15.7 microseconds* (plus/minus 427 nanoseconds).

The execution time shows that the first algorithm is faster compared to the second algorithm involving recursion. When dealing with large inputs, the performance difference can become more significant.
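`%timeit` only works in IPython/Jupyter. Outside a notebook, the standard-library `timeit` module gives comparable measurements - a minimal sketch (the exact numbers will differ on your machine, which is precisely the problem discussed next):

```python
import timeit

# The function under test, passed to timeit as a setup string.
setup = """
def fact(n):
    product = 1
    for i in range(n):
        product = product * (i + 1)
    return product
"""

# Total time for 10,000 calls, divided to get the per-call average.
per_call = timeit.timeit("fact(50)", setup=setup, number=10_000) / 10_000
print(f"{per_call * 1e6:.2f} µs per call")
```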

However, execution time is *not a good metric* to measure the complexity of an algorithm since it depends upon the hardware. **A more objective complexity analysis metric for an algorithm is needed.**

This is where the Big-O notation comes into play!

### Algorithm Analysis with Big-O Notation

The *Big-O notation* signifies the relationship between the input to the algorithm and the steps required to execute the algorithm.

It is denoted by a capital "O" followed by an opening and closing parenthesis. Inside the parenthesis, the relationship between the input and the steps taken by the algorithm is presented using "n".

The key takeaway is that Big-O isn't interested in a *particular* instance in which you run an algorithm, such as `fact(50)`, but rather in how well it *scales* given increasing input. This is a much better metric for evaluation than concrete time for a concrete instance!

For example, if there is a *linear relationship* between the input and the steps taken by the algorithm to complete its execution, the Big-O notation used will be *O(n)*. Similarly, the Big-O notation for *quadratic functions* is *O(n²)*.

To build intuition:

- *O(n)*: at `n=1`, 1 step is taken. At `n=10`, 10 steps are taken.
- *O(n²)*: at `n=1`, 1 step is taken. At `n=10`, 100 steps are taken.

At `n=1`, these two would perform the same! This is another reason why observing the relationship between the input and the number of steps to process that input is better than just evaluating functions with some concrete input.

The following are some of the most common Big-O functions:

Name | Big O |
---|---|
Constant | O(c) |
Linear | O(n) |
Quadratic | O(n²) |
Cubic | O(n³) |
Exponential | O(2ⁿ) |
Logarithmic | O(log(n)) |
Log Linear | O(n log(n)) |
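To get a feel for how differently these classes grow, here is a short script (an illustration with the constant factor set to 1) that evaluates each of them at a couple of input sizes:

```python
import math

# Rough step counts for each complexity class (constant factor c = 1).
complexities = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(2^n)":     lambda n: 2 ** n,
}

for name, f in complexities.items():
    print(f"{name:>10}: n=10 -> {f(10):>10.0f}   n=20 -> {f(20):>12.0f}")
```

Even at `n=20`, the exponential class already needs over a million steps while the logarithmic one needs fewer than five.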

You can plot these functions to compare how quickly their step counts grow.

Generally speaking - anything worse than linear is considered a bad complexity (i.e. inefficient) and should be avoided if possible. Linear complexity is okay and usually a necessary evil. Logarithmic is good. Constant is amazing!

**Note:** Since Big-O models **relationships** of input-to-steps, we usually drop constants from the expressions. `O(2n)` is the same type of relationship as `O(n)` - both are linear - so we can denote both as `O(n)`. Constants don't change the relationship.

To get an idea of how a Big-O is calculated, let's take a look at some examples of constant, linear, and quadratic complexity.

#### Constant Complexity - *O(C)*

The complexity of an algorithm is said to be constant if the steps required to complete the execution of an algorithm remain constant, irrespective of the number of inputs. The constant complexity is denoted by *O(c)* where *c* can be any constant number.

Let's write a simple algorithm in Python that finds the square of the first item in the list and then prints it on the screen:

```
def constant_algo(items):
    result = items[0] * items[0]
    print(result)

constant_algo([4, 5, 6, 8])
```

In the script above, *irrespective of the input size* - that is, the number of items in the input list `items` - the algorithm performs only 2 steps:

- Finding the square of the first element
- Printing the result on the screen.

Hence, the complexity remains constant.

If you draw a line plot with the varying size of the `items` input on the X-axis and the number of steps on the Y-axis, you will get a horizontal line. Let's create a short script to help us visualize this. No matter the number of inputs, the number of executed steps remains the same:

```
import matplotlib.pyplot as plt

steps = []
def constant(n):
    return 1

for i in range(1, 100):
    steps.append(constant(i))

plt.plot(steps)
plt.show()
```

#### Linear Complexity - *O(n)*

The complexity of an algorithm is said to be linear if the steps required to complete the execution of an algorithm increase or decrease linearly with the number of inputs. Linear complexity is denoted by *O(n)*.

In this example, let's write a simple program that displays all items in the list to the console:

```
def linear_algo(items):
    for item in items:
        print(item)

linear_algo([4, 5, 6, 8])
```


The complexity of the `linear_algo()` function in the above example is linear, since the number of iterations of the for-loop is *equal to the size of the input `items` array*. For instance, if there are 4 items in the `items` list, the for-loop will be executed 4 times.

Let's quickly create a plot for the linear complexity algorithm with the number of inputs on the X-axis and the number of steps on the Y-axis:

```
import matplotlib.pyplot as plt

steps = []
def linear(n):
    return n

for i in range(1, 100):
    steps.append(linear(i))

plt.plot(steps)
plt.xlabel('Inputs')
plt.ylabel('Steps')
plt.show()
```

This will result in a straight line.

An important thing to note is that with large inputs, constants tend to lose significance. This is why we typically remove constants from Big-O notation, and an expression such as O(2n) is usually shortened to O(n). Both O(2n) and O(n) are linear - the linear relationship is what matters, not the concrete value. For example, let's modify `linear_algo()`:

```
def linear_algo(items):
    for item in items:
        print(item)
    for item in items:
        print(item)

linear_algo([4, 5, 6, 8])
```

There are two for-loops that iterate over the input `items` list, so the complexity of the algorithm becomes *O(2n)*. However, as the input grows very large, the constant factor becomes insignificant - twice a huge number is still a number of the same order. We can therefore ignore the constant `2`, and the complexity of the algorithm remains *O(n)*.

Let's visualize this new algorithm by plotting the inputs on the X-axis and the number of steps on the Y-axis:

```
import matplotlib.pyplot as plt

steps = []
def linear(n):
    return 2*n

for i in range(1, 100):
    steps.append(linear(i))

plt.plot(steps)
plt.xlabel('Inputs')
plt.ylabel('Steps')
plt.show()
```

In the script above, you can clearly see that *y=2n*. However, the resulting plot is still a straight line - *O(2n)* is linear, just like *O(n)*.

#### Quadratic Complexity - *O(n²)*

The complexity of an algorithm is said to be quadratic when the steps required to execute an algorithm are a quadratic function of the number of items in the input. Quadratic complexity is denoted as *O(n²)*:

```
def quadratic_algo(items):
    for item in items:
        for item2 in items:
            print(item, ' ', item2)

quadratic_algo([4, 5, 6, 8])
```

We have an outer loop that iterates through all the items in the input list and then a nested inner loop, which again iterates through all the items in the input list. The total number of steps performed is n*n, where n is the number of items in the input array.

Plotting the number of inputs against the steps for an algorithm with quadratic complexity produces a parabola - the number of steps grows much faster than for linear complexity.

#### Logarithmic Complexity - *O(logn)*

Some algorithms achieve logarithmic complexity, such as *Binary Search*. Binary Search searches for an element in an array, by checking the *middle* of an array, and pruning the half in which the element isn't. It does this again for the remaining half and continues the same steps until the element is found. In each step, it *halves* the number of elements in the array.

This requires us to make an assumption about the data - namely, that the array is sorted.

When you can make assumptions about the incoming data, you can take steps that reduce the complexity of an algorithm. Logarithmic complexity is desirable, as it achieves good performance even with highly scaled input.
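The description above can be sketched as an iterative binary search. The `steps` counter is an addition for illustration (not part of the classic algorithm) and shows that the number of halvings grows roughly as log₂(n):

```python
def binary_search(items, target):
    """Search a *sorted* list; return (index, steps), with index -1 if absent."""
    low, high = 0, len(items) - 1
    steps = 0
    while low <= high:
        steps += 1                      # one halving per loop iteration
        mid = (low + high) // 2
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

items = list(range(1024))               # binary search requires sorted input
index, steps = binary_search(items, 1023)
print(index, steps)                     # at most ~log2(1024) + 1 = 11 iterations
```

Even for 1024 elements, the loop runs at most 11 times - compare that with up to 1024 comparisons for a linear scan.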

#### Finding the Complexity of Complex Functions

In previous examples, we had fairly simple functions on input. However, how do we calculate the Big-O of functions that call (multiple) other functions on the input?

Let's take a look:

```
def complex_algo(items):
    for i in range(5):
        print("Python is awesome")
    for item in items:
        print(item)
    for item in items:
        print(item)
    print("Big O")
    print("Big O")
    print("Big O")

complex_algo([4, 5, 6, 8])
```

In the script above, several tasks are performed. First, a string is printed 5 times on the console using the `print()` function. Next, we print the input list twice on the screen, and finally, another string is printed three times on the console. To find the complexity of such an algorithm, we need to break the algorithm code down into parts, find the complexity of each individual piece, and mark it down.

In the first section, we have:

```
for i in range(5):
    print("Python is awesome")
```

The complexity of this part is *O(5)* since five constant steps are being performed in this piece of code irrespective of the input.

Next, we have:

```
for item in items:
    print(item)
```

We know the complexity of the above piece of code is *O(n)*. Similarly, the complexity of the following piece of code is also *O(n)*:

```
for item in items:
    print(item)
```

Finally, in the following piece of code, a string is printed three times, hence the complexity is *O(3)*:

```
print("Big O")
print("Big O")
print("Big O")
```

To find the overall complexity, we simply have to add these individual complexities:

```
O(5) + O(n) + O(n) + O(3)
```

Simplifying the above we get:

```
O(8) + O(2n) = O(8+2n)
```

We said earlier that when the input (which has length n in this case) becomes extremely large, the constants become insignificant i.e. twice or half of the infinity still remains infinity. Therefore, we can ignore the constants. The final complexity of the algorithm will be *O(n)*!
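We can verify the *2n + 8* count empirically by replacing each printed line with a counter increment - an instrumented rewrite for illustration, not part of the original script:

```python
def count_steps(items):
    steps = 0
    for i in range(5):      # O(5): five constant steps
        steps += 1
    for item in items:      # O(n)
        steps += 1
    for item in items:      # O(n)
        steps += 1
    steps += 3              # O(3): the three final prints
    return steps

# For n = 4 items: 2*4 + 8 = 16 steps.
print(count_steps([4, 5, 6, 8]))
```

Doubling the input adds 2n steps while the constant 8 stays fixed - which is why only the linear term survives in the Big-O expression.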

#### Worst vs Best Case Complexity

Usually, when someone asks you about the complexity of an algorithm - they're interested in the worst-case complexity (Big-O). Sometimes, they might be interested in the best-case complexity as well (Big-Omega).

To understand the relationship between these, let's take a look at another piece of code:

```
def search_algo(num, items):
    for item in items:
        if item == num:
            return True

nums = [2, 4, 6, 8, 10]
print(search_algo(2, nums))
```

In the script above, we have a function that takes a number and a list of numbers as input. It returns `True` if the passed number is found in the list of numbers; otherwise, it returns `None`. If you search for 2 in the list, it will be found in the first comparison. This is the best case for the algorithm: the searched item is found at the first searched index. *The best-case complexity*, in this case, is *O(1)*. On the other hand, if you search for 10, it will be found at the last searched index. The algorithm has to search through all the items in the list, hence *the worst-case complexity* becomes *O(n)*.

**Note:** The worst-case complexity remains the same even if you try to find a non-existent element in a list - it takes *n* steps to verify that there is no such element in a list. Therefore the worst-case complexity remains *O(n)*.
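To see the best and worst cases concretely, here is a variant of the search that counts comparisons instead of returning a result - an instrumented sketch for illustration:

```python
def search_steps(num, items):
    """Return how many comparisons a linear search makes before stopping."""
    steps = 0
    for item in items:
        steps += 1
        if item == num:
            break
    return steps

nums = [2, 4, 6, 8, 10]
print(search_steps(2, nums))    # best case: 1 comparison
print(search_steps(10, nums))   # worst case: n = 5 comparisons
print(search_steps(7, nums))    # missing element: still n = 5 comparisons
```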

In addition to best and worst-case complexity, you can also calculate *the average complexity* (Big-Theta) of an algorithm, which tells you "Given a random input, what is the expected time complexity of the algorithm"?

#### Space Complexity

In addition to the time complexity, where you count the number of steps required to complete the execution of an algorithm, you can also find the *space complexity* which refers to the amount of space you need to allocate in memory during the execution of a program.

Have a look at the following example:

```
def return_squares(n):
    square_list = []
    for num in n:
        square_list.append(num * num)
    return square_list

nums = [2, 4, 6, 8, 10]
print(return_squares(nums))
```

The `return_squares()` function accepts a list of integers and returns a list with the corresponding squares. The algorithm has to allocate memory for the same number of items as in the input list, so the space complexity of the algorithm is *O(n)*.
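If the caller only needs to iterate over the squares once, a generator - an alternative sketch, not part of the original example - keeps the extra space at *O(1)*, because squares are produced one at a time instead of being stored in a list:

```python
def yield_squares(nums):
    # Yields one square at a time; no O(n) result list is materialized.
    for num in nums:
        yield num * num

nums = [2, 4, 6, 8, 10]
for square in yield_squares(nums):
    print(square)
```

The trade-off is that a generator can only be consumed once; if you need the full list of results later, the *O(n)* allocation is unavoidable.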

### Conclusion

The Big-O notation is the standard metric used to measure the complexity of an algorithm. In this guide, we studied what Big-O notation is and how it can be used to measure the complexity of a variety of algorithms. We also studied different types of Big-O functions with the help of different Python examples. Finally, we briefly reviewed the worst and the best case complexity along with the space complexity.