The other day, I came across a post on StackOverflow that read “Is theoretical computer science (TCS) useful?”. I was completely caught off guard because, well, I was in the middle of learning about asymptotic behavior and complex math notations, and couldn’t think of one use case where I could apply them to my day-to-day coding. The post made me think: “Why do I even need to know about theoretical CS if it is not used practically? Especially when my natural interests and abilities lie more in programming than in mathematical analysis.”

So over the weekend, I did some research. Why is this important? Is the scope of the concepts I am learning every day restricted to answering algorithm-complexity interview questions? What about their application after that?

There has to be more!

**Is Theoretical Computer Science(TCS) useful?**

Big YES!

TCS isn’t only about mathematical analysis, and it shouldn’t be learned by programmers only for a job interview. If you plan to make a career as a programmer, your job will probably involve not just building the coolest and most useful software today, but building it in a way that makes it efficient to use. You will also have to think about how and why what you are doing works, and within which bounds it operates.

It’s like the scene in the Karate Kid movie with Jackie Chan and Jaden Smith, where Jackie Chan tells Dre to put on and take off his jacket, over and over again.

So, do you think Jackie was teaching him how to put on his jacket and take it off? NO! He was teaching him the skills necessary to become highly successful in kung-fu.

This is what TCS does for us. It lays a solid foundation of fundamental abstract concepts that help us make the right choices as developers.

In this article, targeted at programmers who know all about coding but don’t have any TCS background, I present one of the most important theoretical concepts of computer science. After reading this post, you will be able to understand common terms computer scientists use, such as “algorithms”, “algorithm complexity analysis”, “Big Θ”, “asymptotic behavior” and “worst-case analysis”.

This post does not have any mathematical prerequisites, and my aim is to build the basic background needed to study different algorithms with a firmer understanding of the theory behind them.

Let’s get started!

**A) What is an Algorithm?**

An algorithm can be defined as a list of steps that you can follow to complete a task.

You might be thinking, well, that was a bouncer. Didn’t the title read something like a **beginner’s** guide to the study of algorithms?

Okay! Let’s take an example to simplify the definition. Say I want to shop for a book online. Read the definition again after going through the example below.

BuyBook(bookname) {
    1. Open Amazon
    2. Search for the bookname
    3. If in stock, add to the cart
    4. Proceed with the checkout process
    5. Wait dearly for your book to be shipped
}

BuyBook('Into The Water')

So, I want to buy a book named ‘Into The Water’ online. The first thing I did was visit Amazon. I searched on Amazon for my book, ‘Into the Water’, and found it was in stock. I added it to the cart and proceeded with the final checkout process. Done!

Can you try to connect the dots with the definition now?

What I did above was follow a series of steps in order to achieve some goal (buying a book, in our case). This is an algorithm’s primary purpose.


Okay then.

**B) What is an analysis of algorithm? Why do we even have to analyze them?**

Analysis of algorithms can be defined as a theoretical study of computer-program **performance** and resource usage.

I’ve written the word **performance** in bold in the definition above, simply because our main focus throughout this article will be computer-program performance.

But, before we start talking about it, can you think of anything more important than performance in programming?

Um, like, what if the program is super-duper fast, but gives the wrong output?

Will this work? No!

There are many things, like correctness, simplicity, maintainability, robustness, security, functionality and, most importantly, user-friendliness, that are more important than the performance of a program.

Example: Snapchat CEO Evan Spiegel plans to redesign Snapchat.

The Snapchat app works as it is supposed to, but Evan Spiegel still plans to redesign it. Based on feedback, they found the app was a little hard to understand, and they plan to improve it by making it easier to use. Here, user-friendliness clearly outweighs performance.

*So, clearly, performance lies at the bottom of the heap compared to the above-mentioned features. Why, then, am I writing a post about performance?*

Because sometimes user-friendliness is directly related to performance.

Imagine sitting in front of a website that takes forever to load. Often, performance draws the line between feasibility and infeasibility. In a real-time setting, if an application is not fast enough, it is simply not functional. Or if it uses too much memory, it is not going to work for us.

Armed with this knowledge, let’s try to analyze a simple problem of sorting:

Now, we are going to use an algorithm called Bubble Sort. I’ll write this algorithm in pseudo-code (a kind of programming language written in English).

Bubblesort(var a as array)
    for i from 1 to N - 1
        for j from 0 to N - i - 1
            if a[j] > a[j + 1]
                swap(a[j], a[j + 1])
end func

Here, the input is a sequence of numbers and the output is a sorted list of those numbers. The function Bubblesort accepts an array of data as input and generates a sorted list of numbers.
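The pseudo-code above translates into a short, runnable sketch. Here is one possible Python version (the function name and the 0-indexed loop bounds are my choices, not part of the pseudo-code):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(1, n):          # passes 1 .. N-1
        for j in range(n - i):     # compare neighbours up to the unsorted tail
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([6, 5, 3]))  # → [3, 5, 6]
```

After each pass, the largest remaining element has “bubbled” to the end, which is why the inner loop can shrink by one each time.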

Difficult to comprehend? Let’s take a simple example.

Here, the input to our BubbleSort function is {6, 5, 3}. We want our algorithm to give us the desired output, a sorted list: {3, 5, 6}.

How our algorithm works:

**Round 1:**

# Our aim is to sort the list from smallest to largest. If we find numbers out of order, we swap them.

# Start with the first element and compare it with the next. {**6, 5**, 3}

# Compare 6 and 5. Since 6 > 5 -> perform swap {5, 6, 3}

# Now, look at the second and third elements and compare. {5, **6, 3**}

# Compare 6 and 3. Since 6 > 3 -> perform swap {5, 3, 6}

# Round 1 output: {5, 3, 6}. The list is still not sorted.

Here, we start off by comparing the first and second numbers, 6 and 5 in our case, and swap them as they are out of order. We proceed by comparing the 2nd and 3rd elements, 6 and 3, and swap them too, as 3 is smaller than 6. We continue like this until we reach the last element, ending up with the list {5, 3, 6}.

The list is still not sorted, so we continue with the same approach until the entire list is sorted.

**Round 2:**

# Round 1 gave us the list {5, 3, 6}. We continue comparing and swapping whenever we find elements out of order.

# Start with the first element and compare it with the next. {**5, 3**, 6}

# Compare 5 and 3. Since 5 > 3 -> perform swap {3, 5, 6}

# Now, look at the second and third elements and compare. {3, **5, 6**}

# Compare 5 and 6. Since 5 < 6 -> no swapping {3, 5, 6}

# The list is now sorted. We got our desired output.

Our list is finally sorted.
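The two rounds above can be reproduced by recording the list after every pass. A small sketch (the trace-collecting is my addition for illustration, not part of Bubble Sort itself):

```python
def bubble_sort_trace(a):
    """Run Bubble Sort on a copy of a and return the list's state after each pass."""
    a = list(a)
    n = len(a)
    trace = []
    for i in range(1, n):
        for j in range(n - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        trace.append(list(a))  # snapshot after this round
    return trace

for round_no, state in enumerate(bubble_sort_trace([6, 5, 3]), start=1):
    print(f"Round {round_no}: {state}")
# Round 1: [5, 3, 6]
# Round 2: [3, 5, 6]
```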

Simple, right? (In case you find Bubble Sort difficult to understand or want to know more about it, I’ve covered all its nitty-gritty in this post.)

**C) Let’s analyze the above algorithm**

Every developer has running time on their mind: how long will my program take to produce the desired output?

But the running time of an algorithm depends on many things:

1. **Depends on the input:**

Now let’s say the input list ({3, 5, 6}) given to our Bubble Sort was already sorted. In this case, Bubble Sort doesn’t have much to do. But what if the input list (6, 5, 3) is reverse sorted? Bubble Sort will have to do a lot of work to produce a sorted list, as each element needs to be swapped.

2. **Depends on the input size:**

Now imagine sorting a list with 6 elements vs a list with 6 × 10⁹ elements. Which program will produce a sorted list faster?

So, when we talk about analyzing an algorithm’s running time, we generally look at the *upper bound*. For example, if I say this algorithm will not take more than 3 seconds to execute, we have a maximum limit and we know how it will behave in a real-life setting. This upper bound gives the user a guarantee that the time taken to accomplish the task will be no more than this amount.

**Different types of analysis**:

In analyzing an algorithm, rather than a piece of code, we try to predict the number of times the *principal activity* of that algorithm is performed. For example, if we are analyzing a sorting algorithm like Bubble Sort, we might count the number of comparisons performed.

1) **Worst case (usually done):**

In worst-case analysis, we calculate an upper bound on the running time of an algorithm. It is the case that causes the maximum number of operations to be executed over all inputs of size n. For example, in Bubble Sort, the maximum number of comparisons takes place when the list is reverse sorted.

2) **Average case (sometimes done):**

Arguably, the average case is the most useful measure. Unfortunately, it is typically difficult to compute. In average-case analysis, we take all possible inputs, calculate the computing time for each, sum all the calculated values and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases across all data sets of size n.

3) **Best case (not generally used):**

In best-case analysis, we calculate a lower bound on the running time of an algorithm. It is the case that causes the minimum number of operations to be executed for an input of size n. For example, we get the best behavior from Bubble Sort when the input is already sorted.
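One way to see the worst and best cases concretely is to count the principal activity, comparisons, for different inputs. Here is a sketch with an early-exit flag added (an optimization not present in the pseudo-code above, but it is what makes the best case cheap):

```python
def count_comparisons(a):
    """Run Bubble Sort on a copy of a and return the number of comparisons made."""
    a = list(a)
    n = len(a)
    comparisons = 0
    for i in range(1, n):
        swapped = False
        for j in range(n - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:   # a full pass with no swaps: list is already sorted
            break
    return comparisons

print(count_comparisons([6, 5, 4, 3, 2, 1]))  # worst case (reverse sorted): 15
print(count_comparisons([1, 2, 3, 4, 5, 6]))  # best case (already sorted): 5
```

For 6 elements, the worst case does 5 + 4 + 3 + 2 + 1 = 15 comparisons, while the best case needs only one verifying pass of 5 comparisons.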

### Algorithm Complexity:

So, let’s try to calculate Bubble Sort’s worst-case time. Let’s formally measure how fast this algorithm runs.

But where should I run this algorithm? Your computer or mine? Wait, if I run it on a supercomputer, it will run super-duper fast.

BIG PROBLEM!

We clearly need something that compares two algorithms at the idea level, ignoring low-level details such as the implementation language, the hardware the algorithm runs on, etc.

**Showtime! Asymptotic Analysis!**

Here, we ignore machine-dependent constants and, instead of looking at the actual running time, look at the growth of the running time.

Wait! Whaaaaaat?

Trust me, read this definition again after going through the example below.

The following 3 asymptotic notations are mostly used to represent time complexity of algorithms.

1) Θ Notation (Theta notation)

2) Big O Notation

3) Ω Notation (Omega notation)

Let’s take a look at Θ Notation to understand how it solves our problem.

The Θ notation follows two simple rules:

- Drop lower-order terms.
- Ignore leading constants.

Consider this example:

3n³ + 40n² - 10n + 409

# Let's apply Θ notation's first rule: drop lower-order terms.
# Here, n³ is a bigger term than n². So we drop all lower-order n terms and 409.
# Now our expression looks like 3n³. Let's apply the second rule:
# ignore leading constants. So we drop the 3 too.
# We finally get n³.

3n³ + 40n² - 10n + 409 = Θ(n³)  # Theta of n cubed

# This is an engineering way of manipulating theta notation.
# There is a mathematical way too.

*We call this function, i.e. what we put inside the Θ(…), the time complexity, or simply the complexity, of our algorithm.*
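You can check numerically that the dropped terms really do stop mattering: as n grows, the full expression converges to its leading term 3n³. A quick sketch (the sample values of n are arbitrary):

```python
def full(n):
    """The full expression from the worked example above."""
    return 3 * n**3 + 40 * n**2 - 10 * n + 409

# Ratio of the full expression to its leading term: it approaches 1 as n grows.
for n in (10, 100, 10_000):
    ratio = full(n) / (3 * n**3)
    print(f"n = {n:>6}: full / 3n³ = {ratio:.4f}")
```

At n = 10 the lower-order terms still more than double the total, but by n = 10,000 they contribute well under 1%.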

As n approaches infinity, a Θ(n²) algorithm will always beat a Θ(n³) algorithm. It doesn’t matter what the lower-order terms or leading constants were. Even if you run the Θ(n²) algorithm on a slower computer and the Θ(n³) algorithm on a faster one, when we look at the growth as n increases, the Θ(n²) algorithm will be faster.
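The claim that constants don’t matter can also be checked with numbers. Suppose one algorithm does 100·n² operations and another does just n³ (the constant factor 100 is a made-up handicap for illustration): for small n the cubic one does less work, but past the crossover point the quadratic one always wins. A sketch:

```python
def ops_quadratic(n):   # Θ(n²) algorithm with a big constant factor
    return 100 * n**2

def ops_cubic(n):       # Θ(n³) algorithm with no constant factor
    return n**3

for n in (10, 1000, 100_000):
    winner = "n²" if ops_quadratic(n) < ops_cubic(n) else "n³"
    print(f"n = {n:>6}: 100n² = {ops_quadratic(n):,}, n³ = {ops_cubic(n):,} → {winner} wins")
```

The crossover here happens at n = 100; beyond it, no constant factor can save the Θ(n³) algorithm.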

In complexity analysis, we only care about how many times the *principal activity* of our algorithm is performed as the program input (n) grows large. This is in line with our “worst-case scenario” behavior. If algo1 beats algo2 for very large inputs, the small inputs hardly matter, as both will finish quickly anyway. So, considering this, we drop all the terms that grow slowly and only keep the ones that grow fast as n becomes larger.

This filter of “dropping constant factors” and “keeping the largest-growing term”, as described above, is what we call *asymptotic behavior*.

**Oh Boy! How Big This Post Is!**

Time to conclude. I introduced two more notations, Big O and Omega, in the section above. Big O is highly popular as it deals with the upper bound of an algorithm (remember our worst-case time complexity). I’ve covered it in another post, do have a look! (It isn’t as long as this post, I promise!)

Resources Used For This Article:

https://www.geeksforgeeks.org/analysis-of-algorithms-set-1-asymptotic-analysis/
