The Big O

Henry Pendleton
5 min read · Aug 25, 2021

The light is at the end of the tunnel. I just had to double check that I was right in saying that it has been 12 weeks since I started this Flatiron Software Engineering program. I feel solid in my foundation and in my ability to create a rudimentary full-stack web application. Frankly, I have been impressed with the amount of knowledge and ideas that have been thrown at me, and even more with the amount I have actually retained and what I can create with said ideas. When I was considering attending one of these coding programs I was told that it would not be a substitute for a full Computer Science degree, and that is absolutely correct. We are taught the need-to-know fundamentals to hit the ground running as junior web developers, with a hell of a lot more to learn.

While going through this Flatiron program, from my understanding, we have glossed over a lot of the more theoretical aspects of talking with computers, especially since we are dealing with high-level languages like JavaScript and Ruby. We do learn the basics: manipulating arrays, functional programming, object-oriented programming, writing algorithms to sort, add, or find an integer or object. A lot of good stuff, but it is more of the how and less of the why. In an effort to dip my toe into the more high-minded pools of thought that are edified in lofty CS degrees, and to prepare for what I have been assured will come up in the rigorous interview process of becoming a junior web developer, I have been reading up on Big O notation and time complexity.

Before I get into this, I would like to preface it with a warning similar to the ones in my other blogs. This is me talking out loud to get a better understanding of a topic I would like to learn more about, so if there are any glaring errors, please be understanding and/or reach out and let me know where I went wrong. And if things are glaringly wrong, I am counting on the old internet adage that the best way to get an answer to a question is not to ask it outright but to assert a wrong answer and wait for the internet to tell you how wrong you are.

Alright, three paragraphs of opinion and fluff, let's get to it. Time complexity, which we write in Big O notation, expresses how the running time of a function grows as its input grows. So in computer programming, time complexity is basically a measure of how fast your algorithm is. Yes, it is pretty easy to assume that a complicated function with a lot of steps will take longer for a computer to process than a simple one. But how much longer? Well, we are going to look at some functions and talk about it.

For this blog we will be touching on the three most basic time complexities: constant, linear, and quadratic. The first, constant time, is the most basic. It is just that: constant time as the input changes. The time it takes to run the function will always be the same no matter the size of the input. In Big O notation this is written as O(1). An example of constant time is accessing a value in an array by its index.

const arr = [24, 6, 89, 12];

arr[2]; // 89, and the lookup takes the same time no matter how long the array is

Another example would be a very basic conditional where there are no further callback functions, like the sketch below. On a graph, constant time complexity looks like a flat horizontal line: the runtime stays the same no matter how large the input gets.
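Here is a rough sketch of that kind of constant-time conditional (isEven is just a name I made up for illustration):

function isEven(n) {
  // One comparison and no loops, so the work done never depends on the size of n
  return n % 2 === 0;
}

isEven(4);       // true
isEven(1000003); // false, computed in the same single step as above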

The second type is linear time complexity. This is written in Big O as O(n). Linear complexity is basically the classic y = mx + b growth: as the input grows, the time to complete grows in direct proportion. An example of this is a for loop over an array to get the sum of all its values.
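A quick sketch of what I mean (sumArray is just an illustrative name):

function sumArray(arr) {
  let total = 0;
  // The loop body runs once per element, so doubling the array length
  // roughly doubles the running time. That is the linear growth.
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

sumArray([24, 6, 89, 12]); // 131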

And plotted on a graph, linear complexity looks like a straight line climbing steadily as the input grows.

The third and final type we are going to touch on today is quadratic time. In Big O notation this is O(n²). This means the runtime grows with the square of the input size: as the input increases, the time it takes to complete the computation grows much faster than the input itself, so doubling the input roughly quadruples the time. Examples of this would be the bubble, selection, or insertion sort methods. Really, a good way of spotting quadratic time complexity is a for loop inside another for loop. Plotted on a graph, quadratic complexity looks like a curve bending upward ever more steeply as the input grows.
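To make the loop-inside-a-loop point concrete, here is a rough bubble sort sketch (my own simplified version, not pulled from any particular lesson):

function bubbleSort(arr) {
  const sorted = [...arr]; // copy so the original array is not mutated
  // The outer loop runs n times, and on each pass the inner loop compares
  // neighboring pairs, so the total work grows with the square of n.
  for (let i = 0; i < sorted.length; i++) {
    for (let j = 0; j < sorted.length - i - 1; j++) {
      if (sorted[j] > sorted[j + 1]) {
        const temp = sorted[j];
        sorted[j] = sorted[j + 1];
        sorted[j + 1] = temp;
      }
    }
  }
  return sorted;
}

bubbleSort([24, 6, 89, 12]); // [6, 12, 24, 89]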

From my understanding of what I read on this, and from my college calculus class, Big O notation feels a lot like a derivative. I am sure it is really much more complicated than that, but for how I see this being useful, it makes sense: it is a measure of growth, not of raw seconds. All computers are different, with different hardware and different software running in the background hogging memory. Measuring how the time changes as the input changes is a good way of evening the playing field and determining how fast and efficient your algorithm is. With this I can now see that if you understand the time complexity of a given function, you can determine the most efficient way of solving a problem.

