Understanding Time Complexity

Gokul Menon
3 min read · Feb 25, 2022


Time is probably the hardest concept in science to understand, and one of the most intriguing to study. Physicists define time as the progression of events from the past through the present into the future; put simply, if a system is unchanging, it is timeless. While the sci-fi world is on a mission to find loopholes in the classic time paradoxes, let's stay in the real world of computer science and understand how time complexity works!

First things first: time complexity is not the same as time taken. I have seen the two conflated in many resources available online, so let's take an example. Run a simple sorting algorithm, say Arrays.sort() with a fairly large input of 10,000 elements, on two different machines: the first a MacBook Pro and the second a classic 2006 dual-core computer. It's pretty obvious that the program will finish faster on the first machine than on the second, yet the complexity of Arrays.sort() remains the same on both.

Hence, time complexity can be defined as a function describing how an algorithm's running time grows with the input size: how the algorithm performs when the size of the input is increased or decreased. It captures the amount of work the CPU has to do as the input size grows (towards infinity). Time complexity is generally stated for the worst case, i.e. how the algorithm performs when the input size becomes very large.
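To make the distinction concrete, here is a minimal Java sketch (the class and method names `SortTiming` and `timeSortMillis` are my own, for illustration). Running it on different machines will print different millisecond values, but the growth pattern across input sizes reflects the same O(N log N) complexity everywhere:

```java
import java.util.Arrays;
import java.util.Random;

public class SortTiming {
    // Sort a freshly generated array of n random ints and report how long
    // it took. The absolute time depends on the machine; the growth trend
    // across n (~N log N) does not.
    static long timeSortMillis(int n) {
        int[] data = new Random(42).ints(n).toArray();
        long start = System.nanoTime();
        Arrays.sort(data);  // dual-pivot quicksort for primitives
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Same algorithm, growing inputs: the shape of the growth is the
        // "complexity"; the raw milliseconds are the "time taken".
        for (int n : new int[] {10_000, 100_000, 1_000_000}) {
            System.out.println("n=" + n + " -> " + timeSortMillis(n) + " ms");
        }
    }
}
```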

Time complexity analysis is based on the amount of work done by the algorithm. It expresses the relationship between the size of the input and the run time of the algorithm. Time complexity is usually expressed as a proportionality rather than an exact function.
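One way to see "work done" without a stopwatch is to count an algorithm's basic operations directly. The sketch below (class and method names are my own) counts the steps of a linear scan versus a nested pair-of-loops: the first grows in proportion to N, the second to N²:

```java
public class WorkCount {
    // Count loop iterations as a stand-in for "basic operations".
    static long linearSteps(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) steps++;      // N steps -> O(N)
        return steps;
    }

    static long quadraticSteps(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) steps++;  // N*N steps -> O(N^2)
        return steps;
    }

    public static void main(String[] args) {
        // Doubling n doubles the linear count but quadruples the
        // quadratic one: the proportionality is the complexity.
        for (int n : new int[] {100, 200}) {
            System.out.println("n=" + n
                + " linear=" + linearSteps(n)
                + " quadratic=" + quadraticSteps(n));
        }
    }
}
```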

For example, suppose an algorithm takes N + N log N + 10000 steps to complete. We can round its time complexity to O(N log N), because for a large input the constant and lower-order terms don't matter: we can omit all constants and any terms that grow far more slowly than the dominant one. This is captured by Big O notation, a mathematical notation that describes the limiting behavior of a function as its argument tends towards a particular value or infinity.
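We can check this simplification numerically. The sketch below (class and method names are my own) computes f(N) = N + N·log₂N + 10000 and divides it by the dominant N·log₂N term; as N grows, the ratio drifts toward 1, which is exactly why the whole expression is summarised as O(N log N):

```java
public class BigOSimplify {
    // f(N) = N + N*log2(N) + 10000, divided by its dominant term.
    // As N grows, the constant 10000 and the lone N term become
    // negligible, so the ratio approaches 1.
    static double ratio(long n) {
        double nLogN = n * (Math.log(n) / Math.log(2));
        double f = n + nLogN + 10000;
        return f / nLogN;
    }

    public static void main(String[] args) {
        for (long n : new long[] {1_000, 1_000_000, 1_000_000_000L}) {
            System.out.printf("n=%d  f(n)/(n log n) = %.4f%n", n, ratio(n));
        }
    }
}
```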

[Graph: running time vs. input size for common complexity classes — source: google.com]

The graph above shows the relation between running time and input size. Notice that for a small set of data, a constant-time algorithm might even take a little longer than the alternatives, but that can be ignored: when we talk about time complexity, we always talk about the worst case as the input grows. Here are some resources which I found really helpful for understanding time complexity in depth; have a look:

Happy Coding ❤

Written by Gokul Menon

Engineering at FreeNow Tech.
