A kid, John, has a set of 100 toys. His friend Paul pays him a visit, and they want to play with a specific toy. With this large number of toys, the search becomes difficult. John knows search algorithm A, while Paul knows algorithm B.
The kids thus have two algorithms at their disposal: A & B. How should they compare the algorithms so as to choose the one that finds the specific toy as quickly as possible, given that Paul has only 30 minutes to spend with John?
Big O notation provides a means by which algorithms can be compared in terms of speed of execution. The kids have to choose between the two algorithms, A & B.
Let's consider that A is some sort of sequential search while B is some sort of optimized search, such as binary search. Assume it takes 1 millisecond to check one toy. With algorithm A, the kids may have to check all 100 toys, so the search takes up to 100 ms. With algorithm B, the optimized search, the kids only have to check about log₂ 100 ≈ 7 toys, so the search takes about 7 ms (don't worry about the math).
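The two approaches can be sketched in Python. This is a minimal illustration, assuming the toys are represented as sorted IDs in a list; the function names are illustrative, not from the story:

```python
# Algorithm A: sequential (linear) search -- check the toys one by one.
def linear_search(toys, target):
    for i, toy in enumerate(toys):
        if toy == target:
            return i
    return -1  # toy not found

# Algorithm B: binary search -- requires the list to be sorted,
# and halves the remaining toys at every step.
def binary_search(toys, target):
    lo, hi = 0, len(toys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if toys[mid] == target:
            return mid
        elif toys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # toy not found
```

With 100 toys, `linear_search` may perform up to 100 checks, while `binary_search` performs at most 7, since each step discards half of the remaining toys.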
Now increase the number of toys to 100,000,000. With algorithm A, the search takes about 27 hours. By that time, Paul has left John's house and is very angry; he no longer wishes to speak to John. On the other hand, with algorithm B the search would take only about 27 ms. This leaves plenty of time for Paul to enjoy the toy.
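A quick sanity check of this arithmetic, still assuming 1 ms per toy checked:

```python
import math

n = 100_000_000                          # number of toys
linear_ms = n * 1                        # algorithm A: one check per toy
linear_hours = linear_ms / 1000 / 3600   # roughly 27.8 hours

binary_checks = math.ceil(math.log2(n))  # algorithm B: 27 checks
binary_ms = binary_checks * 1            # 27 ms
```

So algorithm B finishes in well under a second, while algorithm A would run for more than a day's visit.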
The run times for A and B don't grow at the same rate. As the list of items gets bigger, B becomes a lot faster relative to A.
By knowing the Big O of both algorithms and choosing the one with the lower Big O, the kids would remain friends, and Paul would keep paying visits to John so that they could enjoy the toys.
2.1 Characteristics of Big O Notation
Big O notation tells you how fast an algorithm is. Big O does not measure speed in seconds but in the number of operations an algorithm performs. It is written as follows:
Consider a list of size n. Algorithm A needs to check each element, so it will take n operations. The Big O notation is O(n). Algorithm B needs log n operations to check the same list. Its Big O notation is given as O(log n).
The graph of O(n) increases faster than that of O(log n). Thus an algorithm with O(n) run time takes more time to execute (is slower) than an algorithm with O(log n).
Big O notation measures the worst-case scenario. It identifies how an algorithm behaves when given large inputs (as the input size tends to infinity). If we use algorithm A to look for a toy, in the worst case we have to go through every toy and only find the required one at the very end, i.e. O(n).
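The worst case is easy to demonstrate with a counting version of the sequential search (a sketch; the counter is added for illustration):

```python
# Linear search that reports how many toys were checked.
def linear_search_count(toys, target):
    checks = 0
    for toy in toys:
        checks += 1
        if toy == target:
            return checks
    return checks  # target absent: every toy was checked anyway

toys = list(range(100))
best_case = linear_search_count(toys, 0)    # first toy: 1 check
worst_case = linear_search_count(toys, 99)  # last toy: 100 checks, the O(n) case
```

Big O describes the 100-check outcome, not the lucky 1-check one.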
Big O measures the big picture. We are not interested in the "exact" relation between input and run time. Expressions like O(n + 3) and O(n² + 3n + 4) are not used, because at large input sizes the 3 and the 3n + 4 terms are of negligible importance. Since we only need the gist of the complexity, we simply write O(n) and O(n²) respectively.
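A small numerical sketch shows why the lower-order terms can be dropped: as n grows, n² + 3n + 4 becomes indistinguishable from n² alone.

```python
# Exact operation count versus its dominant term.
def exact(n):
    return n**2 + 3*n + 4

def dominant(n):
    return n**2

# The ratio approaches 1 as n grows, so only the n**2 term matters.
ratios = {n: exact(n) / dominant(n) for n in (10, 1_000, 100_000)}
```

At n = 10 the lower-order terms still add about 34%, but by n = 100,000 they contribute a fraction of a percent.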
Here are the main takeaways:
- Algorithm speed isn’t measured in seconds but in the growth of the number of operations
- We measure an algorithm by identifying how quickly its run time increases as the size of the input increases
- Run time is expressed in Big O notation
- O(log n) is faster than O(n), and it gets a lot faster relative to O(n) as the list of items you’re searching grows.