Al and Bob are arguing about their algorithms. Al claims his O(n log n)-time method is always faster than Bob's O(n^2)-time method. To settle the issue, they perform a set of experiments. To Al's dismay, they find that when n < 100 the O(n^2)-time algorithm runs faster, and only when n >= 100 is the O(n log n)-time one better. Explain how this is possible.

asked by Luc C (8.1k points)

1 Answer

Big-O notation describes the worst-case growth rate of an algorithm's running time. O(N log N) means the running time will never grow asymptotically faster than N log N. So although Al's algorithm scales better than Bob's quadratic one, that does not mean it is faster for ALL input sizes.
Big-O hides constant factors and lower-order terms. Al's algorithm may have significant overhead, for example setting up many arrays or auxiliary variables before the main work starts. Keep in mind that an O(N log N) algorithm could also contain 1000 non-nested loops, each running in O(N) time, and it would still be O(N log N), since N log N is the dominant term. For small n, those hidden constants can easily outweigh the asymptotic advantage, as illustrated by the sketch below.
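A minimal sketch of how this can happen, assuming purely illustrative cost models (the constant 15 is made up for the example, not taken from the question): if Al's algorithm costs about 15 * n * log2(n) operations and Bob's costs about n^2, the crossover lands right around n = 100.

```python
import math

# Illustrative cost models (assumed constants, not from the original question):
# Al's algorithm does roughly 15 * n * log2(n) basic operations,
# Bob's does roughly n * n.
def al_cost(n):
    return 15 * n * math.log2(n)

def bob_cost(n):
    return n * n

# Compare the two cost models around the crossover point.
for n in (10, 50, 100, 200, 1000):
    faster = "Bob (n^2)" if bob_cost(n) < al_cost(n) else "Al (n log n)"
    print(f"n={n:5d}  Al: {al_cost(n):>10.0f}  Bob: {bob_cost(n):>10.0f}  faster: {faster}")
```

Running this shows Bob's n^2 cost is lower for n = 10 and n = 50, while Al's n log n cost wins from about n = 100 onward, matching the behavior observed in the experiments.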
answered by Maarvd (8.3k points)