Sven Groot said:
A lot of times you are required to use ONLY Big-O. That's why I am talking about this.
Required? By whom?
And Bass is right: assuming large n, the constants matter less. Let's assume n = 1,000,000,000. With O(100n), that gives 100,000,000,000 steps. But with O(n²), it's 1,000,000,000,000,000,000. With O(2ⁿ) or O(n!), the numbers are so large that Windows Calculator can't even represent them anymore. Compared to that, a constant factor of 100 is peanuts.
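To put rough numbers on this, here's a quick sketch in Python (the step counts are illustrative operation counts, not measured timings):

```python
import math

n = 1_000_000_000  # one billion items

# Step counts for each growth rate at this n
linear_with_constant = 100 * n   # O(100n)
quadratic = n ** 2               # O(n^2)

print(f"{linear_with_constant:,}")  # 100,000,000,000
print(f"{quadratic:,}")             # 1,000,000,000,000,000,000

# 2**n at n = 1,000,000,000 is astronomically large; even just
# counting its decimal digits dwarfs the numbers above.
digits_in_2_to_n = int(n * math.log10(2)) + 1
print(f"2**n has about {digits_in_2_to_n:,} digits")
```

2ⁿ here has roughly 301 million decimal digits, which is why no ordinary calculator can display it.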
And optimization (of the implementation, without changing the algorithm) can reduce the constants, but it can never improve the order. I can easily write two versions of quicksort where one is twice as fast as the other, yet both are still O(n log n). And I can never do better than O(n log n) while still using quicksort. That's what Big-O is telling you. I can run it on a 16 MHz 286 or a 3 GHz Core i7, and even though it'd run much faster on the Core i7, it's still O(n log n).
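As a sketch of that point (hypothetical Python, not code from this thread): two quicksorts, where the second deliberately makes redundant copies at every level. It has a visibly worse constant factor, but both are still O(n log n) on average:

```python
import random

def quicksort_lean(a):
    """Plain list-based quicksort."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort_lean(less) + equal + quicksort_lean(greater)

def quicksort_wasteful(a):
    """Same algorithm, but with pointless full copies on every
    recursion level: a bigger constant factor, same O(n log n)."""
    if len(a) <= 1:
        return list(a)
    a = list(a)  # redundant defensive copy
    pivot = a[len(a) // 2]
    less = [x for x in list(a) if x < pivot]      # extra copy per scan
    equal = [x for x in list(a) if x == pivot]
    greater = [x for x in list(a) if x > pivot]
    return quicksort_wasteful(less) + equal + quicksort_wasteful(greater)

data = [random.randrange(1000) for _ in range(500)]
assert quicksort_lean(data) == sorted(data)
assert quicksort_wasteful(data) == sorted(data)
```

Timing these shows the wasteful one is slower by a roughly constant multiple at any input size; doubling n slows both down by the same growth factor, which is exactly the part Big-O captures.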