Coffeehouse Thread
8 posts · Forum Read Only
This forum has been made read only by the site admins. No new threads or comments can be added.
Asymptotic Analysis

Comment removed at user's request.

Comment removed at user's request.

Shmuelpro wrote: oh common can anyone at least tell me that they know what it is
I think we most often refer to "BigO notation" in discussing the magnitude of algorithms... is that where you're going with this?

amotif wrote:
Shmuelpro wrote: oh common can anyone at least tell me that they know what it is
I think we most often refer to "BigO notation" in discussing the magnitude of algorithms... is that where you're going with this?
There are several flavors of algorithmic analysis. Big O is the most common, and often it's quite intuitive. IIRC, O is an upper bound: the maximum time/resource that will ever be spent. You have other symbols for other bounds, like Omega for the minimum possible resource expenditure, and Theta for a tight bound, plus average-case analysis. Average runtime analysis is hard, but needed.
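To make the best/worst/average distinction concrete, here's a toy sketch of my own (not from the thread): counting comparisons in linear search, where the three cases genuinely differ.

```python
# Toy illustration: linear search does 1 comparison in the best case,
# n in the worst case, and about n/2 on average over all targets.
def linear_search(items, target):
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))
_, best = linear_search(data, 0)     # target is first: best case, 1 comparison
_, worst = linear_search(data, 99)   # target is last: worst case, n comparisons
avg = sum(linear_search(data, t)[1] for t in data) / len(data)  # about n/2
```

Big O describes the `worst` column, Omega the `best`, and the average is what you'd usually measure in practice.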
Quicksort is an O(n^2) algorithm in the worst case, like bubble sort. That is, in the worst case the runtime of the algorithm is proportional to the square of the length of the input (the number of items to be sorted).
If you do probability analysis on the formula, however, you will find the expected run time to be O(n log n), which is a theoretical lower bound for comparison-based sorting (bucket sort is in theory faster, as it does not compare elements in order to sort). QS will only ever hit the worst-case time when pivot selection repeatedly results in a constant-size split, such as:
[2 elem] [pivot elem] [rest of elem]
However, bad luck like this is extremely unlikely, and as long as you're getting constant-fraction splits, like this:
[x % of elems] [pivot elem] [rest of elem]
then the running time will be O(n log n).
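The two regimes above can be seen directly by counting comparisons. A rough sketch of my own (a naive first-element-pivot quicksort, not anyone's production code): on a random permutation the count grows like n log n, while on already-sorted input every split is the degenerate [0 elems] [pivot] [rest] and the count grows like n^2.

```python
import random

def quicksort(items, count):
    # count[0] accumulates comparisons so the asymptotics are visible
    if len(items) <= 1:
        return items
    pivot = items[0]              # naive first-element pivot choice
    rest = items[1:]
    count[0] += len(rest)         # pivot is compared against every other element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, count) + [pivot] + quicksort(right, count)

n = 500
random.seed(42)
avg_count = [0]
result = quicksort(random.sample(range(n), n), avg_count)  # typical input: ~n log n
worst_count = [0]
quicksort(list(range(n)), worst_count)  # sorted input: every split is lopsided, ~n^2/2
```

With n = 500, the sorted-input run does exactly 499 + 498 + ... + 1 = 124,750 comparisons, while the shuffled run lands in the low thousands.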
I don't know what Wikipedia says on the topic, but the page on QS is awfully long: http://en.wikipedia.org/wiki/Quicksort
A pretty standard book on the analysis of algorithms and data structures is "Introduction to Algorithms" from MIT Press, by Cormen et al.
Comment removed at user's request.

In my experience at Microsoft, it's relatively common to do perf analysis this way. Certainly we talk about "n squared" vs "n log n" vs "linear" algorithms quite frequently.
In interviews, I often expect candidates to be able to intelligently discuss the performance characteristics of the algorithm they're using. More often than not, this means talking in "BigO" notation. 
Yeah, I've never done any recurrence equations or actual math to figure it out. I must say that after several advanced algorithms classes where you have to do that stuff over and over again, you get to the point where you can look at code and just know its runtime characteristics.
That being said, during interviews I ask questions that have multiple possible answers, each having different runtime characteristics. The interviewee must be able to select the best one and explain why (often the best solution depends on context).
I have had to do the math a few times to select which algorithm/approach would be best under certain conditions (which will perform best with 100,000 sorted items vs 100 unsorted items?). When you work on a performance-critical system you have to be able to defend your choices in algorithms and data structures.
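The "sorted vs unsorted" question is a good example of why the math matters: the input's shape can change which algorithm wins. A toy sketch of my own (not from the thread) counting comparisons in insertion sort, which is O(n) on already-sorted data but O(n^2) on random data:

```python
import random

def insertion_sort(items):
    # returns (sorted list, number of comparisons made)
    items = list(items)
    comparisons = 0
    for i in range(1, len(items)):
        j = i
        while j > 0:
            comparisons += 1
            if items[j - 1] <= items[j]:
                break  # already in order: one comparison, no shift
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items, comparisons

n = 1000
_, on_sorted = insertion_sort(range(n))                    # sorted input: n - 1 comparisons
random.seed(7)
_, on_random = insertion_sort(random.sample(range(n), n))  # random input: roughly n^2/4
```

So for 100,000 already-sorted items insertion sort is nearly free, while for random input you'd reach for an O(n log n) sort instead.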
It is important stuff to learn and anyone who is serious about getting into software development should learn algorithm analysis.