CM10135 / Programming II: Lecture 5
Yet More on Algorithms and Complexity: The Complexity of Various Sort Algorithms
I. Criteria for Evaluating an Algorithm, Re-revisited
 What do you need to know about algorithms and complexity?
 Worst case:
 If everything is organized in the worst possible way, how long will it take?
 Best case:
 If you are really lucky, what's the fastest it could run?
 Average or expected case:
 What is the most probable situation?
 Big O notation is used to give an upper bound on the worst case.
 Notes from Dr. Paddon's old lectures say "Average case analysis is mathematically much harder to perform, and for many algorithms is not known."
 One practical way to determine the average case is with a stopwatch (well, anyway, to run statistics!)
 Here are some folks who used a stopwatch on the sorting algorithms.
 Notice they get more precise results than we'll get below.
 But notice also that the difference between O(n^2) and O(n log n) matters a lot more!
 Things to remember about big O values:
 Since it's an upper bound on running time, performance could be much better.
 The input that causes worst-case performance could be very rare,
 and it might be recognizable or avoidable.
 You don't know what the constants are;
 they may be very large.
 Even a constant of 2 matters a lot if your operation is going to take 3 years (e.g. database conversions.)
II. How Bad is Bad?
Another table shamelessly cribbed from Gerald Kruse's page on Algorithm Efficiency.
Table of growth rates

N (linear) | log_2 N | N log_2 N | N^2 (quadratic) | N^3 (cubic) | 2^N (exponential) | 3^N (exponential) | N! (factorial)
1    | 0 | 0    | 1      | 1          | 2                 | 3          | 1
2    | 1 | 2    | 4      | 8          | 4                 | 9          | 2
4    | 2 | 8    | 16     | 64         | 16                | 81         | 24
8    | 3 | 24   | 64     | 512        | 256               | 6561       | 40,320
16   | 4 | 64   | 256    | 4096       | 65,536            | 43,046,721 | 2.09E+13
32   | 5 | 160  | 1024   | 32,768     | 4,294,967,296     | 1.85E+15   | 2.63E+35
64   | 6 | 384  | 4096   | 262,144    | 1.84E+19 (Note 1) | 3.43E+30   | 1.27E+89
128  | 7 | 896  | 16,384 | 2,097,152  | 3.40E+38 (Note 2) | 1.18E+61   | 3.86E+215
256  | 8 | 2048 | 65,536 | 16,777,216 | 1.16E+77          | 1.39E+122  | 8.58E+506
Note 1: The value here (2^64, about 1.84E+19) is approximately the number of machine instructions executed by a 1 gigaflop computer in 600 years, or about 7 months on a current supercomputer (teraflop computer).
Note 2: The value here is about 500 billion times the age of the universe in nanoseconds, assuming a universe age of 20 billion years.
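If you want to check these numbers yourself, here's a quick Java sketch (mine, not part of the cribbed table) that regenerates the rows using doubles. Fair warning: 256! overflows a double, so the last factorial prints Infinity rather than 8.58E+506.

// Print the growth-rate table for N = 1, 2, 4, ... 256.
public class GrowthRates {
    public static void main(String[] args) {
        System.out.println("N\tlog2N\tNlog2N\tN^2\tN^3\t2^N\t3^N\tN!");
        for (int n = 1; n <= 256; n *= 2) {
            double log2 = Math.log(n) / Math.log(2);
            double factorial = 1;                  // N! by brute force
            for (int i = 2; i <= n; i++) factorial *= i;
            System.out.printf("%d\t%.0f\t%.0f\t%.0f\t%.0f\t%.3g\t%.3g\t%.3g%n",
                    n, log2, n * log2, (double) n * n, (double) n * n * n,
                    Math.pow(2, n), Math.pow(3, n), factorial);
        }
    }
}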
 Some people think that it doesn't really matter how complex an algorithm is because computers are getting so much faster.
 They are wrong. It matters.
 See the table above: you're not going to get much done in less than a nanosecond!
 Let's say that for each complexity class, there's a largest problem size N we can handle in however long we're prepared to wait for our program to finish.
 The time we are waiting is always the same, but the size is different because the complexity is different.
 Call these sizes N1 through N5, one for each column in the table below.
 This table (from Dr. Paddon's notes last year) tells us how much a faster computer will help us.

                      | log_2 n    | n          | n log_2 n  | n^2        | 2^n
Current computers     | N1         | N2         | N3         | N4         | N5
Ten times faster      | N1 x 30    | N2 x 10    | N3 x 3     | N4 x 3     | N5 + 3
Thousand times faster | N1 x 9,000 | N2 x 1,000 | N3 x 111   | N4 x 31    | N5 + 10
Million times faster  | N1 x 19E+6 | N2 x 1E+6  | N3 x 5,360 | N4 x 1,000 | N5 + 20

 You can see that there are no big wins once you have an exponential algorithm!
III. The Complexity of Our Sort Algorithms
For the algorithms you've already seen, I've linked to the pseudocode from Lecture 2. You may want to open the links in another browser window (use the right mouse button) so you can look at the analysis & the algorithm at the same time.
i) Selection Sort
 The most expensive / basic operation is the comparison, if (sorta[searchIx] < min).
 How often do we do it?
 Amounts to:
for n do
    for 1, 2, 3, 4 ... n-1 do
 (on average the inner loop runs (n-1)/2 times)
 This is equivalent to the arithmetic series 1+2+3+4+...+(n-1) == n*(n-1)/2.
 n*(n-1)/2 = 1/2*n^2 - 1/2*n, which means it's O(n^2): the n^2 term dominates.
 Caveat: the notes & web pages you'll find quote that series two ways, because some sum 1 to n (getting n*(n+1)/2) and some sum 1 to n-1 (getting n*(n-1)/2). Both are quadratic, so it makes no difference here. Anyway, that's what O notation is for, so we know what's really important!
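To make that counting concrete, here is a minimal Java sketch of in-place selection sort. The comparison sorta[searchIx] < min is the one from the analysis above; the surrounding scaffolding is my assumption.

// In-place selection sort: a minimal sketch.
static void selectionSort(int[] sorta) {
    for (int sortedIx = 0; sortedIx < sorta.length - 1; sortedIx++) {
        int minIx = sortedIx;
        int min = sorta[minIx];
        // the inner loop shrinks each pass: n-1, n-2, ... 1 comparisons
        for (int searchIx = sortedIx + 1; searchIx < sorta.length; searchIx++) {
            if (sorta[searchIx] < min) {   // the comparison we're counting
                minIx = searchIx;
                min = sorta[minIx];
            }
        }
        sorta[minIx] = sorta[sortedIx];    // swap the minimum into place
        sorta[sortedIx] = min;
    }
}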
ii) Selection Sort with Two Arrays
 Remember the one I did with two arrays?
 How often do we do if (messy[searchIx] < min)?
 Amounts to:
for n do
    for n do
 That's n^2, which means it's O(n^2).
 So no difference, at least in O notation! (Though see the charts on this page I mentioned earlier: these things do matter a little.)
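That version isn't written out in these notes, so here's a sketch of roughly how the two-array sort goes; the used markers and all the names are my assumptions about the Lecture 2 code.

// Two-array selection sort: every pass scans the whole messy array
// (n passes of n comparisons each, hence the n^2 above).
static int[] twoArraySort(int[] messy) {
    int[] sorted = new int[messy.length];
    boolean[] used = new boolean[messy.length];   // items already moved across
    for (int outIx = 0; outIx < messy.length; outIx++) {
        int minIx = -1;
        for (int searchIx = 0; searchIx < messy.length; searchIx++) {
            if (!used[searchIx] && (minIx == -1 || messy[searchIx] < messy[minIx])) {
                minIx = searchIx;
            }
        }
        sorted[outIx] = messy[minIx];             // take the smallest remaining
        used[minIx] = true;
    }
    return sorted;
}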
iii) Insertion Sort
 How often do we do the comparison (sorta[searchingIx] < currentItem)?
 In the worst case (we started in reverse order), we have to move every item past all the already-sorted items:
for n do
    for 1, 2, 3, 4 ... n-1 do ...
 Look familiar? Also O(n^2).
 So again, no difference, at least in O notation! (There's a Java sketch of this one below.)
 Insertion & Selection sort are said to be algorithms in the same class, because effectively they take the same time.
 Here is my favorite sort algorithm in O(n^2) (yes, I have a favorite sort algorithm. Yes, I'm a nerd! Nerd Pride):
iv) Bubble Sort
 What you've hopefully noticed is that all of these sorts have an outer for loop as long as the array, and an inner loop that's almost as long as the array, so I'll just call the indices "out" and "in".
 In this one, you just swap / "bubble" any large numbers up out of the way!
int[] sorta; // this comes from somewhere in a messy state
int temp;    // for the swap
int outIx, inIx; // these are indices
// outer loop shrinks the unsorted region: when everything's sorted we're done
for (outIx = sorta.length - 1; outIx > 0; outIx--) {
    // from the beginning of the list to the end of the unsorted bit
    for (inIx = 1; inIx <= outIx; inIx++) {
        // If adjacent items are out of order, swap them!
        if (sorta[inIx - 1] > sorta[inIx]) {
            temp = sorta[inIx - 1];
            sorta[inIx - 1] = sorta[inIx];
            sorta[inIx] = temp;
        }
    }
}
 How simple is that? But we may as well do it this way since they're all O(n^2) anyway.
 Here's a drawing of what happens (I did one in class on the board too). Basically, on each pass the largest number gets carried up to the top of the unsorted area.
 Don't forget (this page is linked from the class page) that John Morris has sorting movies.
v) Quick Sort
 Quick sort is different (hence its name).
 Actually, its worst case (an array that's already sorted, or sorted backwards, with a naive choice of pivot) is the same as the others: O(n^2).
 But in the best case (a random distribution), the pivot will land near the middle, so the array keeps getting sliced in half.
 If this is true, there will be log_2 n rounds of slicing,
 and in each round you do n comparisons in total.
 So the best case is O(n log n).
 Apparently, the average case is about 38% worse than this.
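Here's a minimal Java sketch of quick sort, using the first element as the pivot (the naive choice that makes already-sorted input the worst case). The partitioning style and all the names are my assumptions, not Lecture 2's code.

// Quick sort: partition around a pivot, then recurse on each side.
// Call as quickSort(sorta, 0, sorta.length - 1).
static void quickSort(int[] a, int lo, int hi) {
    if (lo >= hi) return;                  // 0 or 1 items: already sorted
    int pivot = a[lo];                     // naive pivot: the first element
    int left = lo + 1, right = hi;
    while (left <= right) {                // partition: small left, big right
        if (a[left] <= pivot) {
            left++;
        } else {
            int temp = a[left]; a[left] = a[right]; a[right] = temp;
            right--;
        }
    }
    int temp = a[lo]; a[lo] = a[right]; a[right] = temp;  // pivot into place
    quickSort(a, lo, right - 1);           // best case: halves, log_2 n deep
    quickSort(a, right + 1, hi);
}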
 My favorite O(n log n) sort is called Merge Sort. (Yes, I have a favorite O(n log n) sort...)
vi) Merge Sort
 Split the array in halves recursively until each half has only one element.
 You can now assume the sublists are sorted (1 number can't be unsorted).
 Merge pairs of halves back together by interleaving them:
 Run two indices, one from the beginning of each array.
 Whichever index points at the smaller number, take that number as the next in your merged array.
 This is guaranteed to be O(n log n), since the splits always divide the array exactly in half.
 But empirically it's usually slower than quick sort, probably because it uses twice as much memory.
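Here's a minimal Java sketch of merge sort following that recipe; the names and the copy-the-halves strategy are my assumptions. Note it allocates new arrays as it goes, which is the extra memory mentioned above.

import java.util.Arrays;

// Merge sort: split in half, sort each half recursively, then interleave.
static int[] mergeSort(int[] a) {
    if (a.length <= 1) return a;           // 1 number can't be unsorted
    int mid = a.length / 2;
    int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
    int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
    // merge: two indices, one walking each half; take the smaller each time
    int[] merged = new int[a.length];
    int l = 0, r = 0;
    for (int i = 0; i < merged.length; i++) {
        if (r >= right.length || (l < left.length && left[l] <= right[r])) {
            merged[i] = left[l++];
        } else {
            merged[i] = right[r++];
        }
    }
    return merged;
}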
IV. What have we learned?
 Complexity matters (again).
 The sorts you've learned come in two different classes, O(n^2) and O(n log n).
 You've seen a couple of easy sorts in each of these classes.
 Hopefully you're getting an intuition about how to recognize a log n algorithm (or algorithm component).
 This strategy of recursively chopping problems in half to get a log n algorithm is often called divide and conquer.
 It's a good strategy!
page author: Joanna Bryson
22 Feb 2005 (one link updated 6 Feb 2007)