# 35.1 Counting Sort

Imagine if instead of driving a slow Honda Civic, we started driving a fast Ferrari. Unfortunately, we won't actually be driving a Ferrari today, but we will witness a blazingly fast family of algorithms called radix sorts.

When sorting an array with comparisons, any algorithm requires $\Omega(N \log N)$ compare operations in the worst case (this follows from counting the $N!$ possible orderings the algorithm must distinguish). Thus, the ultimate comparison-based sorting algorithm has a worst-case runtime of $\Theta(N \log N)$. From an asymptotic perspective, that means no matter how clever we are, we can never beat Merge Sort's worst-case runtime of $\Theta(N \log N)$. But what if we don't compare at all?

*(Figure: left is the original array, right is the ordered output.)*

Essentially, what just happened is that we made a new array of the same size and copied each item to the index given by its key. For example, we first look at the record with key 5 (Sandra) and copy it over to index 5 in our new array.
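The direct-placement idea above can be sketched as follows. This is a minimal sketch, assuming each record is a `(key, value)` pair and the keys are unique, consecutive integers starting at 0 (the `sort_unique_keys` name and the pair format are illustrative choices, not from the original):

```python
def sort_unique_keys(records):
    """Place each record directly at the index given by its key.

    Assumes keys are unique integers 0 .. len(records) - 1.
    """
    output = [None] * len(records)
    for record in records:
        key = record[0]      # (key, value) pair -- an assumed format
        output[key] = record  # key doubles as the destination index
    return output
```

For example, `sort_unique_keys([(2, 'c'), (0, 'a'), (1, 'b')])` yields `[(0, 'a'), (1, 'b'), (2, 'c')]` after a single pass over the input.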

This does guarantee $\Theta(N)$ worst-case time. However, this direct-placement trick breaks down if we are working with:

- Non-unique keys.
- Non-consecutive keys.
- Non-numerical keys.

All of these cases are too complex for direct placement to handle. Instead, we can use a slightly more general method:

- Count the number of occurrences of each key.
- Iterate through the list, using the count array to decide where to put each item.

Bottom line: we can use counting sort to sort $N$ objects in $\Theta(N)$ time.