29.4 Insertion Sort
Naive Insertion Sort
In insertion sort, we start with an empty output sequence. Then, we select one item at a time from the input and insert it into the output at the correct location.
Naively, we can do this by creating a completely separate output array and moving items from the input into it one at a time. You can see a demo of this algorithm here.
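As a rough sketch (not the linked demo's code), the naive version might look like the following in Java, building a separate output array and shifting larger items right to make room for each new item. The class and method names here are just for illustration.

```java
// A minimal sketch of naive insertion sort, assuming an int[] input.
// Items are copied one at a time into a separate output array, each
// placed at its correct position among the items already copied.
public class NaiveInsertionSort {
    public static int[] sort(int[] input) {
        int[] output = new int[input.length];
        int size = 0;                          // number of items currently in output
        for (int item : input) {
            int pos = size;
            // Shift larger output items right until we find item's spot.
            while (pos > 0 && output[pos - 1] > item) {
                output[pos] = output[pos - 1];
                pos -= 1;
            }
            output[pos] = item;
            size += 1;
        }
        return output;
    }

    public static void main(String[] args) {
        int[] result = sort(new int[]{32, 15, 2, 17, 19, 26, 41, 17, 17});
        System.out.println(java.util.Arrays.toString(result));
    }
}
```

Note that this version needs Θ(N) extra memory for the output array, which the in-place version below avoids.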
In-place Insertion Sort
We can improve the time and space complexity of insertion sort by using in-place swaps instead of building a separate output array.
You can see a demo of this algorithm here.
Essentially, we move from left to right in the array, selecting one item at a time. We then swap that item toward the front, one position at a time, until the item to its left is no larger. The front of our array gradually becomes sorted until the entire array is sorted.
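A minimal in-place sketch in Java, following the same left-to-right, swap-to-the-front idea (the class and method names are assumptions for illustration):

```java
// A minimal sketch of in-place insertion sort using swaps on an int[] array.
public class InsertionSort {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i += 1) {
            // Swap a[j] toward the front until its left neighbor is no larger.
            for (int j = i; j > 0 && a[j] < a[j - 1]; j -= 1) {
                int tmp = a[j];
                a[j] = a[j - 1];
                a[j - 1] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {32, 15, 2, 17, 19, 26, 41, 17, 17};
        sort(a);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```

Each swap moves the selected item one position left, so the inner loop stops as soon as the prefix to the item's left is in order.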
Insertion Sort Runtime
In the best case (an already-sorted array), no swaps are needed and the runtime is Θ(N). In the worst case (a reverse-sorted array), every pair of items is an inversion, and since each swap fixes exactly one inversion, the runtime is Θ(N²).
Insertion Sort Advantages
Note that on sorted or almost-sorted arrays, insertion sort does very little work. In fact, the number of swaps that it does is equal to the number of inversions in the array.
A less obvious empirical fact is that insertion sort is extremely fast on small arrays, usually of roughly 15 items or fewer. The analysis of this is beyond the scope of the course, but the general idea is that divide-and-conquer algorithms (like heapsort and mergesort) spend too much time on the "dividing" phase, whereas insertion sort starts sorting immediately. In fact, the Java implementation of mergesort uses insertion sort when the split becomes smaller than 15 items.
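To illustrate the idea (this is not the actual java.util.Arrays source), here is a hedged sketch of a hybrid mergesort that hands ranges below a small cutoff to insertion sort; the cutoff of 15 and all class, method, and variable names are assumptions for demonstration:

```java
import java.util.Arrays;

// A sketch of a hybrid mergesort that switches to insertion sort on small
// ranges. The cutoff of 15 and the helper names are illustrative only.
public class HybridMergeSort {
    private static final int CUTOFF = 15;

    public static void sort(int[] a) {
        mergeSort(a, 0, a.length);
    }

    // Sorts the half-open range a[lo, hi), using insertion sort once small.
    private static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo <= CUTOFF) {
            insertionSort(a, lo, hi);
            return;
        }
        int mid = lo + (hi - lo) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    // In-place insertion sort on a[lo, hi).
    private static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i += 1) {
            for (int j = i; j > lo && a[j] < a[j - 1]; j -= 1) {
                int tmp = a[j];
                a[j] = a[j - 1];
                a[j - 1] = tmp;
            }
        }
    }

    // Merges the sorted ranges a[lo, mid) and a[mid, hi) via a scratch copy.
    private static void merge(int[] a, int lo, int mid, int hi) {
        int[] copy = Arrays.copyOfRange(a, lo, hi);
        int left = 0;
        int right = mid - lo;
        for (int k = lo; k < hi; k += 1) {
            if (left >= mid - lo) {
                a[k] = copy[right++];
            } else if (right >= hi - lo) {
                a[k] = copy[left++];
            } else if (copy[right] < copy[left]) {
                a[k] = copy[right++];
            } else {
                a[k] = copy[left++];
            }
        }
    }

    public static void main(String[] args) {
        int[] a = new int[40];
        for (int i = 0; i < a.length; i += 1) {
            a[i] = a.length - i;   // 40, 39, ..., 1 (reverse sorted)
        }
        sort(a);
        System.out.println(Arrays.toString(a));
    }
}
```

The design point is that recursion and merging have fixed overhead per call, so once a range is tiny, sorting it directly with insertion sort is cheaper than dividing it further.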