We'll illustrate the process of average-case analysis by working through the ordered insertion algorithm.
We start, as usual, by marking the simple bits O(1).
Next we note that the loop body can be reduced to O(1).
The complexity of the loop condition and body does not vary from one iteration to another.
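As a concrete reference, here is a minimal sketch of what such an ordered insertion routine might look like. The function name and signature are my assumptions, not taken from the original code; the point is simply that the loop condition and body are each O(1).

```python
def ordered_insert(arr, x):
    """Insert x into the ascending-sorted list arr, keeping it sorted.

    A sketch for illustration; the real routine in the text may differ.
    """
    arr.append(x)                     # grow the array by one slot: O(1)
    i = len(arr) - 1
    while i > 0 and arr[i - 1] > x:   # loop condition: O(1) per test
        arr[i] = arr[i - 1]           # loop body: shift one element, O(1)
        i -= 1
    arr[i] = x


data = [2, 5, 9]
ordered_insert(data, 7)
print(data)   # [2, 5, 7, 9]
```

The work that varies is the number of times the shifting loop runs, which is what the rest of the analysis focuses on.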
The loop can execute 0 times (if x is larger than anything already in the array), 1 time (if x is larger than all but one element already in the array), and so on, up to a maximum of n times (if x is smaller than everything already in the array).
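One way to check those iteration counts is to instrument the loop with a counter. This helper is hypothetical (not from the original code), but it makes the 0-to-n range concrete:

```python
def insert_count(arr, x):
    """Ordered insertion of x into sorted list arr; returns how many
    times the shifting loop ran (a hypothetical instrumented variant).
    """
    arr.append(x)
    i, iterations = len(arr) - 1, 0
    while i > 0 and arr[i - 1] > x:
        arr[i] = arr[i - 1]
        i -= 1
        iterations += 1
    arr[i] = x
    return iterations


print(insert_count([1, 2, 3], 10))   # 0: x is larger than everything
print(insert_count([5, 6, 7], 0))    # 3: x is smaller than everything
```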
What we don't know are the probabilities to associate with these different numbers of iterations.
That depends upon the way the successive inputs to this function are distributed.
When I used this algorithm as part of a spell-checking program, I saw two different examples of possible input patterns:
In some cases, we were getting elements that were already in sorted order (e.g., reading from a sorted dictionary file).
In other cases, successive values of x arrived in essentially random order (e.g., adding words from a document to a "concordance", a set of all words known to have appeared in the document).
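Those two input patterns lead to very different average iteration counts, and a quick simulation makes the contrast visible. The helper names here are illustrative, not from the original code; the instrumented insertion returns the loop's iteration count as above:

```python
import random

def insert_count(arr, x):
    """Ordered insertion; returns the shifting loop's iteration count."""
    arr.append(x)
    i, iterations = len(arr) - 1, 0
    while i > 0 and arr[i - 1] > x:
        arr[i] = arr[i - 1]
        i -= 1
        iterations += 1
    arr[i] = x
    return iterations

def average_iterations(values):
    """Average loop iterations over a whole sequence of insertions."""
    arr, total = [], 0
    for x in values:
        total += insert_count(arr, x)
    return total / len(values)

random.seed(1)
n = 1000
print(average_iterations(range(n)))                    # sorted input: 0.0
print(average_iterations(random.sample(range(n), n)))  # random input: roughly n/4
```

With already-sorted input each new x is the largest so far, so the loop never runs; with a random permutation each x lands, on average, about halfway back, giving an average near n/4 shifts per insertion.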