Tree traversal with static data

I would like to get into the meat of algorithm books, but malloc() and structs loom. If you initialize an array to represent a tree, you can fake the rest.

tree:               table:
         1          node  child1 child2
       /   \        1     2      3
      2     3       2     4      5
     / \   /        3     6
    4  5  6         4
                    5
                    6

For investigating techniques, this seems doable. The (gist) demonstrates pre-order, in-order, and post-order traversal of this tree; a sketch of the idea follows. My assumption is that stacks and queues can be expressed even more easily with two position markers: an insert_at variable and a fetch_from variable.
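
Here is a minimal sketch of the static-table approach (not the gist itself): the child columns above become two arrays indexed by node label, and pre-order traversal falls out of the recursion. The names child1, child2, and preorder are mine.

#include <stdio.h>

/* The tree above as a static table: the row index is the node label,
   and 0 means "no child". Index 0 is unused so labels match the figure. */
static const int child1[7] = {0, 2, 4, 6, 0, 0, 0};
static const int child2[7] = {0, 3, 5, 0, 0, 0, 0};

/* Pre-order: visit the node, then the left subtree, then the right. */
static void preorder(int node)
{
    if (node == 0)
        return;
    printf("%d ", node);
    preorder(child1[node]);
    preorder(child2[node]);
}

int main(void)
{
    preorder(1);   /* prints: 1 2 4 5 3 6 */
    printf("\n");
    return 0;
}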

If the world state is sufficiently large to represent all possible state transforms, there’s little reason to write a dynamic allocation scheme for a data structure, at least for initial learning purposes.

Counting words

Early on in LaValle’s “Planning Algorithms” (2006), there is a formulation of finite state machines for the case when the state space X is finite. I’ve been drawing circles with arrows since the beginning for this word-counting algorithm in K&R C, and I could never figure out the systematic approach behind their solution:

...
if (blank)
    state = OUT;
else if (state == OUT) {
    state = IN;
    word++;
}
...

Then this book inspired me to describe the word counting as in the explanation in Chapter 2: the state space is X = {IN, OUT} and the action space is U = {letter, blank}. So, given some input of a letter or blank (newline, tab, or space), we have a transition function

x' = f(u, x)

The table of state transitions is then shown to be

f(letter, IN) = IN
f(letter, OUT) = IN
f(blank, IN) = OUT
f(blank, OUT) = OUT

The first conditional is then if (u == blank), which is described by the latter two state transitions: a blank character means we’re out of a word, regardless of whether we were in or out of one before.

The next conditional distinguishes whether we are transitioning into the first letter of a word from a blank, or transitioning within a word, letter to letter. Between the first two transition functions, the conditional if (state == OUT) is the only one that lets us count words without also counting the letters inside each word.
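
Putting it together, here is the whole loop in the K&R style, with each branch annotated by the transitions it implements (the variable names are mine):

#include <stdio.h>

#define OUT 0   /* outside a word */
#define IN  1   /* inside a word */

int main(void)
{
    int c, state = OUT, words = 0;

    while ((c = getchar()) != EOF) {
        if (c == ' ' || c == '\n' || c == '\t')
            state = OUT;        /* f(blank, IN) = f(blank, OUT) = OUT */
        else if (state == OUT) {
            state = IN;         /* f(letter, OUT) = IN: count a new word */
            words++;
        }                       /* f(letter, IN) = IN: nothing to do */
    }
    printf("%d\n", words);
    return 0;
}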

This is the best explanation I have for the word-counting algorithm. I like it very much.

The increment picture

TL;DR: when the loop exits, i = n. The claim remains true even after the final increment, when i equals n and the body is not executed.

Maybe the definition of madness is explaining the same concept over and over.

I neglected an operation that participates in the framing of the loop invariant: the incrementing of the loop variable. It tends to be mentioned only in passing; I place it here to make it explicit. Here is code to find the maximum of a 0-based array:

currentMax <- A[0]
for i = 1; i < n; i++
  if currentMax < A[i] then
    currentMax <- A[i]

The claim is that at the beginning of iteration i, the variable currentMax holds the maximum value of the first i elements of the array A. The author goes on to say “if this claim is true, it will be true after it for i+1.” What does that mean?

(I noticed we don’t really look at the logic of the loop body; for our purposes, we are concerned with instruction counts versus correctness.)

It’s important for the claim to remain true after the increment to i+1, because when i = n the loop ends and the body is never executed. Our loop’s last moments:

at the beginning of the (n-1)th iteration,
  currentMax is the maximum value of the first (n-1) elements of A
  (i = i + 1)
at the beginning of the nth iteration, 
  currentMax is the maximum value of the first n elements of A
  (i < n is false because i = n, so exit)

Why does it “remain true”? I took the quote out of context; it should be “remains true after [it] for i+1.” Substituting for the “for” (because it is a loop keyword),

… remains true after [it] when i is incremented by one.

Which pieces of the claim remain true?

before the i-th iteration    TRUE
during the body              TRUE
after the body               TRUE
incrementing i               TRUE
before the (i+1)th iteration (TRUE)
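
Here is a small C sketch, with my own checking loop bolted on, that makes the increment visible: the claim is asserted at the top of every iteration, including one final time when i has reached n and the body no longer runs.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    int A[] = {3, 1, 4, 1, 5, 9, 2, 6};
    int n = 8;
    int currentMax = A[0];
    int i;

    for (i = 1; ; i++) {
        /* claim: currentMax is the maximum of the first i elements */
        for (int k = 0; k < i; k++)
            assert(currentMax >= A[k]);
        if (!(i < n))               /* exit check: i = n, claim covers all of A */
            break;
        if (currentMax < A[i])
            currentMax = A[i];
    }
    printf("%d\n", currentMax);     /* prints 9 */
    return 0;
}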

I forget what the loop invariant is for: is it to demonstrate that the algorithm is correct? Is it to conceptualize its instructions as discrete operations? Seeing that the next step is to total them into the function f(n) that determines the growth rate, it is less debugging and more… something else. I will have to read some more.

Diagnosing mergesort

The partitioning calculation can be printed first, giving the result of q = (p+r)/2 for each division of the array into its sub-arrays:

q = floor((p+r)/2)

q is 0 when p = 0 and r = 1, because integer division truncates the fraction 1/2; then you’re down to a single element, and the function returns because p >= r.

The second place to print is during assignment of the left and right subarrays; this shows you the merge order. For terminating single elements, L and R are one-element arrays (not counting the sentinel value). These get “merged” because L and R get larger toward the end:

[figure: mergesort trace output]

“Larger toward the end?” Wat
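
Here is a sketch of the instrumented version I mean, following CLRS’s MERGE-SORT(A, p, r) with sentinels; the print sites and output format are my own:

#include <stdio.h>
#include <limits.h>

/* CLRS-style merge with INT_MAX sentinels. Printing here is the
   second print site: it shows the merge order. */
static void merge(int A[], int p, int q, int r)
{
    int n1 = q - p + 1, n2 = r - q;
    int L[n1 + 1], R[n2 + 1];          /* C99 VLAs, with room for sentinels */
    for (int i = 0; i < n1; i++) L[i] = A[p + i];
    for (int j = 0; j < n2; j++) R[j] = A[q + 1 + j];
    L[n1] = INT_MAX;
    R[n2] = INT_MAX;

    printf("merge A[%d..%d] with A[%d..%d]\n", p, q, q + 1, r);

    int i = 0, j = 0;
    for (int k = p; k <= r; k++)
        A[k] = (L[i] <= R[j]) ? L[i++] : R[j++];
}

static void merge_sort(int A[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;           /* first print site: the division */
        printf("split p=%d q=%d r=%d\n", p, q, r);
        merge_sort(A, p, q);
        merge_sort(A, q + 1, r);
        merge(A, p, q, r);
    }
}

int main(void)
{
    int A[] = {5, 2, 4, 7, 1, 3, 2, 6};
    merge_sort(A, 0, 7);
    for (int i = 0; i < 8; i++)
        printf("%d ", A[i]);
    printf("\n");
    return 0;
}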

How do we get a logarithmic factor in the running time?

Merge sort divides an array of size n into two subarrays of size n/2 by calculating the middle index. It does this recursively until each partition is a single element. Then the algorithm merges the pieces back together by repeatedly selecting the smaller number from the tops of the left and right piles.

Describing merge sort is like drawing an upside-down tree: at the root you have the initial running time of the function, T(n), and the leaves are single elements: an element by itself is already sorted! Sorting a single element takes constant time; there is no real processing to be done. So you have n elements which each take c time, for a width of cn at each level.

Then you show the tree has (lg n + 1) levels, by counting the levels of a tree built from 2^i elements and proving the inductive hypothesis, etc., and you get T(n) as an area: a width of cn times a height of (lg n + 1), or cn lg n + cn. (For n = 8, that’s lg 8 + 1 = 4 levels.)

Logarithmic algorithms take less time than linear ones, so why does the worst case carry the “faster” logarithmic factor? The more dependent factor of a recursive solution may be the number of divisions needed. Division time D(n) is a quick calculation of q = floor((p+r)/2) over the elements from p to r, and merge time C(n) takes n steps (linear) to compare two piles, each holding n/2, or half, of the numbers.

I thought I could explain it, but I will have to go back and reread it. It is a fun read.

How do I remember floor and ceiling?

Imagine a vertical number line and put x at a point in the middle. Then the nearest integers above and below x are the “caps” for x: ceiling and floor, respectively.

^                ^
|                |
+ ceil(x)        + 6
|                |
+ x              . 5.7
|                |
+ floor(x)       + 5
|                |
v                v

This makes more sense with real numbers (decimals), when x is desired to be an integer. For example, merge sort partitions the array, and we need integers for the indices. You can’t take the 5.5th index of an array, although you might get 5.5 when halving an 11-element array.
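
In C this falls out of integer division, which truncates (and truncation is floor for non-negative values); a quick sketch:

#include <stdio.h>

int main(void)
{
    int n = 11;                                 /* 11/2 = 5.5 on paper */
    printf("floor(11/2) = %d\n", n / 2);        /* 5: truncation floors it */
    printf("ceil(11/2)  = %d\n", (n + 1) / 2);  /* 6: the usual add-one trick */
    return 0;
}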

How is insertion sort a quadratic function?

In the worst case, insertion sort takes on the characteristic of a quadratic function, which is to say there is a squared term in the total runtime T(n). First, let’s determine the cost of each instruction and the number of times it executes. Some notes:

  • a 1-based array
INSERTION-SORT(A)            cost  times
for j <- 2 to length[A]      c1    n
  do key <- A[j]             c2    (n-1)
    i <- j-1                 c3    (n-1)
    // a comment             0     (n-1)
    while i>0 and A[i]>key   c5    sum(j=2..n) t_j
      do A[i+1] <- A[i]      c6    sum(j=2..n) (t_j - 1)
        i <- i-1             c7    sum(j=2..n) (t_j - 1)
    A[i+1] <- key            c8    (n-1)

Each loop runs one more time as part of its “loop check” to exit: this for loop runs from 2 to n, which is (n-1) body executions plus one additional check, for n times total. Most loops I write run from 1 or 0 to n or (n-1); those would check n times plus one, or (n+1) times.

The sum(j=2..n) notation is a summation from 2 to n, where t_j is the number of times the check runs for that j: in the worst case, when j=2, t_j=2; when j=3, t_j=3; and so on. This count includes the extra evaluation of the i>0 check that exits the loop. When j=5, i initializes to 4 and then takes on the values 3, 2, 1, and 0: the zero is the failing loop check.

If t_j is the number of times the loop check runs, the instructions within the loop run one time fewer, or (t_j - 1); that’s why the while body runs (t_j - 1) times.

The constants c1 through c8 represent a fixed time to execute each instruction. We multiply each constant by its number of times to get a term describing that instruction’s subtotal cost; T(n) is then the sum of all these subtotals.

The shortcut for the summation sum(j=2..n) j, the worst case where t_j = j, is n(n+1)/2 - 1, which yields the n-squared term. So in the worst case, a list of numbers in reverse order, insertion sort takes time quadratic in the size of the input.
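
A quick empirical check, with my own instrumentation: sort a reversed array and count every evaluation of the while condition; the count should land exactly on n(n+1)/2 - 1.

#include <stdio.h>

/* Insertion sort (a 0-based translation of the 1-based pseudocode
   above), counting every evaluation of the while check, including
   the final failing one per outer iteration. */
static long sort_and_count(int A[], int n)
{
    long checks = 0;
    for (int j = 1; j < n; j++) {
        int key = A[j];
        int i = j - 1;
        for (;;) {
            checks++;                  /* one evaluation of the check */
            if (!(i >= 0 && A[i] > key))
                break;
            A[i + 1] = A[i];
            i--;
        }
        A[i + 1] = key;
    }
    return checks;
}

int main(void)
{
    int n = 8, A[8];
    for (int i = 0; i < n; i++)
        A[i] = n - i;                  /* reversed: the worst case */
    printf("checks = %ld, n(n+1)/2 - 1 = %d\n",
           sort_and_count(A, n), n * (n + 1) / 2 - 1);
    return 0;
}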

Why is the inner loop a summation?

The inner loop’s time is written as a summation instead of just n or (n-1) like the outer loop’s. That’s because the inner loop runs a different number of times for each outer iteration:

j    i    values of i
--   --   ------------
2    1    1, 0
3    2    2, 1, 0
4    3    3, 2, 1, 0
5    4    4, 3, 2, 1, 0
6    5    5, 4, 3, 2, 1, 0
...  ...  ...
n    n-1  n-1, ..., 1, 0

The total time is a summation of 2 + 3 + 4 + … + n for the inner loop. The zero value is the check that exits the loop. The outer for loop has a similar exit check, which is why its check runs n times while its body instructions run (n-1) times.

INSERTION-SORT(A)
for j <- 2 to length[A]
  key <- A[j]
  i <- j - 1
  while i > 0 and A[i] > key
    A[i+1] <- A[i]
    i <- i - 1
  A[i+1] <- key
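
For reference, a throwaway loop of my own that reprints the worst-case table of i values (each printed i is one evaluation of the while check):

#include <stdio.h>

int main(void)
{
    int n = 6;
    for (int j = 2; j <= n; j++) {
        printf("j=%d: i = ", j);
        for (int i = j - 1; i >= 0; i--)
            printf("%d ", i);          /* 0 is where the check fails */
        printf("\n");
    }
    return 0;
}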