How do we get a logarithmic factor in the running time?

Merge sort divides an array of size n into two halves of size n/2 by calculating the middle index. It does this recursively until each partition is a single element. Then the algorithm merges the pieces back together by repeatedly selecting the smaller of the front numbers from the left and right piles.
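Here is a minimal sketch of that process in Python (my own illustration, not the book's pseudocode; the names merge_sort and merge are just placeholders):

```python
def merge_sort(items):
    """Sort a list by splitting at the middle index and merging the sorted halves."""
    if len(items) <= 1:
        return items  # a single element is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted piles by repeatedly taking the smaller front element."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one pile ran out; copy the rest of the other
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```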

Describing merge sort is like drawing an upside-down tree: at the root you have the running time of the whole function, T(n), and the leaves are single elements: an element by itself is already sorted! Sorting a single element takes constant time; there is no real processing to be done. So at the bottom level you have n elements that each take c time, for a width of c*n, and in fact every level of the tree does c*n total work, because the merges at any level touch all n elements between them.
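A tiny script makes that width argument concrete (level_sizes is my own name, just for illustration): the subarray sizes at each level of the tree always add back up to n.

```python
def level_sizes(n):
    """Print the subarray sizes at each level of the recursion tree and their sum."""
    level = [n]
    while max(level) > 1:
        print(level, "-> total:", sum(level))
        level = [size for m in level for size in (m // 2, m - m // 2)]
    print(level, "-> total:", sum(level))

level_sizes(8)
# [8] -> total: 8
# [4, 4] -> total: 8
# [2, 2, 2, 2] -> total: 8
# [1, 1, 1, 1, 1, 1, 1, 1] -> total: 8
```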

Then you show the tree has (lg n + 1) levels: for n = 2^i elements you can prove by induction that there are i + 1 = lg n + 1 levels, since a single-element tree has exactly one level and doubling n adds one more. Multiplying the cost per level by the number of levels gives T(n) as an area equal to cn * (lg n + 1) = cn lg n + cn, which is Θ(n lg n).
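As a quick sanity check (my own, with c = 1), unrolling the recurrence T(n) = 2T(n/2) + n with T(1) = 1 does land exactly on the "area" formula n * (lg n + 1) for powers of two:

```python
from math import log2

def T(n):
    """Unroll the merge sort recurrence with unit cost c = 1."""
    return 1 if n == 1 else 2 * T(n // 2) + n

for n in [1, 2, 4, 8, 16, 1024]:
    print(n, T(n), n * (log2(n) + 1))
# 1 1 1.0
# 2 4 4.0
# 4 12 12.0
# 8 32 32.0
# 16 80 80.0
# 1024 11264 11264.0
```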

Logarithmic algorithms take less time than linear ones, so why does even the worst case end up with the "faster" running time? The deciding factor in a recursive solution is the number of divisions needed and what each division and merge costs. Divide time D(n) is a quick, constant-time calculation of the middle index q = floor((p + r) / 2) of a subarray running from p to r, while merge time C(n) takes about n steps (linear) to compare two piles of n/2 numbers each, half the numbers apiece.
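To see that the merge is what costs linear time, here is a rough step counter over the merge loop (merge_steps is just my illustration, assuming two already-sorted piles): the divide is a single index calculation, but the merge touches every element.

```python
def merge_steps(n):
    """Count the steps needed to merge two sorted piles of n/2 numbers each."""
    left, right = list(range(0, n, 2)), list(range(1, n, 2))  # two sorted piles
    steps = 0
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        steps += 1  # one comparison per element placed
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    steps += (len(left) - i) + (len(right) - j)  # copying the leftover pile
    return steps

for n in [8, 16, 1024]:
    print(n, merge_steps(n))  # step count grows linearly with n
# 8 8
# 16 16
# 1024 1024
```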

I thought I could explain it, but I will have to go back and reread it. It is a fun read.