A programmer's notes, about everything and nothing. But mostly about the profession, I suppose.

2016-03-11

Algorithms, Part I. Princeton University, Robert Sedgewick

Another six weeks of study have passed.
I've finished one of the must-take programming courses:
Algorithms, Part I. Princeton University, Robert Sedgewick

As the epigraph says:
essential information that
every serious programmer
needs to know about
algorithms and data structures

Over these 6 weeks, the material of 3 chapters was covered:

Chapter 1: Fundamentals introduces a scientific and engineering basis for comparing algorithms and making predictions. It also includes our programming model.
Chapter 2: Sorting considers several classic sorting algorithms, including insertion sort, mergesort, and quicksort. It also includes a binary heap implementation of a priority queue.
Chapter 3: Searching describes several classic symbol table implementations, including binary search trees, red-black trees, and hash tables.

Namely:

Week 1

Lecture: Union-Find.
We illustrate our basic approach to developing and analyzing algorithms by considering the dynamic connectivity problem.
We introduce the union-find data type and consider several implementations
(quick find, quick union, weighted quick union, and weighted quick union with path compression).
Finally, we apply the union-find data type to the percolation problem from physical chemistry.

Lecture: Analysis of Algorithms.
The basis of our approach for analyzing the performance of algorithms is the scientific method.
We begin by performing computational experiments to measure the running times of our programs.
We use these measurements to develop hypotheses about performance.
Next, we create mathematical models to explain their behavior, and validate the models by further experiments.
Finally, we consider analyzing the memory usage of our Java programs.

Week 2

Lecture: Stacks and Queues.
We consider two fundamental data types for storing collections of objects: the stack and the queue.
We implement each using either a singly-linked list or a resizing array.
We introduce two advanced Java features—generics and iterators—that simplify client code.
Finally, we consider various applications of stacks and queues ranging from
parsing arithmetic expressions to simulating queueing systems.

Lecture: Elementary Sorts.
We introduce the sorting problem and Java's Comparable interface.
We study two elementary sorting methods (selection sort and insertion sort) and a variation of one of them (shellsort).
We also consider two algorithms for uniformly shuffling an array.
We conclude with an application of sorting to computing the convex hull via the Graham scan algorithm.

Week 3

Lecture: Mergesort.
We study the mergesort algorithm and show that it guarantees to sort any array of N items with at most NlgN compares.
We also consider a nonrecursive, bottom-up version.
We prove that any compare-based sorting algorithm must make at least ∼NlgN compares in the worst case.
We discuss using different orderings for the objects that we are sorting and the related concept of stability.

Lecture: Quicksort.
We introduce and implement the randomized quicksort algorithm and analyze its performance.
We also consider randomized quickselect, a quicksort variant which finds the kth smallest item in linear time.
Finally, we consider 3-way quicksort, a variant of quicksort that works especially well in the presence of duplicate keys.

Week 4

Lecture: Priority Queues.
We introduce the priority queue data type and an efficient implementation using the binary heap data structure.
This implementation also leads to an efficient sorting algorithm known as heapsort.
We conclude with an application of priority queues where we simulate
the motion of N particles subject to the laws of elastic collision.

Lecture: Elementary Symbol Tables.
We define an API for symbol tables (also known as associative arrays)
and describe two elementary implementations using a sorted array (binary search) and an
unordered list (sequential search).
When the keys are Comparable, we define an extended API that includes the additional methods
min, max, floor, ceiling, rank, and select.
To develop an efficient implementation of this API, we study the binary search tree data structure and analyze its performance.

Week 5

Lecture: Balanced Search Trees.
In this lecture, our goal is to develop a symbol table with guaranteed logarithmic performance
for search and insert (and many other operations).
We begin with 2-3 trees, which are easy to analyze but hard to implement.
Next, we consider red-black binary search trees, which we view as a novel way to implement 2-3 trees as binary search trees.
Finally, we introduce B-trees, a generalization of 2-3 trees that are widely used to implement file systems.

Lecture: Geometric Applications of BSTs.
We start with 1d and 2d range searching, where the goal is to find all points in a given 1d or 2d interval.
To accomplish this, we consider kd-trees, a natural generalization of BSTs when the keys are points in the plane (or higher dimensions).
We also consider intersection problems, where the goal is to find all intersections among a set of line segments or rectangles.

Week 6

Lecture: Hash Tables.
We begin by describing the desirable properties of a hash function and how to implement them in Java,
including a fundamental tenet known as the uniform hashing assumption that underlies the potential success of a hashing application.
Then, we consider two strategies for implementing hash tables—separate chaining and linear probing.
Both strategies yield constant-time performance for search and insert under the uniform hashing assumption.
We conclude with applications of symbol tables including
sets, dictionary clients, indexing clients, and sparse vectors.

Digest


Union-Find


Dynamic connectivity problem: given a set of N objects
– connect 2 objects (their sets, actually)
– check whether 2 objects/sets are connected
– API: union(x, y); connected(x, y)
– x, y are integers in 0..N-1

'is connected to' is an equivalence relation:
– reflexive: p is connected to p
– symmetric: if p is connected to q, then q is connected to p
– transitive: if p is connected to q and q is connected to r, then p is connected to r

Connected Components
– maximal set of objects that are mutually connected
– find query: return the CC id of a given object
– connected query: check whether 2 objects are in the same component
– union command: replace 2 components with their union

Quick Find (slow union) eager implementation of UF
– id[]: array of components id, length N
– p and q are connected iff (if and only if) they have the same id: id[p] == id[q]
– union(p, q): time O(N). pid = id[p], qid = id[q]; for each i: if id[i] == pid: id[i] = qid
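A minimal quick-find sketch (my code, following the algs4 API names):

public class QuickFindUF {
    private int[] id;    // id[i] = component id of object i

    public QuickFindUF(int N) {
        id = new int[N];
        for (int i = 0; i < N; i++) id[i] = i;   // each object starts in its own component
    }

    public boolean connected(int p, int q) { return id[p] == id[q]; }

    public void union(int p, int q) {            // O(N): relabel all of p's component
        int pid = id[p], qid = id[q];
        for (int i = 0; i < id.length; i++)
            if (id[i] == pid) id[i] = qid;
    }
}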

Quick Union (not very quick) lazy implementation of UF
– tree structure
– id[]: array of parent ids, length N
– id[p] = n, n is parent of p; id[n] is parent of n, ...
– if id[p] == p: p is a root of CC
– 'connected': check if p and q have the same root: root(p) == root(q)
– 'union': set new root for p: id[root(p)] = root(q)
– root(n): while n != id[n]: n = id[n]; return n
– 'root' time O(N)

QU improvements
Quick-Union weighted:
– keep track of size of each tree (extra array sz[])
– link the root of the smaller tree to the root of the larger tree (a node's depth grows only when its tree size at least doubles => lg N depth)
– invariant: parent tree size >= 2 * child tree size
– 'union' improved: set new root for smaller tree:
if sz[root(p)] < sz[root(q)]: id[root(p)] = root(q), sz[root(q)] += sz[root(p)]
– root time O(lg N)
Quick-Union path compression:
– update/shift/flatten parent while finding root
– 'root' improved: while n != id[n]: id[n] = id[id[n]], n = id[n]; return n
Max height of a weighted quick-union with path compression forest with N sites ~ lg N
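Putting the two improvements together, a weighted quick-union with path compression sketch (my code, same API; path compression here is the one-pass halving variant):

public class WeightedQuickUnionUF {
    private int[] id;    // id[i] = parent of i
    private int[] sz;    // sz[i] = size of the tree rooted at i

    public WeightedQuickUnionUF(int N) {
        id = new int[N]; sz = new int[N];
        for (int i = 0; i < N; i++) { id[i] = i; sz[i] = 1; }
    }

    private int root(int i) {
        while (i != id[i]) {
            id[i] = id[id[i]];   // path compression: point to grandparent, halving the path
            i = id[i];
        }
        return i;
    }

    public boolean connected(int p, int q) { return root(p) == root(q); }

    public void union(int p, int q) {
        int i = root(p), j = root(q);
        if (i == j) return;
        if (sz[i] < sz[j]) { id[i] = j; sz[j] += sz[i]; }   // link smaller tree below larger
        else               { id[j] = i; sz[i] += sz[j]; }
    }
}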

QU WPC / WQUPC analysis, for N objects, M ops
– time O(N + M lg* N)
– lg* N: iterated log function => number of times you have to take lg to get '1'; in practice < 5
– linear time in practice, almost
– simple algorithm but way too complex analysis

Union-Find applications
– percolation: open/blocked cells in grid
percolates iff top and bottom are connected by open cells
– Kruskal's minimum spanning tree:
sort edges by weight, ascending; add edge to a tree unless that creates a cycle (both vertices in the same set)

Analysis of Algorithms


Reasons to analyze algorithms
– predict performance
– compare algorithms
– provide guarantees
Practical reason: avoid performance bugs

Knuth, use scientific method to understand performance
– observe some feature of the natural world
– hypothesize a model that is consistent with the observations
– predict events using the hypothesis
– verify the predictions by making further observations
– validate by repeating until the hypothesis and observations agree
– experiments must be reproducible
– hypotheses must be falsifiable (experiment can show that hypothesis is wrong)

3-Sum example
– brute force:
for i = 0..N: for j = i+1..N: for k = j+1..N: test a[i] + a[j] + a[k]
– run time 1/6 N^3

Empirical analysis, power law a*N^b
– Observations: s = new Stopwatch; ThreeSum; s.elapsedTime
– measure running time for various input sizes
– double input each time
– log-log plot: lg(N) vs lg(T(N))
– y = b*x + c, run regression for detecting b and c (assume power law: a*N^b)
– T(N) = 2^c * N^b: lg(T(N)) = b*lg N + c
– Hypothesis: running time ~ a N^b
– Predictions: n seconds for N = x, m seconds for N = y
– Observations: predictions verified
– model enables us to make predictions
– need to validate mathematical models

Doubling hypothesis
– quick way to estimate b in a power-law relationship
– run program, doubling the size of the input
ratio is: time(2N) / time(N)
b = lg ratio
– running time = aN^b
– a = time / N^b
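A sketch of a doubling-ratio experiment (my code; brute-force 3-sum stands in as the workload, so the ratio should approach 2^3 = 8):

public class DoublingRatio {
    static double timeTrial(int N) {               // seconds for one run on a random input
        int[] a = new int[N];
        for (int i = 0; i < N; i++)
            a[i] = (int) (2000000 * Math.random()) - 1000000;
        long start = System.currentTimeMillis();
        int cnt = 0;                               // brute-force 3-sum
        for (int i = 0; i < N; i++)
            for (int j = i+1; j < N; j++)
                for (int k = j+1; k < N; k++)
                    if (a[i] + a[j] + a[k] == 0) cnt++;
        return (System.currentTimeMillis() - start) / 1000.0;
    }

    public static void main(String[] args) {
        double prev = timeTrial(125);
        for (int N = 250; true; N += N) {          // double the input size each round
            double time = timeTrial(N);
            // ratio -> 2^b, so b = lg(ratio); here expect ratio -> 8, b -> 3
            System.out.printf("%7d %7.1f %5.1f%n", N, time, time / prev);
            prev = time;
        }
    }
}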

Experimental algorithmics
– system independent: algorithm, input data => exponent b in power law
– system dependent: hardware (CPU, RAM, cache, …), software (compiler, OS, …) => constant a in power law

Mathematical models
– total running time = sum of cost * frequency for all operations
– cost depends on machine, compiler
– frequency depends on algorithm, input data
– analysing program determine set of operations
– enables us to explain behavior
2-Sum example
– for i in 0..N: for j in i+1..N: if a[i] + a[j] == 0: cnt++;
– compare frequency: 0 + 1 + 2 + … + (N-1) = ½ N(N-1) ~ ½ N^2
– ½ N^2 ~ N choose 2
– variable declarations, assignments, array access, increment, … too many ops
– count only the most frequent/most expensive ops (e.g. array accesses, multiplications, …)

Cost model
– use some basic ops as a proxy for running time
– most often or/and most expensive
– for 2-Sum: array accesses = N(N-1)
– use tilde notation: approximation
estimate running time/memory as a function of input size N
ignore lower order terms
– technical definition: f(N) ~ g(N) means lim as N→∞ of f(N)/g(N) = 1
– for 2-Sum: array access ~ N^2
– for 3-Sum: array accesses ~ ½ N^3 (N choose 3 ~ 1/6 N^3 triples, 3 accesses each)

Math.model bottom line
– in principle, accurate math.models are available
– in practice, exact models best left for experts
– we use approximate models T(N) ~ cN^b

Order-of-growth
– small set of functions: 1: constant, logN: logarithmic, N: linear, NlogN: linearithmic, N^2, N^3, 2^N
– make sure your algorithm is better than N^2
Binary search: logarithmic
Determine how many keys in a sorted array are equal to a key n: logarithmic time, two binary searches
– trivial? it took ~20 years to get it bug-free:
lo = 0, hi = len-1; while lo <= hi: mid = lo + (hi-lo)/2;
if key < a[mid] hi = mid-1; elif key > a[mid] lo = mid+1; else return mid;
– invariant: key between lo, hi if exists
– math.analysis: T(N) <= T(N/2) + 1
and telescoping, solve recurrence: T(N) <= T(N/4) + 1 + 1 …
= 1 + lgN
3-Sum: N^2 lg N algorithm, quadratic
– sort array; for each pair do binary search for a[k] == -1 * (a[i] + a[j])
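A sketch of that N^2 lg N 3-sum (my code; assumes distinct keys, so Arrays.binarySearch returns the unique index):

import java.util.Arrays;

public class ThreeSumFast {
    public static int count(int[] a) {
        Arrays.sort(a);                            // N lg N
        int N = a.length, cnt = 0;
        for (int i = 0; i < N; i++)                // ~ N^2/2 pairs, one lg N search each
            for (int j = i+1; j < N; j++)
                if (Arrays.binarySearch(a, -(a[i] + a[j])) > j)
                    cnt++;                         // count each triple once: i < j < k
        return cnt;
    }
}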

Theory of algorithms, types of analyses
– best case: lower bound on cost, determined by easiest input
– worst case: upper bound on cost, determined by most difficult input
– average case: expected cost for random input
approaches:
– 1- design for the worst case;
– 2- randomize, depend on probabilistic guarantee
goals: establish 'difficulty' of a problem; develop 'optimal' algorithms
– approach: suppress details, analyze 'to within a constant factor'; focus on the worst case
– optimal algorithm: performance guarantee for any input (to within a constant factor);
no algorithm can provide a better performance guarantee

Theory of algorithms, notations
– Tilde: leading term, provides an approximate model
– Big Theta: asymptotic order of growth, classify algorithms
– Big Oh: Big Theta and smaller, develop upper bounds
– Big Omega: Big Theta and larger, develop lower bounds

ToA Example, 1-Sum
– upper bound, brute-force: O(N) running time
– lower bound: have to examine all N entries, optimal running time is Omega(N)
– Optimal algorithm: O(N) = Omega(N), Brute-force is optimal, running time Theta(N)

ToA Example, 3-Sum
– upper bound, brute force: O(N^3)
– upper bound, bin.search: O(N^2 lgN)
– lower bound: have to examine all N entries: Omega(N)
– open problems: optimal algorithm? subquadratic alg? quadratic lower bound?
– optimal algorithm somewhere between lower and upper bounds

Memory cost (JVM)
– Byte: 8 bits
– Megabyte (MB): 2^20 bytes
– Gigabyte (GB): 2^30 bytes
– 64-bit machine: 8 byte pointers
– boolean: 1 byte
– char: 2 bytes
– int, float: 4 bytes
– long, double: 8 bytes
– array: 24 bytes + N*itemSize
– object overhead: 16 bytes; total object size padded to a multiple of 8 bytes
– reference: 8 bytes

Stacks and Queues


Fundamental data types
collection of objects
– ops: insert, remove, iterate, isEmpty
– which item do we remove?
Stack: LIFO, most recently added
Queue: FIFO, least recently added

Consider (modularity, abstraction, isolation):
– client: program using ops defined in interface
– implementation: actual code implementing ops
– interface: description of data type, API

Stack
– ops: push, pop
linked-list: pointer to first node, insert/remove from front
– pop: item = first.item; first = first.next; return item
– push: save = first; first = new Node(item, save);
– time O(1)
array: predefined (or resizable) array of items
– pop/push: return arr[--count] / arr[count++] = item
– avoid loitering: after pop, set arr[count] = null

Stack, resizing arrays, doubling
– need to copy all items to a new array on resize
– ensure that array resizing happens infrequently
– repeated doubling:
push: if count == arr.size: resize(2*count)
– resize: copy = new array, for i in 0..count: copy[i] = arr[i]; arr = copy;
– amortized cost of adding to a stack N items: cumulative average, linear ~ 3N
– shrink array: halve size when array is one-quarter full
pop: if count == arr.size/4: resize(arr.size/2)
– invariant: array is between 25% and 100% full
– running time performance, order of growth: push/pop best = 1, worst = N, amortized = 1
– memory usage: ~ 8N when full, ~ 32N when one-quarter full.
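A minimal resizing-array stack sketch showing the doubling/halving policy (my code; the unchecked generic-array cast is the standard Java workaround discussed under Generics below):

public class ResizingArrayStack<Item> {
    private Item[] a = (Item[]) new Object[1];
    private int count = 0;

    public void push(Item item) {
        if (count == a.length) resize(2 * a.length);    // full: double
        a[count++] = item;
    }

    public Item pop() {
        Item item = a[--count];
        a[count] = null;                                // avoid loitering
        if (count > 0 && count == a.length/4) resize(a.length/2);   // quarter full: halve
        return item;
    }

    private void resize(int capacity) {
        Item[] copy = (Item[]) new Object[capacity];
        for (int i = 0; i < count; i++) copy[i] = a[i];
        a = copy;
    }
}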

resizing array vs. linked list
– linked list: constant time in the worst case; extra time, space
– resizing array: amortized constant time; less wasted space

Queue
– ops: enqueue, dequeue
– linked list: pointers to first and last nodes,
– add to the last, remove from the front
– dequeue: item = first.item; first = first.next; return item
– enqueue: save = last, last = new Node(item), save.next = last

Resizing arrays implementation (queue)
– add: if count == q.size: resize(2*count); q[tail++] = item, count++
– remove: item = q[head]; head++, count--; if count == q.size/4: resize(q.size/2)
update head, tail modulo q.size: if tail == q.size: tail = 0
resize: copy = new arr, for i in 0..count: copy[i] = q[(head + i) % q.size]

Generics
– an array of Objects? no, run-time casting is error-prone
– compile-time errors are better: generics: Stack<Apple> s = new Stack<>();
– but, generic array creation not allowed in Java: arrays are covariant, generics are not
Stack<Item> … s = (Item[]) new Object[capacity];
– autoboxing: for primitive data types: Stack<Integer> … s.push(17)

Iterators, java.lang.Iterable
– how do we iterate over a stack or a queue without exposing the internals?
– implements java.lang.Iterable
public class Queue<Item> implements Iterable<Item> {
    public Iterator<Item> iterator() { return new ArrayIterator(); }
    private class ArrayIterator implements Iterator<Item> {
        private int i = 0;
        public boolean hasNext() { return i < count; }
        public Item next() { return q[(head + i++) % q.length]; }
    }
}
for(String s: queue) ...
– bag API: add, size, iterator – based on stack or queue

Applications: Stack, Queue
– Java collections: bloated, poorly-designed API => use algs4 lib
– stack implements recursion
arithmetic expression evaluation, 2-stack (Dijkstra):
(1+((2+3) …
value: vals.push; operator: ops.push; '(': ignore;
')': op = ops.pop; vals.push(apply(op, vals.pop, vals.pop))
– if the operator occurs after the two values:
postfix or 'reverse Polish' notation: (1((2 3 + … => 1 2 3 + ...
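A sketch of the two-stack evaluation for fully parenthesized, space-separated expressions (my code; only '+' and '*' handled, java.util.Stack for brevity):

import java.util.Stack;

public class Evaluate {
    public static double eval(String expr) {
        Stack<String> ops  = new Stack<>();
        Stack<Double> vals = new Stack<>();
        for (String s : expr.split(" ")) {
            if (s.equals("(")) continue;                   // '(': ignore
            else if (s.equals("+") || s.equals("*")) ops.push(s);
            else if (s.equals(")")) {                      // ')': pop operator and operands, apply
                String op = ops.pop();
                double v = vals.pop();
                if (op.equals("+")) v = vals.pop() + v;
                else                v = vals.pop() * v;
                vals.push(v);
            }
            else vals.push(Double.parseDouble(s));         // token is a value
        }
        return vals.pop();
    }
    // eval("( 1 + ( ( 2 + 3 ) * ( 4 * 5 ) ) )") returns 101.0
}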

Elementary Sorts


Goal: sort any type of data
– Double[] a … Insertion.sort(a) …
– String[] a … Insertion.sort(a) …
the sort function calls back the object's compareTo method
– Java: interfaces, C: function pointers, C#: delegates, Python: first-class functions (functions as arguments)

Total order: a binary relation '<=' that satisfies:
– antisymmetry: if v <= w and w <= v then v == w
– transitivity: if v <= w and w <= x then v <= x
– totality: either v <= w or w <= v or both
Example: alphabetical order for strings; chronological order for date/time
Not every order is total: rock-paper-scissors is not transitive

Comparable API, Java
– implement v.compareTo(w):
make sure it's a total order; return -1, 0, +1 if v is less than, equal to, or greater than w; throw an exception on a null argument
– less: helper, is v less than w?
– exch: helper, swap a[i] with a[j]
– isSorted: a[i-1] less a[i]
a double with a fudge factor epsilon violates transitivity:
if (this.v < that.v - eps); consider a,b,c = 10.16, 10.08, 10.0 and eps = 0.1: a == b and b == c, but a > c
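For illustration, a hypothetical Date.compareTo implementing a total order (fields as in the equals example in the Symbol Tables section below):

public int compareTo(Date that) {
    if (this.year  < that.year)  return -1;
    if (this.year  > that.year)  return +1;
    if (this.month < that.month) return -1;
    if (this.month > that.month) return +1;
    if (this.day   < that.day)   return -1;
    if (this.day   > that.day)   return +1;
    return 0;    // equal: consistent with equals
}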

Selection sort
always: ½ N^2 compares, N swaps, not stable, insensitive to input
procedure
– for i in 0..N: find idx of min in i+1..N; swap a[i] and a[idx]
– invariants: left part sorted and fixed; no entry in the right part is smaller than any entry in the left part
for (int i = 0; i < N; i++) {
    int min = i;
    for (int j = i+1; j < N; j++) 
        if (less(a[j], a[min])) min = j;
    exch(a, i, min); }
– uses only a constant amount of memory

Insertion sort
average: ¼ N^2 compares, ¼ N^2 swaps; N compares if sorted; stable, inplace
worst case: reverse-sorted input, ~ ½ N^2 compares and swaps.
procedure
– for each item: keep sorted order in left part
– for i in 0..N: for j in i..0: if a[j] less a[j-1]: swap
– invariants: left part sorted, mutable; right part not yet visited
for (int i = 0; i < N; i++) 
    for (int j = i; j > 0 && less(a[j], a[j-1]); j--) 
        exch(a, j, j-1);
– on average: expect each entry to move halfway back: ¼ N^2
– good for partially sorted arrays: linear time
inversions
– if number of inversions <= cN: array is partially sorted
– inversion: pair of keys that are out of order
– why linear? number of swaps = number of inversions
– number of compares = swaps + N-1 = number of inversions + N-1
– for array half sorted ascending, half descending (e.g. 123321)
number of inversions ~ 1/4N^2.
– Array [0 0 0 1 1 1]: inversions 0 => compares ~N.
– Array [1 0 3 2 5 4 7 6]: inversions ½ N => compares ~ 3/2 N.
Insertion sort is the only stable, inplace sort here

Shellsort: insertion sort with strides
worst case for 3x+1 increments: O(N^(3/2)) compares (~ 2.5 N lg N observed in practice); not stable, inplace
– using partially sorted property for insertion sort
– h-sorted array: move entries with h-size stride
insertion sort, j decrements by h
– Shellsort: h-sort for decreasing sequence of h values
– g-sorted array remains g-sorted after h-sorting it
– which increments? powers of 2? no; an empirical sequence:
1, 5, 19, 41, 109, 209, 505, 929, … or
Knuth's 3x+1 increments: 1, 4, 13, 40, 121, …
int h = 1;
while (h < N/3) h = 3*h + 1;
while (h >= 1) {
    for (int i = h; i < N; i++) // h-sort the array
        for (int j = i; j >= h && less(a[j], a[j-h]); j -= h)
            exch(a, j, j-h);
    h /= 3;}
– fast enough for small arrays
– tiny footprint for code (embedded)
– number of compares to shellsort [1, 2, 3] is 2 but for [3, 2, 1] is 3
– if an array is 3-sorted and 5-sorted, it is also 6-, 8-, 9-, 10-sorted
– Sorted array: shellsort uses ~ N log_3 N compares: ~ log_3 N passes,
each pass ~ N compares
– In the worst case, order of growth of the number of compares:
depends on the increment sequence; for Knuth's 3x+1: N^3/2,
for other sequences it can be N^(4/3) or N log^2 N

Shuffling (array rearrange, sorting-like proc)
goal: rearrange array so that result is a uniformly random permutation
– shuffle sort: generate a random real number for each array entry; sort => NlgN time
or: compareTo: r = Math.random, if r < 0.5 return -1 …
– can do in linear time
– Knuth shuffle: for i 0..N: r = uniform(0, i), swap(i, r)
it also works with r = uniform(i, N), i.e. r = i + uniform(0, N-i)
each position takes part in at least one swap; linear time
int n = a.length;
for (int i = 0; i < n; i++) {
    int r = i + uniform(n-i);     // between i and n-1
    swap(a, i, r) }
– uniform shuffle in linear time and constant extra memory: Knuth's shuffle
– use a hardware random-number generator
– sampling problems (e.g. drawing a random subset) can use the same approach

Sorting application: Convex Hull (Graham scan, NlgN time)
– CH: the smallest perimeter fence enclosing all the points
– CH output: sequence of vertices in counterclockwise order
– Graham scan: start at the point p with the lowest y-coordinate; consider the remaining points in polar-angle order around p,
keeping only left (ccw) turns (discard the top point unless it creates a ccw turn)
– ccw calc:
area2 = (b.x-a.x)*(c.y-a.y) - (b.y-a.y)*(c.x-a.x);
if area2 < 0: clockwise; elif area2 > 0: counterclockwise; else: collinear
// preprocess so that points[0] has lowest y-coordinate
Arrays.sort(points);
// sort by polar angle with respect to base point points[0],
Arrays.sort(points, 1, N, points[0].polarOrder());
hull.push(points[0]);       // p[0] is first extreme point
// find index k1 of first point not equal to points[0]
for (k1 = 1; k1 < N; k1++)
    if (!points[0].equals(points[k1])) break;
// find index k2 of first point not collinear with points[0] and points[k1]
for (k2 = k1 + 1; k2 < N; k2++)
    if (Point2D.ccw(points[0], points[k1], points[k2]) != 0) break;
hull.push(points[k2-1]);    // points[k2-1] is second extreme point
// Graham scan; note that points[N-1] is extreme point different from points[0]
for (int i = k2; i < N; i++) {
    Point2D top = hull.pop();
    while (Point2D.ccw(hull.peek(), top, points[i]) <= 0) 
        top = hull.pop();
    hull.push(top);
    hull.push(points[i]); }

Mergesort


Mergesort: recursive, stable, <= N lg N compares and <= 6 N lg N array accesses, N extra space, not adaptive
one of system sorts
– good for: stability; linked lists, random access cost > sequential access cost
– a stable sort preserves the relative order of items with equal keys (long-distance swaps are a no-no)
– idea (divide & conquer): split in the middle; recursively sort each half; merge 2 halves:
for each item: copy smallest from left or right; if left == right: copy left => stable
// sort
    int mid = lo + (hi - lo) / 2;
    sort(a, aux, lo, mid);
    sort(a, aux, mid + 1, hi);
    merge(a, aux, lo, mid, hi);
// merge
    for (int k = lo; k <= hi; k++)
        aux[k] = a[k];
    int i = lo, j = mid+1;
    for (int k = lo; k <= hi; k++) {
        if      (i > mid)              a[k] = aux[j++];
        else if (j > hi)               a[k] = aux[i++];
        else if (less(aux[j], aux[i])) a[k] = aux[j++];
        else                           a[k] = aux[i++]; }
– number of compares, array accesses: solve using D(N) = 2 D(N/2) + N recurrence
– N extra space, not in-place (in-place if it uses <= c logN extra memory)
– recursive version typically faster than the bottom-up version
– It's possible in theory (but not practical) to design an in-place merge with a linear number of compares.
– If we divide array into 3 subarrays (3-way mergesort), number of compares ~ 2N log_3 N
but the constant is worse than with 2-way.
– Reverse-sorted array: number of compares ~ ½ N lg N.
– Reasons to use: stability, guaranteed linearithmic performance, sequential access preferred

Mergesort practical improvements
– insertion sort for small subarrays, cutoff for ~7 items
if(hi <= lo + cutoff - 1): insertion.sort(a, lo, hi) // may be 20% faster
– skip the merge if a[mid] <= a[mid+1] // the halves are already in order
– save time by switching the role of the input and aux array in each recursive call

Bottom-up mergesort: no recursion
– loop: merge subarrays of size 1, 2, 4, …
for (int n = 1; n < N; n = n+n) {
    for (int i = 0; i < N-n; i += n+n) {
        int lo = i;
        int m  = i+n-1;
        int hi = Math.min(i+n+n-1, N-1);
        merge(a, aux, lo, m, hi);     } }
– lgN passes * N compares
– about 10% slower than the recursive version on average: cache effects matter, and the array size is not always a power of 2

Mergesort is optimal. Why? Computational complexity:
– N lg N guaranteed, and that's the lower bound for compare-based sorting
Computational complexity: framework to study efficiency of algorithms, particular problem
– model of computation: allowable ops.
– cost model: ops count
– upper bound: cost guarantee provided by some algorithm
– lower bound: proven limit on the cost guarantee of all algorithms
– optimal algorithm: algorithm with best possible cost guarantee
Example: sorting
– model of computation: decision tree
– cost model: # compares
– upper bound: ~ N lg N from mergesort
– lower bound: in the worst case at least lg(N!) ~ N lg N compares (by Stirling)
– optimal algorithm = mergesort
– not optimal w.r.t. space
Lower bound may not hold:
– initial order of the input: insertion sort takes N-1 compares if the array is sorted
– distribution of key values: duplicate keys, 3-way quicksort, ~N compares if keys all equal
– keys representation: radix sort O(N)

Java comparators: alternate order, multiple ordering
– comparable interface only for natural order
public final class Point2D implements Comparable<Point2D>
– comparator interface: sort using an alternate order
public Comparator<Point> slopeOrder() 
    return new SlopeOrder(this);
    ...
    private class SlopeOrder implements Comparator<Point> 
    ...
        public int compare(Point p1, Point p2) 
        ...
must be a total order

Quicksort


Quicksort: recursive, inplace, not stable, average ~1.39N lgN, worst case ½ N^2
– outline: shuffle (probabilistic guarantee for NlgN); partition; repeat for left and right
– partition invariant: the pivot is in its final place; everything to its left <= pivot, everything to its right >= pivot
– recursion after partition (vs. mergesort: recursion before partition)
– Uniformly random array: number of compares ~ 2N ln N
Typically quicksort does more compares than mergesort, but fewer data moves, and it is cache-friendly.
– Expected maximum function-call stack size when quicksorting an array of N distinct keys ~ 4.31 ln N.

procedure:
– scan i from left to right so long as a[i] < v
– scan j from right to left so long as a[j] > v
– swap a[i], a[j]
StdRandom.shuffle(a);
sort(a, 0, a.length - 1);
// sort
    if (hi <= lo) return;
    int j = partition(a, lo, hi);
    sort(a, lo, j-1);
    sort(a, j+1, hi);
// partition
    int i = lo;
    int j = hi + 1;
    Comparable v = a[lo];
    while (true) { 
        while (less(a[++i], v))
            if (i == hi) break;
        while (less(v, a[--j]))
            if (j == lo) break;      // redundant since a[lo] acts as sentinel
        if (i >= j) break;
        exch(a, i, j);
    }
    exch(a, lo, j);
    return j;
performance
– average number of compares: ~ 2N ln N, number of swaps ~ 1/3N ln N
C_N = 2(N+1) * (1/3 + 1/4 + 1/5 + … + 1/(N+1))
39% more compares than mergesort, but less data movement
– worst case: N + (N-1) + (N-2) … +1 ~ ½ N^2 : need shuffle
– logarithmic extra space for recursion: guaranteed by recursing into the shorter part first
– not stable: long-distance swaps

Quicksort practical improvements
– cutoff to insertion sort for ~10 items (could delay ins.sort until one pass at end), ~20% faster
– median of sample: best choice of pivot item = median; ~10% faster
estimate median by taking median of sample; median-of-3 random items.
m = medianOf3(a, lo, mid, hi); swap(a, lo, m); j = partition(a, lo, hi); …
private static int medianOf3(Comparable[] a, int i, int j, int k) {
    return (less(a[i], a[j]) ?
           (less(a[j], a[k]) ? j : less(a[i], a[k]) ? k : i) :
           (less(a[k], a[j]) ? j : less(a[k], a[i]) ? k : i)); }

Quickselect: find k-th smallest, average ~2N time
half of quicksort algorithm
– faster than sort for: find min: k=0; max: k=N-1; median: k=N/2
– theory: NlgN upper bound; N upper bound for k = 1,2 or 3; N lower bound.
quickselect procedure:
– partition: the pivot is in its final place, smaller items to the left, greater to the right
– repeat on one side only, depending on the pivot position, until it equals k
StdRandom.shuffle(a);
int lo = 0, hi = a.length - 1;
while (hi > lo) {
    int i = partition(a, lo, hi);
    if      (i > k) hi = i - 1;
    else if (i < k) lo = i + 1;
    else return a[i];
}
return a[lo];
– analysis: linear time on average
N + N/2 + N/4 + ...1 ~ 2N compares
~ ½ N^2 in the worst case

3-way quicksort, duplicate keys: average ~NlgN, linear when few distinct keys
used for: find equal keys, remove duplicates, …
– mergesort with duplicate keys: between ½ N lg N and N lg N compares
– quicksort: quadratic unless partitioning stops on equal keys
goal: partition array into 3 parts:
– entries between lt, gt equal to pivot v
– no larger items to left of lt
– no smaller items to right of gt
procedure: let v = a[lo]
– scan i from left to right (lo .. hi)
– a[i] < v: swap a[lt], a[i], lt++, i++
– a[i] > v: swap a[gt], a[i], gt--
– a[i] == v: i++
StdRandom.shuffle(a);
sort(a, 0, a.length - 1):
    if (hi <= lo) return;
    int lt = lo, gt = hi;
    Comparable v = a[lo];
    int i = lo;
    while (i <= gt) {
        int cmp = a[i].compareTo(v);
        if      (cmp < 0) exch(a, lt++, i++);
        else if (cmp > 0) exch(a, i, gt--);
        else              i++; }
    sort(a, lo, lt-1);
    sort(a, gt+1, hi);
– Number of partition steps is no larger than the number of distinct keys =>
linear time when few distinct keys

System sort: tuned qsort, mergesort
– different applications have diverse attributes
stable? inplace? parallel? distinct keys? deterministic? key types? lists/arrays? large/small? guaranteed perf?
– cannot cover all combinations of attributes
– Is the system sort good enough? usually

Priority Queues


Priority queue: collection, ordered
ops: insert, delete items.
Which item to delete?
– stack: most recently added; queue: least recently added; rand.queue: random item
– priority queue: remove largest/smallest item
key must be comparable
API
– insert, delMax, isEmpty, max, size
application
– find M largest items in a stream of N items
add to minPQ, delMin if size > M : N lg M time using binary heap

unordered array implementation
– add to the tail
– delMax: find max, swap with tail, return tail

ordered array implementation
– add: binary search for the position, shift the greater part right by one cell, write the cell
– delMax: return tail
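A sketch of the unordered-array MaxPQ (my code; fixed capacity and no underflow checks for brevity):

public class UnorderedMaxPQ<Key extends Comparable<Key>> {
    private Key[] pq = (Key[]) new Comparable[64];   // resizing omitted
    private int N = 0;

    public boolean isEmpty() { return N == 0; }

    public void insert(Key x) { pq[N++] = x; }       // add to the tail: O(1)

    public Key delMax() {                            // O(N): find max, swap with tail, return tail
        int max = 0;
        for (int i = 1; i < N; i++)
            if (pq[max].compareTo(pq[i]) < 0) max = i;
        Key t = pq[max]; pq[max] = pq[N-1]; pq[N-1] = t;
        Key item = pq[--N];
        pq[N] = null;                                // avoid loitering
        return item;
    }
}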

Binary heap: array representation of a complete binary tree
– binary tree: empty or node with links to left and right binary trees
– complete b.tree: perfectly balanced, except for bottom level
– complete b.tree with N nodes: height floor(lg N) (height increases when N is a power of 2)
binary heap: array representation of a heap-ordered complete b.tree
– heap-ordered: keys in nodes; parent's key >= children's keys
– array: indices start at 1 (index arithmetic is simpler); nodes in level order; no explicit links
children of node i: 2i, 2i+1; parent of node i: i/2
– largest key is a[1], root of b.tree
– In the best case, inserting a key takes only 1 compare
– Complete binary tree height = floor(lg N).
– Because the tree is complete, we can use an array for binary heap.
– Max height of a (perfectly balanced) 4-way heap (each node has 4 children) with N keys ~ ½ lg N.
– Max height of a (perfectly balanced) ternary heap (each node has 3 children) with N keys ~ log_3 N.

swim up: restore heap order, bottom-up
– child becomes larger than parent, insert subroutine
– swap child, parent; repeat up until in order
private void swim(int k)
    while (k > 1 && less(k/2, k)) {
        exch(k, k/2);
        k = k/2; }
– time lgN
Insert: add to end, swim up: at most 1+lg N compares
pq[++N] = x, swim(N)

sink down: restore heap order, top-down
– parent becomes smaller than child, delMax subroutine
– find larger child, swap parent, child; repeat down until in order
private void sink(int k) 
    while (2*k <= N) {
        int j = 2*k;
        if (j < N && less(j, j+1)) j++;
        if (!less(k, j)) break;
        exch(k, j);
        k = j; }
– time 2 lg N at most
delMax: swap root, tail; sink root down
max = pq[1], swap(1, N--), sink(1);

binary heap considerations
– minPQ: replace 'less' with 'greater'
– overflow: use resizing array
– use immutable keys

Heapsort: unstable, inplace, guaranteed time 2N lg N
– heap construction: create binary heap (max heap): bottom-up sink, linear time
– sortdown: remove the max key N times: swap max with the end; sink the top
for (int k = N/2; k >= 1; k--)
    sink(pq, k, N);
while (N > 1) {
    exch(pq, 1, N--);
    sink(pq, 1, N); }
– the shift-by-1 indexing is compensated for inside 'less' and 'exch'
analysis
– heap construction: linear, <= 2N compares and swaps
– heapsort <= 2N lg N compares and swaps
– NlgN guarantee for inplace sort, it's important
– simple and good, but: poor use of cache, not stable, slower than qsort
– Heapsort vs mergesort, compares in worst case: hs ~ 2N lg N, ms ~ N lg N.

PQ application: event-driven simulation
N moving particles, elastic collision
time-driven simulation: too slow: each step costs ~ N^2/2; and if dt is too large, we may miss collisions
event-driven: update state only when something happens
– init: predict events/collisions for each particle
– loop: get nearest (PQ) event, update system state, process event, update involved
– maintain PQ of collision events, prioritized by time
– remove min: get next collision
outline:
– pq.delMin: get next collision
– if event has been invalidated: ignore (using particle.countCollisions attrib.)
– move all particles to time t, straight-line
– update the velocities of the colliding particles
– predict future collisions involving colliding particles, pq.insert
– event data type: time; particle a,b; countA, countB // collision counts
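A sketch of that event data type as a nested class (Particle and its collision-count method count() are assumed from the course's CollisionSystem):

private static class Event implements Comparable<Event> {
    private final double time;          // time of this event
    private final Particle a, b;        // particles involved (may be null for wall events)
    private final int countA, countB;   // collision counts at creation time

    public Event(double time, Particle a, Particle b) {
        this.time = time; this.a = a; this.b = b;
        countA = (a != null) ? a.count() : -1;
        countB = (b != null) ? b.count() : -1;
    }

    public int compareTo(Event that) { return Double.compare(this.time, that.time); }

    public boolean isValid() {          // invalidated if either particle collided since creation
        if (a != null && a.count() != countA) return false;
        if (b != null && b.count() != countB) return false;
        return true;
    }
}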

Elementary Symbol Tables


Searching construct, associative array/symbol table:
– key-value abstraction
– insert a value with specified key
– given a key, get corresponding value
– example: DNS lookup: address by name
API (values are not null, for simplicity):
– put/get
– delete
– contains
– isEmpty, size
– keys

ST keys and values
– value: any generic type
– key: comparable, compareTo; or equals, hashCode
– comparable: ordered ops supported
– not comparable: no order, use hashCode (see Hash Tables)

Java equality: 'equals' must be an equivalence relation
– reflexive: x.equals x is true
– symmetric: x.equals y iff y.equals x
– transitive: if x.equals y and y.equals z => x.equals z
– Non-null: x.equals null is false
– default implementation: x == y => refer to the same object
public boolean equals(Object other)
    if (other == this) return true;
    if (other == null) return false;
    if (other.getClass() != this.getClass()) return false;
    Date that = (Date) other;
    return (this.month == that.month) && (this.day == that.day) && (this.year == that.year);
– make 'compareTo' consistent with 'equals'

application of ST: frequency counter
while (!StdIn.isEmpty()) {
    String key = StdIn.readString();
    if (key.length() < minlen) continue;
    words++;
    if (st.contains(key)) st.put(key, st.get(key) + 1);
    else { st.put(key, 1); distinct++; }
}
String max = ""; st.put(max, 0);
for (String word : st.keys())
    if (st.get(word) > st.get(max))
        max = word;

ST elementary implementation: linked list, sorted array
linked list: unordered, uses 'equals'
– get: scan until find a match
– put: get? update or add to front
– performance: linear time
ordered array: binary search, using 'compareTo'
– rank helper: how many keys < k: binary search in keys array
lo = 0, hi = N-1
while lo <= hi: mid = lo + (hi-lo)/2
cmp = k.compareTo keys[mid]
if cmp < 0: hi = mid-1 else if cmp > 0: lo = mid+1 else return mid;
return lo
– get: i = rank(k), if keys[i].compareTo(k) == 0: return vals[i] else null
– insert: if present, update; else shift all greater items right by one, then write
– performance: search/get lg N; insert linear: better than a linked list

Ordered operations: comparable keys, get range from ST
– API: min/max; floor/ceiling; getKeyByRank/select; keys(lo, hi); size(lo, hi);
– ordered array is good, but insert/delete needs linear time
– use Binary Search Tree: BST, 1.39 lg N

Binary Search Tree: explicit tree data structure
average performance: ~ 1.39 lg N (= 2 ln N) compares
– A BST is a binary tree in symmetric order
– each node (key, value, left right): empty or BST (each node is a root of a subtree)
– symmetric order: every node's key is larger than all keys in its left subtree and smaller than all in its right
BST search
private Value get(Node x, Key key)
    if (x == null) return null;
    int cmp = key.compareTo(x.key);
    if      (cmp < 0) return get(x.left, key);
    else if (cmp > 0) return get(x.right, key);
    else              return x.val;
BST insert: search for key, 2 cases: key in tree: reset value or: add new node
private Node put(Node x, Key key, Value val)
    if (x == null) return new Node(key, val, 1);
    int cmp = key.compareTo(x.key);
    if      (cmp < 0) x.left  = put(x.left,  key, val);
    else if (cmp > 0) x.right = put(x.right, key, val);
    else              x.val   = val;
    x.N = 1 + size(x.left) + size(x.right); // update tree size
    return x;
– number of compares: 1+node-depth
– many BSTs correspond to the same set of keys; the tree shape depends on the order of insertion
– ordered linked list in the worst case
– 1:1 correspondence between BSTs and qsort partitioning: select root, smaller to left, greater to right
– bottom line: BST average search/insert logarithmic, but worst case still linear

– Insert N distinct keys in random order into BST: expected height ~ 4.31 ln N.
– Because the keys satisfy symmetric order:
it's possible to reconstruct the shape of the BST by preorder or postorder traversal.
– No compare-based algorithm can construct a BST using fewer than lg(N!) ~ N lg N
key compares in the worst case: the sorting lower bound applies, since an inorder traversal gives a sorted array.
– The successor (next largest) of x can be the grandparent of x
(when x is a right child and itself has no right child): consider the level-order tree 6 3 5 with x = 5;
or parent of x; or leftmost node in right subtree.
– To find minimum follow the left: no key compares are needed.
– One reason for storing the subtree counts in each node:
to support the range count ops.
– Inserting [2 1 3] or [2 3 1] results in the same BST.
– The predecessor is the rightmost node in the left subtree; it may have a left child.
– The number of key compares to search for x in a BST can be less than, greater than, or equal to
the number for binary search in a sorted array: suppose key x is at the root of the BST
but is not the middle element of the sorted array.

Symbol Table ordered ops in BST
hint: save subtree size; do recursion
– min/max: recur. to left/right
– floor: the largest key <= a given key (ceiling is symmetric)
3 cases (query key k vs node key):
– k == node key => floor is the node key
– k < node key => floor is in the left subtree
– k > node key => floor is in the right subtree if present there; otherwise it's the node key
private Node floor(Node x, Key key)
    if (x == null) return null;
    int cmp = key.compareTo(x.key);
    if (cmp == 0) return x;
    if (cmp <  0) return floor(x.left, key);
    Node t = floor(x.right, key);
    if (t != null) return t;
    else return x;

rank, select: store the subtree node count in each subtree root
private int rank(Key key, Node x)
    if (x == null) return 0;
    int cmp = key.compareTo(x.key);
    if      (cmp < 0) return rank(key, x.left);
    else if (cmp > 0) return 1 + size(x.left) + rank(key, x.right);
    else              return size(x.left);
– inorder traversal: inorder(left), enqueue(this), inorder(right)
– bottom line: ordered ops performance ~ height of the BST, logarithmic on average

BST deletion: lazy vs. Hibbard
lazy: set the node's value to null (a tombstone), skip tombstones during search
– not good enough: memory hog
Hibbard deletion: 3 cases
– node with 0 children: set parent link to null
– node with 1 child: set parent link to that child
– node with 2 children: find the node's successor = min(right subtree); deleteMin(right); put the successor in the node's spot
private Node delete(Node x, Key key) 
    if (x == null) return null;
    int cmp = key.compareTo(x.key);
    if      (cmp < 0) x.left  = delete(x.left,  key);
    else if (cmp > 0) x.right = delete(x.right, key);
    else { // 3 cases
        if (x.right == null) return x.left;
        if (x.left  == null) return x.right;
        Node t = x;
        x = min(t.right);
        x.right = deleteMin(t.right);
        x.left = t.left; } 
    x.N = size(x.left) + size(x.right) + 1;
    return x;
– deleteMin: go left until null, replace the node by its right link
private Node deleteMin(Node x)
    if (x.left == null) return x.right;
    x.left = deleteMin(x.left);
    x.N = size(x.left) + size(x.right) + 1;
    return x;
private Node min(Node x)
    if (x.left == null) return x;
    else                return min(x.left);
– why successor and not predecessor? no particular reason
– Hibbard deletion analysis: the tree height becomes ~ sqrt(N) => ops performance ~ sqrt(N)

Binary Tree Traversal: BFS, DFS:
BFS: level-order; DFS: preorder, inorder, postorder.
– Preorder traversal: root, left, right.
– Inorder traversal: left, root, right.
– Postorder traversal: left, right, root.

A nice concise description of when you would want to use the different types of depth-first search:
– Pre-order traversal while duplicating nodes and values can make a complete duplicate of a binary tree.
It can also be used to make a prefix expression (Polish notation) from expression trees:
traverse the expression tree pre-orderly.
– In-order traversal is very commonly used on binary search trees because it returns values
from the underlying set in order, according to the comparator that set up the binary search tree (hence the name).
– Post-order traversal while deleting or freeing nodes and values can delete or free an entire binary tree.
It can also generate a postfix representation of a binary tree.
It boils down to the logistical needs of an algorithm.
For example, if you don't use post-order traversal during deletion,
then you lose the references you need for deleting the child trees.

Balanced Search Trees


– ultimate (ordered) symbol tables
– 2-3 trees
– left-leaning red-black BST
– B-tree

2-3 Search Tree: abstract model
hard to implement
– 2-node: one key, two children
– 3-node: two keys, three children
– temporary 4-node: three keys, four children
– symmetric order: inorder traversal yields keys in ascending order
– perfect balance: every path from root to null link has same length
insertion:
– changing 2-node to 3-node
– changing 3-node => 4-node, split, insert middle key to parent node, recurse
tree height grows only when the root splits, i.e. when every node on the insertion path is a 3-node
– local transformations (const. number of ops): 2node → 3node, 3node → 4node, splitting 4node
– global properties: invariant: symmetric order and perfect balance after each transformation
– performance: worst case: lg N for all 2-nodes; best case: log3 N if all 3-nodes; guaranteed logarithmic
– implementation: complicated, and there's a better way: the LLRB BST

Left Leaning Red-Black BST: 2-3 tree emulator
– represent 2-3 tree as a BST
– a red link is the internal connection inside a 3-node
– black links connect 2-nodes and 3-nodes
definition:
– no node has two red links connected to it (only one red link per node)
– every path from root to null link has the same number of black links (perfect black balance)
– red links lean left
ops implementations: same as elementary BST (except insert, delete)
– link color encoded in child node: link is red if x.left.color == RED
– node.color = RED when new node created (create 3-node)

LLRB BST insert
maintain 1:1 correspondence with 2-3 tree
– insert with a red link (creating a 3-node), then restore the invariants (perfect black balance; symmetric order)
by reducing one case to another: rotate left, rotate right, flip colors; repeat up the tree recursively
private Node put(Node h, Key key, Value val)
    if (h == null) return new Node(key, val, RED, 1);
    int cmp = key.compareTo(h.key);
    if      (cmp < 0) h.left  = put(h.left,  key, val);
    else if (cmp > 0) h.right = put(h.right, key, val);
    else              h.val   = val;
    // fix-up any right-leaning links
    if (isRed(h.right) && !isRed(h.left))      h = rotateLeft(h);
    if (isRed(h.left)  &&  isRed(h.left.left)) h = rotateRight(h);
    if (isRed(h.left)  &&  isRed(h.right))     flipColors(h);
    h.N = size(h.left) + size(h.right) + 1;
    return h;

LLRB BST, rotations
rebalance: local transformations maintaining invariants
– left rotation: orient a right-leaning red link to lean left
invariants: symmetric order and perfect black balance
put node.right in node place: node.right = right.left, right.left = node, node.color = red
private Node rotateLeft(Node h)
    Node x = h.right;
    h.right = x.left;
    x.left = h;
    x.color = x.left.color;
    x.left.color = RED;
    x.N = h.N;
    h.N = size(h.left) + size(h.right) + 1;
    return x;
– right rotation: orient a left-leaning red link to lean right (needed when n.left and n.left.left are both red)
mirrored left rotation
private Node rotateRight(Node h)
    Node x = h.left;
    h.left = x.right;
    x.right = h;
    x.color = x.right.color;
    x.right.color = RED;
    // same as left
– flip colors: recolor to split 4-node
if n.left and n.right are red: n.color = red, n.left.color = n.right.color = black
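The color flip in code (as in algs4; precondition: both children of h are red):

private void flipColors(Node h) {
    h.color = RED;             // the middle key is passed up to the parent
    h.left.color  = BLACK;     // split the temporary 4-node
    h.right.color = BLACK;
}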

LLRB BST, delete: more sophisticated, see textbook

LLRB BST height
– height of the tree <= 2 lg N in the worst case:
every path has the same number of black links and never two red links in a row,
so height <= black links + red links <= 2 lg N
– height of the tree is ~ 1.0 lg N on average

– Any BST can be transformed into any other BST by a sequence of left and right rotations.
– The order of growth of the min.number of nodes in a red-black BST of height h is 2^(h/2).
– Consider two paths from the root to a null link in a red-black BST:
extreme case: one path can have length ~ lg N, the other ~ 2 lg N.

B-Trees: huge symbol tables using 2-3 tree technique
cost ~ log_M N
– page: block of data, e.g. 4KB chunk
– probe: first access to a page, e.g. load page to RAM from disk/tape
– a probe takes much longer than accessing data within an already-loaded page
– cost model: number of probes
– goal: access data using minimum number of probes
B-Tree: generalize 2-3 tree by allowing up to M-1 key-link pairs per node
– M as large as possible so that M links fit in a page, e.g. M = 1024
– at least 2 key-link pairs at root
– at least M/2 key-link pairs in other nodes
– external nodes (bottom layer) contain client keys
– internal nodes contain copies of keys to guide search
– keys in nodes in sorted order
– nodes always between full and half full (except root)

B-tree search
– start at root
– find interval for search key and take corresponding link
– search terminates in external node
private Value search(Node x, Key key, int ht)
    Entry[] children = x.children;
    // external node
    if (ht == 0)
        for (int j = 0; j < x.m; j++)
            if (eq(key, children[j].key)) return (Value) children[j].val;
    // internal node
    else
        for (int j = 0; j < x.m; j++)
            if (j+1 == x.m || less(key, children[j+1].key))
                return search(children[j].next, key, ht-1);
    return null;

B-tree insert
– search for new key
– insert at bottom
– split nodes with M key-link pairs on the way up the tree
public void put(Key key, Value val)
    Node u = insert(root, key, val, height);
    N++;
    if (u == null) return;
    // need to split root
    Node t = new Node(2);
    t.children[0] = new Entry(root.children[0].key, null, root);
    t.children[1] = new Entry(u.children[0].key, null, u);
    root = t;
    height++;

private Node insert(Node h, Key key, Value val, int ht)
    int j; Entry t = new Entry(key, val, null);
    // external node
    if (ht == 0)
        for (j = 0; j < h.m; j++)
            if (less(key, h.children[j].key)) break;
    // internal node
    else
        for (j = 0; j < h.m; j++)
            if ((j+1 == h.m) || less(key, h.children[j+1].key)) {
                Node u = insert(h.children[j++].next, key, val, ht-1);
                if (u == null) return null;
                t.key = u.children[0].key; t.next = u;
                break;
            }
    for (int i = h.m; i > j; i--)
        h.children[i] = h.children[i-1];
    h.children[j] = t;
    h.m++;
    if (h.m < M) return null;
    else         return split(h);

private Node split(Node h)
    Node t = new Node(M/2);
    h.m = M/2;
    for (int j = 0; j < M/2; j++)
        t.children[j] = h.children[M/2+j];
    return t;

B-Tree performance: order M with N keys
– search/insertion takes between log_{M-1} N and log_{M/2} N probes
– internal nodes have between M/2 and M-1 links
– in practice: number of probes < 5 (M = 1024, N = 62 billion)
– always keep root page in memory

BST Applications
– LLRB BST are widely used as system symbol tables
– java.util.TreeMap, TreeSet; C++ STL map, multiset
– B-trees (and variants: B+ tree, B*tree, B#tree, …) are widely used for file systems and DBMS
NTFS, HFS, XFS, Ext3FS, Oracle, DB2, PostgreSQL, …

Geometric Applications of BSTs


– ordered operations, intervals, search, …
– intersections among geometric objects
– CAD, games, GIS, …
– BST as efficient solution

1D range search: extension of ordered symbol table (points on a line; app: line intersections)
– BST API extension: range search/count
– range search: find keys between (lo, hi)
– range count: number of keys between (lo, hi)
– geometric: points on a line; app: line intersections
ordered array:
– range count lg N;
– range search R + lg N;
– insert N
BST range count: use rank helper, lgN
public int size(Key lo, Key hi)
    if (lo.compareTo(hi) > 0) return 0;
    if (contains(hi)) return rank(hi) - rank(lo) + 1;
    else              return rank(hi) - rank(lo);
BST range search: R + lg N
– recursively find all keys in left subtree (if could fall in range)
– check key in current node
– right subtree
private void keys(Node x, Queue<Key> queue, Key lo, Key hi)
    if (x == null) return;
    int cmplo = lo.compareTo(x.key);
    int cmphi = hi.compareTo(x.key);
    if (cmplo < 0) keys(x.left, queue, lo, hi);
    if (cmplo <= 0 && cmphi >= 0) queue.enqueue(x.key);
    if (cmphi > 0) keys(x.right, queue, lo, hi);

App: line segment intersection, orthogonal lines: sweep line + BST 1d range search
given N horizontal and vertical line segments, find all intersections
– assume all x- and y-coordinates are distinct
– naive: quadratic time
Sweep line: sweep vertical line from left to right (using PriorityQueue or sort)
– x-coordinates define events (event-driven problem)
– horiz. segment, left endpoint: insert y-coordinate into BST
– horiz. segment, right endpoint: remove y-coordinate from BST
– vert. segment: BST range search for interval of y-endpoints
– time: N lg N + R to find all R intersections among N orthogonal line segments (all ops <= R+NlgN)
– bottom line: sweepline reduces 2D problem to 1D range search
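A hedged sketch of the sweep (my code and encodings; java.util.TreeSet plays the BST role, its subSet view gives the 1d range search; assumes distinct coordinates):

import java.util.*;

public class OrthogonalIntersections {
    // event kinds: 0 = insert y, 1 = vertical-segment range query, 2 = remove y
    static class Event implements Comparable<Event> {
        double x, y1, y2; int kind;
        Event(double x, int kind, double y1, double y2) {
            this.x = x; this.kind = kind; this.y1 = y1; this.y2 = y2; }
        public int compareTo(Event e) { return Double.compare(x, e.x); }
    }

    // horizontal segments as {y, xlo, xhi}; vertical segments as {x, ylo, yhi}
    public static int count(double[][] hs, double[][] vs) {
        List<Event> events = new ArrayList<>();
        for (double[] h : hs) {
            events.add(new Event(h[1], 0, h[0], h[0]));    // left endpoint: insert y
            events.add(new Event(h[2], 2, h[0], h[0]));    // right endpoint: remove y
        }
        for (double[] v : vs)
            events.add(new Event(v[0], 1, v[1], v[2]));    // vertical: range search
        Collections.sort(events);                          // sweep by x-coordinate
        TreeSet<Double> ys = new TreeSet<>();              // y's of segments crossing the sweep line
        int cnt = 0;
        for (Event e : events) {
            if      (e.kind == 0) ys.add(e.y1);
            else if (e.kind == 2) ys.remove(e.y1);
            else cnt += ys.subSet(e.y1, true, e.y2, true).size();
        }
        return cnt;
    }
}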

KD-Trees, space partitioning: 1D, 2D, … range search/count, nearest
– ordered symbol-table API extension: 2D keys, range search/count
– apps: networking, circuit design, databases
– geometry: keys are points in the plane, find/count points in a given horiz-vert. rectangle
– grid, 2D tree, BSP tree, kD tree
KD-Tree: recursively partition k-dimensional space into 2 halfspaces
– implementation: a BST, cycling through the dimensions as in 2D trees
– efficient and simple data structure, widely used

Grid implementation: even binning
– divide space into M-by-M grid of squares
– create list of points contained in each square
– use 2D array to directly index relevant square
– insert: add x,y to list for corresponding square
– range search: examine only squares that intersect 2D range query
analysis: space-time tradeoff
– space: M^2 + N
– time: 1+ N/M^2 per square examined, on average
– rule of thumb: use a sqrt(N)-by-sqrt(N) grid
– bottom line: fast and simple on average; have to deal with clustering

Space-partitioning trees: recursive subdivisions
– use a tree to represent a recursive subdivision of 2D space
– grid: divide space uniformly into squares
– 2D tree: recursively divide space into two halfplanes
– BSP tree: into two regions
– quadtree: into four quadrants

2D tree: a BST; the root uses the x-coordinate as the key,
its children use the y-coordinate, and so on, alternating by level
– root and other odd levels: consider X as a key
– second level and other even levels: consider Y as a key
– node represents a rectangle, bounding box

2D tree insert: like elementary BST
private Node put(Node root, Point2D p, boolean vert)
    if (root == null) {
        count++; return new Node(p, vert); }
    int cmp = comparePoints(p, root.point, vert);
    if      (cmp < 0) root.left = put(root.left, p, !vert);
    else if (cmp > 0) root.right = put(root.right, p, !vert);
    else if (p.equals(root.point)) root.point = p;
    // p on the split line, go right
    else              root.right = put(root.right, p, !vert);
    return root;

2D tree range search: find all points in a query rectangle
– check node point inside query rect
– recursively search left/right trees, if bounding boxes intersects
private void range(Node root, RectHV rect, Queue<Point2D> res)
    if (root == null) return;
    int cmp = compareLine(root.point, root.vert, rect);
    if      (cmp < 0) range(root.right, rect, res);
    else if (cmp > 0) range(root.left, rect, res);
    else { // intersection
        if (rect.contains(root.point)) res.enqueue(root.point);
        range(root.left, rect, res);
        range(root.right, rect, res); }
– time: average R + lg N; worst case R + sqrt(N), even for a balanced tree

2D tree nearest neighbor: average: lg N, worst: N
– check distance from query point to node, update min
– recursively search left, if left bbox could contain a closer point; … right …
– choose left/right first according to the distance to the query point
private Point2D nearest(Node root, Point2D p, double currDist, Point2D cp)
    if (root == null) return cp;
    Point2D res = cp; // current nearest point
    double newDist = root.point.distanceSquaredTo(p);
    if (newDist < currDist) res = root.point;
    else newDist = currDist; // current nearest distance 
    double leftDist  = dist(p, root, true);  leftDist *= leftDist;
    double rightDist = dist(p, root, false); rightDist *= rightDist;
    if (leftDist < rightDist) {
        if (leftDist < newDist) { // go left
            res = nearest(root.left, p, newDist, res);
            newDist = res.distanceSquaredTo(p); }
        if (rightDist < newDist) { // go right
            res = nearest(root.right, p, newDist, res);
            newDist = res.distanceSquaredTo(p); } }
    else {
        if (rightDist < newDist) { // go right
            res = nearest(root.right, p, newDist, res);
            newDist = res.distanceSquaredTo(p); }
        if (leftDist < newDist) { // go left
            res = nearest(root.left, p, newDist, res);
            newDist = res.distanceSquaredTo(p); } }
    return res;
– time: average: log N, worst case: N

KD-tree app: N-body simulation
simulate the motion of N particles, mutually affected by gravity
– brute force: time per step is N^2
– for each pair: compute force: F = Gm1m2 / r^2
Appel's algorithm: treat cluster of particles as a single aggregate particle
– compute force between particle and center of mass of aggregate
– build 3D-tree with N particles as nodes
– store center-of-mass of subtree in each node
– to compute total force acting on a particle, traverse tree, but stop
as soon as distance from particle to subdivision is sufficiently large
– time per step: N log N

Interval search tree, BST: hold set of (overlapping) intervals
used for finding overlapping rectangles
– intervals (lo, hi): put, get, intersects
– intersects: given an interval (lo, hi), find all intervals that intersect it
Interval search tree
– left endpoint as BST key
– right endpoint: store subtree max endpoint in subtree root
– insert: put in BST using lo as the key, update max in each node on search path
– search:
if node interval intersects query interval, return it
elif left subtree is null: go right
elif max endpoint in left subtree is less than query lo: go right
else: go left
– analysis: 2 cases
1. if search goes right, then no intersection in left
2. if search goes left, then either there is an intersection in the left subtree,
or no interval in the right subtree can intersect (all their lo endpoints are too large)
– using red-black BST to guarantee performance:
lg N to find any one interval that intersects;
R lg N to find all R intervals that intersect
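A sketch of the search for one intersecting interval (my field names: each Node stores its interval lo/hi and max, the largest endpoint in its subtree):

// return any one node whose interval intersects (lo, hi), or null
private Node intersects(Node x, double lo, double hi) {
    while (x != null) {
        if (x.lo <= hi && lo <= x.hi) return x;    // node's interval intersects the query
        else if (x.left == null)      x = x.right; // no left subtree: go right
        else if (x.left.max < lo)     x = x.right; // left subtree can't contain a match
        else                          x = x.left;  // otherwise go left
    }
    return null;
}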

Rectangle intersections: 1D interval search tree + sweep line
goal: find all intersections among a set of N orthogonal rectangles
– used in microprocessor design: linearithmic alg. is necessary to sustain Moore's Law
– brute force: checking all pairs takes quadratic time
– suppose all x- and y-coordinates are distinct
Sweep-line algorithm
– sweep vertical line from left to right
– x-coordinates of left and right endpoints define events
– maintain the set of rectangles that intersect the sweep line
in an interval search tree, using the y-intervals of the rectangles
– left endpoint: interval search for y-interval of rectangle; insert y-interval
– right endpoint: remove y-interval
analysis
– sweep line takes ~ N lg N + R lg N time to find R intersections among N rectangles
(inserts and deletes take N lg N total; interval searches take N lg N + R lg N)
– bottom line: the sweep line reduces 2D orthogonal rectangle intersection to 1D interval search,
just as it reduces 2D orthogonal line segment intersection to 1D range search

Hash Tables


Symbol table without ordered keys (no 'compareTo'): we have only 'hashCode' and 'equals'
– save items in a key-indexed table, index is a function of the key
– hash function: method for computing array index from key
– issues: hash function? equality test? collision resolution?
classic space-time tradeoff:
– no space limits: trivial hash function
– no time limits: trivial collision resolution

Hash functions
ideal:
– scramble the keys uniformly to produce a table index;
– efficiently computable;
– each table index equally likely for each key
Java hashCode method: 32-bit int
– requirement: if x.equals(y) then x.hashCode() == y.hashCode()
– highly desirable: if !x.equals(y) then x.hashCode() != y.hashCode()
– default implementation: memory address
– for double: long bits = doubleToLongBits(value); return (int) (bits ^ (bits >>> 32));

hashCode for strings
– cached (strings are immutable)
– Horner's method to hash string of length L: L multiplies/adds
h = s[0]*31^(L-1) + … + s[L-3]*31^2 + s[L-2]*31^1 + s[L-1]*31^0
hash = 0; for i in 0..L-1: hash = s[i] + (31 * hash)
31: a small prime, one less than a power of two (2^5 - 1), so 31*hash == (hash << 5) - hash is easy to compute (Java form below)
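In Java form, this is essentially what java.lang.String does, caching included (slightly simplified here):
private int hash;  // cached hash code; 0 until computed (strings are immutable)

public int hashCode() {
    int h = hash;
    if (h == 0 && length() > 0) {
        for (int i = 0; i < length(); i++)
            h = 31 * h + charAt(i);   // Horner's method: L multiplies/adds
        hash = h;
    }
    return h;
}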

user defined hashCode
public int hashCode() 
    int hash = 17;
    hash = 31*hash + who.hashCode();
    hash = 31*hash + when.hashCode();
    hash = 31*hash + ((Double) amount).hashCode();
    return hash;
– if a field is null, use 0 for its hash
– need to use the whole key to compute hash code

Modular hashing
– two problems:
hash code : int from -2^31 .. 2^31 -1;
hash function : int from 0 .. M-1 (M : table size, prime or power of 2)
– good solution: idx = (key.hashCode() & 0x7fffffff) % M // drop the sign bit (as a method below)
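Wrapped as the usual private method (the standard idiom from the course code):
// map a 32-bit hashCode to a table index between 0 and M-1
private int hash(Key key) {
    return (key.hashCode() & 0x7fffffff) % M;
}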

about prime numbers
– a prime is used for its division properties: taking the key mod a prime M shares
no common factors with patterns in the keys, so all bits of the key are used (modular arithmetic)
– in short: a prime M gives equal likelihood for the key distribution
– 31 is also a Mersenne prime (like 127 or 8191) which is a prime number that is one less than a power of 2
This means that the mod can be done with one shift and one subtract if the machine's multiply instruction is slow

Uniform hashing assumption
– each key is equally likely to hash to an int 0 .. M-1
– bins and balls, throw balls uniformly at random into M bins
classic probability theory / combinatoric analysis:
– Birthday problem: expect two balls in the same bin after ~ sqrt(pi M/2) tosses
– Coupon collector: expect every bin has >= 1 ball after ~ M ln M tosses
– Load balancing: after M tosses, expect Θ(lg M / lg lg M) balls in the most loaded bin (quick experimental check below)
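A quick sanity check of the birthday bound (my own snippet, not from the lecture); for M = 10^6 we expect roughly sqrt(pi*M/2) ≈ 1253 tosses before the first collision:
// throw balls uniformly at random into M bins until some bin gets two
int M = 1_000_000;
boolean[] bin = new boolean[M];
java.util.Random rnd = new java.util.Random();
int tosses = 0;
while (true) {
    int b = rnd.nextInt(M);
    tosses++;
    if (bin[b]) break;   // first collision
    bin[b] = true;
}
System.out.println(tosses);  // averages around 1253 over many runs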

Collision resolution: Separate Chaining (linked lists)
collision: two distinct keys hashing to same index
Birthday problem => can't avoid collisions unless you have a quadratic amount of memory
Coupon collector + Load balancing => collisions are evenly distributed
need to deal with collisions efficiently
Separate Chaining: use an array of M < N linked lists (typical: M ~ N/5)
hash: map key to int idx: 0..M-1
insert: search, if null: put at front of idx-th chain
search: need to search only in one chain, idx-th
public void put(Key key, Value val) 
    int i = hash(key);
    if (!st[i].contains(key)) N++;
    st[i].put(key, val);
public Value get(Key key) 
    int i = hash(key);
    return st[i].get(key);
private SequentialSearchST<Key, Value>[] st; // linked list symbol table 
analysis : under uniform hashing assumption
probability that the number of keys in a list is within a constant factor of N/M
is extremely close to 1 (search M times faster than sequential search).
Pf: distribution of list size obeys a binomial distribution
(binomial distribution with parameters n and p is the discrete probability distribution
of the number of successes in a sequence of n independent yes/no experiments,
each of which yields success with probability p)
typical choice: M ~ N/5 => constant time ops.

Collision resolution: Linear Probing (next empty slot, M > N)
when a new key collides, find next empty slot, and put it there
M > N; null entries mark empty slots in the array (typical: M ~ 2N)
public Value get(Key key) 
    for (int i = hash(key); keys[i] != null; i = (i + 1) % M) 
        if (keys[i].equals(key)) return vals[i];
    return null;
public void put(Key key, Value val)
    int i;
    for (i = hash(key); keys[i] != null; i = (i + 1) % M)
        if (keys[i].equals(key)) { vals[i] = val; return; }
    keys[i] = key; vals[i] = val;
clustering problem
new keys likely to hash into middle of big clusters
Knuth's parking problem: what is mean displacement of a car?
Half full: with M/2 cars, mean displacement is ~ 3/2
Full: with M cars, mean displacement is ~ sqrt(pi M/8)
if N = aM (a < 1): search hit ~ 1/2(1 + 1/(1-a)); search miss ~ 1/2(1 + 1/(1-a)^2)
typical a = N/M ~ 1/2: search hit ~ 3/2 probes, search miss ~ 5/2 probes

Hash table context
– cost of computing the hash function can outweigh the benefit of O(1) search
– denial-of-service attacks: too many collisions, balanced BST can be better
– one-way hash functions: message digest, too expensive for use in ST implementation

Hash Table: separate chaining vs. linear probing
– sep.chaining: easier to implement, performance degrades gracefully
– lin.probing: less wasted space, better cache performance (problems: delete, resize)
alternatives
– two-probe hashing (separate-chaining): hash to 2 positions, insert in shorter of the two chains
– double hashing (linear probing): when search, skip a variable amount, not just 1
– Cuckoo hashing (lin.probing): hash key to two positions, insert into either;
if occupied, reinsert the displaced key into its alternative position (repeating recursively).
Constant time in worst case for search.

Symbol Table: Hash Tables vs. Balanced Search Trees
hash tables
– simpler to code
– no effective alternative for unordered keys
– faster for simple keys
balanced search trees
– stronger performance guarantee
– support for ordered ST ops
– easier to implement 'compareTo' correctly than 'equals' and 'hashCode'

Symbol Table apps
set; white list / black list; dictionary; index; sparse vectors
– read a list of words, print words that are (in / not in) the list
– dict: DNS lookup
– file indexing: ST with key: word, value: set of files where the word occurs
– concordance: find a word and print it with context:
ST with key: word, value: set of positions; print the context from index-n to index+n
– sparse vectors: HashST<Integer, Double> // dot product is constant time per nonzero entry (sketch below):
double dot(double[] that): sum = 0.0
for i in indices: sum += that[i] * this.get(i)
matrix: each row is a sparse vector => linear time for matrix-vector multiplication.
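A minimal sketch of the sparse vector, with java.util.HashMap standing in for the course's HashST:
import java.util.HashMap;
import java.util.Map;

public class SparseVector {
    private final HashMap<Integer, Double> st = new HashMap<>();

    public void put(int i, double value) {
        if (value == 0.0) st.remove(i);   // don't store zeros
        else              st.put(i, value);
    }

    public double get(int i) { return st.getOrDefault(i, 0.0); }

    // dot product touches only the nonzero entries:
    // constant time per nonzero, independent of the dimension
    public double dot(double[] that) {
        double sum = 0.0;
        for (Map.Entry<Integer, Double> e : st.entrySet())
            sum += that[e.getKey()] * e.getValue();
        return sum;
    }
}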

Programming Assignments


Program 1
Percolation, Union-Find app
goal: Write a program to estimate the value of the percolation threshold via Monte Carlo simulation

– for an NxN grid of cells/sites, open cells at random until the grid starts percolating
– We say the system percolates if there is a full site in the bottom row
– A full site is an open site that can be connected to an open site in the top row
via a chain of neighboring (left, right, up, down) open sites.

– problem: implement 'open', 'percolates', 'isFull' efficiently.
– simple solution: using virtual top & virtual bottom sites,
process dynamic connectivity tasks via UnionFind class.
– the simple solution leads to the backwash problem
– to avoid backwash, you need to get rid of the virtual bottom site
– it can be done with per-component states: isOpen, linked2Top, linked2Bottom;
each connected component (at its root cell) maintains/updates these states (see the sketch below)
– if you open a cell next to an open neighbor whose component is linked2Top, this cell is full
– if a component becomes linked to both top and bottom, the grid percolates
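A hedged sketch of that bookkeeping (field and method names are mine, not the assignment API); algs4's WeightedQuickUnionUF supplies union/find:
// status bits, meaningful only at the root of each component
private static final byte OPEN = 1, TOP = 2, BOTTOM = 4;
private byte[] status;            // one byte per site
private WeightedQuickUnionUF uf;  // edu.princeton.cs.algs4
private boolean percolates;

// union two open sites, merging the status bits of their roots
private void link(int p, int q) {
    byte merged = (byte) (status[uf.find(p)] | status[uf.find(q)]);
    uf.union(p, q);
    status[uf.find(p)] = merged;
    if ((merged & TOP) != 0 && (merged & BOTTOM) != 0) percolates = true;
}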


Program 2
Randomized Queues; Deques; pool sampling
– goal: write a generic data type for a deque and a randomized queue;
implement elementary data structures using arrays and linked lists,
and get acquainted with generics and iterators

– Deque: A double-ended queue or deque (pronounced "deck") is a generalization of a stack and a queue
that supports adding and removing items from either the front or the back of the data structure
– Performance: Your deque implementation must support each deque operation in constant worst-case time
this leads to a linked-list implementation: a doubly-linked list (node sketch below).
– removing objects from collection: avoid loitering, set ref to null
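A minimal sketch of the doubly-linked node (field names are mine):
private class Node {
    Item item;
    Node prev, next;  // two-way links: O(1) add/remove at both ends
}
addFirst/addLast/removeFirst/removeLast then just rewire the head/tail pointers in constant time.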

– RandomizedQueue: A randomized queue is similar to a stack or queue,
except that the item removed is chosen uniformly at random from items in the data structure.
– Performance: Your randomized queue implementation must support each randomized queue operation
(besides creating an iterator) in constant amortized time
this leads to a resizing-array implementation
// dequeue: select a random item, swap it with the head, shrink the queue
int rnd = StdRandom.uniform(itemsCount);
int idx = (head + rnd) % data.length;
Item res = data[idx];
data[idx] = data[head];
data[head] = null; // avoid loitering
head = (head + 1) % data.length;
itemsCount--;
return res;

– Subset client (Reservoir sampling problem, linked to Knuth's shuffling)
Write a client program Subset.java that takes a command-line integer k;
reads in a sequence of N strings from standard input using StdIn.readString();
and prints out exactly k of them, uniformly at random
– For an extra challenge, use only one Deque or RandomizedQueue object of maximum size at most k
RandomizedQueue<String> rq = new RandomizedQueue<>();
for (int cnt = 1; !StdIn.isEmpty(); cnt++) {
    String item = StdIn.readString();
    if (cnt <= k) rq.enqueue(item);
    // each of the N items ends up in the sample with probability k/N
    else if (StdRandom.uniform(cnt) < k) {
        rq.dequeue(); rq.enqueue(item);
    } }


Program 3
Collinear Points, Pattern Recognition (3-sum like problem)
Given a set of N distinct points in the plane, find every (maximal) line segment
that connects a subset of 4 or more of the points

– compare points: natural order (by y-, then x-coordinate);
slope order: compare the slope of segment v-p1 to the slope of v-p2
The slopeOrder() method should return a comparator that compares its two argument points
by the slopes they make with the invoking point (x0, y0)
– Treat the slope of a horizontal line segment as positive zero;
– treat the slope of a vertical line segment as positive infinity;
– treat the slope of a degenerate line segment (between a point and itself) as negative infinity.

– Write a program BruteCollinearPoints.java that examines 4 points at a time
and checks whether they all lie on the same line segment, returning all such line segments
N^4 time, trivial.

– FastCollinearPoints.java: A faster, sorting-based solution
Given a point p, the following method determines whether p participates in a set of
4 or more collinear points:
Think of p as the origin.
Sort the points according to the slopes they make with p.
Check if any 3 (or more) adjacent points in the sorted order have equal slopes with respect to p.
If so, these points, together with p, are collinear.

– N^2 lg N in the worst case
The best solution is a simple one; don't overthink it (see the sketch below).
1. sort points by natural order;
2. for each point (the origin): copy the whole points array into arr2;
– sort arr2 by the origin's slope order (you need a stable sort to keep the natural order within equal-slope runs intact);
– scan arr2 for runs of adjacent points with equal slopes to the origin (arr2[0] == origin);
– a run of 3 or more such points, together with the origin, forms a collinear segment;
– when the run ends, store the segment only if origin < its first point (to avoid duplicates).
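A hedged sketch of that loop; Point's compareTo/slopeTo/slopeOrder and LineSegment are the assignment's real API, while 'points' and the 'segments' list belong to the assumed surrounding class (java.util.Arrays imported):
Arrays.sort(points);                          // 1. natural order (y, then x)
for (Point origin : points) {
    Point[] arr2 = points.clone();            // 2. fresh copy for each origin
    Arrays.sort(arr2, origin.slopeOrder());   // stable merge sort keeps natural order
    int lo = 1;                               // arr2[0] == origin (slope is -inf)
    while (lo < arr2.length) {
        int hi = lo;
        double slope = origin.slopeTo(arr2[lo]);
        while (hi + 1 < arr2.length && origin.slopeTo(arr2[hi + 1]) == slope)
            hi++;                             // extend the run of equal slopes
        // a run of 3+ points plus the origin is collinear;
        // store it only when origin is the smallest point, to avoid duplicates
        if (hi - lo >= 2 && origin.compareTo(arr2[lo]) < 0)
            segments.add(new LineSegment(origin, arr2[hi]));
        lo = hi + 1;
    }
}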


Program 4
8-Puzzle, PriorityQueue app, A* search
A* algorithm: best-first graph search guided by a heuristic

Write a program to solve the 8-puzzle problem (and its natural generalizations) using the A* search algorithm
It is played on a 3-by-3 grid with 8 square blocks labeled 1 through 8 and a blank square.
Your goal is to rearrange the blocks so that they are in order, using as few moves as possible.

– We define a search node of the game to be a board, the number of moves made to reach it, and the previous search node
Algorithm:
– First, insert the initial search node (the initial board, 0 moves, and a null previous search node) into a priority queue
– Then, delete from the priority queue the search node with the minimum priority (delMin)
– insert onto the priority queue all neighboring search nodes
(those that can be reached in one move from the dequeued search node)
– Repeat this procedure until the search node dequeued corresponds to a goal board (see the sketch below)
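A hedged sketch of that loop, using algs4's MinPQ; the SearchNode type (board, moves, prev; Comparable by priority) is assumed:
MinPQ<SearchNode> pq = new MinPQ<>();
pq.insert(new SearchNode(initial, 0, null));   // initial search node
SearchNode node = pq.delMin();
while (!node.board.isGoal()) {
    for (Board nb : node.board.neighbors())
        // critical optimization from the assignment:
        // don't re-enqueue the board we just came from
        if (node.prev == null || !nb.equals(node.prev.board))
            pq.insert(new SearchNode(nb, node.moves + 1, node));
    node = pq.delMin();
}
// following the prev chain from 'node' recovers the solution; node.moves is its length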

Manhattan priority function. The sum of the Manhattan distances
(sum of the vertical and horizontal distance) from the blocks to their goal positions,
plus the number of moves made so far to get to the search node
– To solve the puzzle from a given search node on the priority queue,
the total number of moves we need to make (including those already made) is at least its priority
– the priority never decreases along the search:
the Hamming and Manhattan heuristics are both admissible and consistent.
You may use this property as a debugging clue: if the priority of the search node dequeued
from the priority queue decreases, then you know you did something wrong

– detect unsolvable:
use the fact that boards are divided into two equivalence classes with respect to reachability:
(i) those that lead to the goal board and
(ii) those that lead to the goal board if we modify the initial board by swapping any pair of blocks
(the blank square is not a block).


Program 5
Kd-Trees, ordered Symbol Table (Binary Search Tree) app
2D BS tree for 'range' and 'nearest' points query

Write a data type to represent a set of points in the unit square
(all points have x- and y-coordinates between 0 and 1)
using a 2d-tree to support efficient range search
(find all of the points contained in a query rectangle) and
nearest neighbor search (find a closest point to a query point).

– The idea is to build a BST with points in the nodes, using the x- and y-coordinates
of the points as keys in strictly alternating sequence
– simple enough, except for some tricks/optimizations
– at the root we use the x-coordinate
(if the point to be inserted has a smaller x-coordinate than the point at the root, go left; otherwise go right)
– at the next level, we use the y-coordinate
(if the point to be inserted has a smaller y-coordinate than the point in the node, go left; otherwise go right)
– you can store a bounding box in each node, and then the solution is trivial.
I chose not to; that was the reason for some tricks in the 'draw' method, where I had to pass the bbox as a parameter

two more tricks:
– range search pruning rule: if the query rectangle does not intersect the rectangle corresponding to a node,
there is no need to explore that node (or its subtrees). A subtree is searched only if
it might contain a point contained in the query rectangle.
– Nearest neighbor search pruning rule: if the closest point discovered so far is closer than
the distance between the query point and the rectangle corresponding to a node,
there is no need to explore that node (or its subtrees).
That is, a node is searched only if it might contain a point that is closer than the best one found so far.
To do this, organize your recursive method so that, when there are two possible subtrees to go down,
you always explore first the subtree on the same side of the splitting line as the query point






original post http://vasnake.blogspot.com/2016/03/algorithms-part-i-princeton-university.html
