A programmer's notes, about everything and nothing. But mostly about professional matters, I suppose.

2016-05-12

Algorithms, Part II. Princeton University, Robert Sedgewick

A summary of the 7-week course

A very useful and relatively demanding course for wannabe programmers:
Algorithms, Part II by Robert Sedgewick
Algorithms, 4th edition textbook libraries https://github.com/kevin-wayne/algs4

Extras
-- Python implementations of selected Princeton Java Algorithms and Clients by Robert Sedgewick and Kevin Wayne
-- Scala translations of Robert Sedgewick's Java Algorithms

For the first part of the course, see http://vasnake.blogspot.com/2016/03/algorithms-part-i-princeton-university.html

This course covers the essential information that every serious programmer needs to know about algorithms and data structures, with emphasis on applications and scientific performance analysis of Java implementations.
Part II covers graph-processing algorithms, including minimum spanning tree and shortest paths algorithms,
and string processing algorithms, including string sorts, tries, substring search, regular expressions, and data compression,
and concludes with an overview placing the contents of the course in a larger context.

The course is hard for two reasons:
– Robert, frankly, is not a great lecturer. His voice acts on me like a powerful sedative.
Things go somewhat better if you set the playback speed to 1.25.
On top of that, he leaves some important topics for self-study, as in: go read my book.
You will have to read it; the book is useful.
– Labor-intensive exercises and non-trivial problems (the test system is especially entertaining). You have to think hard and dig into additional materials.

So it takes a lot of time. But it is worth it.

Covered over the 6 weeks:

Chapter 4: Graphs surveys the most important graph processing problems, including depth-first search, breadth-first search, minimum spanning trees, and shortest paths.
Chapter 5: Strings investigates specialized algorithms for string processing, including radix sorting, substring search, tries, regular expressions, and data compression.
And more.

Namely:

Week 1

Lecture: Undirected Graphs. We define an undirected graph API and consider the adjacency-matrix and adjacency-lists representations. We introduce two classic algorithms for searching a graph—depth-first search and breadth-first search. We also consider the problem of computing connected components and conclude with related problems and applications.

Lecture: Directed Graphs. In this lecture we study directed graphs. We begin with depth-first search and breadth-first search in digraphs and describe applications ranging from garbage collection to web crawling. Next, we introduce a depth-first search based algorithm for computing the topological order of an acyclic digraph. Finally, we implement the Kosaraju-Sharir algorithm for computing the strong components of a digraph.

Week 2

Lecture: Minimum Spanning Trees. In this lecture we study the minimum spanning tree problem. We begin by considering a generic greedy algorithm for the problem. Next, we consider and implement two classic algorithms for the problem—Kruskal's algorithm and Prim's algorithm. We conclude with some applications and open problems.

Lecture: Shortest Paths. In this lecture we study shortest-paths problems. We begin by analyzing some basic properties of shortest paths and a generic algorithm for the problem. We introduce and analyze Dijkstra's algorithm for shortest-paths problems with nonnegative weights. Next, we consider an even faster algorithm for DAGs, which works even if the weights are negative. We conclude with the Bellman-Ford-Moore algorithm for edge-weighted digraphs with no negative cycles. We also consider applications ranging from content-aware fill to arbitrage.

Week 3

Lecture: Maximum Flow and Minimum Cut. In this lecture we introduce the maximum flow and minimum cut problems. We begin with the Ford-Fulkerson algorithm. To analyze its correctness, we establish the maxflow-mincut theorem. Next, we consider an efficient implementation of the Ford-Fulkerson algorithm, using the shortest augmenting path rule. Finally, we consider applications, including bipartite matching and baseball elimination.

Lecture: Radix Sorts. In this lecture we consider specialized sorting algorithms for strings and related objects. We begin with a subroutine (key-indexed counting) to sort integers in a small range. We then consider two classic radix sorting algorithms—LSD and MSD radix sorts. Next, we consider an especially efficient variant, which is a hybrid of MSD radix sort and quicksort known as 3-way radix quicksort. We conclude with suffix sorting and related applications.

Week 5

Lecture: Tries. In this lecture we consider specialized algorithms for symbol tables with string keys. Our goal is a data structure that is as fast as hashing and even more flexible than binary search trees. We begin with multiway tries; next we consider ternary search tries. Finally, we consider character-based operations, including prefix match and longest prefix, and related applications.

Lecture: Substring Search. In this lecture we consider algorithms for searching for a substring in a piece of text. We begin with a brute-force algorithm, whose running time is quadratic in the worst case. Next, we consider the ingenious Knuth-Morris-Pratt algorithm whose running time is guaranteed to be linear in the worst case. Then, we introduce the Boyer-Moore algorithm, whose running time is sublinear on typical inputs. Finally, we consider the Rabin-Karp fingerprint algorithm, which uses hashing in a clever way to solve the substring search and related problems.

Week 6

Lecture: Regular Expressions. A regular expression is a method for specifying a set of strings. Our topic for this lecture is the famous grep algorithm that determines whether a given text contains any substring from the set. We examine an efficient implementation that makes use of our digraph reachability implementation from Week 1.

Lecture: Data Compression. We study and implement several classic data compression schemes, including run-length coding, Huffman compression, and LZW compression. We develop efficient implementations from first principles using a Java library for manipulating binary data that we developed for this purpose, based on priority queue and symbol table implementations from earlier lectures.

Week 7

Lecture: Reductions. In this lecture our goal is to develop ways to classify problems according to their computational requirements. We introduce the concept of reduction as a technique for studying the relationship among problems. People use reductions to design algorithms, establish lower bounds, and classify problems in terms of their computational requirements.

Lecture (optional): Linear Programming. The quintessential problem-solving model is known as linear programming, and the simplex method for solving it is one of the most widely used algorithms. In this lecture, we give an overview of this central topic in operations research and describe its relationship to algorithms that we have considered.

Lecture: Intractability. Is there a universal problem-solving model to which all problems that we would like to solve reduce and for which we know an efficient algorithm? You may be surprised to learn that we do not know the answer to this question. In this lecture we introduce the complexity classes P, NP, and NP-complete, pose the famous P=NP question, and consider implications in the context of algorithms that we have treated in this course.




Graph: set of vertices connected pairwise by edges

undirected graph

Vertices: integers 0..V-1.

Adjacency list, implicit edges: array size V, item n is a bag of vertices adjacent to vertex n

Edges data models:
– list of edges: edge = pair of vertices, O(E) query time;
– adjacency matrix: O(V^2) space, O(V) query time;
– adjacency list (array of lists): O(E+V) space. addEdge(v, w) => adj[v] = w; adj[w] = v
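For reference, a minimal sketch of the adjacency-list representation the code below relies on (Bag is the algs4 multiset type; any list would do):
public class Graph {
    private final int V;                  // number of vertices
    private Bag<Integer>[] adj;           // adj[v] = vertices adjacent to v
    public Graph(int V) {
        this.V = V;
        adj = (Bag<Integer>[]) new Bag[V];
        for (int v = 0; v < V; v++) adj[v] = new Bag<Integer>(); }
    public int V() { return V; }
    public void addEdge(int v, int w) {   // undirected: put w on v's list and v on w's list
        adj[v].add(w);
        adj[w].add(v); }
    public Iterable<Integer> adj(int v) { return adj[v]; } }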

Path: sequence of vertices connected by edges.

Cycle: path whose first and last vertices are the same.

DFS (Depth First Search): mark v as visited, recursively visit all unmarked w from v.adjacent.
Write v to edgeTo[w] (cameFrom[w] = v)
properties:
– mark all vertices connected to v: time ~(sum of their degrees)
– bool connected: time O(1)
– pathTo: time ~(length(path))
public DepthFirstPaths(Graph G, int s) {
    this.s = s;
    edgeTo = new int[G.V()];
    marked = new boolean[G.V()];
    dfs(G, s); }
private void dfs(Graph G, int v) {
    marked[v] = true;
    for (int w : G.adj(v)) {
        if (!marked[w]) {
            edgeTo[w] = v;
            dfs(G, w); } } }

BFS (Breadth First Search): add vertex s to queue, iterations:
remove v from queue, add to queue all unmarked w from v.adjacent;
write v to edgeTo[w] (cameFrom[w] = v);
write the BFS level to distTo[w] (distTo[w] = distTo[v] + 1).
properties:
– shortest path to all connected vs (min number of edges): time ~(E+V)
public BreadthFirstPaths(Graph G, int s) {
    marked = new boolean[G.V()];
    distTo = new int[G.V()];
    edgeTo = new int[G.V()];
    bfs(G, s); }
private void bfs(Graph G, int s) {
    Queue<Integer> q = new Queue<Integer>();
    for (int v = 0; v < G.V(); v++) distTo[v] = INFINITY;
    distTo[s] = 0;
    marked[s] = true;
    q.enqueue(s);
    while (!q.isEmpty()) {
        int v = q.dequeue();
        for (int w : G.adj(v)) {
            if (!marked[w]) {
                edgeTo[w] = v;
                distTo[w] = distTo[v] + 1;
                marked[w] = true;
                q.enqueue(w); } } } }

Connected Components: maximal set of connected vertices.
– relation 'is connected to' is an equivalence relation:
reflexive (v conn to v),
symmetric (if v is conn to w, then w is conn to v),
transitive (v-w, w-x => v-x)
– For each unmarked vertex run DFS, ccId[v] = iterNum.
public CC(Graph G) {
    marked = new boolean[G.V()];
    id = new int[G.V()];
    size = new int[G.V()];
    for (int v = 0; v < G.V(); v++) {
        if (!marked[v]) {
            dfs(G, v);
            count++; } } }
private void dfs(Graph G, int v) {
    marked[v] = true;
    id[v] = count;
    size[count]++;
    for (int w : G.adj(v)) {
        if (!marked[w]) {
            dfs(G, w); } } }

problems

Is graph bipartite? See textbook, DFS solution
– v and w on each edge belong to two different sets (blue-green-blue...)

Find a cycle? See textbook, DFS solution
– DFS stumbles onto an already marked vertex (other than the one it came from)

Euler tour: is there a cycle that uses each edge exactly once?
– Can be done in linear time.
– Connected, even degree for each vertex is a necessary condition.

Hamilton tour: is there a cycle that uses each vertex exactly once?
NP-complete, nobody knows an efficient solution for large graphs.

graph isomorphism: Are two graphs identical except for vertex names?
Open problem, n! ways to rename vertices.

Graph layout (in the plane) without crossing edges?
Linear time DFS-based solution, too complicated.

Directed graphs, Digraph: set of vertices connected pairwise by directed edges


implicit edges, adjacency list (space E+V, query time outdegree(v))
– addEdge(v, w) => adj[v] = w;
– outdegree, indegree
– directed path, cycle

Every undirected graph is a digraph with edges in both directions.

Application example: mark-sweep garbage collector: mark all reachable objects; everything unmarked is garbage.

DFS on Digraphs: reachability; path finding; topological sort; directed cycle detection.

BFS: Multi-source shortest path (fewest number of edges):
– run BFS with all source vertices enqueued
– properties: computes shortest path from s to all other vertices in time ~E+V
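A rough sketch of the multi-source variant, assuming the same marked[]/distTo[]/edgeTo[] fields and Queue as in BreadthFirstPaths above:
private void bfs(Graph G, Iterable<Integer> sources) {
    Queue<Integer> q = new Queue<Integer>();
    for (int s : sources) {               // enqueue every source at distance 0
        marked[s] = true;
        distTo[s] = 0;
        q.enqueue(s); }
    while (!q.isEmpty()) {
        int v = q.dequeue();
        for (int w : G.adj(v)) {
            if (!marked[w]) {
                edgeTo[w] = v;
                distTo[w] = distTo[v] + 1;
                marked[w] = true;
                q.enqueue(w); } } } }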

Why does a web crawler use BFS? With DFS you can get trapped in dynamically generated pages, going deeper and deeper.

Topological Sort: we want linear order for items (tasks, vertices) from digraph.
For toposort we need DAG (directed acyclic graph). If cycle is found (see textbook), toposort impossible.
Toposort algorithm:
– run DFS, collect vertices in postorder (push v when there is nowhere left to go from v, i.e. on return from the recursion)
– reverse postorder (pop from stack).
postorder: the sequence of 'dead end' vertices, each enqueued after dfs(v) finishes
public DepthFirstOrder(Digraph G) {
    pre    = new int[G.V()];
    post   = new int[G.V()];
    postorder = new Queue<Integer>();
    preorder  = new Queue<Integer>();
    marked    = new boolean[G.V()];
    for (int v = 0; v < G.V(); v++)
        if (!marked[v]) dfs(G, v); }
private void dfs(Digraph G, int v) {
    marked[v] = true;
    pre[v] = preCounter++;
    preorder.enqueue(v);
    for (int w : G.adj(v)) {
        if (!marked[w]) {
            dfs(G, w); } }
    postorder.enqueue(v);
    post[v] = postCounter++; }
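The topological order itself is the reversed postorder; a small accessor sketch using the postorder queue collected above:
public Iterable<Integer> reversePost() {
    Stack<Integer> reverse = new Stack<Integer>();
    for (int v : postorder) reverse.push(v);
    return reverse; }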

directed cycle detection (textbook): DFS + stack_for_cycle
public DirectedCycle(Digraph G) {
    marked  = new boolean[G.V()];
    onStack = new boolean[G.V()];
    edgeTo  = new int[G.V()];
    for (int v = 0; v < G.V(); v++)
        if (!marked[v] && cycle == null) dfs(G, v); }
private void dfs(Digraph G, int v) {
    onStack[v] = true;
    marked[v] = true;
    for (int w : G.adj(v)) {
        if (cycle != null) return;
        else if (!marked[w]) {
            edgeTo[w] = v;
            dfs(G, w); }
        else if (onStack[w]) {
            cycle = new Stack<Integer>();
            for (int x = v; x != w; x = edgeTo[x]) {
                cycle.push(x); }
            cycle.push(w);
            cycle.push(v);
            assert check(); } }
    onStack[v] = false; }

Strongly-Connected Components: vertices v, w are strongly connected if there is a directed path v-w and w-v.
A DAG has V strong components, by definition.
property:
– Strong Connectivity is an equivalence relation:
v sc to v; if v sc to w => w sc to v; if v sc to w and w sc to x => v sc to x.
SC: maximal subset of strongly-connected vertices

Application: software module dependency graph, strong component = bad design

Find SCC for digraph: two pass DFS, Kosaraju-Sharir algorithm:
kernel DAG: contract each strong component into a single vertex.
– find postorder for reversed digraph (DFS, toposort)
– for each v from reversed postorder: run DFS (or BFS) and write component id (compId[v] = count)
property: computes SC in time ~E+V
public KosarajuSharirSCC(Digraph G) {
    DepthFirstOrder dfs = new DepthFirstOrder(G.reverse());
    marked = new boolean[G.V()];
    id = new int[G.V()];
    for (int v : dfs.reversePost()) {
        if (!marked[v]) {
            dfs(G, v);
            count++; } } }
private void dfs(Digraph G, int v) {
    marked[v] = true;
    id[v] = count;
    for (int w : G.adj(v)) {
        if (!marked[w]) dfs(G, w); } }

Minimum Spanning Tree: spanning tree (subgraph) of minimum total edge weight

undirected graph, explicit edges with weight

tree: connected acyclic undirected graph;
spanning: includes all V vertices, and V-1 edges;
spanning tree: subgraph, list of edges, positive edge weights.
Goal: given graph G, find a min weight spanning tree.

assume: edge weights are distinct; graph is connected => MST exists and is unique.
Cut property: In any cut, crossing edge of min weight is in the MST
– a cut in a graph is a partition of its vertices into two nonempty sets;
– a crossing edge connects a vertex in one set with a vertex in the other;

Greedy MST algorithm:
– find cut with no black crossing edges;
– color the crossing edge of min weight black;
– repeat until V-1 edges are black.

Weighted graph API with explicit edges. Edge class.
Before we had edges implicitly, using adjacency list (adj[v] = w).
Now the adjacency lists hold Edge objects, not vertex ids.
int v = e.either(), w = e.other(v);
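A minimal sketch of such an Edge type, mirroring the either/other/weight calls used below (comparison by weight so it can sit in a MinPQ):
public class Edge implements Comparable<Edge> {
    private final int v, w;
    private final double weight;
    public Edge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
    public double weight() { return weight; }
    public int either() { return v; }       // one endpoint
    public int other(int vertex) {          // the other endpoint
        if (vertex == v) return w;
        else return v; }
    public int compareTo(Edge that) { return Double.compare(this.weight, that.weight); } }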

Efficient implementations of a greedy MST algorithm:
– Kruskal's (all edges PQ)
– Prim's lazy (connected edges PQ); Prim's eager (connected vertices IndexedPQ);
Variants: way to choose cut? Find min-weight edge?

Kruskal's algorithm: sort edges by weight, ascending; add edge to a tree unless that creates a cycle.
Cycle: if both edge vertices in same set => make use of UnionFind.
if !uf.connected(v, w): add edge to mst // if v and w belong to different sets
Maintain a set for each connected component in MST; adding v-w to T merge sets containing v and w (uf.union(v, w)).
Time O(E lg E): up to E del-min calls on the sorted-edges priority queue.
public KruskalMST(EdgeWeightedGraph G) {
    MinPQ<Edge> pq = new MinPQ<Edge>();
    for (Edge e : G.edges()) pq.insert(e); 
    UF uf = new UF(G.V());
    while (!pq.isEmpty() && mst.size() < G.V() - 1) {
        Edge e = pq.delMin();
        int v = e.either(); int w = e.other(v);
        if (!uf.connected(v, w)) { // v-w does not create a cycle
            uf.union(v, w);  // merge v and w components
            mst.enqueue(e);  // add edge e to mst
            weight += e.weight(); } } }

Prim's algorithm: greedily grow MST adding min-weight edges
– start from vertex 0;
– add to tree T the min-weight edge with exactly one vertex in T;
– repeat until V-1 edges.
Use Priority Queue for finding edge => O(log E)
Overall time: O(E lg E)

Prim's lazy implementation: maintain a PQ of edges (sorted by weight) with (at least) one endpoint in T;
and marked[v] array for detecting 'vertex in MST';
– del-min next candidate to adding to T;
– get next if both vertices in T;
– let w not in T: add to PQ edges incident to w;
– add w to T.
Uses space proportional to E and time proportional to E lg E (in the worst case).
public LazyPrimMST(EdgeWeightedGraph G) {
    mst = new Queue<Edge>();
    pq = new MinPQ<Edge>();
    marked = new boolean[G.V()];
    for (int v = 0; v < G.V(); v++)     // run Prim from all vertices to
        if (!marked[v]) prim(G, v); }   // get a minimum spanning forest
private void prim(EdgeWeightedGraph G, int s) {
    scan(G, s);
    while (!pq.isEmpty()) {                        // better to stop when mst has V-1 edges
        Edge e = pq.delMin();                      // smallest edge on pq
        int v = e.either(), w = e.other(v);        // two endpoints
        if (marked[v] && marked[w]) continue;      // lazy, both v and w already scanned
        mst.enqueue(e);                            // add e to MST
        weight += e.weight();
        if (!marked[v]) scan(G, v);                // v becomes part of tree
        if (!marked[w]) scan(G, w); } }            // w becomes part of tree
private void scan(EdgeWeightedGraph G, int v) {
    marked[v] = true;
    for (Edge e : G.adj(v)) 
        if (!marked[e.other(v)]) pq.insert(e); }

Prim's eager implementation: maintain an indexed PQ of vertices connected to T;
priority of vertex v = weight of lightest edge connecting v to T.
– del-min vertex v, add v-w to T;
– update PQ by considering all v-x edges incident to v;
skip if x is already in T;
add x to PQ if not in it already;
decrease the key of x if v-x is lighter than the other x-T edges.
More efficient: uses space proportional to V and time proportional to E lg V (in the worst case).
public PrimMST(EdgeWeightedGraph G) {
    edgeTo = new Edge[G.V()];
    distTo = new double[G.V()];
    marked = new boolean[G.V()];
    pq = new IndexMinPQ<Double>(G.V());
    for (int v = 0; v < G.V(); v++) distTo[v] = Double.POSITIVE_INFINITY;
    for (int v = 0; v < G.V(); v++)      // run from each vertex to find
        if (!marked[v]) prim(G, v); }    // minimum spanning forest
private void prim(EdgeWeightedGraph G, int s) {
    distTo[s] = 0.0;
    pq.insert(s, distTo[s]);
    while (!pq.isEmpty()) {
        int v = pq.delMin();
        scan(G, v); } }
private void scan(EdgeWeightedGraph G, int v) {
    marked[v] = true;
    for (Edge e : G.adj(v)) {
        int w = e.other(v);
        if (marked[w]) continue;         // v-w is obsolete edge
        if (e.weight() < distTo[w]) {
            distTo[w] = e.weight();
            edgeTo[w] = e;
            if (pq.contains(w)) pq.decreaseKey(w, distTo[w]);
            else                pq.insert(w, distTo[w]); } } }
Prim's running time depends on PQ implementation:
– unordered array for dense graphs (O(V) for the delMin op, O(V^2) total),
– binary heap for sparse (O(lg V) for delMin op, O(ElgV) total);
– 4-way heap for performance-critical.

Euclidean MST: N points in plane, edges = Euclidean distance between all pairs.
– implicit dense graph
– N^2 edges => V^2 time for Prim's algorithm => no good.
– Use 'closeness' subgraphs (Voronoi diagram, Delaunay triangulation): ~(c N log N)

K-Clustering: divide a set of objects into k coherent groups
Single-link clustering:
– form V clusters, find closest pair from different clusters, merge two clusters.
– Repeat until K clusters (K connected components).
This is Kruskal's MST algorithm
Alternative: run Prim's algorithm and delete k-1 max weight edges.

Indexed Priority Queue: allows the client to change a key by specifying its index
– update data[n] and restore heap order
Implementation in the book:
– modified MinPQ code
– arrays keys[], pq[], qp[]
– keys[idx] = weight of vertex[idx], key for comparator
– qp[idx] = n; n is a binary heap position 1..N
– pq[n] = idx
e.g. change key for vertex #6, key = 'Y'
– change the value, keys[6] = 'Y'
– update heap, swim(qp[6])
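A hypothetical fragment of that change-key operation, assuming the keys[]/pq[]/qp[] arrays and swim() described above:
public void decreaseKey(int i, Key key) {   // i = client index, e.g. a vertex number
    keys[i] = key;                          // change the value
    swim(qp[i]); }                          // restore heap order from i's heap position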

Shortest Paths: edge-weighted digraph

find the shortest path from s to t on edge-weighted digraph.

possible variants:
– vertices: source-sink; single source; all pairs
– edges: nonnegative weights; arbitrary weights; Euclidean weights
– cycles: no directed cycles; no negative cycles

API:
– DirectedEdge: v = e.from(), w = e.to(), e.weight()
– EdgeWeightedDigraph: edges = g.adj(v)
– SP: pathLength = sp.distTo(int v); edges = sp.pathTo(int v)
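A minimal sketch of the DirectedEdge type behind the from/to/weight calls used below:
public class DirectedEdge {
    private final int v, w;
    private final double weight;
    public DirectedEdge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
    public int from() { return v; }           // tail of the edge
    public int to() { return w; }             // head of the edge
    public double weight() { return weight; } }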

SP properties: edge relaxation; optimality condition
– SPT (Shortest Path Tree) exists => SPT can be represented as distTo[v], edgeTo[v]
edgeTo is a parent-link structure, using stack => path
public Iterable<DirectedEdge> pathTo(int v) {
    Stack<DirectedEdge> path = new Stack<DirectedEdge>();
    for (DirectedEdge e = edgeTo[v]; e != null; e = edgeTo[e.from()]) 
        path.push(e);
    return path; }

Edge relaxation: if distTo[w] > distTo[v] + e.weight: edgeTo[w] = e; distTo[w] = distTo[v] + e.weight
if found edge/path shorter than known → update distTo, edgeTo.
– distTo[v] is length of shortest known path s-v;
– distTo[w] is length of shortest known path s-w;
– edgeTo[w] is last edge on shortest known path s-w;

SP optimality: distTo[s] = 0; distTo[w] <= distTo[v] + e.weight
i.e. the distances correspond to actual paths and no edge can be relaxed any further

Generic SP algorithm: set distTo[s] = 0; distTo[v] = infinity for all other v;
– repeat until optimality: relax any edge.
implementations: how to choose which edge to relax?

Dijkstra's SP algorithm (digraph, nonnegative weights): like Prim's eager algorithm
– select the vertex (not yet in the SPT) with the lowest distance from s;
– add vertex to tree, relax all edges pointing from that vertex.
Uses indexed priority queue, distTo[w] as sorting key.
public DijkstraSP(EdgeWeightedDigraph G, int s) {
    for (DirectedEdge e : G.edges()) if (e.weight() < 0) throw new IllegalArgumentException("negative edge weight: " + e);
    distTo = new double[G.V()];
    edgeTo = new DirectedEdge[G.V()];
    for (int v = 0; v < G.V(); v++) distTo[v] = Double.POSITIVE_INFINITY;
    distTo[s] = 0.0;
    pq = new IndexMinPQ<Double>(G.V());
    pq.insert(s, distTo[s]);
    while (!pq.isEmpty()) {
        int v = pq.delMin();
        for (DirectedEdge e : G.adj(v)) relax(e); } }
private void relax(DirectedEdge e) {
    int v = e.from(), w = e.to();
    if (distTo[w] > distTo[v] + e.weight()) {
        distTo[w] = distTo[v] + e.weight();
        edgeTo[w] = e;
        if (pq.contains(w)) pq.decreaseKey(w, distTo[w]);
        else                pq.insert(w, distTo[w]); } }
Running time: depends on PQ implementation:
array O(V^2) for dense graph;
binary (or 4-way) heap O(E log V) for sparse.

Spanning Tree algorithms family: distinction: rule used to choose next vertex
– Dijkstra's (closest v. to the source, directed path),
– Prim's (closest v. to the tree, undirected edge).
– DFS, BFS also in ST family.

Toposort algorithm, SP on edge-weighted DAG (neg. weights allowed, no cycles):
– using topological order select next v.
– relax all edges from v.
Time O(E + V), linear.
topoorder: reversed postorder from DFS
public AcyclicSP(EdgeWeightedDigraph G, int s) {
    distTo = new double[G.V()];
    edgeTo = new DirectedEdge[G.V()];
    for (int v = 0; v < G.V(); v++) distTo[v] = Double.POSITIVE_INFINITY;
    distTo[s] = 0.0;
    Topological topological = new Topological(G);
    if (!topological.hasOrder()) throw new IllegalArgumentException("Digraph is not acyclic");
    for (int v : topological.order())
        for (DirectedEdge e : G.adj(v)) relax(e); }
private void relax(DirectedEdge e) {
    int v = e.from(), w = e.to();
    if (distTo[w] > distTo[v] + e.weight()) {
        distTo[w] = distTo[v] + e.weight();
        edgeTo[w] = e; } }

Negative cycle: directed cycle with negative sum of edge weights.
SPT exists iff no negative cycles.

Bellman-Ford algorithm (negative weights allowed, no negative cycles): a dynamic-programming algorithm.
– distTo[s] = 0; distTo[v] = inf.
– V times relax E edges.
Improvement: if distTo[v] did not change during iteration i => no need to relax edges v-w in iteration i+1;
maintain FIFO queue of vertices with changed distTo
time O(E V)
public BellmanFordSP(EdgeWeightedDigraph G, int s) {
    distTo  = new double[G.V()];
    edgeTo  = new DirectedEdge[G.V()];
    onQueue = new boolean[G.V()];
    for (int v = 0; v < G.V(); v++) distTo[v] = Double.POSITIVE_INFINITY;
    distTo[s] = 0.0;
    queue = new Queue<Integer>();
    queue.enqueue(s); onQueue[s] = true;
    while (!queue.isEmpty() && !hasNegativeCycle()) {
        int v = queue.dequeue(); onQueue[v] = false;
        relax(G, v); } }
private void relax(EdgeWeightedDigraph G, int v) {
    for (DirectedEdge e : G.adj(v)) {
        int w = e.to();
        if (distTo[w] > distTo[v] + e.weight()) {
            distTo[w] = distTo[v] + e.weight();
            edgeTo[w] = e;
            if (!onQueue[w]) {
                queue.enqueue(w); onQueue[w] = true; } }
        if (cost++ % G.V() == 0) {
            findNegativeCycle();
            if (hasNegativeCycle()) return; } } }   // found a negative cycle

Find negative cycle using Bellman-Ford alg.:
– if any vertex v is updated in phase V, there exists a negative cycle
private void findNegativeCycle() {
    EdgeWeightedDigraph spt = new EdgeWeightedDigraph(V);
    for (int v = 0; v < V; v++)
        if (edgeTo[v] != null) spt.addEdge(edgeTo[v]);
    EdgeWeightedDirectedCycle finder = new EdgeWeightedDirectedCycle(spt);
    cycle = finder.cycle(); }
public EdgeWeightedDirectedCycle(EdgeWeightedDigraph G) {
    marked  = new boolean[G.V()];
    onStack = new boolean[G.V()];
    edgeTo  = new DirectedEdge[G.V()];
    for (int v = 0; v < G.V(); v++)
        if (!marked[v]) dfs(G, v); }
private void dfs(EdgeWeightedDigraph G, int v) {
    onStack[v] = true;
    marked[v] = true;
    for (DirectedEdge e : G.adj(v)) {
        int w = e.to();
        if (cycle != null) return;
        else if (!marked[w]) {
            edgeTo[w] = e;
            dfs(G, w); }
        else if (onStack[w]) {
            cycle = new Stack<DirectedEdge>();
            while (e.from() != w) {
                cycle.push(e);
                e = edgeTo[e.from()]; }
            cycle.push(e);
            return; } }
    onStack[v] = false; }

Negative cycle application: arbitrage detection:
– vertex = currency
– edge = transaction, weight = exch.rate
find a directed cycle with product of edge.weights > 1
– replace weights: weight = -ln(weight)
– find neg.cycle.
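A rough sketch of that reduction, reusing the classes above; rate[][] is a hypothetical table of exchange rates, and hasNegativeCycle()/negativeCycle() are the accessors the algs4 BellmanFordSP exposes:
static void findArbitrage(double[][] rate) {
    int V = rate.length;
    EdgeWeightedDigraph G = new EdgeWeightedDigraph(V);
    for (int v = 0; v < V; v++)
        for (int w = 0; w < V; w++)
            if (v != w) G.addEdge(new DirectedEdge(v, w, -Math.log(rate[v][w])));
    BellmanFordSP spt = new BellmanFordSP(G, 0);   // negative cycle <=> product of rates > 1
    if (spt.hasNegativeCycle())
        for (DirectedEdge e : spt.negativeCycle())
            StdOut.println(e); }                   // edges of the arbitrage cycle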

DAG SP application: seam carving:
grid DAG, vertex = pixel, edge to 3 downward pixels, weight = energy function of the 8 neighboring pixels.

DAG Longest Path application: Parallel Job Scheduling
CPM, Critical Path Method:
– create edge-weighted DAG:
source and sink vertices;
2 vertices for each job;
3 edges for each job (begin-end, s-begin, end-t);
one edge for each precedence constraint.
– use longest path
Longest Path in edge-weighted DAG: negate all weights, find SP, negate weights in result.
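A sketch of that negation trick (the class name AcyclicLP here is just illustrative; AcyclicSP is the class above):
public class AcyclicLP {
    private final AcyclicSP sp;
    public AcyclicLP(EdgeWeightedDigraph G, int s) {
        EdgeWeightedDigraph negated = new EdgeWeightedDigraph(G.V());
        for (DirectedEdge e : G.edges())
            negated.addEdge(new DirectedEdge(e.from(), e.to(), -e.weight()));
        sp = new AcyclicSP(negated, s); }          // shortest paths in -G = longest paths in G
    public double distTo(int v) { return -sp.distTo(v); } }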

Maxflow-Mincut: cut of min capacity / max flow from s to t (edge-weighted digraph)

Explicit directed edges with weight='capacity', 'flow' attributes.

mincut problem: a cut of min capacity from A to B
– cut: vertices partitioned into 2 sets, s in set A, t in set B;
cut capacity: sum of the capacities of the edges from A to B (edges from B to A don't count)

maxflow problem: find a flow of max value
– flow: edges values from zero to capacity; inflow = outflow for each vertex (except s, t)
value of a flow: inflow at t

These two problems are dual
Flow-value lemma:
– net flow across a cut (A, B): is the sum of the flows on its edges A-B minus sum of the flows on its edges B-A.
– net flow for cut A-B == value of the flow (inflow at t) <= capacity of cut.

Maxflow-Mincut theorem:
– A flow f is a maxflow iff no augmenting paths
– value of the maxflow == capacity of mincut

Ford-Fulkerson algorithm: O(U*E^2)
– start with 0 flow;
– loop: find augmenting path,
– compute bottleneck capacity (bc) for path,
– increase flow on that path by bc.
Augmenting path: undirected path s-t in which no forward edge is full and no backward edge is empty;
a single full forward (or empty backward) edge blocks the path.
public FordFulkerson(FlowNetwork G, int s, int t) {
    while (hasAugmentingPath(G, s, t)) {
        double bottle = Double.POSITIVE_INFINITY;
        for (int v = t; v != s; v = edgeTo[v].other(v))
            bottle = Math.min(bottle, edgeTo[v].residualCapacityTo(v));
        for (int v = t; v != s; v = edgeTo[v].other(v))
            edgeTo[v].addResidualFlowTo(v, bottle); } }
private boolean hasAugmentingPath(FlowNetwork G, int s, int t) {
    edgeTo = new FlowEdge[G.V()];
    marked = new boolean[G.V()];
    Queue<Integer> queue = new Queue<Integer>();
    queue.enqueue(s);
    marked[s] = true;
    while (!queue.isEmpty() && !marked[t]) {
        int v = queue.dequeue();
        for (FlowEdge e : G.adj(v)) {
            int w = e.other(v);
            if (e.residualCapacityTo(w) > 0) {
                if (!marked[w]) {
                    edgeTo[w] = e;
                    marked[w] = true;
                    queue.enqueue(w); } } } }
    return marked[t]; }

compute mincut (A, B) from maxflow f:
– A is set of vertices connected to s by undirected paths with no full/empty edges.
– find all of the vertices reachable from s using only forward edges that aren't full
or backwards edges that aren't empty; O(E + V) time.
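In code this boils down to the marked[] array left over from the last (failed) augmenting-path search, roughly:
public boolean inCut(int v) {    // is v on the s side of the mincut?
    return marked[v]; }          // i.e. still reachable from s in the residual network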

Number of augmenting paths could be equal to the value of the maxflow (integer capacities).
Avoid this case by using shortest/fattest path.

Augmenting path search:
– shortest path (BFS using queue) O(1/2 E V)
– DFS (stack) or random path (rand. queue) O(E U)
Augmenting path in original network = directed path in residual network
(BFS queue, using e.residualCapacityTo(w), marked[w], edgeTo[w]).
Edge v-w saved in adj[v] and adj[w].
public double residualCapacityTo(int vertex) {
    if      (vertex == v) return flow;              // backward edge
    else if (vertex == w) return capacity - flow;   // forward edge
    else throw new IllegalArgumentException("Illegal endpoint"); }
public void addResidualFlowTo(int vertex, double delta) {
    if      (vertex == v) flow -= delta;           // backward edge
    else if (vertex == w) flow += delta; }         // forward edge

Maxflow running times over the years: improved from ~E^3 down to ~E^2 / log E

Applications: bipartite matching: students-jobs
– vertices s, t, students, jobs
– edges s-students (cap 1), jobs-t (cap 1), student-jobs (cap inf).
Mincut explains why no perfect matching: mincut (A, B) students in A > jobs in B.

Application: baseball elimination: games-teams bipartite graph, is team x eliminated?
– vertices: s, t; team pairs (games) w/o x; teams w/o x;
– edges:
s-game edges, capacity = number of remaining games between the pair t1, t2;
game-team edges, 2 per game, capacity = inf;
team-t edges, capacity = x.wins + x.remaining - team.wins
– if some s-game edge is not full in the maxflow, x is eliminated; the mincut identifies the teams that eliminate x.

Radix sorts : sorting strings (sequence of characters)

Specialized sorting algorithms for strings.

string: sequence of characters
– char: 7bit, 8bit, 16bit, … // length, charAt(idx) methods O(1)
– implementation of 'substring', 'concat': O(1) and O(N) respectively for a Java 6 String
– StringBuilder: concat O(1) amortized; substring O(N)
– string implementation is important, consider 'reverse' and 'suffixes' functions

sublinear performance demonstration: longest common prefix
private static int lcp(Suffix s, Suffix t) {
    int N = Math.min(s.length(), t.length());
    for (int i = 0; i < N; i++)
        if (s.charAt(i) != t.charAt(i)) return i;
    return N; }

radix R: number of digits in alphabet
– digital key: sequence of digits over fixed alphabet
– binary: R=2, decimal: R=10, base64: R=64, unicode16: R=65536
– performance depends on a radix

key-indexed counting: radix sorts subroutine for sorting 1-digit array; linear time
– no compares, key is a small integer used as an array index; idea: R buckets approach
– a[n] is a nth key; count[r] is a number of r keys
– algorithm: compute frequencies; compute cumulates (indexes); move keys to ordered array; move back
int N = a.length;
int R = 256;   // extend ASCII alphabet size
char[] aux = new char[N];
int[] count = new int[R+1];
for (int i = 0; i < N; i++) count[a[i] + 1]++;
for (int r = 0; r < R; r++) count[r+1] += count[r];
for (int i = 0; i < N; i++) aux[count[a[i]]++] = a[i];
for (int i = 0; i < N; i++) a[i] = aux[i];
– performance: time ~ 11N + 4R array accesses; space proportional to N+R; stable

LSD radix sort: least significant digit first, stable, O(2WN) time, N+R space
– punch-cards sort
– consider chars from right to left
– sort using d-th char as the key for key-indexed counting
public static void sort(String[] a, int W) {
    int N = a.length;
    int R = 256;   // extend ASCII alphabet size
    String[] aux = new String[N];
    for (int d = W-1; d >= 0; d--) {
        int[] count = new int[R+1];
        for (int i = 0; i < N; i++) count[a[i].charAt(d) + 1]++;
        for (int r = 0; r < R; r++) count[r+1] += count[r];
        for (int i = 0; i < N; i++) aux[count[a[i].charAt(d)]++] = a[i];
        for (int i = 0; i < N; i++) a[i] = aux[i]; } }
– example: LSD sort of 1 million 32-bit integers (treat each as a fixed-length key)

MSD radix sort: most significant digit first, recursive; O(2NW), random ~N log_R N, N+DR space
– stable, too much space for count[] arrays
– key-indexed counting sort for 1-st column, partition array into R pieces;
– recursively sort all partitions, shifting column to right (string tails);
– charAt default value = '-1': variable-length strings
– can't reuse count arrays
public static void sort(String[] a) {
    int N = a.length;
    String[] aux = new String[N];
    sort(a, 0, N-1, 0, aux); }
private static void sort(String[] a, int lo, int hi, int d, String[] aux) {
    if (hi <= lo + CUTOFF) { insertion(a, lo, hi, d); return; }
    int[] count = new int[R+2]; // +2 because charAt default value = -1
    for (int i = lo; i <= hi; i++) { int c = charAt(a[i], d); count[c+2]++; }
    for (int r = 0; r < R+1; r++) count[r+1] += count[r];
    for (int i = lo; i <= hi; i++) { int c = charAt(a[i], d); aux[count[c+1]++] = a[i]; }
    for (int i = lo; i <= hi; i++) a[i] = aux[i - lo];
    // recursively sort for each character (excludes sentinel -1)
    for (int r = 0; r < R; r++) sort(a, lo + count[r], lo + count[r+1] - 1, d+1, aux); }
– much too slow for small subarrays (count[] is way bigger than the subarray)
– huge number of small subarrays (recursion)
– switching to unicode leads to drastic performance slowdown
– performance can be sublinear: if first column is unique, no need to sort 2-nd column
– cache inefficient

3-way radix quicksort: inplace, unstable, char compares O(1.39 WN lg N), ~1.39N lg N, W+lgN space
– cache efficient, compare chars, not strings (avoid long prefix problem)
– combines the benefits of MSD (var.length, doesn't recompare same keys) and Quicksort.
– The algorithm is designed to exploit the property that in many problems, strings tend to have shared prefixes.
– radix: it decomposes a string into characters
– 3-way partitioning is less overhead than R-way partitioning in MSD: 3 recursive calls on each pass
– does not re-examine chars equal to the partition char
– implementation of 3-way string quicksort : lo, hi, d; lt, gt; if c < pivot: swap(lt++, i++); if c > pivot: swap(i, gt--) ...
public static void sort(String[] a) {
    StdRandom.shuffle(a);
    sort(a, 0, a.length-1, 0); }
private static void sort(String[] a, int lo, int hi, int d) {
    if (hi <= lo + CUTOFF) { insertion(a, lo, hi, d); return; }
    int lt = lo, gt = hi;
    int v = charAt(a[lo], d); // default charAt value = -1
    int i = lo + 1;
    while (i <= gt) {
        int t = charAt(a[i], d);
        if      (t < v) exch(a, lt++, i++);
        else if (t > v) exch(a, i, gt--);
        else              i++; }
    sort(a, lo, lt-1, d);
    if (v >= 0) sort(a, lt, gt, d+1);
    sort(a, gt+1, hi, d); }
– bottom line: method of choice for sorting strings

suffix arrays: array of all string suffixes
– useful for fast substring search: find all occurrences of a query string
– keyword-in-context search: text, keyword, num of chars of context
– create suffix array, sort it => repeated substrings placed together;
binary search for query, scan until mismatch
– useful for another problem: longest repeated substring: LRS
– brute force: find longest common prefix (lcp) for each pair of substrings (i,j combinations from N)
– sorting suffixes : equal prefixes grouped together, for i in 0..N find lcp(i, i+1)
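A brute-ish sketch of LRS via suffix sorting (quadratic space for the explicit substrings; fine as an illustration, not for huge inputs):
import java.util.Arrays;
public class LRS {
    private static int lcp(String s, String t) {   // longest common prefix length
        int N = Math.min(s.length(), t.length());
        for (int i = 0; i < N; i++)
            if (s.charAt(i) != t.charAt(i)) return i;
        return N; }
    public static String lrs(String s) {
        int N = s.length();
        String[] suffixes = new String[N];
        for (int i = 0; i < N; i++) suffixes[i] = s.substring(i);
        Arrays.sort(suffixes);                     // equal prefixes end up adjacent
        String lrs = "";
        for (int i = 0; i < N - 1; i++) {
            int len = lcp(suffixes[i], suffixes[i+1]);
            if (len > lrs.length()) lrs = suffixes[i].substring(0, len); }
        return lrs; } }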

suffix sorting LRS problem: very long repeated substring: quadratic sort time
– bad input: same char repeated N times
– can do linearithmic in worst case: Manber-Myers algorithm for suffix-sorting
– even linear: suffix trees (Patricia)

suffix sorting, Manber-Myers: doubling num chars compared in each pass
– suffix sorting in linearithmic time
– phase 0: sort on first char, key-indexed counting
– phase n: given the array of suffixes sorted on their first 2^(n-1) chars, create the array sorted on the first 2^n chars
– lgN passes, one phase in linear time
– using 'index sort' with inverse index data we can use a trick:
results of phase n used in phase n+1
for a[0] we can find substring [4:8] in a[4]
for a[9] … in a[13]
and we already know, that a[13] is less than a[4]
so we know: a[9][4:8] is less than a[0][4:8]

bottom line
– linear-time sorts are possible, even sublinear (input size is amount of data in keys, not number of keys)
– 3-way string radix quicksort is asymptotically optimal: 1.39 N lg N chars for random data
– long strings are rarely random in practice, learn the data structure, may need specialized algorithms.

Tries: symbol table with string keys / prefixes

data structure that is as fast as hashing and even more flexible than binary search trees
it's a tree: recursion in use

R-way Tries, 're-trie-val': symbol table with string keys, ordered ops, prefix ops
– avoid examining the entire key, advantage of properties of strings
– tree structure, each node is a subtree root
– store char in node (not keys)
– each node has R children (char is a next node index: Node[] next = new Node[R])
– store value in node corresponding to last char in key
– all strings with the same prefix are under the prefix root
public class TrieST<Value> {
    private static final int R = 256;        // extended ASCII
    private Node root;      // root of trie
    private int N;          // number of keys in trie
    private static class Node {
        private Object val;
        private Node[] next = new Node[R]; }

search in a trie: recursion: follow links for each char,
– search hit: if in last char we have a value
– search miss: null node or null value in last char node
public Value get(String key) {
    Node x = get(root, key, 0);
    if (x == null) return null;
    return (Value) x.val; }
private Node get(Node x, String key, int d) {
    if (x == null) return null;
    if (d == key.length()) return x;
    char c = key.charAt(d);
    return get(x.next[c], key, d+1); }

insertion into a trie: recursion: follow chars links,
– null node: create a new node,
– set value for last char node
public void put(String key, Value val) {
    if (val == null) delete(key);
    else root = put(root, key, val, 0); }
private Node put(Node x, String key, Value val, int d) {
    if (x == null) x = new Node();
    if (d == key.length()) {
        if (x.val == null) N++;
        x.val = val;
        return x; }
    char c = key.charAt(d);
    x.next[c] = put(x.next[c], key, val, d+1);
    return x; }

deletion in an R-way trie: recursion: find the node, set value to null
– check: if null value and all links is null – remove node ref.
public void delete(String key) {
    root = delete(root, key, 0); }
private Node delete(Node x, String key, int d) {
    if (x == null) return null;
    if (d == key.length()) {
        if (x.val != null) N--;
        x.val = null; }
    else {
        char c = key.charAt(d);
        x.next[c] = delete(x.next[c], key, d+1); }
    if (x.val != null) return x;
    for (int c = 0; c < R; c++) if (x.next[c] != null) return x;
    return null; }

R-way trie performance (char accesses): miss sublinear, hit linear
– R links for each node – huge waste
– a bit slower than hashST (L .. log_R N vs. L; ~30% in test), huge memory waste (R+1)N
– method of choice for small R

Application: spellchecker, 26-way trie, key=dictionary word

Ternary Search Tries, TST: trie with only 3 links per node
– chars, values in nodes (vs. BST symbol table: key, value in node)
– each node has 3 children: left, middle, right for smaller, equal, larger chars; char and value
– value in the last char node
public class TST<Value> {
    private int N;              // size
    private Node<Value> root;   // root of TST
    private static class Node<Value> {
        private char c;                        // character
        private Node<Value> left, mid, right;  // left, middle, and right subtries
        private Value val; }                   // value associated with string

Search in a TST: follow middle link for next char, compare node.char with current char: eq, less, greater
– if less or greater: follow left or right link, use current char
– get next char only for middle link
– non-null value for last char = hit
public Value get(String key) {
    Node<Value> x = get(root, key, 0);
    if (x == null) return null;
    return x.val; }
private Node<Value> get(Node<Value> x, String key, int d) {
    if (x == null) return null;
    char c = key.charAt(d);
    if      (c < x.c)              return get(x.left,  key, d);
    else if (c > x.c)              return get(x.right, key, d);
    else if (d < key.length() - 1) return get(x.mid,   key, d+1);
    else                           return x; }

Insert in a TST: next char in a middle link, if null create new node; put value to last char node
– if less or greater: follow left or right link, use current char
– get next char only for middle link
public void put(String key, Value val) {
    if (!contains(key)) N++;
    root = put(root, key, val, 0); }
private Node<Value> put(Node<Value> x, String key, Value val, int d) {
    char c = key.charAt(d);
    if (x == null) {
        x = new Node<Value>();
        x.c = c; }
    if      (c < x.c)               x.left  = put(x.left,  key, val, d);
    else if (c > x.c)               x.right = put(x.right, key, val, d);
    else if (d < key.length() - 1)  x.mid   = put(x.mid,   key, val, d+1);
    else                            x.val   = val;
    return x; }

TST performance: faster than hashtable in dedup test ~5%
– 4N space, like BST; hit and insert time L+ ln N char accesses
– can build balanced TST via rotations to achieve O(L+lgN) char accesses
– space efficient

can do even faster: TST with R^2 roots
– at root: R^2 nodes with 2-char prefixes and links to sub-TST
– ~10-20% faster in practice

TST vs. hashing
– hash needs a good'n'fast hash function
– hash has a stable performance
– hash doesn't support ordered ST ops
– hash works for any data
– TST faster (a bit), support ordered ops and can solve problems with prefixes
– TST works only with strings

Tries: char-based operations: prefix match, longest prefix, wildcard p.
– operations for string keys (also possible: floor, rank, ...)
– based on ordered keys iteration: collect keys
– do inorder traversal of trie, add keys to queue
– maintain sequence of chars on path from root
// R-way trie
public Iterable<String> keys() {
    Queue<String> results = new Queue<String>();
    collect(root, new StringBuilder(""), results);
    return results; }
private void collect(Node x, StringBuilder prefix, Queue<String> results) {
    if (x == null) return;
    if (x.val != null) results.enqueue(prefix.toString());
    for (char c = 0; c < R; c++) {
        prefix.append(c);
        collect(x.next[c], prefix, results);
        prefix.deleteCharAt(prefix.length() - 1); } }
// TST
private void collect(Node<Value> x, StringBuilder prefix, Queue<String> queue) {
    if (x == null) return;
    collect(x.left,  prefix, queue);
    if (x.val != null) queue.enqueue(prefix.toString() + x.c);
    collect(x.mid,   prefix.append(x.c), queue);
    prefix.deleteCharAt(prefix.length() - 1);
    collect(x.right, prefix, queue); }
– similar to BoggleSolver board traversal

prefix matches: search for prefix subtree, do collect keys
// R-way
public Iterable<String> keysWithPrefix(String prefix) {
    Queue<String> results = new Queue<String>();
    Node x = get(root, prefix, 0);
    collect(x, new StringBuilder(prefix), results);
    return results; }
// TST
public Iterable<String> keysWithPrefix(String prefix) {
    Queue<String> queue = new Queue<String>();
    Node<Value> x = get(root, prefix, 0);
    if (x == null) return queue;
    if (x.val != null) queue.enqueue(prefix);
    collect(x.mid, new StringBuilder(prefix), queue);
    return queue; }

Application: autocomplete – prefix match, find all keys (words in dictionary) starting with a given prefix

longest prefix: find longest key in ST that is a prefix of query string
– app: IP router, longest-prefix match: query '128.112.136.11' matches key '128.112.136'
– recursively search query string, keep track of keys length
// R-way
public String longestPrefixOf(String query) {
    int length = longestPrefixOf(root, query, 0, -1);
    if (length == -1) return null;
    else return query.substring(0, length); }
private int longestPrefixOf(Node x, String query, int d, int length) {
    if (x == null) return length;
    if (x.val != null) length = d;
    if (d == query.length()) return length;
    char c = query.charAt(d);
    return longestPrefixOf(x.next[c], query, d+1, length); }
// TST
public String longestPrefixOf(String query) {
    if (query == null || query.length() == 0) return null;
    int length = 0;
    Node<Value> x = root;
    int i = 0;
    while (x != null && i < query.length()) {
        char c = query.charAt(i);
        if      (c < x.c) x = x.left;
        else if (c > x.c) x = x.right;
        else {
            i++;
            if (x.val != null) length = i;
            x = x.mid; } }
    return query.substring(0, length); }

Application: T9 texting – longest prefix, 8-way trie, keys are digit strings like '43556', value = set of words

Patricia trie (crit-bit tree, radix tree): Practical Algorithm to Retrieve Information Coded in Alphanumeric
– remove one-way branching (any node that is an only child is merged with its parent)
– each node represents a sequence of chars

Suffix tree: Patricia trie of suffixes of a string
– linear-time construction
– linear-time: longest repeated substring, longest common substring, longest palindromic substring,
substring search, tandem repeats, …

bottom line: Trie – performance logN chars accessed; supports char-based ops.

Substring search: linear (even sublinear) time for searching pattern in a stream

the goal is to find a given substring in a large text

substring search problem: find pattern (len M) in a text (len N), N » M
– text editors, spam detection, internet traffic surveillance, ...
– brute-force solution: check for pattern starting from each text char: O(MN)
for n in 0..N-M: for m in 0..M-1: if txt[n+m] != pat[m]: break; if m == M: return n
// alternative with explicit backup (see the Java version below)
for (n = 0, m = 0; n < N && m < M; n++): if txt[n] == pat[m]: m++; else: n -= m, m = 0
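The same brute force in plain Java (index version; returns N when there is no match, the convention used further below):
public static int search(String pat, String txt) {
    int M = pat.length(), N = txt.length();
    for (int i = 0; i <= N - M; i++) {
        int j;
        for (j = 0; j < M; j++)
            if (txt.charAt(i+j) != pat.charAt(j)) break;
        if (j == M) return i; }      // found pattern starting at i
    return N; }                      // not found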

input as stream, move only forward, no backup
– brute-force is unacceptable (buffer?)

Knuth-Morris-Pratt substring searching: forward only, O(N)
– build a Deterministic Finite Automaton, DFA, to make decisions:
what char in pattern should be examined next
– DFA knows previous chars, no need to go back, can avoid backup
– DFA is abstract string-searching machine: matrix, rows for R chars, columns for M states
row,col value = next state
newstate = dfa[char][currstate]
– exactly one transition for each char (from current state) in alphabet
– last state = halt state, pattern found
– interpretation: state = number of chars in pattern that have been matched
prefix of the pattern p[0..state] == suffix of the txt[0..n]

KMP search implementation: running time at most N char accesses:
– given dfa[size R][size M]; while text.nextchar: j = dfa[char][j]; if j==M: done
– need to precompute dfa[][]
– text pointer never decrements (no backup)
public int search(String txt) {
    int M = pat.length(); int N = txt.length();
    int i, j;
    for (i = 0, j = 0; i < N && j < M; i++)
        j = dfa[txt.charAt(i)][j];
    if (j == M) return i - M;    // found
    return N; }                  // not found

KMP DFA construction: simple match rule, tricky mismatch rule
– match transition rule: from state j goto state j+1 if currchar == pattern[j]
public KMP(String pat) {
    dfa = new int[R][M];
    dfa[pat.charAt(0)][0] = 1;
    for (int X = 0, j = 1; j < M; j++) {
        ...
        dfa[pat.charAt(j)][j] = j+1;   // Set match case.
        ... } }

mismatch transition: stay or go back (decrease state number) if currchar != pattern[j]
– longest suffix from text matched with longest prefix from pattern gives next state number;
– if in state j and char != pat[j]: we can use the fact that the last j chars of the text are pat[1..j-1] followed by char;
– idea: run pat[1..j-1] on DFA and take transition for char, getting a new state number for mismatched case.
– to do it in one pass: keep track of state X: where we would be if we had pattern shifted over 1
– for each state j and char c != pat[j], set dfa[c][j] = dfa[c][X]; then update X = dfa[pat[j]][X]
public KMP(String pat) {
    dfa = new int[R][M];
    dfa[pat.charAt(0)][0] = 1;
    for (int X = 0, j = 1; j < M; j++) {
        for (int c = 0; c < R; c++)
            dfa[c][j] = dfa[c][X];     // Copy mismatch cases.
        dfa[pat.charAt(j)][j] = j+1;   // Set match case.
        X = dfa[pat.charAt(j)][X];     // Update restart state. } }

performance
– construction: space/time ~RM
– search: O(M+N) char accesses
– overall: M+N char accesses
– improved version of KMP: constructs nfa[] in time/space ~M

Boyer-Moore substring search: check pattern right-to-left, O(MN), best ~N/M
– can skip up to M chars, scanning chars from right to left
– mismatch char heuristic: char in pattern / not in pattern
– case 1: shift pattern one char beyond mismatch (skip up to M)
– case 2: split to 2a, 2b
– case 2a: shift pattern, align text char to rightmost pattern char
– case 2b: rightmost pat.char leads to backup => shift to right by 1
– we need a positions table for the pattern: index of the rightmost occurrence of each char
right = new int[R]; fill with -1; for n in 0..M-1: right[pat[n]] = n
– search implementation
public int search(String txt) {
    for (int i = 0; i <= N - M; i += skip) {
        skip = 0;
        for (int j = M-1; j >= 0; j--) {
            if (pat.charAt(j) != txt.charAt(i+j)) {
                skip = Math.max(1, j - right[txt.charAt(i+j)]);
                break; } }
        if (skip == 0) return i; }  // found
    return N; }                     // not found
– ~N/M best, ~MN worst, but can improve worst to ~3N by adding KMP-like rule to guard against repetitive patterns

Rabin-Karp fingerprint search, modular hashing
– idea: get h=hash(pat), compute rolling txt hash, compare h and rolling hashes
– rolling hash takes each char as a number-base-R digit : consider substring as a M-digit, base-R integer 0..R^M;
– take a large prime Q, e.g. Q = 997 in the demo (small enough to avoid overflow), then hash h = (that base-R number) mod Q
– modular hash, Horner's method, hash: h=0; for m in M: h = (R*h + txt[m]) mod Q
linear-time method to evaluate degree-M polynomial
private long hash(String key, int M) {
    long h = 0;
    for (int j = 0; j < M; j++) h = (R * h + key.charAt(j)) % Q;
    return h; }
– rolling hash, update hash in constant time: for n in N: // N = text length
subtract leading digit txt[n] , add new trailing digit txt[n+M]
RM = R^(M-1) mod Q; hash = ((hash - txt[n]*RM) * R + txt[n+M]) mod Q
public RabinKarp(String pat) {
    R = 256;
    M = pat.length();
    Q = longRandomPrime();
    patHash = hash(pat, M);
    RM = 1; // precompute R^(M-1) % Q for use in removing leading digit
    for (int i = 1; i <= M-1; i++) RM = (R * RM) % Q; }
– search routine: Monte Carlo version: return match if hash match;
Las Vegas version: check for substring match if hash match
public int search(String txt) {
    int N = txt.length();
    if (N < M) return N;
    long txtHash = hash(txt, M);
    if (patHash == txtHash) return 0;
    for (int i = M; i < N; i++) {
        // Remove leading digit, add trailing digit, check for match.
        txtHash = (txtHash + Q - RM*txt.charAt(i-M) % Q) % Q;
        txtHash = (txtHash*R + txt.charAt(i)) % Q;
        if (patHash == txtHash) return i-M+1; }
    return N; }   // no match
analysis: probability of a collision: about 1/Q in practice
Monte Carlo always runs in linear time, Las Vegas worst case ~MN
– advantages: linear time; can be extended to 2D patterns or to finding multiple patterns
– disadvantages: hash calc is slower than char compares; Las Vegas requires backup, poor worst-case guarantee

– substring search summary:
– brute force: time 1.1N .. MN, space 1, need backup
– Knuth-Morris-Pratt: time 1.1N .. 3N, space MR, no backup
– Boyer-Moore: time N/M..MN, space R, need backup
– Rabin-Karp: time 7N..MN, space 1, backup for Las Vegas

Regular Expressions pattern matching

find strings from a specified set in a given text
A regular expression is a method for specifying a set of strings

RegExp: notation to specify a set of strings
– sequence of chars and meta-chars
– precedence order (4 possible items)
1 parentheses, '(AB)*(A|B)', define subpatterns
2 closure, wildcard, 'AB*A', 0..inf repetitions
3 concatenation, 'AABA', chars sequence exactly
4 or, set of two patterns 'AA | BAAB'

RegExp additional ops
– for convenience, used in the real world
– wildcard '.' => 'A|B| … y|z', any char
– class '[A-Z]' => 'A|B … Z', subset of chars
– at least 1 'A+' => 'AA*', one or more
– exactly k 'A{3}' => 'AAA', k repeats
– can be expressed via basic ops

RE – DFA duality: Kleene's theorem
– RE: concise way to describe a set of strings
– DFA: machine to recognize whether a given string is in a given set
– for any DFA, there exists an RE that describes the same set of strings
– for any RE, there exists a DFA that recognizes the same set of strings
– meaning: it's always possible to construct DFA for RE
– pattern matching implementation: build DFA from RE, simulate DFA with text as input
linear time, no backup
bad news: the DFA may have an exponential number of states;
this is where NFAs come in

RegExp and Nondeterministic Finite Automaton
– build NFA from RE, simulate NFA with text as input
– no backup, quadratic-time guarantee (linear-time typical)
– NFA: one state per RE char, match transitions, e-transitions
– match transition change state and scan to next text char
– epsilon-transition change state, don't scan text
– up to 3 transitions from each state
– not always determined what the machine is going to do next: several applicable transitions from some states
regexp example: ( ( A * B | A C ) D )
– match transitions: from each letter state (A, B, C, D) one step forward
– e-transitions: from '(', ')' and '*' one step forward; the state after the last ')' is the accept state t
– closure '*': e-transitions in both directions between the '*' state and the preceding char (or left parenthesis)
– or '|': e-transition from the group's '(' to the state after the '|', and from the '|' to the matching ')'
         → '( to A'
– Nondeterministic: can be several applicable transitions, need to select the right one
– How? Systematically consider all possible transition sequences
traverse the graph (DFS or BFS), mark all reachable states

NFA simulation : pattern-matching, O(MN) time
– state names: 0..M, match transitions implicitly in char re[], e-transitions in digraph G
– maintain set of all possible states that NFA could be in after reading in the first i chars
– digraph: find all vertices reachable from a set of vertices : vertex = state
– digraph dfs run time ~E+V
public NFA(String regexp) {
    M = regexp.length();
    G = new Digraph(M+1);
    buildEpsilonTransitionsDigraph(G); }
public boolean recognizes(String txt) {
    Bag<Integer> pc = new Bag<Integer>(); // program counter
    DirectedDFS dfs = new DirectedDFS(G, 0);
    for (int v = 0; v < G.V(); v++)
        if (dfs.marked(v)) pc.add(v); // first set of states
    for (int i = 0; i < txt.length(); i++) {
        Bag<Integer> match = new Bag<Integer>();
        for (int v : pc) {
            if (v == M) continue;
            if ((regexp.charAt(v) == txt.charAt(i)) || regexp.charAt(v) == '.')
                match.add(v+1); } // found match transition
        dfs = new DirectedDFS(G, match);
        pc = new Bag<Integer>(); // compute new set of possible states
        for (int v = 0; v < G.V(); v++)
            if (dfs.marked(v)) pc.add(v); }
    for (int v : pc) if (v == M) return true;
    return false; }
pc: all possible states, program counter
– collect dfs.marked states accessible from state 0, store to pc
– for char in txt: check states in pc:
if the state's RE char matches the text char: add state+1 to the matched set // implicit match transition
create new pc, collect dfs.marked states accessible from matched, store to pc
– if pc has state == M: pattern matched
– performance: for each of the N text chars, we iterate through a set of states of size <= M
and run DFS on the graph of e-transitions (edges <= 3M)

NFA construction : add edges to digraph, ~M time and space
– parsing RE using stack for keeping track of '(|)'
– states: a state for each RE symbol + an accept (termination) state
– concatenation: add match-transition edge to next state // implicit
– parentheses: add e-transition edge to next state
– closure '*' : add three e-transition edges : forward; to prev.char/lp; from prev.char/lp (lp: left parenthesis)
– or '|' : add two e-transition edges: lp to next char, to right parenthesis
public NFA(String regexp) {
    M = regexp.length();
    Stack<Integer> ops = new Stack<Integer>();
    G = new Digraph(M+1);
    for (int i = 0; i < M; i++) {
        int lp = i;
        if (regexp.charAt(i) == '(' || regexp.charAt(i) == '|') ops.push(i);
        else if (regexp.charAt(i) == ')') { // detect lp, process or
            int or = ops.pop();             // may be it's 'or' on stack?
            if (regexp.charAt(or) == '|') { // yes, or, process it
                lp = ops.pop();
                G.addEdge(lp, or+1);
                G.addEdge(or, i); }
            else if (regexp.charAt(or) == '(') lp = or; } // no, it's lp
        if (i < M-1 && regexp.charAt(i+1) == '*') { // next is closure?
            G.addEdge(lp, i+1);
            G.addEdge(i+1, lp); }
        if (regexp.charAt(i) == '(' || regexp.charAt(i) == '*' || regexp.charAt(i) == ')')
            G.addEdge(i, i+1); } }
– only e-transitions in digraph; matching transitions stored implicitly in re[]
– performance: ~M, for each RE char we add at most 3 e-transitions and execute at most 2 stack ops

grep: O(MN), as for brute-force substring search
– re = '(.*' + arg + '.*)'
– add: wildcard, multiway or, metachars, classes, capturing, greedy/reluctant, …
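A grep-style client in the spirit of the algs4 GREP client (assumes the NFA class sketched above and the algs4 StdIn/StdOut helpers):
public class GREP {
    public static void main(String[] args) {
        String regexp = "(.*" + args[0] + ".*)";   // wrap the pattern as described above
        NFA nfa = new NFA(regexp);
        while (StdIn.hasNextLine()) {
            String line = StdIn.readLine();
            if (nfa.recognizes(line)) StdOut.println(line); } } }  // print matching lines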

complexity attacks: exponential time for some inputs
– Kleene's theorem: exponential # of states in DFA
'(a|aa)*b' against text 'aaaaaa... aaac': no match, but a backtracking matcher tries exponentially many ways
– typical implementations do not guarantee performance

intractable: not-so-regular expressions: backref
– back-references: '(.+)\1' // beriberi
– \1 matches subexpression that was matched earlier
– Kleene's theorem does not hold, pattern-matching is intractable
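Illustration only (java.util.regex does support back-references, which is one reason it carries no linear-time guarantee):
import java.util.regex.Pattern;
// '(.+)\1' matches a string that is some non-empty block repeated twice
boolean a = Pattern.matches("(.+)\\1", "beriberi");   // true: "beri" + "beri"
boolean b = Pattern.matches("(.+)\\1", "algorithm");  // false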

bottom line: an RE pattern-matcher is built much like a (Java) compiler
– substring search: pattern is 'compiled' into a DFA
– RE pattern-matching: RE is compiled into an NFA
– javac: Java language is compiled into Java byte code

Data compression

several classic data compression schemes, lossless compression

lossless compression and expansion
– message: binary data B
– compress: representation C(B) – uses fewer bits
– expand: reconstructs original bitstream B
– compression ratio: size C(B) / size B

fixed-length code
– k-bit code supports alphabet of size 2^k
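Example: a genome over {A, C, T, G} needs only a 2-bit code. A compress sketch in the spirit of the algs4 Genome client (Alphabet.DNA, BinaryStdIn, BinaryStdOut are algs4 helpers):
public static void compress() {
    Alphabet DNA = Alphabet.DNA;                 // radix R = 4, lg R = 2
    String s = BinaryStdIn.readString();
    BinaryStdOut.write(s.length());              // length header
    for (int i = 0; i < s.length(); i++)
        BinaryStdOut.write(DNA.toIndex(s.charAt(i)), 2);  // 2 bits per nucleotide
    BinaryStdOut.close(); }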

I/O API for binary data : BinaryStdIn, BinaryStdOut
– char readChar() : read 8 bits
– void write(char c) : write 8 bits
– ex.: write a date as 4+5+12 = 21 bits; bits are packed into chars and padded to an 8-bit boundary
BinaryStdOut.write(month, 4);
BinaryStdOut.write(day, 5);
BinaryStdOut.write(year, 12);
– ways to examine the contents of a bitstream: BinaryDump, HexDump, PictureDump

universal data compression
– no algorithm can compress every bitstream: otherwise you could compress repeatedly down to zero bits
– if you know the algorithm that created B, you can use the text of that program as C(B)

Run-Length encoding/decoding: num of 0, num of 1, num of 0, …
– best for long runs of repeated bits
public class RunLength {
    private static final int R    = 256;
    private static final int LG_R = 8;
public static void expand() {
    boolean b = false;
    while (!BinaryStdIn.isEmpty()) {
        int run = BinaryStdIn.readInt(LG_R);
        for (int i = 0; i < run; i++) BinaryStdOut.write(b);
        b = !b; }
    BinaryStdOut.close(); }
public static void compress() {
    char run = 0;
    boolean old = false;
    while (!BinaryStdIn.isEmpty()) {
        boolean b = BinaryStdIn.readBoolean();
        if (b != old) {
            BinaryStdOut.write(run, LG_R);
            run = 1;
            old = !old; }
        else {
            if (run == R-1) {
                BinaryStdOut.write(run, LG_R);
                run = 0;
                BinaryStdOut.write(run, LG_R); }
            run++; } }
    BinaryStdOut.write(run, LG_R);
    BinaryStdOut.close(); }
– best ratio: 8/255

Huffman compression: prefix-free variable-length code using binary trie
– variable-length codes: use different number of bits to encode different chars
– ex. Morse code: … --- ...
– issue: ambiguity, SOS or V7 or … // use a medium gap to separate codewords
– problem: one char's codeword may be a prefix of another char's codeword
– prefix-free code table: no codeword is a prefix of another // no need for medium gap
– Huffman finds an optimal prefix-free code table: one that uses the fewest bits to encode the particular message.
public static void compress() {
    String s = BinaryStdIn.readString();
    char[] input = s.toCharArray();
    int[] freq = new int[R];
    for (int i = 0; i < input.length; i++) freq[input[i]]++;
    Node root = buildTrie(freq);
    String[] st = new String[R]; buildCode(st, root, "");
    writeTrie(root);
    BinaryStdOut.write(input.length);
    for (int i = 0; i < input.length; i++) {
        String code = st[input[i]];
        for (int j = 0; j < code.length(); j++) {
            if (code.charAt(j) == '0') BinaryStdOut.write(false);
            else if (code.charAt(j) == '1') BinaryStdOut.write(true); } }
    BinaryStdOut.close(); }
private static void buildCode(String[] st, Node x, String s) {
    if (!x.isLeaf()) {
        buildCode(st, x.left,  s + '0');
        buildCode(st, x.right, s + '1'); }
    else st[x.ch] = s; }
– details below

prefix-free codes: binary trie representation
– chars in leaves
– codeword is path from root to leaf: left link = 0, right link = 1
– compression: (compute trie for message) create ST of key-value pairs; write code for each char
– expansion: start at root; get 0: go left, get 1: go right; in leaf node: write char, goto root
– trie node: comparable, refs to nodes left, right; int freq, char ch
private static class Node implements Comparable<Node> {
    private final char ch;
    private final int freq;
    private final Node left, right;
    private boolean isLeaf() {
        return (left == null) && (right == null); }
    public int compareTo(Node that) {
        return this.freq - that.freq; } }

prefix-free codes in trie, expansion: linear time
– for each char in message: read bits and traverse trie (0-left, 1-right), if leaf: output char
– works for any prefix-free code, not just Huffman
public static void expand() {
    Node root = readTrie();
    int length = BinaryStdIn.readInt();
    for (int i = 0; i < length; i++) {
        Node x = root;
        while (!x.isLeaf()) {
            boolean bit = BinaryStdIn.readBoolean();
            if (bit) x = x.right;
            else     x = x.left; }
        BinaryStdOut.write(x.ch, 8); }
    BinaryStdOut.close(); }

how to store the trie: write preorder traversal (recursion)
– if leaf: write bit 'isLeaf', 8bit char; else bit 'notLeaf'; store left, store right
private static void writeTrie(Node x) {
    if (x.isLeaf()) {
        BinaryStdOut.write(true);
        BinaryStdOut.write(x.ch, 8);
        return; }
    BinaryStdOut.write(false);
    writeTrie(x.left);
    writeTrie(x.right); }

how to read in the trie: reconstruct trie from preorder traversal (recursion)
– if isLeaf: read char, return new node; else: read left, read right, return new node(left,right)
– internal nodes frequency = -1, char = 0
private static Node readTrie() {
    boolean isLeaf = BinaryStdIn.readBoolean();
    if (isLeaf) 
        return new Node(BinaryStdIn.readChar(), -1, null, null);
    return new Node('\0', -1, readTrie(), readTrie()); }

compute prefix-free code: Shannon-Fano, not optimal
– find frequencies; partition the set into 2 subsets with (roughly) equal freq sums, repeat recursively
– we want more frequent char encoded with fewest bits
– how to divide? not optimal.

build trie: compute prefix-free code: Huffman codes, optimal
– count frequency for each char
– create weighted leaf-nodes, put them into minPQ
– merge 2 min-weight nodes into 1 internal (new node.freq = sum(left, right))
– repeat merging
private static Node buildTrie(int[] freq) {
    MinPQ<Node> pq = new MinPQ<Node>();
    for (char i = 0; i < R; i++)
        if (freq[i] > 0) pq.insert(new Node(i, freq[i], null, null));
    while (pq.size() > 1) {
        Node left  = pq.delMin();
        Node right = pq.delMin();
        Node parent = new Node('\0', left.freq + right.freq, left, right);
        pq.insert(parent); }
    return pq.delMin(); }

Huffman summary:
– produces an optimal prefix-free code
– compress in 2 passes: tabulate char frequencies and build trie; encode message
– time ~N + RlgR using a binary heap PQ
– can do better

LZW Compression: adaptive model, better than Huffman // longest prefix match : trie
– about different models (static – ASCII, Morse; dynamic – Huffman; adaptive – LZW)
static: same model for all texts
dynamic: preliminary pass needed to generate model; must transmit the model
adaptive: progressively learn and update model as you read text
– LZW adaptive model: codeword table(prefix: code)
– longest prefix table (trie ST); init with char:code
– reading text, update table: current prefix+next char
public class LZW {
    private static final int R = 256;        // number of input chars
    private static final int L = 4096;       // number of codewords = 2^W
    private static final int W = 12;         // codeword width
public static void compress() {
    String input = BinaryStdIn.readString();
    TST<Integer> st = new TST<Integer>();
    for (int i = 0; i < R; i++) st.put("" + (char) i, i);
    int code = R+1;  // R is codeword for EOF
    while (input.length() > 0) {
        String s = st.longestPrefixOf(input);  // Find max prefix match s.
        BinaryStdOut.write(st.get(s), W);      // Print s's encoding.
        int t = s.length();
        if (t < input.length() && code < L)    // Add s to symbol table.
            st.put(input.substring(0, t + 1), code++);
        input = input.substring(t); }          // Scan past s in input.
    BinaryStdOut.write(R, W);
    BinaryStdOut.close(); }
– we don't have to transmit the table: the decoder can rebuild it
– Lempel-Ziv-Welch description:
use TrieST for (string keys: W-bit codes);
find the longest key in the ST that is a prefix of the unscanned text;
add key+ch to ST

LZW expansion example: update table(code:char) while decoding text
– basically the same algorithm, table code:string
– convert codes to text using adaptive table
– LZW expansion:
st=array(code:string) with single-char items;
read code, write st.get(code), update st from text tail
ST = array 2^W; text tail: prev. item + one char of current item
– expand example, tricky case: a code appears before it is written to the table:
existing prefix, prev. value + first char from prev. value
public static void expand() {
    String[] st = new String[L];
    int i; // next available codeword value
    for (i = 0; i < R; i++) st[i] = "" + (char) i;
    st[i++] = "";                        // (unused) lookahead for EOF
    int codeword = BinaryStdIn.readInt(W);
    if (codeword == R) return;           // expanded message is empty string
    String val = st[codeword];
    while (true) {
        BinaryStdOut.write(val);
        codeword = BinaryStdIn.readInt(W);
        if (codeword == R) break;
        String s = st[codeword];
        if (i == codeword) s = val + val.charAt(0);   // special case hack
        if (i < L) st[i++] = val + s.charAt(0);
        val = s; }
    BinaryStdOut.close(); }

LZW variations: how big ST, what to do if ST full, longer substrings
– many variations have been developed

compression summary
– represent fixed-length symbols with variable-length codes: Huffman
– represent variable-length symbols with fixed-length codes: LZW
– theoretical limits: Shannon entropy
– practical compression: use extra knowledge whenever possible

Reductions

A technique for studying the relationship among problems.
People use reductions to design algorithms, establish lower bounds, and classify problems in terms of their computational requirements

Our lectures this week are centered on the idea of problem-solving models like maxflow and shortest path,
where a new problem can be formulated as an instance of one of those problems,
and then solved with a classic and efficient algorithm.

reduction: problem-solving model, make use of known algorithms to solve new problems.
– maxflow-mincut, shortest path, linear programming, …
– but there are limits: intractability

classify problems: computational requirements, equivalence classes
– linear: min, max, median, Burrows-Wheeler transform, ...
– linearithmic: sorting, convex hull, closest pair, ...
– quadratic: integer multiplication, division, square root, …
– cubic: matrix multiplication, least squares, determinant, ...
– NP-complete, not polynomial?: SAT, ILP, ...

reduction: problem X reduces to problem Y if you can use an algorithm that solves Y to help solve X
– cost of solving X = cost of solving Y + cost of reduction

examples
– find the median reduces to sorting: sort N items, return item in the middle (code sketch after this list)
cost = N log N + 1
– element distinctness reduces to sorting: sort N items; check adjacent pairs
cost = N log N + N
– mincut reduces to maxflow: find maxflow; find the set of vertices that are reachable from s
using not full/not empty edges
reduction cost: V + E, dfs or bfs search
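A minimal Java sketch of the first two reductions above (Arrays.sort plays the role of the algorithm for Y):
import java.util.Arrays;
public class ReduceToSorting {
    // find the median reduces to sorting: cost = sort + constant
    static double median(double[] a) {
        Arrays.sort(a);
        return a[a.length / 2]; }
    // element distinctness reduces to sorting: cost = sort + linear scan of adjacent pairs
    static boolean allDistinct(double[] a) {
        Arrays.sort(a);
        for (int i = 1; i < a.length; i++)
            if (a[i] == a[i-1]) return false;
        return true; } }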

design algorithms using reduction: given algorithm for Y, can also solve X
– 3-collinear reduces to sorting
– CPM (critical path method) reduces to topological sort (longest path in DAG reduces to shortest path)
– arbitrage reduces to shortest path (Bellman-Ford: find cycle with negative sum)
– Burrows-Wheeler transform reduces to suffix sort

convex hull: reduces to sorting, Graham scan
– given N points in the plane, identify the extreme points of the convex hull, in counterclockwise order
– cost: N log N sort + N reduction
choose point p with smallest y-coordinate;
sort points by polar angle with p;
scan points in order, discard those that would create a clockwise turn – all turns must be left turns:
at a point where you would have to turn right (clockwise), pop that point and discard it
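A compact Graham scan sketch (my own illustration, not the course code; assumes at least 3 points, no duplicates, and ignores collinear corner cases):
import java.util.Arrays;
import java.util.ArrayDeque;
import java.util.Deque;
public class GrahamScanSketch {
    // > 0: a->b->c is a counterclockwise (left) turn, < 0: clockwise, 0: collinear
    static double ccw(double[] a, double[] b, double[] c) {
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]); }
    static Deque<double[]> hull(double[][] p) {
        // 1) point with the smallest y-coordinate first (ties broken by x)
        Arrays.sort(p, (u, v) -> u[1] != v[1] ? Double.compare(u[1], v[1]) : Double.compare(u[0], v[0]));
        double[] base = p[0];
        // 2) sort the rest by polar angle around base
        Arrays.sort(p, 1, p.length, (u, v) -> Double.compare(
            Math.atan2(u[1]-base[1], u[0]-base[0]), Math.atan2(v[1]-base[1], v[0]-base[0])));
        // 3) scan: pop points that would force a clockwise (right) turn
        Deque<double[]> st = new ArrayDeque<>();
        st.push(p[0]); st.push(p[1]);
        for (int i = 2; i < p.length; i++) {
            double[] top = st.pop();
            while (!st.isEmpty() && ccw(st.peek(), top, p[i]) <= 0) top = st.pop();
            st.push(top);
            st.push(p[i]); }
        return st; } }  // hull vertices, counterclockwise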

shortest path on undirected graph: reduces to SP on digraph
– replace 1 undirected edge with 2 directed, same weight
– negative weights: the reduction creates a negative cycle, can't solve this way (can reduce to weighted non-bipartite matching)
– cost: E log V + E
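The reduction in code, using algs4 types (an EdgeWeightedGraph G and source s are assumed given):
// replace each undirected edge with two directed edges of the same weight
EdgeWeightedDigraph D = new EdgeWeightedDigraph(G.V());
for (Edge e : G.edges()) {
    int v = e.either(), w = e.other(v);
    D.addEdge(new DirectedEdge(v, w, e.weight()));
    D.addEdge(new DirectedEdge(w, v, e.weight())); }
DijkstraSP sp = new DijkstraSP(D, s);   // nonnegative weights: ~E log V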

A bipartite graph, also called a bigraph, is a set of graph vertices decomposed into two disjoint sets such that no two graph vertices within the same set are adjacent.

mincut in undirected graph: reduces to mincut in directed graph
– for each edge e create 2 antiparallel edges in digraph, each with the same capacity

linear-time reductions
– to sorting: SPT scheduling, find the median, convex hull, element distinctness
– to maxflow: product distribution, network reliability, bipartite matching
– to shortest path in digraph: arbitrage, parallel scheduling, SP in undirected graph
– to linear programming: maxflow, shortest path

Linear-time reduction: problem X linear-time reduces to Y if X can be solved with
– constant number of calls to Y, + linear number of standard computations

Establishing Lower Bounds: using reduction, prove that a problem requires a certain # of steps
– establish a lower bound on Y by reducing X to Y (assuming the cost of the reduction is not too high)
– if X takes Omega(N log N) steps, then so does Y
– if X takes Omega(N^2) steps, then so does Y
– if we could solve Y easily, we could easily solve X
– since we can't easily solve X, we can't easily solve Y
– bottom line: you can try to find a linear-time algorithm for, say, the convex hull problem, or you can prove that it's impossible

example: lower bound for convex hull, linear-time reduction
– proposition: in quadratic decision tree model, any sorting algorithm requires Omega(N log N) steps
– proposition: sorting linear-time reduces to convex hull
– implication: any ccw-based convex hull algorithm requires Omega(N log N) ops
quadratic decision tree model: basic ops for geometric algorithms
allows linear or quadratic tests: x_i < x_j or (x_j - x_i)(y_k - y_i) - (y_j - y_i)(x_k - x_i) < 0
– reduce sorting to convex hull: lift each number x to the point (x, x^2) on a parabola, creating a convex hull instance
– taking the hull points in ccw order gives the numbers in sorted order

element distinctness linear-time reduces to sorting.
– Element distinctness linear-time reduces to finding the mode because if the most frequent integer
occurs more than once, then there is a duplicated integer.
– Element distinctness linear-time reduces to closest pair because the distance
between the closest pair is zero if and only if there is a duplicated element.

Classifying Problems: X and Y have the same complexity if: X reduces to Y, Y reduces to X (in linear time)
– reducing in both directions shows that they are similar in computational requirements
– if you find a faster way to solve one of them, you have found a faster way for the whole family

example, integer arithmetic reductions:
– a*b, a/b, …: N-bit integers
– brute force: N^2 ops
– if an ~N log N algorithm is discovered for one, all the arithmetic ops can be solved faster: same complexity class

example: linear algebra reduction
– matrices product, determinant, least squares, system of N linear equations, matrix inverse
– brute force: ~N^3 flops
– an ~N^2.3727 matrix-multiplication algorithm was discovered; all problems in the class benefit from new algorithms for matrix multiplication

complexity zoo : lots of complexity classes
– complexity class: set of problems sharing some computational property

reductions summary : reductions important to
– design algorithm, establish lower bounds, classify problems, design reusable modules
– determine the difficulty of your problem and choose the right tool; link a new problem to one you know how to solve
tractable problem: use exact algorithm
intractable problem: use heuristics

Linear Programming

The quintessential problem-solving model is known as linear programming,
and the simplex method for solving it is one of the most widely used algorithms.
In this lecture, we give an overview of this central topic in operations research.

LP: problem solving model
– how to spend all resources effectively
– optimal allocation of scarce resources, maximizing X subject to constraints;
shortest path, maxflow, MST, matching, assignment, …
– widely applicable
– you have to formulate your problem as LP, reduce to Objective Function and set of Constraints
then just solve that LP problem (using any LP solver, e.g. Simplex method)
– fast commercial solvers available

LP problem
– Objective Function: maximize 13A + 23B (A=?, B=?),
maximize OF finding values for variables
– according to constraints: set of linear equalities: 5A + 15B <= 480, ...

example: Brewer's problem: how many barrels of ale/beer should we make for max profit?
– production limited by scarce resources: corn, hops, malt
– how to efficiently spend all resources to get max profit?
– we can use corn, hops, malt to produce ale or beer
– we have limited amount of resources (corn 480, hops 160, malt 1190)
– barrel of ale: $13 profit, beer: $23
– production of ale/beer require different amounts/proportions of resources
ale: 5 corn, 4 hops, 35 malt
beer: 15 corn, 4 hops, 20 malt
– all on ale: 34 barrels, $442
– all on beer: 32 barrels, $736
– can get $800, how? 12 ale, 28 beer
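Checking the numbers: all on ale is limited by malt (1190/35 = 34 barrels, 34 * $13 = $442); all on beer is limited by corn (480/15 = 32 barrels, 32 * $23 = $736); the mix 12 ale + 28 beer uses 12*5 + 28*15 = 480 corn, 12*4 + 28*4 = 160 hops, 12*35 + 28*20 = 980 malt and earns 12*13 + 28*23 = $800.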

reducing brewer's problem to LP:
– A: number of barrels of ale; B: beer
– OF: maximize Z = 13A + 23B // where $13: profit for 1 barrel of ale, $23: profit for 1 beer
– constraint: 5A + 15B <= 480
// where 480: lbs of corn available, barrel of ale requires 5 lbs, barrel of beer requires 15 lbs
– constraint: 4A + 4B <= 160 // 160 hops
– constraint: 35A + 20B <= 1190 // 1190 malt
– constraint: A , B >= 0 // number of barrels not negative

feasible region (geometric intuition)
– inequalities define halfplanes (in A,B plane, positive quarter)
– feasible region: convex polygon limited by inequalities lines
– OF defines the slope of the line Z
– solution: point where line Z touches the feasible region
– optimal solution has to be in an extreme point somewhere
– number of extreme points to consider is finite
– but number of extreme points can be exponential
greedy property: extreme point optimal iff no better adjacent extreme point
local optima are global optima (OF is linear, f. region is convex)

standard form linear program (in general): maximize an OF of n nonnegative variables, subject to m linear equations
– linear: linear combination of variables, no x^2, xy, arccos(x), …
– n variables, m constraints; a,b,c: coefficients for linear equations
– get rid of inequalities
– x[n]: output, solution // numbers of barrels of ale, beer
– b[m]: amount of resources // lbs of corn, hops, malt
– a[m][n]: resources consumed by x[n] // ale/beer recipe
– c[n]: profit from x[n]
– matrix version: maximize c_transpose * x // OF
subject to the constraints: A * x = b
x >= 0

convert the brewer's problem to the standard form: add Z and slack variables
– Z: maximum profit
– slack: leftover from resource, how much corn, ... would be left
Original formulation
13A + 23B // max
5A  + 15B ≤ 480
4A  + 4B  ≤ 160
35A + 20B ≤ 1190
A   , B   ≥ 0

Standard form; maximize Z
13A + 23B                -Z = 0
5A  + 15B + Sc              = 480
4A  + 4B       + Sh         = 160
35A + 20B           + Sm    = 1190
A   , B ,   Sc,  Sh,  Sm    ≥ 0

Simplex algorithm: intersection of halfplanes, linear algebra
– algorithm: pivot from one extreme point to an adjacent, never decreasing OF
– define basis: m of the available variables
– set nonbasic variables (other than basis) to 0
– find a BFS (basic feasible solution): solve m equations in m unknowns; it's an extreme point if unique and feasible
– pivot: select next variable for basis:
column: OF coefficient > 0, increasing OF (Bland's rule)
row: right-hand side >= 0, min-ratio rule: min RHS/coeff.
– optimality: when all OF coefficients are non-positive, the OF cannot increase any further, we stop

Simplex tableau, init
5   15  1   0   0   480
4   4   0   1   0   160
35  20  0   0   1   1190
13  23  0   0   0   0
A = // recipe [m][n]
5   15
4   4
35  20
c = // profit from a barrel [1][n]
13  23
b = // resources [m][1]
480
160
1190
– recipe for ale: col[0]
– recipe for beer: col[1]
– row[3], col[0,1]: profit from a barrel, objective function
– col[2,3,4]: slack variables
– col[5]: right-hand side of the equations, available resources
– m = 3, num of constraints; n = 2, num of variables
simplex algorithm transforms tableaux, final
0   1   1/10    1/8     0   28
1   0   -1/10   3/8     0   12
0   0   -25/6   -85/8   1   110
0   0   -1      -2      0   -800
– A, B in basis
– answer: $800 for 28 barrels of beer, 12 barrels of ale

Simplex implementation, bare-bones
public class LinearProgramming {
    private static final double EPSILON = 1.0E-10;
    private double[][] a;   // tableaux
    private int M;          // number of constraints
    private int N;          // number of original variables
    private int[] basis;    // basis[i] = basic variable corresponding to row i
                            // only needed to print out the solution, not in the book
public LinearProgramming(double[][] A, double[] b, double[] c) {
    M = b.length;
    N = c.length;
    a = new double[M+1][N+M+1];
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = A[i][j];
    for (int i = 0; i < M; i++) a[i][N+i] = 1.0;
    for (int j = 0; j < N; j++) a[M][j] = c[j];
    for (int i = 0; i < M; i++) a[i][M+N] = b[i];
    basis = new int[M];
    for (int i = 0; i < M; i++) basis[i] = N + i;
    solve(); }
– c size n => coefficients of objective function (A, B barrels, profit from barrel) => number of variables = 2
– b size m => constraints (amount of corn, …) => number of constraints = 3
– A size m*n: recipe for ale/beer
– slack diagonal = 1.0
private void solve() {
    while (true) {
        int q = bland(); // find entering column q
        if (q == -1) break;  // optimal
        int p = minRatioRule(q); // find leaving row p
        if (p == -1) throw new ArithmeticException("Linear program is unbounded");
        pivot(p, q);
        basis[p] = q; } }
– Bland's rule: for pivoting we need positive coefficient
– tableaux have m rows and m+n columns
– find column with coefficient (in row M, last row) > 0
private int bland() {
    for (int q = 0; q < M + N; q++)
        if (a[M][q] > 0) return q;
    return -1; } // optimal
– Min-ratio rule, find leaving row p
– we selected column q already
– OF in extreme point must increase, take max step: find positive min rhs/coeff
private int minRatioRule(int q) {
    int p = -1;
    for (int i = 0; i < M; i++) {
        if (a[i][q] <= EPSILON) continue; // if (a[i][q] <= 0) continue;
        else if (p == -1) p = i;
        else if ((a[i][M+N] / a[i][q]) < (a[p][M+N] / a[p][q])) p = i; }
    return p; }
– do pivot on element row p, column q: scale tableaux entries
– pivot on entry (p, q) using Gauss-Jordan elimination
private void pivot(int p, int q) {
    for (int i = 0; i <= M; i++) // everything but row p and column q
        for (int j = 0; j <= M + N; j++)
            if (i != p && j != q) a[i][j] -= a[p][j] * a[i][q] / a[p][q];
    for (int i = 0; i <= M; i++) // zero out column q
        if (i != p) a[i][q] = 0.0;
    for (int j = 0; j <= M + N; j++) // scale row p
        if (j != q) a[p][j] /= a[p][q];
    a[p][q] = 1.0; }
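Usage sketch on the brewer's problem above (assumes value()/primal() accessors as in the algs4 LinearProgramming class):
double[][] A = { {  5, 15 },
                 {  4,  4 },
                 { 35, 20 } };          // recipe: resources per barrel of ale, beer
double[]   b = { 480, 160, 1190 };      // available corn, hops, malt
double[]   c = { 13, 23 };              // profit per barrel of ale, beer
LinearProgramming lp = new LinearProgramming(A, b, c);
StdOut.println(lp.value());             // 800.0: max profit
// lp.primal() would give { 12.0, 28.0 }: barrels of ale, beer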

Simplex running time: 2(M+N) pivots in practice
– one pivot time O(M^2)
– no pivot rule is known that is guaranteed to be polynomial
– most pivot rules are known to be exponential in worst-case

simplex problems (degeneracy, stalling, sparsity, numerical stability, infeasibility, ...)
– degeneracy: new basis, same extreme point (stalling is common in practice)
– cycling: get stuck by cycling through different bases, that all correspond to same extreme point
– Bland's rule guarantees a finite # of pivots by choosing the lowest valid column index
– don't implement it yourself, use available solvers

Reductions to standard form: replace min with max, inequalities, …
– replace 'min 13A + 15B' with max '-13A -15B'
– replace '4A + 4B >= 160' with '4A + 4B - Sh = 160', Sh >= 0
– replace unrestricted B with 'B = B0 - B1', B0 >= 0, B1 >= 0

Modeling: the term linear 'programming' dates from the 1950s; nowadays it effectively means 'reduction to LP'
– identify variables
– define constraints
– define objective function
example, maxflow reduced to LP problem
– variables: amount of flow on edges, Xvw = flow on edge v-w
– constraints: flow >= 0 and flow <= capacity
– constraints: inflow = outflow at every vertex other than s and t // flow conservation
– objective function: max flow value = flow-on-edge-35 + flow-on-edge-45 // net flow into t
– LP can be slower than graph solution, but LP is extendable and flexible: easy to add any constraints and conditions
example, bipartite matching reduced to LP
– find a matching of maximum cardinality // set of edges with no vertex appearing twice
– pair student-job = one variable, variables: all pairs
– if person i assigned to job j: xij = 1
– OF: max sum of xij
– constraints: at most one job per person: xa0 + xa1 + xa2 <= 1, …
at most one person per job: xa0 + xb0 + xd0 <= 1, ...

if you have an optimization problem, you can often use an LP solver
No universal problem-solving model exists unless P = NP

Intractability

intractable problems: can't be solved in poly-time; proving that a problem is intractable is hard

basis for computation theory : any model of computations can be reduced to Turing machine

simple model of computation: DFA – deterministic finite automaton
– tape stores input
– finite alphabet of symbols
– tape head points to one cell of tape
– reads a symbol, moves one cell at a time

universal model of computation: Turing machine, most powerful model
– tape stores input, output, intermediate results
– finite alphabet of symbols
– tape head points to one cell of tape; reads a symbol; writes a symbol
– moves one cell at a time

Church-Turing thesis: Turing machines can compute any function that can be computed
by a physically harnessable process of the natural world
– use simulation to prove models equivalent
– no need to seek more powerful machines or languages
– 8 decades without a counterexample
– many models of computation that turned out to be equivalent

Algorithms: useful in practice ('efficient') = polynomial (a*N^b) time in the input size N, for all inputs
– but not exponential (x^N)
– which problems have poly-time algorithms? That's the question (classification)
– can't prove intractability in general. 2 problems that provably require exponential time:
given a constant-size program, does it halt in at most K steps?
given N-by-N checkers board position, can the first player force a win?

Search problems: find a solution S for instance I of a problem, check effectively that S is a solution
– if you can't check solution (in poly-time) it's not a Search Problem
– four fundamental problems: // search problems
LSOLVE: system of linear equations (real numbers) – N^3 time using Gaussian elimination
LP: system of linear inequalities (real numbers) – poly-time Ellipsoid algorithm
ILP (system of linear inequalities, 0-1 solution) – no poly-time known
SAT (system of boolean equations, binary solution) – no poly-time known
– TSP: traveling salesperson problem, find the shortest simple cycle visiting every vertex exactly once – not a search problem

P vs. NP, 2 major classes of problems, form a basis for intractability theory
NP: complexity class of all Search Problems; we would like to have solutions for these problems
P: complexity class of search problems solvable in poly-time (in the natural world),
NP subclass, we have effective solutions
natural world: determinism, current state + input = next state (not many possible next states)
– N: nondeterminism (NP: search problems solvable in poly-time on a Nondeterministic Turing Machine)
– NP: nondeterministic polynomial

TM: Turing machine, NTM: nondeterministic TM
– NTM: it's an abstract machine, no such thing in real life (more than 1 possible next state)
– we can emulate NFA (RegEx), trying all possible transitions
– NFA can run in exponential time for some input
– nondeterminism was used for getting practical solution

P = NP? (if we have nondeterminism in the natural world)
or problems we can solve = problems we would like to solve?
– creativity: find solution; appreciating: check solution => search problem, NP
– find solution in poly-time: no creativity, just work => P

Classifying Problems using reduction from SAT
– SAT (satisfiability): a search problem, believed intractable (no poly-time algorithm known)
x1' or x2 or x3 = true
x1 or x2' or x3 = true
x1' or x2' or x3' = true
x1' or x2' or x4 = true
– system of boolean equations
– brute force: try all 2^N truth assignments (sketch after this list); no poly-time algorithm for SAT is known
– Problem X poly-time reduces to problem Y if
X can be solved with polynomial number of standard computational steps
and polynomial number of calls to Y
if we can poly-time reduce SAT to X: X is intractable (probably)
– that's the tool for problems classification
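A brute-force SAT sketch (my own illustration of the 2^N enumeration; literals encoded as +/-(variable index + 1)):
public class BruteForceSAT {
    // clause = array of literals; literal k > 0 means x_k, k < 0 means x_k negated
    static boolean satisfiable(int[][] clauses, int n) {
        for (long mask = 0; mask < (1L << n); mask++) {   // all 2^n truth assignments
            boolean ok = true;
            for (int[] clause : clauses) {
                boolean clauseTrue = false;
                for (int lit : clause) {
                    boolean val = ((mask >> (Math.abs(lit) - 1)) & 1) == 1;
                    if (lit < 0) val = !val;
                    if (val) { clauseTrue = true; break; } }
                if (!clauseTrue) { ok = false; break; } }
            if (ok) return true; }
        return false; } }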

example: SAT reduces to ILP
– reduction: add variable Ci for each SAT equation (iff equation i is satisfied, Ci = 1)
– Ci >= xn if xn not negated or Ci >= 1 - xn if xn is negated
x1' or x2 or x3 = true
// maps to
C1 >= 1 - x1
C1 >= x2
C1 >= x3
C1 <= (1 - x1) + x2 + x3
– add F = 1 iff all equations are satisfied
F <= C1
F <= C2
F <= C3
F <= C4
F >= C1 + C2 + C3 + C4 - 3
– this system has a solution if and only if the SAT instance has a solution,
and this is a reduction from SAT to ILP
=> ILP is intractable (assuming SAT is).

first step in classifying problem
– we can develop poly-time algorithm to solve a problem X: proving it's a P problem
– or, we can reduce SAT to X: showing X is intractable (assuming SAT is)

NP-Completeness: NP problem X is NPC if all NP problems can be reduced to X in poly-time
– SAT is NP-complete; proof: convert nondeterministic Turing machine notation to SAT notation,
if you can solve SAT, you can solve any problem in NP
– poly-time algorithm for SAT exists iff P = NP
– if no poly-time algorithm for some NP problem, none for SAT

summary
– P: class of search problems solvable in poly-time
– NP: class of all search problems, some of which are wickedly hard
– NP-complete: hardest problems in NP
– intractable: problems with no poly-time algorithm

intractability is good, mKay?
– you can exploit it: cryptography relies on the factoring problem being (believed) intractable

coping with intractability: don't solve 'in general'; solve for 'good enough'; approximate
– intractability: no algorithm can solve ALL instances of the problem in poly-time
– solve not any, solve some particular instance // don't solve 'in general', solve special cases
– divide and conquer
– simplify the problem
– maybe a 'good' solution is enough, forget the global optimum/best solution
– use approximation with provable quality
– in real life you can solve intractable problem in poly-time
– example: Hamilton path, exponential
public class HamiltonPath {
    private boolean[] marked;            // vertices on the current path
public HamiltonPath(Graph G) {
    marked = new boolean[G.V()];
    for (int v = 0; v < G.V(); v++) dfs(G, v, 1); }   // try every start vertex
private void dfs(Graph G, int v, int depth) {         // depth = # vertices on the current path
    marked[v] = true;
    if (depth == G.V()) StdOut.println("path found");  // current path visits all vertices
    for (int w : G.adj(v)) if (!marked[w]) dfs(G, w, depth+1);
    marked[v] = false; } }               // backtrack: v may lie on other paths





original post http://vasnake.blogspot.com/2016/05/algorithms-part-ii-princeton-university.html
