Dynamic programming

Dynamic programming is a powerful algorithmic design paradigm. The key idea is to save state to avoid recomputation: break a large computational problem up into smaller subproblems, store the answers to those smaller subproblems, and, eventually, use the stored answers to solve the original problem. This avoids recomputing the same quantity over and over again, and a potential exponential blow-up in the running time. We now consider two examples.
  • Binomial coefficients. The binomial coefficient C(n, k) is the number of ways of choosing a subset of k elements from a set of n elements. It arises in probability and statistics, e.g., the probability of flipping a biased coin (with probability p of heads) n times and getting exactly k heads is C(n, k) p^k (1-p)^(n-k). One formula for computing binomial coefficients is C(n, k) = n! / (k! (n-k)!). This formula is not so amenable to direct computation because the intermediate results may overflow, even if the final answer does not. For example, C(100, 15) = 253338471349988640 fits in a 64-bit long, but the binary representation of 100! is 525 bits long. Pascal's identity expresses C(n, k) in terms of smaller binomial coefficients:

    Pascal's identity: C(n, k) = C(n-1, k) + C(n-1, k-1)

    To understand why Pascal's identity is valid, consider some arbitrary element x. To choose k of n elements, we either select element x (in which case we still need to choose k-1 of the remaining n-1 elements), or we don't select element x (in which case we still need to choose k of the remaining n-1 elements).

    The naive recursive implementation below fails spectacularly for even moderate values of n and k, not because of overflow, but rather because the same subproblems are solved repeatedly.

    public static long binomial(int n, int k) {
        if (k == 0) return 1;
        if (n == 0) return 0;
        return binomial(n-1, k) + binomial(n-1, k-1);
    }
    This makes the algorithm take exponential time (the number of recursive calls is around C(n, k)). To compensate, we must avoid recomputing the same quantities over and over. One way to do this is to store the results of all of the subproblems in an n-by-k array. Before computing C(n, k), we first consult the table to see whether it has already been computed; if so, we return the stored value. This ensures that we compute C(n, k) at most once for each choice of n and k. This version of dynamic programming is referred to as top-down, or memoization (a memoized sketch appears after this list).

    Instead of using a recursive procedure, we could fill in the entries of the n-by-k array directly. We must organize the computation so that all entries that we need are filled in before we use them in subsequent computations. Bottom-up dynamic programming fills in the 2D array starting with the values that are easiest to compute. We begin with the base cases: C(n, 0) = 1 for all n ≥ 0, and C(0, k) = 0 for all k ≥ 1. Then we fill in all the values for n = 1, then n = 2, and so forth. This ensures that everything we need has been computed before we ever access it. The program takes two command-line integers N and K and prints C(N, K) using a combination of Pascal's identity and bottom-up dynamic programming.

    long[][] binomial = new long[N+1][K+1];

    // base cases
    for (int k = 1; k <= K; k++) binomial[0][k] = 0;
    for (int n = 0; n <= N; n++) binomial[n][0] = 1;

    // bottom-up dynamic programming
    for (int n = 1; n <= N; n++)
        for (int k = 1; k <= K; k++)
            binomial[n][k] = binomial[n-1][k-1] + binomial[n-1][k];
    To compute C(6, 4), the program fills in the following table of values row by row from top to bottom (n = 1 to N), computing each row from left to right (k = 1 to K).
    n\k   0   1   2   3   4
     0    1   0   0   0   0
     1    1   1   0   0   0
     2    1   2   1   0   0
     3    1   3   3   1   0
     4    1   4   6   4   1
     5    1   5  10  10   5
     6    1   6  15  20  15
  • Longest common subsequence. Now we consider a more sophisticated application of dynamic programming to a central problem arising in computational biology and other domains. Given two strings s and t, we wish to determine how similar they are. Some examples include: comparing two DNA sequences for homology (similarity), two English words for spelling, two Java files for repeated code. It also arises in molecular biology, gas chromatography, and bird song analysis. One simple strategy is to find the length of the longest common subsequence (LCS). If we delete some characters from s and some characters from t, and the resulting two strings are equal, we call the resulting string a common subsequence. The LCS problem is to find a common subsequence that is as long as possible. For example the LCS of ggcaccacg and acggcggatacg is ggcaacg.

    Now we describe a systematic method for computing the LCS of two strings x and y using dynamic programming. Let M and N be the lengths of x and y, respectively. We use the notation x[i..M] to denote the suffix of x starting at position i, and y[j..N] to denote the suffix of y starting at position j. If x and y begin with the same letter, then we should include that first letter in the LCS. Now our problem reduces to finding the LCS of the two remaining substrings x[1..M] and y[1..N]. On the other hand, if the two strings start with different letters, both characters cannot be part of a common subsequence, so we must remove one or the other. In either case, the problem reduces to finding the LCS of two strings, at least one of which is strictly shorter. If we let opt[i][j] denote the length of the LCS of x[i..M] and y[j..N], then the following recurrence expresses it in terms of the length of LCSs of shorter suffixes.

    Recurrence for the length of the longest common subsequence:

    opt[i][j] = 0                                if i = M or j = N
    opt[i][j] = opt[i+1][j+1] + 1                if x[i] = y[j]
    opt[i][j] = max(opt[i][j+1], opt[i+1][j])    otherwise

    The program is a bottom-up translation of this recurrence (a sketch in Java appears after this list). We maintain a two-dimensional array opt[][] in which opt[i][j] is the length of the LCS of the two suffixes x[i..M] and y[j..N]. For the input strings ggcaccacg and acggcggatacg, the program computes the following table by filling in values from right to left (j = N-1 to 0) and bottom to top (i = M-1 to 0).

    i\j     0  1  2  3  4  5  6  7  8  9 10 11 12
    x\y     a  c  g  g  c  g  g  a  t  a  c  g
    0 g     7  7  7  6  6  6  5  4  3  3  2  1  0
    1 g     6  6  6  6  5  5  5  4  3  3  2  1  0
    2 c     6  5  5  5  5  4  4  4  3  3  2  1  0
    3 a     6  5  4  4  4  4  4  4  3  3  2  1  0
    4 c     5  5  4  4  4  3  3  3  3  3  2  1  0
    5 c     4  4  4  4  4  3  3  3  3  3  2  1  0
    6 a     3  3  3  3  3  3  3  3  3  3  2  1  0
    7 c     2  2  2  2  2  2  2  2  2  2  2  1  0
    8 g     1  1  1  1  1  1  1  1  1  1  1  1  0
    9       0  0  0  0  0  0  0  0  0  0  0  0  0
    One final challenge is to recover the optimal solution itself, not just its value. The key idea is to retrace the steps of the dynamic programming algorithm backwards, re-discovering the path of choices from opt[0][0] to opt[M][N]. To determine the choice that led to opt[i][j], we consider the three possibilities:
    • x[i] matches y[j]. In this case, we must have opt[i][j] = opt[i+1][j+1] + 1, and the next character in the LCS is x[i].
    • The LCS does not contain x[i]. In this case, opt[i][j] = opt[i+1][j].
    • The LCS does not contain y[j]. In this case, opt[i][j] = opt[i][j+1].

    The algorithm takes time and space proportional to MN.
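
    To make the top-down approach concrete, here is a minimal memoized sketch in Java (an illustration of the idea, not the program referenced above). The memo array and the use of -1 as a "not yet computed" marker are assumptions; the caller is assumed to create memo as a new long[N+1][K+1] filled with -1 before the first call.

    private static long[][] memo;   // memo[n][k] caches C(n, k); -1 means "not yet computed"

    public static long binomial(int n, int k) {
        if (k == 0) return 1;
        if (n == 0) return 0;
        if (memo[n][k] == -1)        // consult the table before recomputing
            memo[n][k] = binomial(n-1, k) + binomial(n-1, k-1);
        return memo[n][k];
    }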
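
    Similarly, here is a minimal bottom-up sketch of the LCS computation and traceback described above (an illustration of the recurrence, not the course's program); the names x, y, and opt follow the text.

    public static String lcs(String x, String y) {
        int M = x.length(), N = y.length();
        int[][] opt = new int[M+1][N+1];   // opt[i][N] = opt[M][j] = 0 by default

        // fill in the table from right to left and bottom to top, using the recurrence
        for (int i = M-1; i >= 0; i--)
            for (int j = N-1; j >= 0; j--)
                if (x.charAt(i) == y.charAt(j)) opt[i][j] = opt[i+1][j+1] + 1;
                else                            opt[i][j] = Math.max(opt[i][j+1], opt[i+1][j]);

        // retrace the choices from opt[0][0] to recover an LCS itself
        StringBuilder lcs = new StringBuilder();
        int i = 0, j = 0;
        while (i < M && j < N) {
            if      (x.charAt(i) == y.charAt(j)) { lcs.append(x.charAt(i)); i++; j++; }
            else if (opt[i+1][j] >= opt[i][j+1]) i++;
            else                                 j++;
        }
        return lcs.toString();   // has length opt[0][0]; 7 for the example strings above
    }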

Space usage. Usually with dynamic programming you run out of space before you run out of time. However, sometimes it is possible to avoid using an M-by-N array and get by with just one or two arrays of length M and N. For example, it is not hard to modify the binomial coefficient program to do exactly this. (See Exercise 1.) Similarly, for the longest common subsequence problem, it is easy to avoid the 2D array if you only need the length of the LCS. Finding the alignment itself in linear space is substantially more challenging (but possible using Hirschberg's divide-and-conquer algorithm).
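
For the binomial coefficient program, a minimal sketch of this space-saving idea (one possible approach, not the official solution to Exercise 1) keeps a single row of length K+1 and updates k from high to low, so that each entry still refers to the previous row's values:

long[] row = new long[K+1];
row[0] = 1;                                  // C(n, 0) = 1 for all n; other entries start at 0
for (int n = 1; n <= N; n++)
    for (int k = Math.min(n, K); k >= 1; k--)
        row[k] += row[k-1];                  // C(n, k) = C(n-1, k) + C(n-1, k-1)
// row[K] now holds C(N, K)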

Dynamic programming history. Bellman. LCS by Robinson in 1938. Cocke-Younger-Kasami (CYK) algorithm for parsing context free grammars, Floyd-Warshall for all-pairs shortest path, Bellman-Ford for arbitrage detection (negative cost cycles), longest common subsequence for diff, edit distance for global sequence alignment, bitonic TSP. Knapsack problem, subset sum, partitioning. Application = multiprocessor scheduling, minimizing VLSI circuit size.

Root finding. Goal: given function f(x), find x* such that f(x*) = 0. Nonlinear equations can have any number of solutions.

x^2 + y^2 = -1 has no real solutions
e^(-x) - 17 = 0 has one real solution
x^2 - 4x + 3 = 0 has two real solutions (1 and 3)
sin(x) = 0 has infinitely many solutions

Unconstrained optimization. Goal: given function f(x), find x* such that f(x) is maximized or minimized. If f(x) is differentiable, then we are looking for an x* such that f'(x*) = 0. However, this may lead to local minima, maxima, or saddle points.

Bisection method. Goal: given function f(x), find x* such that f(x*) = 0. Assume you know interval [a, b] such that f(a) < 0 and f(b) > 0.
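
A minimal sketch of the bisection method under these assumptions (the class name, method name, and tolerance are made up for illustration): repeatedly evaluate f at the midpoint and keep the half of the interval that still brackets a root.

import java.util.function.DoubleUnaryOperator;

public class Bisection {
    // Assumes f(a) < 0 and f(b) > 0, as in the text.
    public static double bisect(DoubleUnaryOperator f, double a, double b, double tol) {
        while (b - a > tol) {
            double mid = (a + b) / 2;
            if (f.applyAsDouble(mid) < 0) a = mid;   // a root lies in [mid, b]
            else                          b = mid;   // a root lies in [a, mid]
        }
        return (a + b) / 2;
    }

    public static void main(String[] args) {
        // root of f(x) = x^2 - 2 in [1, 2]: prints an approximation to sqrt(2)
        System.out.println(bisect(x -> x*x - 2, 1, 2, 1e-12));
    }
}

Each iteration halves the bracketing interval, so after k iterations the error is at most (b - a) / 2^k.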

Newton's method. Approximate the function locally (by its tangent line for root finding, by a quadratic model for optimization). Fast convergence if started close enough to the answer. The update formulas below are for finding a root of f(x) and a root of f'(x), respectively.

root finding:   x_{k+1} = x_k - f(x_k) / f'(x_k)
optimization:   x_{k+1} = x_k - f'(x_k) / f''(x_k)

Newton's method is only reliable if started "close enough" to the solution. Bad example (Smale): f(x) = x^3 - 2x + 2. If you start in the interval [-0.1, 0.1], Newton's method reaches a stable 2-cycle, with the iterates oscillating between 0 and 1. If started to the left of the negative real root, it will converge to that root.

To handle general differentiable or twice differentiable functions of one variable, we might declare an interface

public interface Function {
    public double eval(double x);
    public double deriv(double x);
}

The program runs Newton's method on a differentiable function to compute points x* where f(x*) = 0 and points where f'(x*) = 0.
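
A minimal sketch of the root-finding iteration using the Function interface above (the method name, tolerance, and iteration cap are assumptions for illustration):

public static double newton(Function f, double x0, double tol, int maxIter) {
    double x = x0;
    for (int i = 0; i < maxIter; i++) {
        double step = f.eval(x) / f.deriv(x);   // Newton step: x_{k+1} = x_k - f(x_k) / f'(x_k)
        x -= step;
        if (Math.abs(step) < tol) return x;     // converged
    }
    return x;                                   // may not have converged (see the Smale example above)
}

To find points where f'(x*) = 0, apply the same loop to a Function whose eval and deriv return f' and f''.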

The probability of finding an electron in the 4s excited state of hydrogen at radius x is given by f(x) = (1 - 3x/4 + x^2/8 - x^3/192)^2 e^(-x/2), where x is the radius in units of the Bohr radius (0.529173E-8 cm). The program contains the formula for f(x), f'(x), and f''(x). By starting Newton's method at 0, 4, 5, 13, and 22, we obtain all three roots and all five local minima and maxima.

Newton's method in higher dimensions. [probably omit or leave as an exercise] Use it to solve a system of nonlinear equations. In general, there are no good methods for solving an arbitrary nonlinear system of equations, but Newton's method extends naturally: the update is

x_{k+1} = x_k - J(x_k)^(-1) f(x_k)

where J is the Jacobian matrix of partial derivatives. In practice, we don't explicitly compute the inverse. Instead of computing y = J^(-1) f, we solve the linear system of equations Jy = f.

To illustrate the method, suppose we want to find a solution (x, y) to the following system of two nonlinear equations.

x^3 - 3xy^2 - 1 = 0
3x^2 y - y^3 = 0

In this example, the Jacobian is given by

J = [ 3x^2 - 3y^2    -6xy         ]
    [ 6xy             3x^2 - 3y^2 ]

If we start Newton's method at the point (-0.6, 0.6), we quickly obtain one of the roots, (-1/2, sqrt(3)/2), up to machine accuracy. The other roots are (-1/2, -sqrt(3)/2) and (1, 0). The program uses this interface to solve the system of equations. We use the Jama matrix library to do the matrix computations.
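
A minimal standalone sketch of this computation with the Jama library (an illustration of the update above, not the course's program; the class name and fixed iteration count are arbitrary choices):

import Jama.Matrix;

public class NewtonSystem {
    public static void main(String[] args) {
        double x = -0.6, y = 0.6;                          // starting point (-0.6, 0.6)
        for (int k = 0; k < 20; k++) {
            // f(x, y) as a 2-by-1 column vector
            Matrix f = new Matrix(new double[][] {
                { x*x*x - 3*x*y*y - 1 },
                { 3*x*x*y - y*y*y     }
            });
            // Jacobian of partial derivatives, evaluated at (x, y)
            Matrix J = new Matrix(new double[][] {
                { 3*x*x - 3*y*y, -6*x*y         },
                { 6*x*y,          3*x*x - 3*y*y }
            });
            Matrix step = J.solve(f);                      // solve J y = f rather than forming the inverse
            x -= step.get(0, 0);
            y -= step.get(1, 0);
        }
        System.out.println(x + " " + y);                   // approximately -0.5 0.8660254...
    }
}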

Optimization. Use same method to optimize a function of several variables. Good methods exist if multivariate function is sufficiently smooth.

x_{k+1} = x_k - H(x_k)^(-1) g(x_k)

Need the gradient g(x) = ∇f(x) and the Hessian H(x) = ∇²f(x). The method finds an x* where g(x*) = 0, but this could be a maximum, a minimum, or a saddle point. If the Hessian is positive definite (all eigenvalues are positive), then x* is a local minimum; if all eigenvalues are negative, then it is a local maximum; otherwise it is a saddle point.
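
The classification step can be carried out with Jama's eigenvalue decomposition; here is a small sketch (the Hessian entries are made-up values for illustration):

import Jama.Matrix;

public class ClassifyCriticalPoint {
    public static void main(String[] args) {
        // Hessian H(x*) at a critical point (made-up values for illustration)
        Matrix H = new Matrix(new double[][] {
            { 2.0, 1.0 },
            { 1.0, 3.0 }
        });
        double[] lambda = H.eig().getRealEigenvalues();

        boolean allPositive = true, allNegative = true;
        for (double ev : lambda) {
            if (ev <= 0) allPositive = false;
            if (ev >= 0) allNegative = false;
        }
        if      (allPositive) System.out.println("local minimum");   // positive definite
        else if (allNegative) System.out.println("local maximum");
        else                  System.out.println("saddle point");
    }
}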

Also, second derivatives change slowly, so it may not be necessary to recompute the Hessian (or its LU decomposition) at each step. In practice, it is expensive to compute the Hessian exactly, so so-called quasi-Newton methods, including the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule, are often preferred.

Linear programming. Create a matrix interface. Linear programming generalizes two-person zero-sum games, many problems in combinatorial optimization, .... Run AMPL from the web.

Programming = planning. Give some history. The decision problem was not known to be in P for a long time. In 1979, Khachiyan resolved the question in the affirmative and made headlines in the New York Times with a geometric divide-and-conquer algorithm known as the ellipsoid algorithm. It requires O(N^4 L) bit operations, where N is the number of variables and L is the number of bits in the input. Although this was a landmark in optimization, it did not immediately lead to a practical algorithm. In 1984, Karmarkar proposed a projective scaling algorithm that takes O(N^3.5 L) time. It opened the door to efficient implementations because it typically performs much better than its worst-case guarantee. Various interior point methods were proposed in the 1990s, and the best known complexity bound is O(N^3 L). More importantly, these algorithms are practical and competitive with the simplex method. They also extend to handle even more general problems.

Simplex method.

Linear programming solvers. In 1947, George Dantzig proposed the simplex algorithm for linear programming. It is one of the greatest and most successful algorithms of all time. Our solver handles linear programming but is not industrial strength. The program illustrates how to use it. The classes MPSReader and MPSWriter can parse input files and write output files in the standard MPS format. Test LP data files are available in MPS format.

More applications. OR-Objects also has graph coloring, the traveling salesman problem, vehicle routing, and shortest paths.