Dynamic Programming was invented by Richard Bellman in the 1950s. It is a very general technique for solving optimization problems. Using dynamic programming requires that the problem can be divided into overlapping, similar subproblems. A recurrence relating the larger subproblems to the smaller ones is used to fill out a table. The algorithm remembers the solutions of the subproblems, so it does not have to recalculate them.
The problem must also satisfy the principle of optimality (a phrase coined by Bellman), meaning that an optimal solution to the problem can be composed from optimal solutions to its subproblems. This is not true of all optimization problems.
Dynamic Programming requires:
1. The problem can be divided into overlapping subproblems
2. The subproblems can be represented by a table
3. The principle of optimality holds, giving a recurrence between smaller and larger problems
Compared to a brute-force recursive algorithm that could run in exponential time, a dynamic programming algorithm typically runs in polynomial (often quadratic) time. (Recall the algorithms for the Fibonacci numbers: the recursive algorithm ran in exponential time while the iterative algorithm ran in linear time.) The space cost does increase, typically by the size of the table, though frequently the whole table does not have to be stored.
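To make the Fibonacci comparison concrete, here is a minimal sketch of the two approaches (function names are illustrative, not from the notes):

```python
def fib_recursive(n):
    # Exponential time: the same subproblems are recomputed repeatedly.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Linear time: each subproblem is solved once, and only the two
    # most recent values are kept (the "prior row" idea in miniature).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both return the same values; only the cost differs.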
Computing binomial coefficients is not an optimization problem, but it can be solved using dynamic programming.
Binomial coefficients are represented by C(n, k) or (n choose k) and give the coefficients of a binomial power:
(a + b)^n = C(n, 0)a^n + ... + C(n, k)a^(n-k)b^k + ... + C(n, n)b^n
The recurrence is defined by the prior power:
C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0
IC: C(n, 0) = C(n, n) = 1
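As a quick sanity check, the recurrence can be verified against Python's built-in `math.comb` (used here purely for checking, not as part of the algorithm):

```python
from math import comb

# Spot-check C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0.
for n in range(1, 10):
    for k in range(1, n):
        assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)
```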
The dynamic programming algorithm constructs an (n+1) × (k+1) table, with the first column and the diagonal filled in using the IC.
Construct the table (rows indexed 0..n, columns 0..k):

      |  0    1    2   ...   k-1          k
 -----+-------------------------------------------
   0  |  1
   1  |  1    1
   2  |  1    2    1
  ... |
   k  |  1                                1
  ... |
  n-1 |  1             C(n-1, k-1)   C(n-1, k)
   n  |  1                            C(n, k)
The table is then filled out iteratively, row by row, using the recurrence.
Algorithm Binomial(n, k)
for i ← 0 to n do // fill out the table row by row
    for j ← 0 to min(i, k) do
        if j = 0 or j = i then C[i, j] ← 1 // IC
        else C[i, j] ← C[i-1, j-1] + C[i-1, j] // recurrence
return C[n, k]
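A direct Python translation of the pseudocode (a sketch; the function name is ours):

```python
def binomial(n, k):
    # Build the full (n+1) x (k+1) table, row by row.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                              # IC
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]  # recurrence
    return C[n][k]
```

For example, binomial(5, 2) returns 10.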
The cost of the algorithm is filling out the table, with addition as the basic operation. Because k ≤ n, the sum must be split into two parts: for i ≤ k only the upper-triangular part of the row is filled in, while for i > k the row is filled in across all k+1 columns.
A(n, k) = sum for the upper triangle + sum for the lower rectangle
        = ∑_{i=1}^{k} ∑_{j=1}^{i-1} 1 + ∑_{i=k+1}^{n} ∑_{j=1}^{k} 1
        = ∑_{i=1}^{k} (i-1) + ∑_{i=k+1}^{n} k
        = (k-1)k/2 + k(n-k) ∈ Θ(nk)
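The closed form can be checked by literally counting the additions the algorithm performs; for instance, with n = 10 and k = 3 the formula gives 2·3/2 + 3·7 = 24. A small check (names are ours):

```python
def additions(n, k):
    # Count the additions performed while filling the table:
    # one addition for every non-boundary cell (0 < j < i).
    count = 0
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j != 0 and j != i:
                count += 1
    return count

# Compare against the closed form (k-1)k/2 + k(n-k).
assert additions(10, 3) == (3 - 1) * 3 // 2 + 3 * (10 - 3)  # both 24
```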
Note that we do not need to keep the whole table, only the prior row.
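A sketch of that space saving, keeping a single row of length k+1 and updating it in place from right to left so the prior row's values are still available when needed (function name is ours):

```python
def binomial_one_row(n, k):
    # row[j] holds C(i, j) for the current row i; only O(k) space.
    row = [1] + [0] * k
    for i in range(1, n + 1):
        # Update right-to-left so row[j-1] still holds C(i-1, j-1).
        for j in range(min(i, k), 0, -1):
            if j == i:
                row[j] = 1                    # diagonal IC
            else:
                row[j] = row[j] + row[j - 1]  # recurrence
    return row[k]
```

This returns the same values as the full-table version, e.g. binomial_one_row(5, 2) is 10.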
We'll consider more sophisticated dynamic programming problems next: Warshall's and Floyd's algorithms.