Dynamic Programming

Dynamic Programming refers to the idea of breaking a problem down into smaller subproblems, solving each subproblem optimally, and combining those optimal results into a solution for the whole problem.

We can apply the dynamic programming paradigm to a problem if the problem has optimal substructure.

  • For example: problems defined by a predictable recurrence over a sequence can often be solved efficiently with dynamic programming.

    • General summations, the factorial function, and the Fibonacci sequence can all be computed this way. It is often the natural way humans evaluate summations: rather than recomputing 1 + 2 + 3 + 4 from scratch to find the 4th partial sum, we do 1 + 2 = 3; 3 + 3 = 6; 6 + 4 = 10, reusing each intermediate result.

    • This is why many recursive (self-referential) algorithms have a dynamic programming aspect.
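The running-sum idea above can be sketched in a few lines of Python (the function name is my own, chosen for illustration):

```python
def partial_sums(n):
    """Return the partial sums 1, 1+2, ..., 1+...+n, reusing prior results."""
    sums = []
    total = 0
    for k in range(1, n + 1):
        total += k          # reuse the previous partial sum instead of re-adding
        sums.append(total)
    return sums

print(partial_sums(4))  # [1, 3, 6, 10]
```

Each step does constant work because the previous subproblem's answer is already in hand.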

In other words, there exists a way to start with a very small base case and build up towards a solution. This is known as the bottom-up approach, and trying it can often reveal whether or not we can use DP.
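A minimal bottom-up sketch for the Fibonacci sequence, starting from the base cases fib(0) = 0 and fib(1) = 1 and building upward (the function name is illustrative):

```python
def fib_bottom_up(n):
    """Bottom-up DP: start from the base cases and iterate toward fib(n)."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # each step reuses the last two answers
    return curr

print(fib_bottom_up(10))  # 55
```

Only the two most recent subproblem answers are kept, so this runs in linear time and constant space.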

From the bottom-up perspective, spotting a DP opportunity usually means discovering that the brute-force approach produces a recursion tree with overlapping subproblems: the same subproblem is solved repeatedly in different branches, so its answer is worth storing.
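The overlap is easy to observe by counting calls in the brute-force recursion and comparing against a memoized (top-down DP) version; this is a sketch, with a call counter added purely for demonstration:

```python
from functools import lru_cache

calls = 0

def fib_naive(n):
    """Brute force: the recursion tree recomputes the same subproblems."""
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(20)
print(calls)  # 21891 calls, yet only 21 distinct subproblems exist

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down DP: caching collapses the overlapping branches."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(20))  # 6765, computed with one call per distinct subproblem
```

The gap between the call count and the number of distinct subproblems is exactly what caching (or a bottom-up table) eliminates.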

The number of parameters that distinguish one subproblem from another usually tells us the dimensionality of our lookup table. For instance, in the 0/1 knapsack problem each subproblem is identified by two parameters, the item under consideration and the remaining capacity, so the natural table is two-dimensional (n items by W capacity values). The coin change problem, by contrast, only needs the optimal answer for a single parameter, the remaining amount, so a 1-dimensional array suffices.
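The 1-dimensional case can be sketched for coin change (fewest coins summing to a target amount); the names here are my own, and this is one standard formulation rather than the only one:

```python
def min_coins(coins, amount):
    """1-D DP: best[a] = fewest coins needed to make amount a (or inf)."""
    INF = float("inf")
    best = [0] + [INF] * amount  # base case: 0 coins make amount 0
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1  # extend a smaller solved amount
    return best[amount] if best[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```

The table is indexed by the single changing parameter (the amount), matching the dimensionality argument above; knapsack's table would instead need both an item index and a capacity index.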
