Dynamic Programming is a technique that trades space for time: it stores intermediate results (which might be needed again later) in memory, thus avoiding the need to recompute them every time.

It is typically used when your problem can be divided into states and the solution to the problem as a whole can be derived from the solution to sub-problems.

The problem under consideration should exhibit the overlapping sub-problems and/or optimal substructure properties.

**What are overlapping sub-problems?**

A problem is said to have overlapping sub-problems if it can be broken down into sub-problems that are reused multiple times. This is closely related to recursion.
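Fibonacci is the standard illustration of overlapping sub-problems: a naive recursion re-solves the same values over and over. A minimal sketch (the call counter is our own addition, just to make the overlap visible):

```python
from collections import Counter

calls = Counter()  # counts how often each sub-problem is solved

def fib(n):
    """Naive recursive Fibonacci; re-solves the same sub-problems many times."""
    calls[n] += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
print(calls[2])  # fib(2) alone was computed 34 separate times
```

Every one of those 34 calls to `fib(2)` does identical work; caching the first result would make the rest free.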

**What is an Optimal Substructure?**

**Optimal Substructure:** A given problem has the Optimal Substructure property if an optimal solution of the problem can be obtained by using optimal solutions of its sub-problems.

**Principle of Optimality**

The principle of optimality is the basic principle of dynamic programming.

A problem is said to satisfy the Principle of Optimality if the sub-solutions of an optimal solution of the problem are themselves optimal solutions for their sub-problems.

Examples:

The shortest path problem satisfies the Principle of Optimality.

This is because if a, x1, x2, …, xn, b is a shortest path from node a to node b in a graph, then the portion of that path from xi to xj is itself a shortest path from xi to xj.

**Greedy vs. Dynamic Programming:**

Both techniques are optimization techniques, and both build solutions from a collection of choices of individual elements.

- The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
- Dynamic programming computes its solution bottom-up, synthesizing solutions from smaller sub-solutions, and trying many possibilities and choices before it arrives at the optimal set of choices.
- There is no a priori litmus test by which one can tell if the Greedy method will lead to an optimal solution.
- By contrast, there is a litmus test for Dynamic Programming, called the Principle of Optimality.

**Divide and Conquer vs. Dynamic Programming:**

Both techniques split their input into parts, find sub solutions to the parts, and synthesize larger solutions from smaller ones.

- Divide and Conquer splits its input at pre-specified, deterministic points (e.g., always in the middle).
- Dynamic Programming splits its input at every possible split point rather than at pre-specified points. After trying all split points, it determines which split point is optimal.
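Rod cutting is a classic example of trying every split point: for a rod of length n, DP considers every first-cut length 1..n and keeps the best. A sketch under assumed prices (the `price` table below is the standard textbook instance, not from the original text):

```python
from functools import lru_cache

# Hypothetical price table: price[i] is the value of a piece of length i + 1.
price = [1, 5, 8, 9, 10, 17, 17, 20]

@lru_cache(maxsize=None)
def rod(n):
    """Best revenue for a rod of length n, trying every possible split point."""
    if n == 0:
        return 0
    return max(price[cut - 1] + rod(n - cut) for cut in range(1, n + 1))

print(rod(8))  # 22 — best is two pieces of lengths 2 and 6 (5 + 17)
```

Unlike Divide and Conquer, which would always cut at a fixed place, the `max` over `cut` tries every split and lets the sub-problem results decide which one is optimal.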

**Advantages and Disadvantages of Dynamic Programming**

**Advantages**

- For various problems in areas such as inventory management, chemical engineering design, and control theory, Dynamic Programming is the only practical technique available to solve the problem.
- It is well suited for multi-stage, multi-point, or sequential decision processes.
- It is suitable for linear or non-linear problems, with discrete or continuous variables, and for deterministic problems.

**Disadvantages**

- No general formulation of Dynamic Programming is available; every problem has to be solved in its own way.
- Dividing a problem into sub-problems and storing intermediate results consumes memory.

**Implementation methods**

Dynamic programming can be implemented in two ways –

- Memoization
- Tabulation

**Memoization** – Memoization uses the top-down technique to solve the problem, i.e., it begins with the original problem, then breaks it into sub-problems and solves these sub-problems in the same way.

In this approach, you assume that you have already computed all subproblems. You typically perform a recursive call (or some iterative equivalent) from the main problem. You ensure that the recursive call never recomputes a subproblem because you cache the results, and thus duplicate sub-problems are not recomputed.
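A minimal top-down sketch using Fibonacci (the function name and explicit cache dictionary are our own choices; Python's `functools.lru_cache` would work equally well):

```python
def fib_memo(n, cache=None):
    """Top-down: recurse from the original problem, caching sub-results
    so no sub-problem is ever computed twice."""
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]
    cache[n] = n if n < 2 else fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

print(fib_memo(50))  # 12586269025 — instant, where naive recursion is infeasible
```

The cache turns the exponential recursion tree into a linear number of distinct calls: each sub-problem is solved once and every later occurrence is a dictionary lookup.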

**Tabulation** – Tabulation is the typical Dynamic Programming approach. Tabulation uses the bottom-up approach to solve the problem, i.e., by solving all related sub-problems first, typically storing the results in an array. Based on the results stored in the array, the solution to the "top" / original problem is then computed.
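The same Fibonacci problem, tabulated bottom-up (again, a sketch; the function name is our own):

```python
def fib_tab(n):
    """Bottom-up: fill a table from the smallest sub-problems upward,
    so every entry it depends on is already computed."""
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(50))  # 12586269025 — same answer, no recursion at all
```

Note the trade-off: tabulation avoids recursion depth limits and function-call overhead, but it computes every sub-problem up to n, whereas memoization computes only the sub-problems the top-level call actually reaches.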

Memoization and tabulation are both storage techniques applied to avoid re-computation of sub-problems.