It is a technique for solving problems by finding a relation between smaller sub-problems and the larger problem.
sub-problem: a part of the problem with smaller arguments.
For example, the factorial:
F(n) = n!
we know n! = n * (n-1)!
thus F(n) = n * F(n-1)
so if we already know F(n-1), we can compute F(n) easily.
f[0] = 1; // base case: 0! = 1
for (int i = 1; i <= n; i++)
    f[i] = i * f[i-1];
// now f[0..n] holds the factorials of 0 through n
So the main difficulty lies in breaking the problem into smaller sub-problems.
For example, the Fibonacci series:
F[n] = F[n-1] + F[n-2] // starting from F[0] = 0 and F[1] = 1, we can fill an array with all the Fibonacci numbers in O(n)
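The O(n) table-filling idea above can be sketched in C++ (the function name fib_table is my own):

```cpp
#include <vector>

// Fill F[0..n] with Fibonacci numbers bottom-up in O(n).
std::vector<long long> fib_table(int n) {
    std::vector<long long> F(n + 1);
    F[0] = 0;                        // base cases
    if (n >= 1) F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];  // relation between sub-problems
    return F;
}
```

fib_table(10)[10] gives F(10) = 55, and the whole table up to n comes along for free, which is exactly the point of the DP approach.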
You may argue that we can get F[n] in O(log n) using matrix exponentiation.
Yes, that's true. It depends on how many Fibonacci numbers you need to find.
If the whole series needs to be printed, the matrix exponentiation method needs O(n log n) in total,
whereas our dynamic programming technique requires only O(n).
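For comparison, here is a minimal sketch of the O(log n) matrix exponentiation approach, using the identity [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]] (function names are mine; overflow handling is omitted):

```cpp
#include <array>

using Mat = std::array<std::array<long long, 2>, 2>;

// Multiply two 2x2 matrices.
Mat mul(const Mat& a, const Mat& b) {
    Mat c{};  // zero-initialized
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// A single F(n) in O(log n) by repeated squaring.
long long fib_single(long long n) {
    Mat result{};
    result[0][0] = result[1][1] = 1;           // identity matrix
    Mat base{};
    base[0][0] = base[0][1] = base[1][0] = 1;  // [[1,1],[1,0]]
    while (n > 0) {
        if (n & 1) result = mul(result, base); // fold in the current power
        base = mul(base, base);                // square
        n >>= 1;
    }
    return result[0][1];                       // the F(n) entry
}
```

Fast for one F(n), but calling it for every index costs O(n log n), which is why the simple table wins when you need them all.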
Now some interesting problems.
--> Calculating the height of a tree:
height at node n --- H[n] = 1 + max(H[child1], H[child2], ...), where H[leaf] = 0
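This recurrence can be sketched as a recursive DFS (assuming the tree is given as child lists; the leaf-has-height-0 convention is mine, some texts count nodes instead of edges):

```cpp
#include <vector>
#include <algorithm>

// H[v] = 1 + max over children; a node with no children has height 0.
int height(int v, const std::vector<std::vector<int>>& children) {
    int h = 0;
    for (int c : children[v])
        h = std::max(h, 1 + height(c, children));
    return h;
}
```

Each node's answer is built from its children's answers, so every sub-problem is solved exactly once.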
--> Number of ways you can make a binary string with no two adjacent ones.
for example, let's check strings of length three: there are 2^3 = 8 of them,
out of which 011, 110 and 111 are invalid, leaving 5 valid strings.
Let f(n) represent the answer for length n.
Assume we already have the answers for all smaller lengths.
if the first bit is 0, whatever the second bit is, the string is still valid.
if the first bit is 1, the second bit must be 0, else we will have two adjacent ones together.
after that 10 prefix, the rest can be any valid string.
so if we put 0 in front of all the valid strings of length n-1, we get a valid string of length n.
also, if we put 1 followed by 0 in front of the valid strings of length n-2, we get a valid string of length n.
f(n) = f(n-1) + f(n-2), with f(1) = 2 and f(2) = 3
which is also a fibonacci number ;)
see how mathematics shows up everywhere :D
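The counting recurrence can be sketched bottom-up with just two running values (the base cases f(1) = 2 and f(2) = 3 count the valid strings of lengths 1 and 2):

```cpp
// Count binary strings of length n with no two adjacent ones:
// f(n) = f(n-1) + f(n-2), f(1) = 2, f(2) = 3.
long long count_no_adjacent_ones(int n) {
    if (n == 1) return 2;          // "0", "1"
    if (n == 2) return 3;          // "00", "01", "10"
    long long prev = 2, curr = 3;  // f(1), f(2)
    for (int i = 3; i <= n; i++) {
        long long next = curr + prev;
        prev = curr;
        curr = next;
    }
    return curr;
}
```

For n = 3 this gives 5, matching the count above (8 strings minus the 3 invalid ones).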