There are many techniques for making programs faster, and one of them is dynamic programming.
Do you really know what dynamic programming is?
In this post you will learn what it is, how it works, its advantages and disadvantages, and more.
Let's dive in.
Dynamic programming is a problem-solving technique that divides problems into sub-problems and saves the result for later use, eliminating the need to recalculate the result.
The optimal substructure property describes how the solutions to subproblems combine into the overall solution. Dynamic programming is mainly used to tackle optimization problems.
When we talk about optimization problems, we mean finding the minimum or the maximum solution to a problem. If an optimal solution to a problem exists, dynamic programming guarantees that it will be found.
Dynamic programming is a strategy for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each subproblem only once, and then storing the solutions to prevent repeating calculations, according to the definition.
The following are the steps that dynamic programming follows:

1. It breaks down the complex problem into simpler subproblems.
2. It finds the optimal solution to these subproblems.
3. It stores the results of the subproblems (memoization).
4. It reuses the stored results so that the same subproblem is never calculated more than once.
5. Finally, it combines the subproblem results to calculate the result of the complex problem.

The above five steps are the basic steps of dynamic programming. It applies to problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution to an optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.
With dynamic programming, the space complexity increases because we store the intermediate results, but the time complexity is greatly reduced.
There are two approaches to dynamic programming:

1. Top-down approach
2. Bottom-up approach

The top-down approach uses the memoization technique, whereas the bottom-up approach uses the tabulation method. Here memoization is equivalent to recursion plus caching: recursion means the function calls itself, while caching means storing the intermediate results.
Let's explain dynamic programming by an example.
int fib(int n)
{
    if (n < 0)
        return -1;  /* error: negative input */
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    int sum = fib(n - 1) + fib(n - 2);
    return sum;
}
We have used the recursive strategy to compute the Fibonacci numbers in the preceding code. As the value of 'n' grows, the number of function calls and calculations grows with it. In this situation, the time complexity grows exponentially and becomes O(2^n).
One answer to this problem is the dynamic programming approach. Rather than recomputing the whole recursive tree, we reuse the already calculated values. With the dynamic programming approach, the time complexity drops to O(n).
When we apply the dynamic programming approach in the implementation of the Fibonacci series, then the code would look like this:
static int count = 0;
static int memo[100];   /* initialize every entry to -1 before the first call */

int fib(int n)
{
    if (n < 0)
        return -1;      /* error: negative input */
    if (memo[n] != -1)
        return memo[n]; /* reuse the stored result instead of recomputing */
    count++;
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    int sum = fib(n - 1) + fib(n - 2);
    memo[n] = sum;
    return sum;
}
In the preceding code, we have used the memoization technique, in which we store the results in an array so that the values can be reused. This is also known as the top-down technique, in which we start from the top and break the problem into subproblems.
The bottom-up approach is the other technique that may be used to implement dynamic programming. It uses the tabulation technique and addresses the same kind of problems, but it removes the recursion. Without recursion there is no risk of stack overflow and no overhead from recursive function calls. In the tabulation technique, we solve the subproblems and store their results in a table.
The bottom-up strategy avoids recursion, thereby saving memory space. A bottom-up algorithm starts from the beginning, whereas a recursive algorithm starts from the end and works backwards. In the bottom-up technique, we start from the base cases and build up to the answer. As we know, the base cases in the Fibonacci series are 0 and 1, so since the bottom-up method starts from the base cases, we will begin with 0 and 1.
Example
Let's find the nth member of a Fibonacci series.
#include <stdio.h>

int Fibonacci(int N)
{
    /* if N = 2, we need to store 3 Fibonacci members (0, 1, 1)
     * if N = 3, we need to store 4 Fibonacci members (0, 1, 1, 2)
     * In general, to compute Fib(N) we need an array of size N + 1. */
    int Fib[N + 1], i;
    /* we know Fib[0] = 0 and Fib[1] = 1 */
    Fib[0] = 0;
    Fib[1] = 1;
    for (i = 2; i <= N; i++)
        Fib[i] = Fib[i - 1] + Fib[i - 2];
    /* the last index holds the result */
    return Fib[N];
}

int main()
{
    int n;
    scanf("%d", &n);
    /* if n == 0 or n == 1, the result is n */
    if (n <= 1)
        printf("Fib(%d) = %d\n", n, n);
    else
        printf("Fib(%d) = %d\n", n, Fibonacci(n));
    return 0;
}
Dynamic programming can be implemented in various programming languages, including but not limited to C, C++, Java, and Python.
The two key elements of dynamic programming are:
Recursion: Dynamic programming often involves solving a problem by breaking it down into smaller subproblems and solving them recursively. This involves defining a problem in terms of smaller instances of the same problem and solving them until the base case is reached.
Memoization: Dynamic programming utilizes memoization, which is the process of storing intermediate results of subproblems in a data structure (such as a cache or a table) so that they can be reused later. This helps in avoiding redundant computations and improves the overall efficiency of the algorithm.