
What is Dynamic Programming In C++

Posted in Technology   LAST UPDATED: APRIL 17, 2023

    There are many techniques for making programs run faster, and one of them is dynamic programming.

    Do you really know what dynamic programming is?

    In this post, you will learn everything about dynamic programming: what it is, its advantages and disadvantages, and more.

    Let's dive into this.


    What is Dynamic Programming?

    Dynamic programming is a problem-solving technique that divides problems into sub-problems and saves the result for later use, eliminating the need to recalculate the result.

    Dynamic programming relies on the optimal substructure property, which means that an optimal solution to the overall problem can be built from the optimal solutions of its subproblems. Dynamic programming is mainly used to tackle optimization problems.

    When we talk about optimization problems, we mean finding the smallest or largest possible solution to a problem. If an optimal solution to a problem exists, dynamic programming ensures that it will be found.

    In other words, dynamic programming is a strategy for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each subproblem only once, and storing the solutions to avoid repeating the same calculations.

    How Does The Dynamic Programming Method Work?

    The following are the steps that dynamic programming follows:

    • It breaks down the complicated problem into simpler subproblems.
    • It finds the optimal solution to these sub-problems.
    • It stores the results of the subproblems; this technique of saving the results of subproblems is known as memoization.
    • It reuses those results so that the same sub-problem is never calculated more than once.
    • Finally, it combines these results to compute the solution to the original, complicated problem.

    The above five steps are the basic steps of dynamic programming, and they apply to problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution to the optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.

    In the case of dynamic programming, the space complexity increases because we store the intermediate results, but the time complexity is greatly reduced.

    Approaches Of Dynamic Programming

    There are two approaches to dynamic programming:

    • Top-down approach
    • Bottom-up approach

    Top-down approach

    The top-down approach uses the memoization technique, whereas the bottom-up approach uses the tabulation method. Here, memoization is equivalent to recursion plus caching: recursion means the function calls itself, while caching means storing the intermediate results.

    Advantages of the Top-down approach

    • It is straightforward to learn and implement.
    • It solves the subproblems only when it is required.
    • It is easy to debug.

    Disadvantages of the Top-down approach

    • It employs recursion, which occupies additional memory in the call stack. When the recursion is too deep, a stack overflow can occur.
    • It occupies additional memory that decreases the overall performance.

    Let's explain dynamic programming with an example.

    int fib(int n)
    {
        if (n < 0)
            return -1;   // error: n must be non-negative
        if (n == 0)
            return 0;
        if (n == 1)
            return 1;
        return fib(n - 1) + fib(n - 2);
    }

    We have used the recursive strategy to compute the Fibonacci series in the preceding code. As the value of 'n' increases, the number of function calls grows rapidly, and so does the amount of repeated work; for example, fib(5) computes fib(3) twice and fib(2) three times. In this situation, the time complexity grows exponentially and becomes O(2^n).

    One answer to this challenge is to adopt the dynamic programming approach. Rather than recomputing the whole recursion tree, we can reuse the already calculated values. If we adopt the dynamic programming approach, the time complexity becomes O(n).

    When we apply the dynamic programming approach in the implementation of the Fibonacci series, then the code would look like this:

    // memo[] caches already computed Fibonacci values. A value of 0 means
    // "not computed yet", which is safe here because fib(0) is handled
    // before the lookup. This assumes n stays below the array size.
    static int memo[1000];
    static int count = 0;   // counts how many values are actually computed

    int fib(int n)
    {
        if (n < 0)
            return -1;      // error: n must be non-negative
        if (n == 0)
            return 0;
        if (n == 1)
            return 1;
        if (memo[n] != 0)   // already computed, reuse the stored result
            return memo[n];
        count++;
        int sum = fib(n - 1) + fib(n - 2);
        memo[n] = sum;      // store the result for later reuse
        return sum;
    }

    In the preceding code, we have used the memoization technique, in which we store the results in an array so that the values can be reused. This is also known as the top-down technique, in which we start from the top and break the problem into sub-problems.
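    For comparison, below is a minimal C++ sketch (not part of the original example) of the same top-down idea that uses std::unordered_map as the cache, so the memo table does not need a fixed size. The function name fibMemo and the use of long long are illustrative choices:

    #include <iostream>
    #include <unordered_map>

    // Top-down (memoized) Fibonacci: recursion plus caching.
    long long fibMemo(int n, std::unordered_map<int, long long>& memo)
    {
        if (n <= 1)
            return n;                       // base cases: fib(0) = 0, fib(1) = 1
        auto it = memo.find(n);
        if (it != memo.end())
            return it->second;              // already computed, reuse it
        long long result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
        memo[n] = result;                   // store the result for later reuse
        return result;
    }

    int main()
    {
        std::unordered_map<int, long long> memo;
        std::cout << "fib(40) = " << fibMemo(40, memo) << std::endl;   // prints 102334155
        return 0;
    }

    Using a map trades a small amount of lookup overhead for not having to know the maximum value of n in advance.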

    Bottom-Up approach

    The bottom-up approach is the other technique that can be used to implement dynamic programming. It uses the tabulation technique. It addresses the same kind of problems, but it removes the recursion; without recursion, there is no risk of stack overflow and no overhead from recursive function calls. In the tabulation technique, we solve the subproblems and store the results in a table (an array or matrix).

    The bottom-up strategy avoids recursion, thereby saving memory space on the call stack. A bottom-up algorithm starts from the beginning, whereas a recursive (top-down) algorithm starts from the end and works backwards. In the bottom-up technique, we start from the base cases and build up the answers for larger inputs. As we know, the base cases of the Fibonacci series are 0 and 1, so the bottom-up method begins with 0 and 1.

    Example

    Let's find the Nth member of the Fibonacci series.

    #include<stdio.h>
    
    int Fibonacci(int N)
    {
        //if N = 2, we need to store 3 fibonacci members(0,1,1)
        //if N = 3, we need to store 4 fibonacci members(0,1,1,2)
        //In general to compute Fib(N), we need N+1 size array.
        int Fib[N+1],i;
    
        //we know Fib[0] = 0, Fib[1]=1
        Fib[0] = 0;
        Fib[1] = 1;
    
        for(i = 2; i <= N; i++)
            Fib[i] = Fib[i-1]+Fib[i-2];
    
        //last index will have the result
        return Fib[N];
    }
    
    int main()
    {
        int n;
        scanf("%d",&n);
    
        //if n == 0 or n == 1 the result is n
        if(n <= 1)
            printf("Fib(%d) = %d\n",n,n);
        else
            printf("Fib(%d) = %d\n",n,Fibonacci(n));
    
        return 0;
    }
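    For example, if we run this program and enter 10, it prints Fib(10) = 55, and each value from Fib(2) up to Fib(10) is computed exactly once by the loop.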

    Conclusion

    • We solve all the smaller sub-problems first, and then use their results to solve the larger sub-problems, progressing until we reach the original problem.
    • We use a for loop to iterate over the sub-problems.
    • The bottom-up strategy is also known as the tabulation or table-filling method (a space-saving variant is sketched below).
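    As a further illustration of the table-filling idea (a sketch that is not part of the original article), the table can even be shrunk to its last two entries, since the recurrence only ever looks two steps back. This reduces the extra space from O(N) to O(1) while the loop stays the same:

    #include <iostream>

    // Bottom-up Fibonacci that keeps only the last two values instead of the
    // whole Fib[] table, reducing the extra space from O(N) to O(1).
    long long fibonacci(int N)
    {
        if (N <= 1)
            return N;                       // base cases: fib(0) = 0, fib(1) = 1
        long long prev = 0, curr = 1;
        for (int i = 2; i <= N; i++)
        {
            long long next = prev + curr;   // same recurrence as Fib[i] = Fib[i-1] + Fib[i-2]
            prev = curr;
            curr = next;
        }
        return curr;
    }

    int main()
    {
        std::cout << "Fib(10) = " << fibonacci(10) << std::endl;   // prints Fib(10) = 55
        return 0;
    }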

    Frequently Asked Questions (FAQs)

    1. What is dynamic programming in OOP?

    Dynamic programming is a problem-solving technique that divides problems into sub-problems and saves the result for later use, eliminating the need to recalculate the result.

    2. What is Static vs dynamic in OOP?

    In static binding, the compiler resolves function calls at compile time, while in dynamic binding, function calls are resolved at run time.
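    As a small illustration (not from the original article), in the C++ sketch below the non-virtual call is resolved at compile time from the pointer's type (static binding), while the virtual call is resolved at run time from the actual object (dynamic binding):

    #include <iostream>

    struct Base
    {
        void hello() { std::cout << "Base::hello\n"; }          // statically bound
        virtual void greet() { std::cout << "Base::greet\n"; }  // dynamically bound
        virtual ~Base() = default;
    };

    struct Derived : Base
    {
        void hello() { std::cout << "Derived::hello\n"; }
        void greet() override { std::cout << "Derived::greet\n"; }
    };

    int main()
    {
        Derived d;
        Base* p = &d;
        p->hello();   // prints "Base::hello"    - resolved at compile time from the pointer type
        p->greet();   // prints "Derived::greet" - resolved at run time from the actual object
        return 0;
    }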

    3. What are the 4 dynamic programming languages?

    Dynamic programming can be implemented in various programming languages, including but not limited to:

    • Python
    • Java
    • C++
    • JavaScript

    4. Is C++ static or dynamic?

    C++ is a statically typed programming language, which means that the data types of variables must be declared before they are used, and their types are determined at compile-time. Once a variable is declared with a specific data type, its type cannot be changed during runtime.
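    For example (a tiny sketch, not from the original article), the second line below will not compile in C++, precisely because the type of x is fixed at compile time:

    int x = 5;        // x is declared as an int
    x = "hello";      // error: a string literal cannot be assigned to an int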

    5. What are the two elements of dynamic programming?

    The two key elements of dynamic programming are:

    1. Recursion: Dynamic programming often involves solving a problem by breaking it down into smaller subproblems and solving them recursively. This involves defining a problem in terms of smaller instances of the same problem and solving them until the base case is reached.

    2. Memoization: Dynamic programming utilizes memoization, which is the process of storing intermediate results of subproblems in a data structure (such as a cache or a table) so that they can be reused later. This helps in avoiding redundant computations and improves the overall efficiency of the algorithm.

    Author:
    Adarsh Kumar Singh is a technology writer with a passion for coding and programming. With years of experience in the technical field, he has established a reputation as a knowledgeable and insightful writer on a range of technical topics.