Dynamic Programming
Dynamic programming (DP) is a method used in computer science and mathematics to solve complex problems by breaking them down into simpler subproblems. It involves solving each subproblem just once and storing their solutions – typically using a data structure like an array or a table – to avoid redundant work. The essence of DP is to use previously computed results to build up the solution to the overall problem efficiently. This approach is particularly effective for optimization problems and problems with overlapping subproblems and optimal substructure properties.
Key Concepts
Optimal Substructure: This property means that the solution to a problem can be composed of optimal solutions to its subproblems. If a problem exhibits optimal substructure, it can be solved recursively by combining the solutions to its smaller instances.
Overlapping Subproblems: This property indicates that a problem can be broken down into subproblems that are reused multiple times. Instead of solving the same subproblem repeatedly, dynamic programming stores the results of these subproblems, often in a table, to avoid redundant computations and improve efficiency.
Sequence of Decisions: Many dynamic programming problems involve making a sequence of decisions to achieve an optimal solution. Each decision depends on previous decisions and affects future ones. Dynamic programming helps in systematically exploring all possible sequences of decisions and choosing the one that leads to the optimal outcome. Examples include determining the optimal way to cut a rod to maximize profit or deciding the optimal order to multiply matrices to minimize the number of operations.
Optimization Problems: These are problems that seek the best solution among many possible options. Dynamic programming is particularly useful for solving optimization problems, such as finding the shortest path in a graph, the maximum profit in a knapsack problem, or the minimum cost path in a grid. By breaking down the problem into smaller subproblems and solving each one optimally, dynamic programming ensures that the overall solution is optimal.
Comparison with Other Problem-Solving Techniques
Brute Force:
- Approach: Brute force involves trying all possible solutions to find the best one. It is straightforward but often inefficient, especially for large problem instances.
- Efficiency: Typically has exponential time complexity due to the exhaustive search of all possibilities.
- Use Case: Suitable for small problems where the number of possible solutions is manageable.
Divide and Conquer:
- Approach: This technique involves dividing a problem into smaller, independent subproblems, solving each subproblem recursively, and then combining their solutions to solve the original problem.
- Efficiency: More efficient than brute force for many problems but can still be suboptimal if subproblems overlap and redundant calculations are performed.
- Use Case: Effective for problems like merge sort and quick sort, where subproblems do not overlap.
Greedy Algorithms:
- Approach: Greedy algorithms make a series of choices, each of which looks the best at the moment, with the hope of finding a global optimum.
- Efficiency: Often very efficient, typically running in linear or O(n log n) time, but a greedy strategy does not always produce the optimal solution for all problems.
- Use Case: Works well for problems like the fractional knapsack problem, Huffman coding, and certain graph problems like Dijkstra's shortest path algorithm.
Techniques in Dynamic Programming
Dynamic Programming (DP) employs two primary techniques to efficiently solve complex problems by breaking them down into smaller subproblems and storing their solutions:
Memoization (Top-Down Approach): Memoization is a technique that optimizes recursive algorithms by storing the results of expensive function calls and reusing them when the same inputs occur again. This approach is particularly effective in problems where the same subproblems are solved multiple times. By caching the results, memoization reduces redundant computations, significantly improving the overall efficiency of the algorithm. For example, in the computation of Fibonacci numbers using memoization, each Fibonacci number is computed only once and stored, ensuring subsequent calls for the same number retrieve the result from memory rather than recalculating it.
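As a concrete illustration of memoization, here is a minimal sketch in C++ (added for illustration, not taken from a specific library; the name fibMemo and the -1 sentinel are arbitrary choices):
#include <iostream>
#include <vector>
using namespace std;

// Memoized (top-down) Fibonacci: each value is computed at most once
long long fibMemo(int n, vector<long long>& memo) {
    if (n <= 1) {
        return n; // Base cases: fib(0) = 0, fib(1) = 1
    }
    if (memo[n] != -1) {
        return memo[n]; // Reuse the cached result instead of recomputing it
    }
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

int main() {
    int n = 40;
    vector<long long> memo(n + 1, -1); // -1 marks "not yet computed"
    cout << "fib(" << n << ") = " << fibMemo(n, memo) << endl;
    return 0;
}
Without the cache, the naive recursion recomputes the same values an exponential number of times; with it, each value is computed once, giving O(n) time.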
Tabulation (Bottom-Up Approach): Tabulation involves solving the problem iteratively by building up solutions to subproblems in a table, such as an array or matrix. Unlike memoization, which uses recursion, tabulation starts with the smallest subproblems and systematically computes solutions for larger subproblems based on previously computed values. This method ensures that all subproblems are solved in a predefined order, typically from the base cases up to the final solution. A classic example of tabulation is computing Fibonacci numbers iteratively using an array to store intermediate results, which allows each Fibonacci number to be calculated based on its preceding values stored in the array.
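The bottom-up counterpart, again shown as an illustrative sketch rather than code from the article, fills an array from the base cases upward:
#include <iostream>
#include <vector>
using namespace std;

// Tabulated (bottom-up) Fibonacci: build the table from the base cases upward
long long fibTab(int n) {
    if (n <= 1) {
        return n;
    }
    vector<long long> dp(n + 1, 0);
    dp[0] = 0; // Base case
    dp[1] = 1; // Base case
    for (int i = 2; i <= n; ++i) {
        dp[i] = dp[i - 1] + dp[i - 2]; // Each entry depends only on the two before it
    }
    return dp[n];
}

int main() {
    int n = 40;
    cout << "fib(" << n << ") = " << fibTab(n) << endl;
    return 0;
}
Both versions do the same O(n) work; the difference is whether the order of subproblem evaluation is driven by recursion or by the loop.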
These techniques form the backbone of dynamic programming and are selected based on the problem's characteristics, constraints, and optimization requirements. Memoization is particularly useful for problems with overlapping subproblems where recursion can be utilized effectively, while tabulation excels when all subproblems need to be solved and stored iteratively. Understanding these techniques enables developers to apply dynamic programming effectively to a wide range of computational problems, from optimizing recursive algorithms to solving complex optimization problems efficiently.
Steps to Solve a Dynamic Programming Problem
Dynamic Programming (DP) is a methodical approach to solving complex problems by breaking them down into smaller, manageable subproblems. Here are the steps typically followed to solve a DP problem:
Identify if it's a DP problem:
Determining if a problem can benefit from DP involves recognizing patterns of overlapping subproblems and optimal substructure. Overlapping subproblems mean that the solution to a problem relies on solutions to the same subproblem multiple times, while optimal substructure ensures that an optimal solution to the problem can be constructed from optimal solutions to its subproblems.
Define the state:
Defining the state involves identifying the variables that represent the current state of the problem. These variables encapsulate the information necessary to solve a subproblem and move towards the solution of the larger problem. For example, in problems involving sequences, the state might include the current index or position in the sequence.
Formulate a recurrence relation:
The recurrence relation defines the relationship between the solution to the larger problem and its subproblems. It expresses how the optimal solution of the current state depends on the solutions of smaller related subproblems. This recursive formulation is crucial in dynamic programming as it provides a roadmap for solving larger instances of the problem by solving smaller instances.
Identify the base cases:
Base cases are the simplest subproblems that can be solved directly without further decomposition. They provide the starting points for the recurrence relation and serve as termination conditions for recursive processes. Base cases are essential for initiating the solution process and building up towards solving larger instances of the problem.
Choose the approach (Memoization or Tabulation):
Depending on the problem characteristics and constraints, decide between memoization (top-down) or tabulation (bottom-up):
- Memoization: Involves recursive calls where results of subproblems are stored (memoized) to avoid redundant computations.
- Tabulation: Involves iterative computation where solutions to subproblems are stored in a table (usually an array or matrix) and built up from smaller to larger subproblems.
Implement the solution:
Implement the chosen approach based on the recurrence relation and base cases identified. Ensure the implementation correctly computes solutions using the selected method (memoization or tabulation) and handles edge cases effectively.
Optimize the solution (if necessary):
Analyze the time and space complexity of the implemented solution. Consider optimizations such as reducing redundant computations, optimizing space usage (especially in tabulation), or improving the recurrence relation for faster computation. Optimization ensures that the DP solution is efficient and scalable for larger inputs or real-time applications.
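As a small example of the space optimization mentioned in this last step (a hypothetical sketch, not from the original article), the tabulated Fibonacci only ever reads the previous two table entries, so the full table can be collapsed into two rolling variables, reducing space from O(n) to O(1):
#include <iostream>
using namespace std;

// Space-optimized Fibonacci: keep only the last two values instead of a full table
long long fibOptimized(int n) {
    if (n <= 1) {
        return n;
    }
    long long prev = 0, curr = 1; // fib(0) and fib(1)
    for (int i = 2; i <= n; ++i) {
        long long next = prev + curr; // The recurrence uses only the two previous values
        prev = curr;
        curr = next;
    }
    return curr;
}

int main() {
    cout << "fib(40) = " << fibOptimized(40) << endl;
    return 0;
}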
By following these structured steps, dynamic programming allows developers to systematically break down and solve complex problems, leveraging optimal substructure and overlapping subproblems to achieve efficient and effective solutions. Each step contributes to building a comprehensive understanding of the problem and crafting a solution that balances clarity, efficiency, and scalability.
Common Dynamic Programming Problems
Dynamic Programming (DP) is applied to a wide range of computational problems, offering efficient solutions by breaking them down into smaller subproblems and storing their solutions. These problems typically exhibit overlapping subproblems and optimal substructure, making DP an effective approach for optimization, sequence alignment, shortest path finding, and more. By identifying patterns in problem-solving techniques, DP enables systematic computation of solutions that are both optimal and efficient, leveraging recursive relationships and iterative computation methods.
0/1 Knapsack Problem
The 0/1 Knapsack Problem involves selecting items from a given set, each with a weight and a corresponding value, such that the total weight does not exceed a specified capacity of a knapsack while maximizing the total value of the selected items.
For instance, given n items where each item i has a weight weights[i] and a value values[i], and a knapsack with capacity capacity, the objective is to determine the maximum value that can be achieved by selecting a subset of items.
Brute-Force (Recursive) Approach for 0/1 Knapsack Problem
In the brute-force recursive approach to solving the 0/1 Knapsack problem, we systematically explore all possible combinations of items to determine the optimal subset that fits within the capacity of the knapsack.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;

// Brute-force recursive function to solve the 0/1 Knapsack problem
double knapsackRecursive(vector<double>& weights, vector<double>& values, double capacity, int n) {
    // Base case: if no items are left or the capacity is 0
    if (n == 0 || capacity == 0) {
        return 0;
    }
    // If weight of the nth item is more than the capacity, it cannot be included
    if (weights[n - 1] > capacity) {
        return knapsackRecursive(weights, values, capacity, n - 1);
    }
    // Return the maximum value obtained by either including or excluding the nth item
    return max(
        values[n - 1] + knapsackRecursive(weights, values, capacity - weights[n - 1], n - 1),
        knapsackRecursive(weights, values, capacity, n - 1)
    );
}

int main() {
    vector<double> weights = {1, 5, 20, 35, 90}; // Weights of items
    vector<double> values = {15, 14.5, 19.2, 19.8, 195.2}; // Values of items
    double capacity = 20; // Capacity of the knapsack
    int n = weights.size(); // Number of items
    double maxValue = knapsackRecursive(weights, values, capacity, n);
    cout << "Maximum value in Knapsack: " << maxValue << endl;
    return 0;
}
In this implementation, the knapsackRecursive function recursively evaluates two choices for each item: either including the item in the knapsack (if its weight allows) or excluding it. It computes the maximum value obtainable by either of these choices until all items are considered or the remaining capacity is exhausted. While this approach guarantees finding the optimal solution by evaluating all subsets of items, its time complexity grows exponentially with the number of items, making it inefficient for large inputs.
Memoization Approach for 0/1 Knapsack Problem
In the memoization approach to solving the 0/1 Knapsack problem, we optimize the recursive solution by storing computed results of subproblems in a memoization table. This technique helps avoid redundant calculations and improves the efficiency of the solution.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;

// Helper function to solve the 0/1 Knapsack problem using memoization
double knapsackMemo(vector<int>& weights, vector<double>& values, int capacity, int n, vector<vector<double>>& memo) {
    // Base case: if no items are left or the capacity is 0
    if (n == 0 || capacity == 0) {
        return 0;
    }
    // Return the stored value if the subproblem has already been solved
    if (memo[n][capacity] != -1) {
        return memo[n][capacity];
    }
    // If weight of the nth item is more than the capacity, it cannot be included
    if (weights[n - 1] > capacity) {
        return memo[n][capacity] = knapsackMemo(weights, values, capacity, n - 1, memo);
    }
    // Return the maximum value obtained by either including or excluding the nth item
    return memo[n][capacity] = max(
        values[n - 1] + knapsackMemo(weights, values, capacity - weights[n - 1], n - 1, memo),
        knapsackMemo(weights, values, capacity, n - 1, memo)
    );
}

// Function to initialize memoization and call the helper function
double knapsack(vector<int>& weights, vector<double>& values, int capacity) {
    int n = weights.size(); // Number of items
    vector<vector<double>> memo(n + 1, vector<double>(capacity + 1, -1)); // Memoization table
    return knapsackMemo(weights, values, capacity, n, memo);
}

int main() {
    vector<int> weights = {1, 5, 20, 35, 90}; // Weights of items
    vector<double> values = {15, 14.5, 19.2, 19.8, 195.2}; // Values of items
    int capacity = 200; // Capacity of the knapsack
    double maxValue = knapsack(weights, values, capacity);
    cout << "Maximum value in Knapsack: " << maxValue << endl;
    return 0;
}
In this implementation, knapsackMemo uses a memoization table (memo) to store results of subproblems. If a subproblem's solution has already been computed, it is retrieved from the memoization table, avoiding redundant calculations. This approach optimizes the recursive solution to the 0/1 Knapsack problem by reducing its time complexity from exponential to pseudo-polynomial, O(n * capacity), making it more suitable for larger inputs.
Tabulation Approach for 0/1 Knapsack Problem
The tabulation approach to solving the 0/1 Knapsack problem uses dynamic programming to build a DP table iteratively from smaller subproblems to larger ones. This method avoids recursion and uses space efficiently to compute the maximum value that can be placed in a knapsack of given capacity.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;

// Function to solve the 0/1 Knapsack problem using dynamic programming (tabulation)
double knapsack(vector<int>& weights, vector<double>& values, int capacity) {
    int n = weights.size(); // Number of items
    vector<vector<double>> dp(n + 1, vector<double>(capacity + 1, 0)); // DP table
    // Build the DP table in a bottom-up manner
    for (int i = 1; i <= n; ++i) {
        for (int w = 1; w <= capacity; ++w) {
            if (weights[i - 1] <= w) {
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]], dp[i - 1][w]);
            } else {
                dp[i][w] = dp[i - 1][w];
            }
        }
    }
    return dp[n][capacity]; // Maximum value that can be put in a knapsack of capacity
}

int main() {
    vector<int> weights = {1, 5, 20, 35, 90}; // Weights of items
    vector<double> values = {15, 14.5, 19.2, 19.8, 195.2}; // Values of items
    int capacity = 200; // Capacity of the knapsack
    double maxValue = knapsack(weights, values, capacity);
    cout << "Maximum value in Knapsack: " << maxValue << endl;
    return 0;
}
In this implementation, the knapsack function initializes a DP table (dp) where dp[i][w] represents the maximum value that can be achieved with the first i items and a knapsack capacity of w. The table is filled in a bottom-up manner, iterating through each item and capacity combination. If the current item can fit into the knapsack (i.e., its weight is less than or equal to the current capacity), the function computes the maximum value by either including or excluding the item.
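A common space optimization of this table, shown here as an assumed sketch rather than code from the article (knapsack1D is an illustrative name), keeps only one row: since dp[i][w] depends solely on row i - 1, a single 1D array suffices if the capacity loop runs from high to low so that each item is counted at most once.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
using namespace std;

// Space-optimized 0/1 Knapsack: one 1D array reused for every item
double knapsack1D(vector<int>& weights, vector<double>& values, int capacity) {
    int n = weights.size();
    vector<double> dp(capacity + 1, 0); // dp[w] = best value achievable with capacity w
    for (int i = 0; i < n; ++i) {
        // Iterate capacities from high to low so each item is used at most once
        for (int w = capacity; w >= weights[i]; --w) {
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]]);
        }
    }
    return dp[capacity];
}

int main() {
    vector<int> weights = {1, 5, 20, 35, 90};
    vector<double> values = {15, 14.5, 19.2, 19.8, 195.2};
    cout << "Maximum value in Knapsack: " << knapsack1D(weights, values, 200) << endl;
    return 0;
}
This reduces the memory footprint from O(n * capacity) to O(capacity) without changing the result.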
Coin Changing Problem
The Coin Changing Problem involves determining the minimum number of coins required to make up a specified amount using a given set of coin denominations. Each coin denomination has a specific value, and the goal is to find the optimal combination of coins that sums up to the desired amount.
For instance, given n coin denominations where each coin i has a value coins[i], and a target amount amount, the objective is to compute the minimum number of coins needed to make up amount. If it is impossible to form the amount using the given denominations, the solution should indicate that it's not feasible.
Brute-Force (Recursive) Approach for Coin Changing Problem
In the brute-force recursive approach to solving the Coin Changing problem, we systematically explore all possible combinations of coins to determine the minimum number of coins needed to achieve the specified amount.
#include <iostream>
#include <vector>
#include <algorithm> // For min function
#include <climits> // For INT_MAX
using namespace std;

// Brute-force recursive function to calculate the minimum coins
int minCoinsRecursive(vector<int>& coins, int amount) {
    // Base case: if amount is 0, no coins are needed
    if (amount == 0) {
        return 0;
    }
    // If amount is negative, return INT_MAX (impossible situation)
    if (amount < 0) {
        return INT_MAX;
    }
    int minCoins = INT_MAX;
    // Try every coin and find the minimum number of coins needed
    for (int coin : coins) {
        int res = minCoinsRecursive(coins, amount - coin);
        if (res != INT_MAX) {
            minCoins = min(minCoins, res + 1);
        }
    }
    return minCoins;
}

int coinChange(vector<int>& coins, int amount) {
    int result = minCoinsRecursive(coins, amount);
    return result == INT_MAX ? -1 : result;
}

int main() {
    vector<int> coins = {1, 4, 7, 9, 16, 43};
    int amount = 17;
    int result = coinChange(coins, amount);
    if (result != -1) {
        cout << "Minimum coins needed: " << result << endl;
    } else {
        cout << "Amount cannot be made up by any combination of the given coins." << endl;
    }
    return 0;
}
In this implementation, the minCoinsRecursive function recursively evaluates every possible combination of coins to find the minimum number required to make up amount. It checks each coin denomination and recursively computes the minimum coins needed by subtracting the coin's value from the amount. This approach ensures that all possible combinations are explored, but it may be inefficient for larger amounts due to its exponential time complexity.
Memoization Approach for Coin Changing Problem
In the memoization approach to solving the Coin Changing problem, we optimize the recursive solution by storing computed results of subproblems in a memoization table (memo). This helps avoid redundant calculations and improves efficiency.
#include <iostream>
#include <vector>
#include <algorithm> // For min function
#include <climits> // For INT_MAX
using namespace std;

// Helper function to calculate the minimum coins using memoization
int minCoinsMemo(vector<int>& coins, int amount, vector<int>& memo) {
    // Base case: if amount is 0, no coins are needed
    if (amount == 0) {
        return 0;
    }
    // A negative amount cannot be formed
    if (amount < 0) {
        return INT_MAX;
    }
    // Return the cached result if this amount has already been solved
    if (memo[amount] != -1) {
        return memo[amount];
    }
    int minCoins = INT_MAX;
    // Try every coin and keep the minimum number of coins needed
    for (int coin : coins) {
        int res = minCoinsMemo(coins, amount - coin, memo);
        if (res != INT_MAX) {
            minCoins = min(minCoins, res + 1);
        }
    }
    memo[amount] = minCoins; // Cache the result for this amount
    return minCoins;
}

int coinChange(vector<int>& coins, int amount) {
    vector<int> memo(amount + 1, -1);
    int result = minCoinsMemo(coins, amount, memo);
    return result == INT_MAX ? -1 : result;
}

int main() {
    vector<int> coins = {1, 4, 7, 9, 16, 43};
    int amount = 85;
    int result = coinChange(coins, amount);
    if (result != -1) {
        cout << "Minimum coins needed: " << result << endl;
    } else {
        cout << "Amount cannot be made up by any combination of the given coins." << endl;
    }
    return 0;
}
int main() {
vector<int> coins = {1, 4, 7, 9, 16, 43};
int amount = 85;
int result = coinChange(coins, amount);
if (result != -1) {
cout << "Minimum coins needed: " << result << endl;
} else {
cout << "Amount cannot be made up by any combination of the given coins." << endl;
}
return 0;
}
In this implementation, the minCoinsMemo function recursively computes the minimum number of coins needed for each amount using memoization. The memo vector stores results of subproblems to avoid redundant calculations. By recursively exploring each coin denomination and memoizing results, the algorithm efficiently determines the minimum coins required to make up amount. This approach significantly improves performance compared to the brute-force recursive approach, especially for larger amounts and coin sets.
Tabulation Approach for Coin Changing Problem
In the tabulation approach to solving the Coin Changing problem, we use dynamic programming to build up solutions to subproblems in a bottom-up manner, filling out a table (dp) where dp[i] represents the minimum number of coins needed to make up the amount i.
#include <iostream>
#include <vector>
#include <algorithm> // For min function
#include <climits> // For INT_MAX
using namespace std;

// Function to calculate the minimum coins using dynamic programming (tabulation)
int coinChange(vector<int>& coins, int amount) {
    vector<int> dp(amount + 1, INT_MAX);
    dp[0] = 0; // Base case: 0 coins needed to make up amount 0
    for (int i = 1; i <= amount; ++i) {
        for (int coin : coins) {
            if (i >= coin && dp[i - coin] != INT_MAX) {
                dp[i] = min(dp[i], dp[i - coin] + 1);
            }
        }
    }
    return dp[amount] == INT_MAX ? -1 : dp[amount];
}

int main() {
    vector<int> coins = {1, 4, 7, 9, 16, 43};
    int amount = 82;
    int result = coinChange(coins, amount);
    if (result != -1) {
        cout << "Minimum coins needed: " << result << endl;
    } else {
        cout << "Amount cannot be made up by any combination of the given coins." << endl;
    }
    return 0;
}
In this implementation, the coinChange function uses a dp array where dp[i] is initialized to INT_MAX (indicating that the amount i cannot be made up with the current coins). We iteratively compute the minimum coins required for each amount up to amount by iterating through each coin denomination and updating dp[i] based on the minimum of its current value or dp[i - coin] + 1 (where coin is the current coin denomination).
By the end of the iteration, dp[amount] will contain the minimum number of coins needed to make up amount, or -1 if it's not possible with the given denominations. This tabulation approach ensures an efficient solution to the Coin Changing problem with a time complexity of O(amount * n), where n is the number of coin denominations.
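If the actual coins, and not just their count, are needed, the same table can be extended to record which coin achieved each dp[i] and then walked backwards from amount. The sketch below is an assumed variant of the tabulation above, not code from the article; coinChangeWithCoins and lastCoin are illustrative names.
#include <iostream>
#include <vector>
#include <climits> // For INT_MAX
using namespace std;

// Tabulated coin change that also reconstructs one optimal set of coins
vector<int> coinChangeWithCoins(vector<int>& coins, int amount) {
    vector<int> dp(amount + 1, INT_MAX);
    vector<int> lastCoin(amount + 1, -1); // Coin chosen to reach each amount
    dp[0] = 0;
    for (int i = 1; i <= amount; ++i) {
        for (int coin : coins) {
            if (i >= coin && dp[i - coin] != INT_MAX && dp[i - coin] + 1 < dp[i]) {
                dp[i] = dp[i - coin] + 1;
                lastCoin[i] = coin;
            }
        }
    }
    vector<int> used;
    if (dp[amount] == INT_MAX) {
        return used; // Empty result: the amount cannot be formed
    }
    for (int i = amount; i > 0; i -= lastCoin[i]) {
        used.push_back(lastCoin[i]); // Walk back through the recorded choices
    }
    return used;
}

int main() {
    vector<int> coins = {1, 4, 7, 9, 16, 43};
    vector<int> used = coinChangeWithCoins(coins, 82);
    cout << "Coins used:";
    for (int c : used) {
        cout << " " << c;
    }
    cout << endl;
    return 0;
}
For the example amount of 82 this yields four coins: 7, 16, 16 and 43.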
Rod Cutting Problem
The Rod Cutting Problem involves determining the maximum profit obtainable by cutting a rod of a given length into smaller pieces and selling them based on specified prices for each piece length. Each allowed piece length has a corresponding price, and the goal is to find the optimal way to cut the rod to maximize profit.
For instance, given n allowed piece lengths where each piece i has a length lengths[i] and a corresponding price prices[i], and a target rod length rodLength, the objective is to compute the maximum profit that can be obtained by cutting the rod into pieces of the allowed lengths. If it is not possible to achieve a profit using the given lengths, the solution should indicate that the rod cannot be cut profitably.
Brute-Force (Recursive) Approach for Rod Cutting Problem
In the brute-force recursive approach to solving the Rod Cutting problem, we systematically explore all possible ways to cut the rod to determine the optimal subset of lengths that maximizes profit.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
#include <climits> // For INT_MIN
using namespace std;

// Function to calculate the maximum profit recursively
double rodCutting(vector<int>& lengths, vector<double>& prices, int n) {
    // Base case: if rod length is 0, profit is 0
    if (n == 0) {
        return 0;
    }
    if (n < 0) {
        return INT_MIN; // If n is negative, return INT_MIN
    }
    double maxProfit = INT_MIN; // Initialize max profit with minimum possible value
    // Recursively calculate the maximum profit by considering all possible cuts
    for (int i = 0; i < lengths.size(); ++i) {
        maxProfit = max(maxProfit, prices[i] + rodCutting(lengths, prices, n - lengths[i]));
    }
    maxProfit = max(maxProfit, 0.0); // Consider leaving the remaining rod uncut (profit 0)
    return maxProfit;
}

int main() {
    vector<int> lengths = {1, 3, 5, 10, 30, 50, 75}; // Allowed lengths for cutting
    vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2}; // Prices corresponding to each length
    int rodLength = 30; // Example rod length to cut
    double maxProfit = rodCutting(lengths, prices, rodLength);
    cout << "Maximum profit for rod of length " << rodLength << " is: " << maxProfit << endl;
    return 0;
}
In this implementation, the rodCutting function recursively evaluates different ways to cut the rod by considering each allowed length and its corresponding price. For each possible cut, it computes the maximum profit obtainable by including that piece length and recursively solving for the remaining rod length. This approach guarantees finding the optimal solution by evaluating all possible ways to cut the rod, but its time complexity grows exponentially with the rod length and the number of allowed lengths, making it inefficient for large inputs.
Memoization Approach for Rod Cutting Problem
In the memoization approach to solving the Rod Cutting problem, we optimize the recursive solution by storing computed results of subproblems in a memoization table. This technique helps avoid redundant calculations and improves the efficiency of the solution.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
#include <climits> // For INT_MIN
using namespace std;

// Helper function to calculate the maximum profit using memoization
double rodCuttingMemo(vector<int>& lengths, vector<double>& prices, int n, vector<double>& memo) {
    // Base case: if rod length is 0, profit is 0
    if (n == 0) {
        return 0;
    }
    if (n < 0) {
        return INT_MIN; // If n is negative, return INT_MIN
    }
    if (memo[n] != -1) {
        return memo[n]; // Return cached result if available
    }
    double maxProfit = INT_MIN; // Initialize max profit with minimum possible value
    // Recursively calculate the maximum profit by considering all possible cuts
    for (int i = 0; i < lengths.size(); ++i) {
        maxProfit = max(maxProfit, prices[i] + rodCuttingMemo(lengths, prices, n - lengths[i], memo));
    }
    maxProfit = max(maxProfit, 0.0); // Consider leaving the remaining rod uncut (profit 0)
    memo[n] = maxProfit; // Store the result in the cache
    return maxProfit;
}

double rodCutting(vector<int>& lengths, vector<double>& prices, int n) {
    vector<double> memo(n + 1, -1); // Initialize memoization array with -1
    return rodCuttingMemo(lengths, prices, n, memo);
}

int main() {
    vector<int> lengths = {1, 3, 5, 10, 30, 50, 75}; // Allowed lengths for cutting
    vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2}; // Prices corresponding to each length
    int rodLength = 300; // Example rod length to cut
    double maxProfit = rodCutting(lengths, prices, rodLength);
    cout << "Maximum profit for rod of length " << rodLength << " is: " << maxProfit << endl;
    return 0;
}
In this implementation, rodCuttingMemo uses a memoization table (memo) to store results of subproblems. If a subproblem's solution has already been computed, it is retrieved from the memoization table, avoiding redundant calculations. This approach optimizes the recursive solution to the Rod Cutting problem by reducing its time complexity from exponential to roughly O(rodLength * k), where k is the number of allowed lengths, making it more suitable for larger inputs.
Tabulation Approach for Rod Cutting Problem
The tabulation approach to solving the Rod Cutting problem uses dynamic programming to build a DP table iteratively from smaller subproblems to larger ones. This method avoids recursion and uses space efficiently to compute the maximum profit that can be obtained by cutting a rod of a given length.
#include <iostream>
#include <vector>
#include <algorithm> // For max function
#include <climits> // For INT_MIN
using namespace std;

// Function to calculate the maximum profit using dynamic programming (tabulation)
double rodCutting(vector<int>& lengths, vector<double>& prices, int n) {
    // Create a DP array to store maximum profit for each rod length from 0 to n
    vector<double> dp(n + 1, 0); // Bottom-up approach: dp array of size n+1 initialized with 0
    // Calculate maximum profit for each rod length up to n
    for (int i = 1; i <= n; ++i) {
        double maxProfit = INT_MIN;
        for (int j = 0; j < lengths.size(); ++j) {
            if (i >= lengths[j]) { // Only consider valid cuts where i >= lengths[j]
                maxProfit = max(maxProfit, prices[j] + dp[i - lengths[j]]);
            }
        }
        dp[i] = max(maxProfit, 0.0); // Store the maximum profit for rod length i (0 if no cut fits)
    }
    return dp[n]; // Return maximum profit for rod of length n
}

int main() {
    vector<int> lengths = {1, 3, 5, 10, 30, 50, 75}; // Allowed lengths for cutting
    vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2}; // Prices corresponding to each length
    int rodLength = 300; // Example rod length to cut
    double maxProfit = rodCutting(lengths, prices, rodLength);
    cout << "Maximum profit for rod of length " << rodLength << " is: " << maxProfit << endl;
    return 0;
}
In this implementation, the rodCutting function initializes a DP array (dp) where dp[i] represents the maximum profit that can be obtained with a rod of length i. The array is filled in a bottom-up manner, iterating through each possible rod length from 1 to n. For each rod length, the function calculates the maximum profit by considering all possible cuts and updating the DP array accordingly. This approach optimizes the solution to the Rod Cutting problem by ensuring each subproblem is solved only once, thus improving efficiency.
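If the list of cuts themselves is wanted in addition to the profit, the table can record the first cut chosen for each length and the result can be read back from it. This is an assumed extension of the tabulation above, not code from the article; rodCuttingWithCuts and firstCut are illustrative names.
#include <iostream>
#include <vector>
using namespace std;

// Tabulated rod cutting that also records the cuts behind the optimal profit
double rodCuttingWithCuts(vector<int>& lengths, vector<double>& prices, int n, vector<int>& cuts) {
    vector<double> dp(n + 1, 0);
    vector<int> firstCut(n + 1, 0); // 0 means "leave the remaining rod uncut"
    for (int i = 1; i <= n; ++i) {
        for (size_t j = 0; j < lengths.size(); ++j) {
            if (i >= lengths[j] && prices[j] + dp[i - lengths[j]] > dp[i]) {
                dp[i] = prices[j] + dp[i - lengths[j]];
                firstCut[i] = lengths[j]; // Remember the cut that gave the improvement
            }
        }
    }
    // Walk back through the recorded first cuts to list the pieces
    for (int i = n; i > 0 && firstCut[i] != 0; i -= firstCut[i]) {
        cuts.push_back(firstCut[i]);
    }
    return dp[n];
}

int main() {
    vector<int> lengths = {1, 3, 5, 10, 30, 50, 75};
    vector<double> prices = {0.1, 0.2, 0.4, 0.9, 3.1, 5.1, 8.2};
    vector<int> cuts;
    double maxProfit = rodCuttingWithCuts(lengths, prices, 300, cuts);
    cout << "Maximum profit: " << maxProfit << "\nPieces:";
    for (int c : cuts) {
        cout << " " << c;
    }
    cout << endl;
    return 0;
}
For the example rod of length 300 this reconstructs four pieces of length 75, which corresponds to the maximum profit of 32.8 for these prices.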