Introduction to Greedy Algorithms
Definition and Key Concepts
Greedy Algorithms:
A greedy algorithm solves an optimization problem by making the locally optimal choice at each stage in the hope of finding a global optimum. In other words, it builds a solution as a sequence of choices, each of which looks best at the moment, and it never revisits a choice once it is made.
Key Concepts:
- Locally Optimal Choice: At each step, choose the option that looks the best at that moment.
- Greedy Choice Property: A globally optimal solution can be reached by making a series of locally optimal (greedy) choices.
- Optimal Substructure: An optimal solution to the problem contains an optimal solution to subproblems.
Greedy Choice Property and Optimal Substructure
Greedy Choice Property:
- The greedy choice property states that a problem can be solved by making a series of choices, each of which looks best at the moment. Once a choice is made, it is never reconsidered. For a greedy algorithm to be correct, it must be possible to show that some globally optimal solution always contains the locally optimal choices made so far.
Optimal Substructure:
- A problem exhibits optimal substructure if an optimal solution to the problem contains an optimal solution to its subproblems. This means that we can solve the problem by solving smaller instances of the same problem and combining their solutions to form a solution to the original problem.
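To make these two properties concrete, here's a minimal C++ sketch of the general greedy pattern. The scenario (spend a budget to buy as many items as possible) and the names used are purely illustrative: the greedy choice is to always buy the cheapest remaining item, and after each purchase the remaining budget is a smaller instance of the same problem.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Greedy sketch: maximize the number of items bought within a budget.
// Greedy choice: always buy the cheapest remaining item.
// Optimal substructure: after each purchase, the remaining budget is a
// smaller instance of the same problem.
int maxItems(vector<int> prices, int budget) {
    sort(prices.begin(), prices.end());  // cheapest items first
    int count = 0;
    for (int price : prices) {
        if (price > budget)
            break;           // the cheapest remaining item no longer fits
        budget -= price;     // commit to the choice; it is never reconsidered
        ++count;
    }
    return count;
}

int main() {
    vector<int> prices = {4, 2, 7, 3};
    cout << maxItems(prices, 9) << "\n";  // prints 3 (buy the items costing 2, 3, and 4)
    return 0;
}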
Differences between Greedy Algorithms and Other Paradigms
Greedy Algorithms vs. Divide and Conquer:
- Greedy Algorithms: Make a series of choices, each of which looks best at the moment, and never reconsider those choices.
- Divide and Conquer: Divide the problem into smaller subproblems, solve each subproblem recursively, and then combine their solutions to form a solution to the original problem.
Example:
- Greedy: Huffman coding for data compression.
- Divide and Conquer: Merge sort algorithm for sorting an array (see the sketch below).
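To make the contrast concrete, here's a minimal C++ sketch of merge sort (illustrative, not tied to any example later in this post): the array is divided into halves, each half is sorted recursively, and the sorted halves are combined; no locally best choice is ever made.
#include <iostream>
#include <vector>
using namespace std;

// Divide and conquer: split the array, sort each half recursively, then merge.
// Unlike a greedy algorithm, the work lies in combining the solved subproblems.
vector<int> mergeSort(const vector<int>& a) {
    if (a.size() <= 1)
        return a;
    // Divide
    vector<int> left(a.begin(), a.begin() + a.size() / 2);
    vector<int> right(a.begin() + a.size() / 2, a.end());
    // Conquer
    left = mergeSort(left);
    right = mergeSort(right);
    // Combine: merge the two sorted halves
    vector<int> merged;
    size_t i = 0, j = 0;
    while (i < left.size() && j < right.size())
        merged.push_back(left[i] <= right[j] ? left[i++] : right[j++]);
    while (i < left.size()) merged.push_back(left[i++]);
    while (j < right.size()) merged.push_back(right[j++]);
    return merged;
}

int main() {
    vector<int> data = {5, 2, 9, 1, 7};
    for (int x : mergeSort(data))
        cout << x << " ";  // prints 1 2 5 7 9
    cout << "\n";
    return 0;
}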
Greedy Algorithms vs. Dynamic Programming:
- Greedy Algorithms: Make a series of choices, each of which looks best at the moment. This approach works when the problem has the greedy choice property and optimal substructure.
- Dynamic Programming: Break down the problem into subproblems, solve each subproblem just once, and store their solutions using a table. This approach works when the problem has overlapping subproblems and optimal substructure.
Example:
- Greedy: Fractional knapsack problem.
- Dynamic Programming: 0/1 knapsack problem (see the sketch below).
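To make the contrast concrete, here's a minimal C++ sketch of the 0/1 knapsack solved with dynamic programming; it uses the same three items as the fractional knapsack example later in this post. Because items cannot be split, picking greedily by value-to-weight ratio is not guaranteed to be optimal, and a table over capacities is needed.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// 0/1 knapsack via dynamic programming: each item is either taken whole or left out.
// dp[c] = best value achievable with capacity c using the items processed so far.
int knapsack01(const vector<int>& values, const vector<int>& weights, int capacity) {
    vector<int> dp(capacity + 1, 0);
    for (size_t i = 0; i < values.size(); ++i) {
        // Iterate capacities downwards so each item is used at most once
        for (int c = capacity; c >= weights[i]; --c) {
            dp[c] = max(dp[c], dp[c - weights[i]] + values[i]);
        }
    }
    return dp[capacity];
}

int main() {
    vector<int> values = {60, 100, 120};
    vector<int> weights = {10, 20, 30};
    cout << knapsack01(values, weights, 50) << "\n";  // prints 220 (second and third items)
    return 0;
}
With these items and a capacity of 50, the dynamic programming answer is 220 (the second and third items), while the fractional version shown later reaches 240 by taking part of an item, which is exactly why the greedy strategy works there but not here.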
Example for Greedy Algorithms
As defined above, a greedy algorithm builds a solution by making the locally optimal choice at each step in the hope of reaching a global optimum. The approach shines on problems where the best local option provably leads to an optimal overall solution, and it is widely used in network design, scheduling, and data compression. The examples below walk through several classic greedy problems.
Activity Selection Problem
The Activity Selection Problem involves selecting the maximum number of non-overlapping activities, given their start and finish times. The greedy choice is to always select the activity that finishes earliest: finishing as early as possible leaves the most room for the remaining activities, and any optimal schedule can be rewritten to start with that activity.
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
struct Activity {
    int start;
    int finish;
};

// Comparator function to sort activities by their finish times
bool compare(Activity a, Activity b) {
    return a.finish < b.finish;
}

vector<Activity> selectActivities(vector<Activity>& activities) {
    // Sort activities by their finish times
    sort(activities.begin(), activities.end(), compare);

    vector<Activity> selected;
    int lastFinishTime = 0;
    for (const auto& activity : activities) {
        if (activity.start >= lastFinishTime) {
            selected.push_back(activity);
            lastFinishTime = activity.finish;
        }
    }
    return selected;
}

int main() {
    vector<Activity> activities = {{1, 3}, {2, 5}, {0, 6}, {5, 7}, {8, 9}, {5, 9}};
    vector<Activity> result = selectActivities(activities);

    cout << "Selected activities:\n";
    for (const auto& activity : result) {
        cout << "Start: " << activity.start << ", Finish: " << activity.finish << "\n";
    }
    return 0;
}
Coin Change Problem
The Coin Change Problem involves making change for a given amount using the fewest coins from a given set of denominations. The greedy choice is to always pick the largest denomination that is less than or equal to the remaining amount. Note that this is only guaranteed to be optimal for canonical coin systems such as the one used below; for denominations like {1, 3, 4} and an amount of 6, the greedy strategy uses three coins (4 + 1 + 1) while the optimal answer is two (3 + 3).
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
vector<int> coinChange(vector<int>& denominations, int amount) {
    // Sort denominations in descending order so the largest coin is tried first
    sort(denominations.rbegin(), denominations.rend());

    vector<int> result;
    for (int coin : denominations) {
        // Take as many of the current coin as possible
        while (amount >= coin) {
            amount -= coin;
            result.push_back(coin);
        }
    }
    return result;
}

int main() {
    vector<int> denominations = {1, 2, 5, 10, 20, 50, 100, 200, 500, 2000};
    int amount = 2896;
    vector<int> result = coinChange(denominations, amount);

    cout << "Coins used:\n";
    for (int coin : result) {
        cout << coin << " ";
    }
    return 0;
}
Fractional Knapsack Problem
The Fractional Knapsack Problem involves maximizing the total value in a knapsack by taking fractions of items with given weights and values. The greedy choice is to always pick the item with the highest value-to-weight ratio that fits in the knapsack.
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
struct Item {
    int value;
    int weight;
};

// Comparator function to sort items by their value-to-weight ratio
bool compare(Item a, Item b) {
    double r1 = (double)a.value / a.weight;
    double r2 = (double)b.value / b.weight;
    return r1 > r2;
}

double fractionalKnapsack(vector<Item>& items, int capacity) {
    // Consider items in decreasing order of value-to-weight ratio
    sort(items.begin(), items.end(), compare);

    double totalValue = 0.0;
    int currentWeight = 0;
    for (const auto& item : items) {
        if (currentWeight + item.weight <= capacity) {
            // The whole item fits: take all of it
            currentWeight += item.weight;
            totalValue += item.value;
        } else {
            // Only part of the item fits: take the fraction that fills the knapsack
            int remain = capacity - currentWeight;
            totalValue += item.value * ((double)remain / item.weight);
            break;
        }
    }
    return totalValue;
}

int main() {
    vector<Item> items = {{60, 10}, {100, 20}, {120, 30}};
    int capacity = 50;
    double maxValue = fractionalKnapsack(items, capacity);
    cout << "Maximum value in Knapsack = " << maxValue << "\n";
    return 0;
}
Dijkstra's Algorithm
Dijkstra's Algorithm finds the shortest path from a source vertex to all other vertices in a weighted graph with non-negative weights. The greedy choice is to always pick the vertex with the smallest known distance.
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <queue>
#include <climits>
using namespace std;
struct Edge {
    int destination;
    int weight;
};

void dijkstra(const vector<vector<Edge>>& graph, int source) {
    int numVertices = graph.size();
    vector<int> distances(numVertices, INT_MAX);
    distances[source] = 0;

    // Min-heap of (distance, vertex) pairs, ordered by the smallest known distance
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> minHeap;
    minHeap.push({0, source});

    while (!minHeap.empty()) {
        int currentDistance = minHeap.top().first;
        int currentVertex = minHeap.top().second;
        minHeap.pop();

        // Skip stale heap entries that no longer reflect the best known distance
        if (currentDistance > distances[currentVertex])
            continue;

        // Relax all outgoing edges of the current vertex
        for (const auto& edge : graph[currentVertex]) {
            int neighbor = edge.destination;
            int edgeWeight = edge.weight;
            if (distances[currentVertex] + edgeWeight < distances[neighbor]) {
                distances[neighbor] = distances[currentVertex] + edgeWeight;
                minHeap.push({distances[neighbor], neighbor});
            }
        }
    }

    cout << "Vertex distances from source " << source << ":\n";
    for (int i = 0; i < numVertices; ++i) {
        cout << "Vertex " << i << ": " << distances[i] << "\n";
    }
}

int main() {
    int numVertices = 5;
    vector<vector<Edge>> graph(numVertices);
    graph[0] = {{1, 10}, {4, 5}};
    graph[1] = {{2, 1}, {4, 2}};
    graph[2] = {{3, 4}};
    graph[3] = {{2, 6}, {0, 7}};
    graph[4] = {{1, 3}, {2, 9}, {3, 2}};

    int source = 0;
    dijkstra(graph, source);
    return 0;
}
Prim's Algorithm
Prim's Algorithm finds a Minimum Spanning Tree (MST) of a connected, undirected, weighted graph. The greedy choice is to always add the minimum-weight edge that connects a vertex in the MST to a vertex outside the MST.
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <queue>
#include <climits>
using namespace std;
struct Edge {
    int destination;
    int weight;
};

void primMST(const vector<vector<Edge>>& graph) {
    int numVertices = graph.size();
    vector<int> minEdgeWeight(numVertices, INT_MAX);
    vector<int> parent(numVertices, -1);
    vector<bool> inMST(numVertices, false);

    // Min-heap of (edge weight, vertex) pairs; start growing the MST from vertex 0
    priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> minHeap;
    minHeap.push({0, 0});
    minEdgeWeight[0] = 0;

    while (!minHeap.empty()) {
        int currentVertex = minHeap.top().second;
        minHeap.pop();

        // Skip vertices that are already in the MST (stale heap entries)
        if (inMST[currentVertex])
            continue;
        inMST[currentVertex] = true;

        // Update the cheapest edge connecting each neighbor to the growing MST
        for (const auto& edge : graph[currentVertex]) {
            int neighbor = edge.destination;
            int edgeWeight = edge.weight;
            if (!inMST[neighbor] && minEdgeWeight[neighbor] > edgeWeight) {
                minEdgeWeight[neighbor] = edgeWeight;
                minHeap.push({minEdgeWeight[neighbor], neighbor});
                parent[neighbor] = currentVertex;
            }
        }
    }

    cout << "Edges in the MST:\n";
    for (int i = 1; i < numVertices; ++i) {
        cout << parent[i] << " - " << i << "\n";
    }
}

int main() {
    int numVertices = 5;
    vector<vector<Edge>> graph(numVertices);
    graph[0] = {{1, 2}, {3, 6}};
    graph[1] = {{0, 2}, {2, 3}, {3, 8}, {4, 5}};
    graph[2] = {{1, 3}, {4, 7}};
    graph[3] = {{0, 6}, {1, 8}};
    graph[4] = {{1, 5}, {2, 7}};

    primMST(graph);
    return 0;
}
Kruskal's Algorithm
Kruskal's Algorithm finds a Minimum Spanning Tree (MST) of a connected, undirected, weighted graph. The greedy choice is to always pick the smallest-weight edge that does not form a cycle with the edges already chosen.
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
struct Edge {
    int source;
    int destination;
    int weight;
};

// Comparator function to sort edges by increasing weight
bool compare(Edge a, Edge b) {
    return a.weight < b.weight;
}

// Disjoint-set (union-find) structure with path compression and union by rank
class DisjointSet {
public:
    vector<int> parent, rank;

    DisjointSet(int numVertices) {
        parent.resize(numVertices);
        rank.resize(numVertices, 0);
        for (int i = 0; i < numVertices; ++i)
            parent[i] = i;
    }

    // Find the representative of a vertex's set, compressing the path along the way
    int find(int vertex) {
        if (vertex != parent[vertex])
            parent[vertex] = find(parent[vertex]);
        return parent[vertex];
    }

    // Merge the sets containing the two vertices, attaching the lower-rank tree under the higher-rank one
    void unionSets(int vertex1, int vertex2) {
        int root1 = find(vertex1);
        int root2 = find(vertex2);
        if (root1 != root2) {
            if (rank[root1] > rank[root2]) {
                parent[root2] = root1;
            } else if (rank[root1] < rank[root2]) {
                parent[root1] = root2;
            } else {
                parent[root2] = root1;
                rank[root1]++;
            }
        }
    }
};

void kruskalMST(vector<Edge>& edges, int numVertices) {
    // Consider edges in increasing order of weight
    sort(edges.begin(), edges.end(), compare);

    DisjointSet ds(numVertices);
    vector<Edge> mst;
    for (const auto& edge : edges) {
        int vertex1 = edge.source;
        int vertex2 = edge.destination;
        // Add the edge only if it connects two different components (no cycle)
        if (ds.find(vertex1) != ds.find(vertex2)) {
            mst.push_back(edge);
            ds.unionSets(vertex1, vertex2);
        }
    }

    cout << "Edges in the MST:\n";
    for (const auto& edge : mst) {
        cout << edge.source << " - " << edge.destination << " (Weight: " << edge.weight << ")\n";
    }
}

int main() {
    int numVertices = 4;
    vector<Edge> edges = {
        {0, 1, 10}, {0, 2, 6}, {0, 3, 5}, {1, 3, 15}, {2, 3, 4}
    };
    kruskalMST(edges, numVertices);
    return 0;
}
Huffman Coding
Huffman Coding is a lossless compression algorithm that assigns variable-length codes to characters, giving shorter codes to more frequent characters. The greedy choice is to repeatedly merge the two least frequent nodes.
Here’s a C++ implementation:
#include <iostream>
#include <vector>
#include <queue>
using namespace std;
// Structure for Huffman tree nodes
// Structure for Huffman tree nodes
struct HuffmanNode {
    char data;                  // Data stored in the node (for leaf nodes, this is the character)
    int frequency;              // Frequency of the character
    HuffmanNode *left, *right;  // Left and right child pointers

    HuffmanNode(char data, int frequency) {
        this->data = data;
        this->frequency = frequency;
        left = right = nullptr;
    }
};

// Comparator for priority queue (min-heap based on frequency)
struct CompareNodes {
    bool operator()(HuffmanNode* const& lhs, HuffmanNode* const& rhs) {
        return lhs->frequency > rhs->frequency;
    }
};

// Class representing Huffman Tree
class HuffmanTree {
private:
    HuffmanNode* root;

public:
    HuffmanTree() : root(nullptr) {}

    // Function to build Huffman Tree from given character frequencies
    void buildTree(priority_queue<HuffmanNode*, vector<HuffmanNode*>, CompareNodes>& minHeap) {
        while (minHeap.size() > 1) {
            // Extract the two nodes with the minimum frequency
            HuffmanNode* left = minHeap.top();
            minHeap.pop();
            HuffmanNode* right = minHeap.top();
            minHeap.pop();

            // Create a new internal node with these two nodes as children
            // and with a frequency equal to the sum of the children's frequencies.
            HuffmanNode* newNode = new HuffmanNode('$', left->frequency + right->frequency);
            newNode->left = left;
            newNode->right = right;

            // Add the new node to the min-heap
            minHeap.push(newNode);
        }
        // The remaining node in the heap is the root node of the Huffman tree
        root = minHeap.top();
    }

    // Function to print level order traversal of the Huffman Tree
    void printLevelOrder() {
        if (root == nullptr)
            return;
        queue<HuffmanNode*> q;
        q.push(root);
        while (!q.empty()) {
            int size = q.size();
            while (size--) {
                HuffmanNode* node = q.front();
                q.pop();
                cout << "(" << node->data << ", " << node->frequency << ") ";
                if (node->left)
                    q.push(node->left);
                if (node->right)
                    q.push(node->right);
            }
            cout << endl;
        }
    }
};

int main() {
    // Example usage to create a Huffman Tree for characters 'a', 'b', 'c', 'd'
    // and their respective frequencies
    priority_queue<HuffmanNode*, vector<HuffmanNode*>, CompareNodes> minHeap;
    minHeap.push(new HuffmanNode('a', 5));
    minHeap.push(new HuffmanNode('b', 9));
    minHeap.push(new HuffmanNode('c', 12));
    minHeap.push(new HuffmanNode('d', 13));

    HuffmanTree tree;
    tree.buildTree(minHeap);

    cout << "Level Order Traversal of Huffman Tree:\n";
    tree.printLevelOrder();
    return 0;
}
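The program above builds the tree but stops short of deriving the actual codes. Here's a rough, self-contained C++ sketch of that last step (the Node struct and generateCodes helper below are not part of the class above; they simply mirror its node layout, with '$' marking internal nodes): a depth-first walk appends '0' for each left branch and '1' for each right branch, and the path to each leaf is that character's code. For the frequencies used above, all four codes happen to come out two bits long.
#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

// Minimal node mirroring the HuffmanNode structure above ('$' marks internal nodes)
struct Node {
    char data;
    Node *left, *right;
    Node(char d, Node* l = nullptr, Node* r = nullptr) : data(d), left(l), right(r) {}
};

// Walk the tree depth-first, appending '0' for a left branch and '1' for a right branch.
// Leaves carry the characters, so the path to each leaf is its Huffman code.
void generateCodes(Node* node, const string& code, unordered_map<char, string>& codes) {
    if (node == nullptr)
        return;
    if (node->left == nullptr && node->right == nullptr) {
        codes[node->data] = code;
        return;
    }
    generateCodes(node->left, code + "0", codes);
    generateCodes(node->right, code + "1", codes);
}

int main() {
    // Hand-built tree matching the build order above:
    // (a:5, b:9) merge to 14, (c:12, d:13) merge to 25, then (14, 25) merge to the root.
    Node* root = new Node('$',
                          new Node('$', new Node('a'), new Node('b')),
                          new Node('$', new Node('c'), new Node('d')));

    unordered_map<char, string> codes;
    generateCodes(root, "", codes);
    for (const auto& entry : codes)
        cout << entry.first << ": " << entry.second << "\n";  // prints each character with its code
    return 0;
}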