## DEV Community

hridyesh bisht for AWS Community Builders


# Basics of an Algorithm and Different Algorithmic Approaches

This blog covers:

1. Page 1: What is an algorithm, time complexity, space complexity, orders of growth, asymptotic notations, and how to analyze recursive and non-recursive algorithms.
2. Page 2: Divide and conquer, applications of divide and conquer, greedy method, applications of the greedy method, dynamic programming, principle of optimality, and applications of dynamic programming.
3. Page 3: Backtracking method, applications of backtracking, branch and bound, applications of branch and bound, NP-complete and NP-hard problems, and a GitHub repository with the code.

## Introduction:

An algorithm is a set of instructions that produces an output or a result.

It tells the system what to do in order to achieve the desired result. It may not know what the result is beforehand, but it knows that it wants one.

An example of an algorithm would be baking a cake: the desired output is the cake, and we follow a certain set of instructions to produce it.

#### There are two kinds of efficiency:

1. Time Efficiency/Complexity: Indicates how fast an algorithm in question runs.
2. Space Efficiency/Complexity: Refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output.

Since we are after a measure of an algorithm’s efficiency, we would like a metric that does not depend on extraneous factors such as the speed of a particular computer or the quality of the implementation.

We focus on the basic operation: the operation contributing the most to the total running time. We then compute the number of times the basic operation is executed.

An algorithm’s time efficiency is measured by counting the number of times the algorithm’s basic operation is executed on inputs of size n. The running time T(n) of a program implementing this algorithm on a particular computer can be estimated by the formula

T(n) ≈ cop · C(n), where

1. cop is the execution time of the algorithm’s basic operation on that computer, and
2. C(n) is the number of times this operation needs to be executed for this algorithm.

#### Orders of Growth:

When comparing two algorithms that solve the same problem, if the input size is very small:

1. It is difficult to distinguish their time complexities
2. It is not possible to choose the more efficient algorithm

The exponential function 2^n and the factorial function n! grow so fast that their values become astronomically large even for rather small values of n.
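
To get a feel for these growth rates, here is a quick sketch (not from the original post) that prints a few values side by side:

```python
import math

# Compare how fast n^2, 2^n, and n! grow as n increases
for n in [5, 10, 20]:
    print(f"n={n:2d}  n^2={n**2}  2^n={2**n}  n!={math.factorial(n)}")
```

Already at n = 20, 2^n exceeds a million and n! has 19 digits.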

The efficiencies of some algorithms may differ significantly for inputs of the same size. For such algorithms, we need to distinguish between the worst-case, average-case, and best-case efficiencies.

Let us consider the linear (sequential) search example:

    SequentialSearch(A[0..n-1], K)
    // Searches for a given value in a given array by sequential search
    // Input: An array A[0..n-1] and a search key K
    // Output: The index of the first element in A that matches K,
    //         or -1 if there are no matching elements
    i = 0
    while i < n and A[i] != K do
        i = i + 1
    if i < n return i
    else return -1

1.Worst case: No match

Analyze the algorithm to see what kind of inputs yield the largest value of the basic operation’s count C(n) among all possible inputs of size n and then compute this worst-case value.

2.Best case: Very first match

For the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.

Worst-case efficiency is more important than best-case efficiency.

3.Average case: for average-case efficiency, certain assumptions are needed

The standard assumptions for linear search are that

1. The probability of a successful search is equal to p (0 ≤ p ≤ 1) and
2. The probability of the first match occurring in the ith position of the list is the same for every i.

In the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i. In the case of an unsuccessful search, the number of comparisons will be n, with the probability of such a search being (1 − p).
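
Under these assumptions, the average number of comparisons can be derived directly (a standard calculation, shown here for completeness):

```latex
C_{avg}(n) = \underbrace{\sum_{i=1}^{n} i \cdot \frac{p}{n}}_{\text{successful}} \;+\; \underbrace{n\,(1-p)}_{\text{unsuccessful}}
           = \frac{p\,(n+1)}{2} + n\,(1-p)
```

Substituting p = 1 or p = 0 into this formula gives the two special cases discussed next.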

1.For example, if p = 1 (the search must be successful)

1. The average number of key comparisons made by sequential search is (n + 1)/2; that is, the algorithm will inspect, on average, about half of the list’s elements.

2.If p = 0 (the search must be unsuccessful)

1. The average number of key comparisons will be n because the algorithm will inspect all n elements on all such inputs.

The framework’s primary interest lies in the order of growth of the algorithm’s running time (or extra memory units consumed) as its input size goes to infinity.

### Asymptotic Notations:

In the notations below, t(n) will be an algorithm’s running time (usually indicated by its basic operation count C(n)), and g(n) will be some simple function to compare the count with.

1.O(big oh) Notation:

A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≤ c · g(n) for all n ≥ n0

Big-O notation gives an asymptotic upper bound.

2.Ω(big omega) Notation:

A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≥ c · g(n) for all n ≥ n0

Big-Ω notation gives an asymptotic lower bound.

3.θ(big theta) Notation:

A function t(n) is said to be in θ(g(n)), denoted t(n) ∈ θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that

c2 · g(n) ≤ t(n) ≤ c1 · g(n) for all n ≥ n0

Big-Θ notation gives an asymptotically tight bound on the running time.
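
As a worked example (not in the original post), the comparison count (1/2)n(n − 1) of many quadratic algorithms is in θ(n²), since for all n ≥ 2:

```latex
\frac{1}{4}n^2 \;\le\; \frac{1}{2}n(n-1) \;\le\; \frac{1}{2}n^2
```

so the definition holds with c2 = 1/4, c1 = 1/2, and n0 = 2.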

### Non-recursive and recursive algorithms:

1.General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms,

1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed form formula for the count or, at the very least, establish its order of growth.

For example, finding the largest element in a list:

    MaxElement(A[0..n-1])
    // Determines the value of the largest element in a given array
    // Input: An array A[0..n-1] of real numbers
    // Output: The value of the largest element in A
    maxval = A[0]
    for i <- 1 to n - 1 do
        if A[i] > maxval
            maxval = A[i]
    return maxval

1. Let C(n) be the number of times the comparison A[i] > maxval is executed.
2. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop’s variable i within the bounds 1 and n − 1, inclusive.
3. Therefore, we get the following sum for C(n): C(n) = Σ from i = 1 to n − 1 of 1 = n − 1 ∈ Θ(n).

2.General Plan for Analyzing the Time Efficiency of Recursive Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

For example: Factorial function

    Algorithm Factorial(n)
    // Computes n! recursively
    // Input: A nonnegative integer n
    // Output: The value of n!
    if n = 0 return 1
    else return Factorial(n - 1) * n

This computes the factorial function F(n) = n! for an arbitrary nonnegative integer n. The number of multiplications M(n) satisfies the recurrence M(n) = M(n − 1) + 1 with M(0) = 0, whose solution is M(n) = n.

Another example of Recursive algorithm would be Towers of Hanoi.

One should be careful with recursive algorithms because their succinctness may mask their inefficiency.

## Divide and Conquer:

Divide-and-conquer algorithms work according to the following general plan:

1. A problem is divided into several sub problems of the same type, ideally of about equal size.
2. The sub problems are solved (typically recursively, though sometimes a different algorithm is employed, especially when sub problems become small enough).
3. If necessary, the solutions to the sub problems are combined to get a solution to the original problem.

Recurrence relation for the general divide and conquer recurrence:

1. In the most typical case of divide-and-conquer a problem’s instance of size n is divided into two instances of size n/2.
2. More generally, an instance of size n can be divided into b instances of size n/b, with a of them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.)
3. Assuming that size n is a power of b to simplify our analysis, we get the following general divide-and-conquer recurrence for the running time T(n): T(n) = a · T(n/b) + f(n), where f(n) accounts for the time spent dividing an instance and combining the solutions.

The time complexity of such recurrence relations is found by the master theorem: for T(n) = a · T(n/b) + Θ(n^d), T(n) is Θ(n^d) if a < b^d, Θ(n^d log n) if a = b^d, and Θ(n^(log_b a)) if a > b^d.

Some examples of master theorem solutions,

1. https://courses.missouristate.edu/anthonyclark/325/lectures/07-master-theorem.pdf

The time complexity of the divide-and-conquer algorithms below can also be calculated with the master theorem.
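
Merge sort is the classic example: an instance of size n is split into two instances of size n/2, giving T(n) = 2T(n/2) + Θ(n), which the master theorem resolves to Θ(n log n). A minimal sketch (not from the original post):

```python
def merge_sort(a):
    # Divide: recursively sort the two halves
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merge the sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```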

## Greedy Method:

A greedy algorithm is an algorithm that constructs an object X one step at a time, at each step choosing the locally best option. In some cases, greedy algorithms construct the globally best object by repeatedly choosing the locally best option.

Greedy algorithms aim for a global optimum by iteratively making locally optimal decisions, solving problems in a top-down manner.

Greedy algorithms have several advantages over other algorithmic approaches:

1. Simplicity: Greedy algorithms are often easier to describe and code up than other algorithms.
2. Efficiency: Greedy algorithms can often be implemented more efficiently than other algorithms.

Greedy algorithms have several drawbacks:

1. Hard to design: Once you have found the right greedy approach, designing greedy algorithms can be easy. However, finding the right approach can be hard.
2. Hard to verify: Showing a greedy algorithm is correct often requires a nuanced argument.

For example, suppose we have to travel from V1 to V7 in a weighted graph.

Then, according to the greedy algorithm, we follow the locally optimal edge at each step, which gives the path:

1. V1 to V6 = 4
2. V6 to V5 = 15
3. V5 to V4 = 10
4. V4 to V3 = 6
5. V3 to V2 = 8
6. V2 to V7 = 7

#### Assemble an optimal solution to a problem

1. Making locally optimal (or greedy) choices
2. At each step, we make the choice that looks best in the current problem
3. We don’t consider results from different sub problems

### When do we use Greedy Algorithms?

All greedy problems share a common property: locally optimal choices can eventually lead to a global optimum without reconsidering the choices already made.

The only problem with them is that you might come up with a solution but not be able to verify that it is the correct one.

### Applications of Greedy method

##### An application of the Greedy Algorithm is the Coin Change problem

Given currency denominations 1, 5, 10, 25, 100, devise a method to pay a given amount to a customer using the fewest number of coins.

Our task is to use these coins to form a sum of money using the minimum (or optimal) number of coins. Also, we can assume that a particular denomination has an infinite number of coins

At each iteration, add coin of the largest value that does not take us past the amount to be paid.

    def coin_change_greedy(amount, denominations):
        # Greedy rule: consider the largest denomination first
        change = []
        for coin in sorted(denominations, reverse=True):
            # Add the current coin as long as it does not exceed the amount
            while coin <= amount:
                change.append(coin)
                amount -= coin
        return change

After the largest coin ck is chosen, the problem reduces to making change for x − ck cents, which, by induction, is optimally solved by the cashier’s (greedy) algorithm for standard denominations.

The greedy approach may not even lead to a feasible or optimal solution for every denomination set. For example, if c1 > 1, with denominations 7, 8, 9:

1. Greedy coin change algorithm: 15¢ = 9 + ??? (stuck, since no remaining coin fits).
2. Optimal: 15¢ = 7 + 8.

##### An application of the Greedy Algorithm is the knapsack problem

For solving the fractional knapsack problem, we calculate each item’s value-to-weight ratio. Then we repeatedly pick the item with the best ratio:

1. Take as much of the item with the highest value per pound (vi/wi) as possible.
2. If that item runs out, take the next item with the highest ratio (vi/wi).
3. Continue until the sack is full.
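
Those steps can be sketched as follows (a minimal illustration; the function name and the (value, weight) pair format are my own choices):

```python
def fractional_knapsack(capacity, items):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    # Greedy choice: best value-to-weight ratio first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # take as much of this item as fits
        total += value * (take / weight)   # fractions are allowed
        capacity -= take
    return total

print(fractional_knapsack(50, [(60, 10), (100, 20), (120, 30)]))  # 240.0
```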

The greedy technique is applied in the following algorithms:

1. Prim’s algorithm
2. Kruskal’s algorithm
3. Dijkstra’s algorithm
4. Huffman trees
5. Job sequencing with deadlines/activity selection problem

## Dynamic Programming:

Dynamic Programming is a general approach to solving problems, much like “divide-and-conquer”, except that, unlike divide-and-conquer, the subproblems will typically overlap rather than being independent of each other.

1. Commonly used in optimization.
2. When solving sub problems, try them all, but choose the best one.

#### Basic Idea:

What we want to do is take our problem and somehow break it down in to a reasonable number of sub problems in such a way that we can use optimal solutions to the smaller sub problems to give us optimal solutions to the larger ones.

Unlike divide-and-conquer it is OK if our sub problems overlap, so long as there are not too many of them.

Dynamic Programming results in an efficient algorithm, if the following conditions hold:

1. The optimal solution can be produced by combining optimal solutions of subproblems;
2. The optimal solution of each subproblem can be produced by combining optimal solutions of sub-subproblems, etc;
3. The total number of subproblems arising recursively is polynomial.

#### Steps for Solving DP Problems

1. Define subproblems
2. Write down the recurrence that relates sub problems
3. Recognize and solve the base cases

It usually works when the obvious divide-and-conquer algorithm results in an exponential running time.

#### Solutions for Dynamic Programming:

1. Find the recursion in the problem.
2. Top-down: store the answer for each subproblem in a table to avoid having to recompute them.
3. Bottom-up: Find the right order to evaluate the results so that partial results are available when needed.

Implementation Trick:

1. Remember (memoize) previously solved “subproblems”; e.g., in Fibonacci, we memo-ized the solutions to the subproblemsF0,F1,···,Fn−1, while unraveling the recursion.
2. If we encounter a subproblem that has already been solved, we re-use its solution.

Runtime ≈ (number of subproblems) × (time per subproblem)

Let us take a few examples,

### 1.Fibonacci numbers:

Let's take the simple example of the Fibonacci numbers: finding the n th Fibonacci number defined by

Fn = Fn-1 + Fn-2 and F0 = 0, F1 = 1

1.Recursion: The obvious way to do this is recursively; however, a value such as f(3) ends up being computed at three different points in the recursion tree,

    def fibonacci(n):
        if n == 0:
            return 0
        if n == 1:
            return 1

        return fibonacci(n - 1) + fibonacci(n - 2)


2.Top Down -Memoization:

The recursion does a lot of unnecessary calculation because a given Fibonacci number is computed multiple times. So, instead of calculating the value of f(3) three different times, we store it in a cache and retrieve it when needed:

    cache = {}

    def fibonacci(n):
        if n == 0:
            return 0
        if n == 1:
            return 1
        if n in cache:
            return cache[n]

        cache[n] = fibonacci(n - 1) + fibonacci(n - 2)

        return cache[n]


Memoization does take extra space to store the cached values, so we trade space for time.

3.Bottom-up - Iterative:

A better way is to get rid of the recursion altogether by using an iterative procedure that fills the table from the bottom up, avoiding the overhead of recursive calls.

    cache = {}

    def fibonacci(n):
        cache[0] = 0
        cache[1] = 1

        for i in range(2, n + 1):
            cache[i] = cache[i - 1] + cache[i - 2]

        return cache[n]

So, in terms of efficiency: iterative > memoization > recursion.

### Principle of Optimality:

A key idea is that optimization over time can often be regarded as ‘optimization in stages’. We trade off our desire to obtain the lowest possible cost at the present stage against the implication this would have for costs at future stages.

The best action minimizes the sum of the cost incurred at the current stage and the least total cost that can be incurred from all subsequent stages, consequent on this decision.

For example, suppose again we have to travel from V1 to V7.

Then, according to the principle of optimality, we consider the globally optimal path instead of the locally optimal one:

1. V1 to V2
2. V2 to V7

### Applications of Dynamic method

##### An application of Dynamic Algorithm is the Coin change

To make change for n cents, we first figure out how to make change for every value x < n. We then build up the solution from the solutions for smaller values.

We will only concentrate on computing the number of coins.

1. Let C[p] be the minimum number of coins needed to make change for p cents.
2. Let x be the value of the first coin used in the optimal solution.
3. Then C[p] = 1 + C[p − x]

    def dp_change(n, d):
        # C[p] = minimum number of coins needed to make change for p cents
        # S[p] = the coin used in an optimal solution for p cents
        INF = float('inf')
        C = [0] + [INF] * n
        S = [0] * (n + 1)
        for p in range(1, n + 1):
            for coin in d:
                if coin <= p and C[p - coin] + 1 < C[p]:
                    C[p] = C[p - coin] + 1
                    S[p] = coin
        return C, S

Instead of greedily returning the largest coin at every iteration, we focus on minimizing the total number of coins returned.

Unlike the greedy approach, dynamic programming finds a feasible, optimal solution even when c1 > 1, e.g., for denominations 7, 8, 9:

1. Dynamic coin change algorithm: 15¢ = 7 + 8.
2. Greedy coin change algorithm: 15¢ = 9 + ??? (stuck).

##### An application of the Dynamic Algorithm is the 0/1 knapsack problem

For solving the 0/1 knapsack problem, each item must be taken entirely or left out; fractions are not allowed. We check for the optimal solution over the combinations of items:

1. For each item, decide whether to include it or leave it out.
2. Check the maximum value achievable within the weight limit over all such combinations of items.
3. Continue until all items have been considered.
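
The standard DP solution keeps, for each capacity w, the best value achievable so far; iterating capacities downward ensures each item is used at most once. A minimal sketch (the function name and pair format are my own):

```python
def knapsack_01(capacity, items):
    """items: list of (value, weight) pairs; returns the best achievable value."""
    # dp[w] = best value achievable with total weight at most w
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Go downwards so this item is counted at most once per capacity
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

print(knapsack_01(50, [(60, 10), (100, 20), (120, 30)]))  # 220
```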

#### Difference between Greedy algorithm and Dynamic algorithm

1. A greedy algorithm commits to the locally best choice at each step and never revisits it; it is simple and fast but does not always reach the global optimum.
2. A dynamic algorithm tries all choices for each subproblem, storing and reusing their solutions; it costs more time and space but guarantees optimality when the principle of optimality holds.

More applications of dynamic programming include matrix chain multiplication, the longest common subsequence, and all-pairs shortest paths.

## Backtracking

Backtracking is an approach to solving constraint-satisfaction problems without trying all possibilities.

Backtracking consists of doing a DFS of the state space tree, checking whether each node is promising and, if a node is non-promising, backtracking to the node’s parent.

Suppose you have to make a series of decisions, among various choices, where:

1. You don’t have enough information to know what to choose
2. Each decision leads to a new set of choices
3. Some sequence of choices (possibly more than one) may be a solution to your problem

Backtracking is a methodical way of trying out various sequences of decisions until you find one that “works”.

Backtracking is easily implemented with recursion because:

1. The run-time stack takes care of keeping track of the choices that got us to a given point.
2. Upon failure we can get to the previous choice simply by returning a failure code from the recursive call

A real-life example of this would be solving a maze:

1. Given a maze, find a path from start to finish
2. At each intersection, you have to decide between three or fewer choices:
1. Go straight
2. Go left
3. Go right
3. You don’t have enough information to choose correctly
4. Each choice leads to another set of choices
5. One or more sequences of choices may (or may not) lead to a solution
6. Many types of maze problem can be solved with backtracking

Each of the problems above is, in some sense, the same problem. They all work by calling some initialization function; then, for each slot, you try each possible value in turn, stopping when a solution is found, or backtracking when you can’t fill the current slot.

#### Iterative Implementations

The solutions above used recursion to implement backtracking. This is great because if we ever have to backtrack, we return right in the middle of a for-loop so we know the next value to try. This is actually awesome.

Can we do things iteratively?

Well, yes, but we have to do some incrementing and decrementing manually, along with some ugly loop control.

#### The backtracking algorithm

Backtracking is really quite simple: we “explore” each node, as follows.

To “explore” node N:

1. If N is a goal node, return “success”
2. If N is a leaf node, return “failure”
3. For each child C of N,
1. Explore C
1. If C was successful, return “success”
4. Return “failure”

It is convenient to implement this kind of processing by constructing a tree of choices being made, called the "state space tree". Its root represents the initial state, before the search for a solution begins. The nodes at the first level of the tree represent the choices for the first component of a solution, the nodes at the second level represent the choices for the second component, and so on.

A node in the state space tree is promising if it corresponds to a partially constructed solution that may still lead to a complete solution; otherwise, it is non-promising. Leaves of the tree represent either non-promising dead ends or complete solutions found by the algorithm.

### Application of Backtracking method:

#### An application of the Backtracking Algorithm is the N-Queens Problem:

Given: n-queens and an nxn chess board

Find: a way to place all n queens on the board such that no queen attacks another.

Let us consider 4 queens for now, on a 4x4 chess board. A queen can move:

1. Horizontal
2. Vertical
3. Diagonal

Algorithm for N queens problem:

1. We can place the first queen without any problems.
2. Before placing the second queen, we have to check that it does not violate any condition, i.e., it is not in the same row, column, or diagonal as the first queen.
3. If we get stuck at a point after which there is no solution, we backtrack one step and take another approach.
4. We repeat this until we get a solution.
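
The algorithm above can be sketched in Python; the sets track attacked columns and diagonals, and removing a queen on failure is the backtracking step (function and variable names are my own):

```python
def solve_n_queens(n):
    """Return one solution as a list of column indices (one per row), or None."""
    cols, diag1, diag2 = set(), set(), set()
    placement = []

    def place(row):
        if row == n:                 # all n queens placed: success
            return True
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue             # square is attacked: try the next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            if place(row + 1):
                return True
            # Backtrack: undo this choice and try another column
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
            placement.pop()
        return False                 # no column works: backtrack to parent

    return placement if place(0) else None

print(solve_n_queens(4))  # [1, 3, 0, 2]
```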

##### An application of the Backtracking Algorithm is the knapsack problem

The profit density of each item (profit / weight) is first calculated, and the knapsack is filled in order of decreasing profit density. A bounding function is used to determine an upper bound on the profit achievable with the current knapsack, so as to reduce the search space.

1. The profit density of each item (profit / weight) is calculated.
2. The knapsack is filled in order of decreasing profit density.
3. A bounding function determines an upper bound on the profit achievable with the current knapsack, so as to reduce the search space.
4. Continue until the sack is full.

A bounding function is needed to help kill some live nodes without actually expanding them. A good bounding function for this problem is obtained from an upper bound on the value of the best feasible solution obtainable by expanding the given live node and any of its descendants. If this upper bound is not higher than the value of the best solution found so far, that live node can be killed.

More applications of backtracking are,

1. Subset Sum
2. Graph coloring
3. Hamiltonian Cycle

## Branch and bound

Branch and Bound is a state space search method in which all the children of the currently expanding node (the E-node) are generated before any other live node becomes the node being expanded.

It is similar to the backtracking technique but uses a BFS-like search.

The different types (FIFO, LIFO, LC) define different 'strategies' to explore the search space and generate branches.

1.FIFO (first in, first out): always the oldest node in the queue is used to extend the branch. This leads to a breadth-first search, where all nodes at depth d are visited first, before any nodes at depth d+1 are visited.

2.LIFO (last in, first out): always the youngest node in the queue is used to extend the branch. This leads to a depth-first search, where the branch is extended through every 1st child discovered at a certain depth, until a leaf node is reached.

3.LC (lowest cost): the branch is extended by the node which adds the lowest additional costs, according to a given cost function. The strategy of traversing the search space is therefore defined by the cost function.

Branch and bound keeps a list of live nodes. Four strategies are used to manage the list:

1. Depth first search: As soon as child of current E-node is generated, the child becomes the new E-node
1. Parent becomes E-node only after child’s subtree is explored
2. In the other 3 strategies, the E-node remains the E-node until it is dead. Its children are managed by:
1. Breadth first search: Children are put in queue
2. D-search: Children are put on stack
3. Least cost search: Children are put on heap
3. We use bounding functions (upper and lower bounds) to kill live nodes without generating all their children.

Both BFS and DFS generalize to branch-and-bound strategies

1. BFS is a FIFO search in terms of live nodes
1. The list of live nodes is a queue
2. DFS is a LIFO search in terms of live nodes
1. The list of live nodes is a stack

Just like backtracking, we will use bounding functions to avoid generating subtrees that do not contain an answer node
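
As an illustration, here is a least-cost (best-first) branch and bound for the 0/1 knapsack, using the greedy fractional value as the upper bound; this is a sketch under my own naming, not code from the original post:

```python
import heapq

def bb_knapsack(capacity, items):
    """items: list of (value, weight) pairs; returns the optimal total value."""
    # Sort by value/weight so the fractional relaxation yields an upper bound
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(i, value, room):
        # Upper bound: greedily fill the rest, allowing one fractional item
        b = value
        while i < len(items) and items[i][1] <= room:
            b += items[i][0]
            room -= items[i][1]
            i += 1
        if i < len(items):
            b += items[i][0] * room / items[i][1]
        return b

    best = 0
    # Max-heap on the bound (negated): always expand the most promising node
    live = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while live:
        neg_b, i, value, room = heapq.heappop(live)
        if -neg_b <= best or i == len(items):
            continue                 # bound cannot beat the best solution: kill
        v, w = items[i]
        if w <= room:                # left child: include item i
            best = max(best, value + v)
            heapq.heappush(live, (-bound(i + 1, value + v, room - w),
                                  i + 1, value + v, room - w))
        # Right child: exclude item i
        heapq.heappush(live, (-bound(i + 1, value, room), i + 1, value, room))
    return best

print(bb_knapsack(50, [(60, 10), (100, 20), (120, 30)]))  # 220
```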

##### An application of the Branch and Bound Algorithm is the knapsack problem

1. Numbers inside a node are profit and weight at that node, based on decisions from root to that node
2. Nodes without numbers inside have same values as their parent
3. Numbers outside the node are upper bound calculated by greedy algorithm
1. Upper bound for every feasible left child (xi=1) is same as its parent’s bound
2. Chain of left children in tree is same as greedy solution at that point in the tree
3. We only recompute the upper bound when we can’t move to a feasible left child
4. Final profit and final weight (lower bound) are updated at each leaf node reached by algorithm
1. Solution improves at each leaf node reached
2. No further leaf nodes reached after, because lower bound (optimal value) is sufficient to prune all other tree branches before leaf is reached
5. By using the floor of the upper bound at nodes, we avoid generating the tree below them
1. Since the optimal solution must be an integer, we can truncate upper bounds
2. By truncating bounds at nodes, we avoid exploring those nodes

##### An application of the Branch and Bound Algorithm is the Traveling Salesman Problem

Definition: Find a tour of minimum cost starting from a node S, going through all other nodes exactly once, and returning to the starting point S.

Definitions:

1. A row(column) is said to be reduced iff it contains at least one zero and all remaining entries are non-negative.
2. A matrix is reduced iff every row and column is reduced.

Branching:

1. Each node splits the remaining solutions into two groups: those that include a particular edge and those that exclude that edge
2. Each node has a lower bound.

Bounding: How to compute the cost of each node?

1. Subtracting a constant from any row or column does not change the optimal solution (the path).
2. The cost of the path changes, but not the path itself.
3. Let A be the cost matrix of a graph G = (V, E).
4. The cost of each node in the search tree is computed as follows:
1. Let R be a node in the tree and A(R) its reduced cost matrix
2. The cost of a child S of R, reached via edge (i, j), is computed by:
1. Setting row i and column j of A(R) to infinity
2. Setting A(j, 1) to infinity
3. Reducing the resulting matrix for S, letting RCL be the total reduced cost
4. Then C(S) = C(R) + RCL + A(i, j)
5. For the root, get the reduced matrix A' of A and let L be the total value subtracted from A
6. L represents the lower bound of the path solution
7. The cost of the path is reduced by exactly L

How do we determine the branching edge?

1. The rule favors a solution through left subtree rather than right subtree, i.e., the matrix is reduced by a dimension.
2. Note that the right subtree only sets the branching edge to infinity.
3. Pick the edge that causes the greatest increase in the lower bound of the right subtree, i.e., the lower bound of the root of the right subtree is greater.

#### Difference between Backtracking and Branch and Bound

1. Backtracking performs a DFS of the state space tree and is typically used for feasibility (constraint-satisfaction) problems.
2. Branch and bound generates all children of a node before expanding any other node, may use FIFO, LIFO, or least-cost search, and is typically used for optimization problems, pruning live nodes with bounding functions.

## NP - Hard and NP Complete Problems:

1.Polynomial Algorithms(P):

P is the class of decision problems which can be solved in polynomial time by a deterministic Turing machine. If a problem belongs to P, then there exists at least one algorithm that can solve it from scratch in polynomial time.

Many algorithms complete in polynomial time:

1. All basic arithmetic operations: addition, subtraction, multiplication, division
2. Linear and binary search algorithms over a given set of numbers

2.Nondeterministic Polynomial(NP) :

NP is the class of decision problems which can be solved in polynomial time by a non-deterministic Turing machine. Equivalently, it is the class of problems which can be verified in polynomial time by a deterministic Turing machine.

Problems in NP are not known to be solvable in polynomial time by a deterministic machine (every problem in P also belongs to NP). However, a proposed solution can always be verified in polynomial time.
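
For instance, a Hamiltonian cycle may be hard to find, but a claimed cycle (a "certificate") is easy to check. A small sketch (my own example, not from the original post):

```python
def verify_hamiltonian_cycle(edges, n, tour):
    """Check in polynomial time that `tour` is a Hamiltonian cycle of the
    n-vertex graph whose edge set is `edges` (a set of frozensets)."""
    if sorted(tour) != list(range(n)):      # must visit each vertex exactly once
        return False
    # Every consecutive pair (wrapping around) must be an edge of the graph
    return all(frozenset((tour[i], tour[(i + 1) % n])) in edges
               for i in range(n))

edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}
print(verify_hamiltonian_cycle(edges, 4, [0, 1, 2, 3]))  # True
```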

3.Nondeterministic Polynomial Hard(NP-Hard) :

NP-hard is the class of problems to which all problems in NP can be reduced in polynomial time by a deterministic Turing machine.

They are not only hard to solve but may be hard to verify as well. In fact, some of these problems aren’t even decidable. Examples include:

1. Traveling Salesman Problem, and
2. Graph Coloring

4.Nondeterministic Polynomial Complete(NP-Complete) :

NP-Complete is the intersection of NP-hard and NP. Equivalently, NP-complete is the class of decision problems in NP to which all other problems in NP can be reduced in polynomial time by a deterministic Turing machine.

For any problem that’s NP-complete, there exists a polynomial-time algorithm that can transform it into any other NP-complete problem. This transformation requirement is also called a reduction.

As stated already, there are numerous problems proven to be NP-complete. Among them are:

1. Traveling Salesman
2. Knapsack,
3. Graph Coloring

Curiously, what they have in common, aside from being in NP, is that each can be reduced to the others in polynomial time. These facts together place them in NP-complete.