Optimization is not just about finding a good answer.
It is about finding a better answer than the one you have now.
The hard part is knowing whether your current “best” is truly best, or just locally good.
Core Idea
Optimization is the process of improving a solution according to some objective.
You start with a candidate solution.
You measure how good it is.
Then you search for a better one.
That sounds simple.
But the search space can be huge.
And the best nearby solution may not be the best overall solution.
The Key Structure
A simple optimization loop looks like this:
Candidate Solution → Evaluate Score → Explore Neighbor → Accept or Reject → Repeat
More compactly:
Optimization = search space + objective function + update strategy
The objective function tells you what “better” means.
The update strategy tells you how to move.
Implementation View
At a high level, optimization search works like this:
start with an initial solution
evaluate its score
while stopping condition is not met:
    generate candidate solutions
    compare scores
    choose whether to move
    update the current solution
return the best solution found
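The loop above can be sketched as a small Python function. This is a minimal illustration, not a reference implementation: the toy objective, the neighbor function, and the fixed iteration budget are all assumptions made for the example.

```python
import random

def optimize(objective, initial, neighbor, max_iters=1000):
    """Generic search loop: evaluate, explore a neighbor, accept or reject."""
    current = initial
    current_score = objective(current)
    best, best_score = current, current_score
    for _ in range(max_iters):               # stopping condition: a fixed budget
        candidate = neighbor(current)        # generate a candidate solution
        candidate_score = objective(candidate)
        if candidate_score > current_score:  # compare scores, choose whether to move
            current, current_score = candidate, candidate_score
            if current_score > best_score:
                best, best_score = current, current_score
    return best                              # best solution found

# Toy usage: maximize -(x - 3)^2, whose best score is at x = 3.
random.seed(0)
result = optimize(lambda x: -(x - 3) ** 2, initial=0.0,
                  neighbor=lambda x: x + random.uniform(-0.5, 0.5))
```

Swapping in a different `neighbor` or acceptance rule turns this same skeleton into the named algorithms discussed next.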
This structure appears in many algorithms.
Hill Climbing.
Simulated Annealing.
Genetic Algorithms.
Even mathematical optimization methods follow the same broad idea:
evaluate, improve, repeat.
Concrete Example
Imagine tuning a route, schedule, or model setting.
You start with one possible solution.
Then you slightly modify it.
If the new solution improves the score, you keep it.
If not, you reject it.
That is the basic intuition behind Hill Climbing.
But there is a problem.
What if the current solution is better than all nearby options, but not the best possible solution overall?
That is the local optimum problem.
Local Optimum vs Global Optimum
This is the first concept to understand.
A local optimum is the best solution in its nearby region.
A global optimum is the best solution across the entire search space.
Local optimum:
- looks best from the current neighborhood
- can trap simple algorithms
- may not be the true best answer
Global optimum:
- is the best overall solution
- may be far from the current point
- is often harder to find
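A toy one-dimensional landscape makes the distinction concrete. The two-hump function below is purely illustrative:

```python
import math

def f(x):
    # Purely illustrative: a short hill near x = -2 and a taller hill near x = 2.
    return math.exp(-(x + 2) ** 2) + 2 * math.exp(-(x - 2) ** 2)

# Scan a grid: the global optimum sits on the taller hill.
xs = [i / 100 for i in range(-500, 501)]
global_opt = max(xs, key=f)    # lands at x = 2.0, the taller hill
# A greedy climber started near x = -3 only sees the short hill:
# from its neighborhood, x = -2 "looks best" -- a local optimum.
```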
If you do not understand this distinction, optimization algorithms feel like random strategy names.
Hill Climbing
Hill Climbing is the most intuitive optimization strategy.
It repeatedly moves to a better neighboring solution.
The idea is simple:
If the next move improves the score, take it.
If not, stop or try another neighbor.
Hill Climbing is easy to understand and implement.
But it has a major weakness.
It can get stuck at a local optimum.
That is why more advanced strategies exist.
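A minimal sketch shows the trap directly. The `two_hills` landscape, the step size, and the iteration count are illustrative assumptions, not part of any standard definition:

```python
def hill_climb(objective, x, step=0.1, iters=500):
    """Greedy local search: move only when a neighbor scores strictly better."""
    for _ in range(iters):
        for candidate in (x - step, x + step):   # try another neighbor
            if objective(candidate) > objective(x):
                x = candidate                    # take the improving move
                break
    return x

# Illustrative landscape: a local peak at x = -2, a taller global peak at x = 2.
two_hills = lambda x: -min((x + 2) ** 2, (x - 2) ** 2 - 1)
stuck = hill_climb(two_hills, x=-3.0)
# stuck is about -2: the climber never crosses the valley to reach x = 2.
```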
Hill Climbing vs Simulated Annealing
Simulated Annealing extends Hill Climbing.
Hill Climbing usually accepts only better moves.
Simulated Annealing sometimes accepts worse moves.
That sounds strange at first.
But it helps the algorithm escape local optima.
Hill Climbing:
- simple and greedy
- improves step by step
- can get stuck locally
Simulated Annealing:
- allows occasional worse moves
- explores more widely
- gradually becomes more selective
So the key difference is exploration.
Hill Climbing exploits.
Simulated Annealing explores before settling.
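A sketch of that exploration-then-settling behavior, assuming one common set of choices (an exponential acceptance rule and geometric cooling); real implementations vary in their schedules and neighbor moves:

```python
import math
import random

def simulated_annealing(objective, x, temp=2.0, cooling=0.995, iters=2000):
    """Hill climbing plus a cooling schedule that permits downhill moves."""
    best = x
    for _ in range(iters):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with probability
        # exp(delta / temp), which shrinks as the temperature cools.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        if objective(x) > objective(best):
            best = x
        temp *= cooling          # gradually become more selective
    return best

random.seed(0)
two_hills = lambda x: -min((x + 2) ** 2, (x - 2) ** 2 - 1)
found = simulated_annealing(two_hills, x=-3.0)
# While temp is high, downhill moves are accepted often; that willingness
# to get worse temporarily is what lets the search cross valleys.
```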
Genetic Algorithm
Genetic Algorithms take a different approach.
Instead of improving one solution, they work with a population of solutions.
The algorithm repeatedly selects, mixes, and mutates candidates.
A simple view:
Population → Selection → Crossover → Mutation → New Population
This is useful when the search space is complex.
Instead of betting on one path, the algorithm explores multiple directions at once.
That makes Genetic Algorithms very different from Hill Climbing.
Hill Climbing improves one candidate.
Genetic Algorithms evolve many candidates.
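A minimal population-based sketch on a one-dimensional problem. The blend crossover, Gaussian mutation, and keep-the-top-half selection are illustrative choices; real Genetic Algorithms offer many alternatives for each operator:

```python
import random

def genetic_search(objective, pop_size=30, generations=60):
    """Evolve a population: select the fittest, cross over, mutate."""
    population = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=objective, reverse=True)
        parents = population[: pop_size // 2]   # selection: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                 # crossover: blend two parents
            child += random.gauss(0, 0.3)       # mutation: small random tweak
            children.append(child)
        population = parents + children         # new population
    return max(population, key=objective)

random.seed(0)
two_hills = lambda x: -min((x + 2) ** 2, (x - 2) ** 2 - 1)
best = genetic_search(two_hills)   # the population tends to find the taller hill
```

Because many candidates are spread across the space at once, a single bad starting point no longer dooms the search.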
Newton-Raphson Method
Newton-Raphson is more mathematical.
It uses information about the function’s slope and curvature.
When the function is smooth and the structure is known, this can move quickly toward a solution.
But it depends more heavily on mathematical assumptions.
Compared with search-based methods, Newton-Raphson is less like exploring a landscape blindly.
It is more like using the shape of the landscape to jump efficiently.
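A sketch of one Newton step for minimization, assuming the first and second derivatives are known in closed form. Production implementations add safeguards (line searches, curvature checks) that this toy version omits:

```python
def newton_step_optimize(df, d2f, x, iters=20):
    """Newton-Raphson for optimization: x <- x - f'(x) / f''(x)."""
    for _ in range(iters):
        x = x - df(x) / d2f(x)   # use slope and curvature to jump
    return x

# Minimize f(x) = (x - 3)^2 + 1, so f'(x) = 2(x - 3) and f''(x) = 2.
x_star = newton_step_optimize(lambda x: 2 * (x - 3), lambda x: 2.0, x=10.0)
# A single step lands exactly on x = 3, because f is quadratic and its
# curvature describes the whole landscape.
```

Contrast this with the search-based sketches above, which needed hundreds of evaluations to get near an optimum.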
Search-Based vs Mathematical Optimization
This comparison helps organize the field.
Search-based optimization:
- explores candidate solutions
- works even when the landscape is messy
- can be intuitive and flexible
- may require many evaluations
Mathematical optimization:
- uses structure like gradients or curvature
- can converge quickly when assumptions hold
- may fail or become unstable when assumptions are weak
- often needs a smoother objective
So the right method depends on the problem.
If you know the structure, use it.
If you do not, search.
Recommended Learning Order
If optimization strategies feel scattered, learn them in this order:
- Local Optimum vs Global Optimum
- Hill Climbing
- Simulated Annealing
- Genetic Algorithm
- Newton-Raphson Method
- Heuristic Search
This order works because you first understand the problem.
Then you learn the simplest strategy.
Then you learn how algorithms escape limitations.
Then you compare broader approaches.
Takeaway
Optimization is search under an objective.
The shortest version is:
Optimization = find better solutions according to a score
Hill Climbing moves toward local improvement.
Simulated Annealing sometimes accepts worse moves to escape traps.
Genetic Algorithms explore with many candidates.
Newton-Raphson uses mathematical structure to move faster.
If you remember one idea, remember this:
Every optimization method is a trade-off between exploitation, exploration, and assumptions about the problem.
Discussion
When solving optimization problems, do you usually start with a simple greedy method like Hill Climbing, or jump directly to broader strategies like Simulated Annealing or Genetic Algorithms?
Originally published at zeromathai.com.
Original article: https://zeromathai.com/en/optimization-and-search-strategies-hub-en/
GitHub Resources
AI diagrams, study notes, and visual guides:
https://github.com/zeromathai/zeromathai-ai