Today I went through the CS50 AI course on its website, which is freely available to anyone, and it gave me a much better understanding of the different search algorithms used in AI. I had a class on this topic in college, but it became crystal clear after watching this awesome course.
So the topic is search algorithms: how they work to find an optimal solution with respect to certain criteria or a given environment.
AI can be defined as the attempt to do a task intelligently, like a human; examples are recognising a person or playing a game like tic-tac-toe, which will be used later in this lecture.
First, here is an overview of the basic terms used to define search and its importance in AI. Let's take a look at them:
Agent: An entity that perceives signals/stimuli from its environment and gives a response back.
State: A specific configuration of the environment. For example, in a 15 puzzle, a state is any one way that the numbers are arranged on the board.
Initial state: The state from which the search algorithm starts its exploration. In a navigator app, that would be the current location.
- Actions: The choices an agent can make in a given state in order to do a task. More precisely, actions can be defined as a function: upon receiving state s as input, Actions(s) returns as output the set of actions that can be executed in state s. For example, in a 15 puzzle, the actions of a given state are the ways you can slide squares in the current configuration (4 if the empty square is in the middle, 3 if next to a side, 2 if in a corner).
Actions(s) = {a1, a2, ..., an} (the set of actions that can be executed in state s)
Transition model: Describes the state that results from applying a legal action to a state. More precisely, the transition model can be defined as a function: upon receiving state s and action a as input, Results(s, a) returns the state resulting from performing action a in state s. For example, given a certain configuration of a 15 puzzle (state s), moving a square in any direction (action a) brings about a new configuration of the puzzle (the new state).
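The Actions(s) and Results(s, a) functions above can be sketched in Python for the 15 puzzle. This is a minimal sketch under one assumption of mine (not from the course): a state is a tuple of 16 numbers in row-major order, with 0 standing for the empty square.

```python
def actions(state):
    """Return the set of legal moves: the directions the empty square can slide."""
    i = state.index(0)          # position of the empty square
    row, col = divmod(i, 4)     # 4x4 board, row-major order
    moves = set()
    if row > 0: moves.add("up")
    if row < 3: moves.add("down")
    if col > 0: moves.add("left")
    if col < 3: moves.add("right")
    return moves

def results(state, action):
    """Return the new state after sliding the empty square in the given direction."""
    i = state.index(0)
    offset = {"up": -4, "down": 4, "left": -1, "right": 1}[action]
    j = i + offset
    board = list(state)
    board[i], board[j] = board[j], board[i]   # swap empty square with its neighbour
    return tuple(board)
```

With the empty square in the bottom-right corner, `actions` returns exactly 2 moves, matching the "2 if in the corner" count above.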
State space: The set of all states that can be reached from the initial state. For example, in a 15 puzzle, the state space consists of all the 16!/2 configurations of the board that can be reached from any initial state. The state space can be visualised as a directed graph with states represented as nodes and actions represented as arrows between nodes.
Goal test: The condition that determines whether a given state is a goal state, i.e. the state the agent needs to reach, after which the task is finished. For example, in a navigator app, the goal test would be whether the current location of the agent (the representation of the car) is at the destination. If it is, the problem is solved; if not, we continue searching.
Path cost: A numerical cost associated with doing a certain task; it lets us determine how optimal a solution to a search problem is. For example, a navigator app does not simply bring you to your goal; it does so while minimising the path cost, finding the fastest way possible for you to get to your goal state.
Let's solve a search problem from my learning:
In this lecture, various algorithms are applied to two problems: maze path finding and tic-tac-toe (with chess as an example in certain cases). The things we need to clarify first are the solution and the data structure in which results/states can be stored so we can check them later on.
A stack, queue, tree, or linked list can be useful as the data structure depending on the task, but for now let's take a general data structure, which I'll call the frontier, that initially holds a single node.
- Node: Nodes contain information that makes them very useful for the purposes of search algorithms. They contain a state, which can be checked using the goal test to see if it is the final state. If it is, the node's path cost can be compared to other nodes' path costs, which allows choosing the optimal solution. Once the node is chosen, by virtue of storing the parent node and the action that led from the parent to the current node, it is possible to trace back every step of the way from the initial state to this node, and this sequence of actions is the solution.
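The node described above can be sketched as a small Python class (the field and method names here are my choice, not the course's):

```python
class Node:
    """A search-tree node: a state plus how we got there and at what cost."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # the configuration this node holds
        self.parent = parent        # the node this one was expanded from
        self.action = action        # the action that led from parent to here
        self.path_cost = path_cost  # accumulated cost from the initial state

    def solution(self):
        """Trace parents back to the root and return the sequence of actions."""
        actions = []
        node = self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```

Once a goal node is found, calling `solution()` on it recovers the whole sequence of actions from the initial state, exactly the trace-back described above.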
You can say that nodes are data or state holders for a particular case. Now, in our frontier data structure, we have a parent node which contains the initial state S(ini). Here is the basic algorithm to solve a problem, say searching for a path in a given graph:
Let's say we want to find a path from 0 to 2, and here two paths are possible:
- 0->1->2
- 0->4->3->2
where the first one is the optimal solution (its path cost is lower than the other path's).
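The little graph above can be written down as an adjacency map. I'm assuming every edge costs 1 (the post doesn't give explicit costs), so the path cost is just the number of edges taken:

```python
# The 0-to-2 example graph as an adjacency map: node -> list of neighbours.
graph = {
    0: [1, 4],
    1: [2],
    2: [],
    3: [2],
    4: [3],
}

def path_cost(path):
    """With unit edge costs, the cost of a path is its number of edges."""
    return len(path) - 1
```

Under this assumption, `path_cost([0, 1, 2])` is 2 while `path_cost([0, 4, 3, 2])` is 3, which is why the first path is the optimal one.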
Now the base algorithm for this type of problem is:
Repeat:
- If the frontier is empty, stop: there is no solution to the problem.
- Remove a node from the frontier. This is the node that will be considered.
- If the node contains the goal state, return the solution and stop.
- Else:
* Expand the node (find all the new nodes that could be reached from this node), and add the resulting nodes to the frontier.
* Add the current node to the explored set.
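The loop above can be turned into a minimal Python sketch. I'm using the small 0-to-2 graph from earlier with a plain list as the frontier, popped from the end (stack-like); the function name and structure are mine, not the course's. Swapping in a queue or priority queue for the frontier is exactly what the different search algorithms change.

```python
def search(graph, start, goal):
    """Generic search loop: returns a path from start to goal, or None."""
    frontier = [(start, [start])]   # each entry: (state, path that reached it)
    explored = set()
    while frontier:                 # empty frontier -> no solution, loop ends
        state, path = frontier.pop()        # remove a node to consider
        if state == goal:                   # goal test
            return path                     # the solution
        explored.add(state)                 # add current node to explored set
        for neighbour in graph.get(state, []):      # expand the node
            if neighbour not in explored and all(s != neighbour for s, _ in frontier):
                frontier.append((neighbour, path + [neighbour]))
    return None

graph = {0: [1, 4], 1: [2], 2: [], 3: [2], 4: [3]}
```

Note that with this stack-like frontier, `search(graph, 0, 2)` returns `[0, 4, 3, 2]`, the longer path: a concrete illustration of why this bare template gives *a* solution, not necessarily the optimal one.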
This is just a simple template and does not guarantee an optimal solution; that is where the specific search algorithms come in.
That will be in the next part.