In the previous post I covered what Ansible is, how our code parses the information from the playbook and hosts, and how, once that data has been converted into commands, we can execute them on the servers and return the responses.
In this article, we will take a look at the strategies that can be used for execution. The conversion of data from a YAML file into commands is not covered in this post, but you can check the code here.
Strategies
There are two strategies to choose from:
- Linear
- Free
There is a common denominator for both strategies: MaxConcurrency. It is not a good idea to spin up 100 goroutines just because the playbook has to run on 100 servers.
Linear Strategy
This is the default strategy. Let's say you have five tasks that are supposed to run in parallel on three machines. In this strategy, the execution of the next task only starts once the previous task has completed on all the servers. If a task fails on a server, we may or may not proceed with the next task for that server, depending on the metadata provided for the task (skip_errors or not).
Again, if the number of hosts exceeds the maximum concurrency, we split the execution into batches and run those batches sequentially. Each batch holds up to that many hosts, which run the tasks in parallel. We could achieve something similar with a semaphore as well (a minimal sketch follows below), but we will keep it simple here.
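For reference, here is what that semaphore variant might look like, using a buffered channel as the semaphore. This is just a sketch: the names hosts, maxConcurrent, and runOnHost are placeholders for illustration, not part of the project code.

sem := make(chan struct{}, maxConcurrent)
var wg sync.WaitGroup
for _, h := range hosts {
	h := h // per-iteration copy for the closure (needed before Go 1.22)
	wg.Add(1)
	sem <- struct{}{} // blocks once maxConcurrent hosts are already running
	go func() {
		defer wg.Done()
		defer func() { <-sem }() // free the slot when this host is done
		runOnHost(h)             // placeholder for the per-host work
	}()
}
wg.Wait()

Unlike fixed batches, this keeps up to maxConcurrent hosts busy for as long as there is work left.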
This is what the flow would look like:
- Parse all the tasks into commands
- Create batches of hosts
- For each batch:
  - Run tasks on each host in parallel
  - Wait until all the tasks are finished
  - Proceed to the next task
For the waiting part, we can simply use a sync.WaitGroup.
This is what the code might look like:
func (e *Engine) LinearStrategy(respObj PlayDoc) {
	opts := []ExecOutput{}
	var mu sync.Mutex // protects opts, which is appended to from multiple goroutines
	// Walk the hosts in batches of at most maxConcurrent.
	for start := 0; start < len(respObj.hosts); start += e.maxConcurrent {
		end := start + e.maxConcurrent
		if end > len(respObj.hosts) {
			end = len(respObj.hosts)
		}
		for _, t := range respObj.tasks {
			for _, h := range respObj.hosts[start:end] {
				// Copy the loop variables so each goroutine captures its own values (needed before Go 1.22).
				h := h
				t := t
				if !e.sameOS(t, h) {
					continue
				}
				e.wg.Add(1)
				go func() {
					// Executing the ssh commands for this host.
					defer e.wg.Done()
					for _, c := range t.cmds {
						res, err := e.sshService.execute(h, c)
						if err != nil {
							continue
						}
						// Stop on a command error unless the task is marked to skip errors.
						if strings.TrimSpace(res.Err) != "" && !t.skip_errors {
							break
						}
						mu.Lock()
						opts = append(opts, res)
						mu.Unlock()
					}
				}()
			}
			// Wait for the current task to finish on every host in the batch
			// before moving on to the next task.
			e.wg.Wait()
		}
	}
	fmt.Println(opts)
}
Free Strategy
In this strategy, all the hosts run through their tasks in parallel, and we wait until every host has finished all of its tasks before moving on.
The free strategy is faster than the linear strategy because we don't pause after each task. With the linear strategy, a server with better network bandwidth or more compute still has to wait for the task to complete on the other servers before it can move on.
This is what the code might look like:
func (e *Engine) FreeStrategy(respObj PlayDoc) {
	e.wg.Add(len(respObj.hosts))
	opts := []ExecOutput{}
	var mu sync.Mutex // protects opts, which is appended to from multiple goroutines
	for _, h := range respObj.hosts {
		// Copy the loop variable so each goroutine captures its own host (needed before Go 1.22).
		h := h
		go func() {
			defer e.wg.Done()
			// Each host works through all of its tasks independently.
			for _, t := range respObj.tasks {
				if !e.sameOS(t, h) {
					continue
				}
				for _, c := range t.cmds {
					res, err := e.sshService.execute(h, c)
					fmt.Println("Response is ", res)
					if err != nil {
						continue
					}
					// Stop on a command error unless the task is marked to skip errors.
					if strings.TrimSpace(res.Err) != "" && !t.skip_errors {
						break
					}
					mu.Lock()
					opts = append(opts, res)
					mu.Unlock()
				}
			}
		}()
	}
	e.wg.Wait()
	fmt.Println(opts)
}
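One design note on result collection: both snippets above append to a shared slice from several goroutines, which is why the mutex is there. An alternative is to send results over a channel and let a single loop collect them, so no mutex is needed. The sketch below is only an illustration: runTasks is a hypothetical helper standing in for the per-host work, and hosts is a placeholder slice.

results := make(chan ExecOutput)
var wg sync.WaitGroup
for _, h := range hosts {
	h := h // per-iteration copy for the closure (needed before Go 1.22)
	wg.Add(1)
	go func() {
		defer wg.Done()
		for _, res := range runTasks(h) { // runTasks is a placeholder for the per-host work
			results <- res
		}
	}()
}
// Close the channel once every sender is done so the collecting loop below can finish.
go func() {
	wg.Wait()
	close(results)
}()
opts := []ExecOutput{}
for res := range results {
	opts = append(opts, res)
}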
And we're done. So we have covered:
- Parsing tasks
- Executing a bunch of tasks using different strategies
- Executing SSH commands on remote hosts
Parsing the inventory file is not covered here, but one thing worth mentioning: while parsing the inventory file, we need to make sure there is no cycle. When grouping different hosts, or nesting groups within groups, one may end up generating a cycle, so we need to check for cycles before executing the tasks. This can be done with graph algorithms such as Depth First Search or Breadth First Search; a small sketch follows below. You can check how I have validated the inventory data for this project here.
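As a rough illustration, here is a small depth-first search over a group-to-members adjacency map that reports whether the inventory contains a cycle. The map[string][]string shape and the hasCycle name are assumptions for the sketch, not the project's actual inventory types.

// hasCycle runs a depth-first search over a group -> members adjacency map and
// reports whether any group is (directly or indirectly) a member of itself.
// The map shape is an assumption for illustration, not the project's real types.
func hasCycle(groups map[string][]string) bool {
	const (
		unvisited = iota
		inStack   // currently on the DFS path
		done      // fully explored
	)
	state := map[string]int{}
	var visit func(string) bool
	visit = func(g string) bool {
		state[g] = inStack
		for _, child := range groups[g] {
			switch state[child] {
			case inStack:
				return true // we came back to a group on the current path: cycle
			case unvisited:
				if visit(child) {
					return true
				}
			}
		}
		state[g] = done
		return false
	}
	for g := range groups {
		if state[g] == unvisited && visit(g) {
			return true
		}
	}
	return false
}

For example, an inventory where group web contains group app and app in turn contains web would make hasCycle return true, and we would refuse to run the playbook.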
And that's it. We have a project that resembles Ansible: it can run several tasks in parallel on a bunch of machines.