If you use DigitalOcean, Linode, or any other VPS provider for that matter, you will know it can be a pain to set everything up from scratch. You have to SSH into the box manually and set up Postgres, a virtual environment, security, and so forth.
In this guide, I will walk through building a basic deployment automation module using paramiko.
Building our core module
To get started, let's first install paramiko:
pip install paramiko
Next, let's set up a function to connect to our server using SSH keys:
./serverbuilder/core.py
import os

import paramiko


def connect(host, user="root", keypath="id_rsa"):
    ssh = paramiko.SSHClient()
    # Automatically accept unknown host keys so first connections don't fail.
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    keypath = f'/home/{os.environ.get("USER")}/.ssh/{keypath}'
    ssh.connect(host, username=user, key_filename=keypath)
    return ssh
The above function will look for an SSH key in ~/.ssh ("~/" is just a shortcut for your home directory, e.g. "/home/myname/").
In this case we default to "id_rsa" - which is the default key name on Linux when you run:
ssh-keygen
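As an aside, the hard-coded /home/... path assumes a Linux client with the USER environment variable set. A more portable alternative (my own suggestion, not part of the original module) is os.path.expanduser:

```python
import os

# expanduser resolves "~" to the current user's home directory,
# without relying on the USER environment variable or a /home prefix.
keypath = os.path.expanduser("~/.ssh/id_rsa")
print(keypath)
```

This behaves the same on Linux but also works on macOS, where home directories live under /Users.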
Great, now that we can connect to a remote box, let's set up a function to actually run remote commands:
./serverbuilder/core.py
def runCommand(client: paramiko.SSHClient, command: str):
    try:
        stdin, stdout, stderr = client.exec_command(command)
        # Stream both output channels so we can see progress and errors.
        for line in stdout.readlines():
            print(line)
        for line in stderr.readlines():
            print(line)
        # Grab the exit status before closing the channel files.
        exit_status = stdout.channel.recv_exit_status()
        stdin.close()
        stdout.close()
        stderr.close()
        if exit_status != 0:
            return False
    except Exception as e:
        print(e)
        return False
    return True
The above function takes the client object returned by our earlier connect function, plus a command - which can be any Bash string, e.g. "ls -la". It returns True on success, and False if the command exits non-zero or raises an exception.
Finally, to complete core.py, we need a function that loads our Bash code from script files and runs it using the "runCommand" function above.
./serverbuilder/core.py
def runTask(client: paramiko.SSHClient, task: str, vars: dict):
    result = False
    with open(f'./tasks/{task}.sh', "r") as f:
        cmd = f.read()
        # Substitute #key# placeholders with their values before running.
        for k, v in vars.items():
            cmd = cmd.replace(f'#{k}#', v)
        print("Running: {}".format(cmd))
        result = runCommand(client, cmd)
    return result
Setup BASH scripts
Unfortunately, we can't do everything in Python (well, you can, but Bash is usually easier). Therefore we are going to need a tasks folder of scripts in our module. This is what our module structure should look like:
├── core.py
├── __init__.py
├── main.py
└── tasks
├── setupuser.sh
└── setupvenv.sh
As you can see, there are two tasks currently:
- setupuser.sh - sets up a dedicated user to run our virtual environment and the project.
- setupvenv.sh - sets up the virtual environment.
tasks/setupuser.sh
if ! sudo getent passwd "#username#" >/dev/null; then
    echo "Setting up user: #username#"
    sudo useradd -m -s /bin/bash #username#
    mkdir -p /home/#username#
    chown -R #username#:#username# /home/#username#
    echo "Done"
else
    echo "User already exists. Exiting..."
fi
Note: there is no need for a shebang line here, because the script's contents are sent to the SSH session as a single command string rather than executed as a file. On Ubuntu, and most Linux distros, this approach should work just fine.
tasks/setupvenv.sh
sudo apt-get install -y python3-venv
sudo -u #username# python3 -m venv /home/#username#/.venv
You will notice we have #username# sprinkled throughout this code. This is just a template variable we can pass to the script. If you recall the runTask function from above, we inject our variable data in this block:
for k, v in vars.items():
    cmd = cmd.replace(f'#{k}#', v)
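To see the substitution in isolation, here is a quick standalone sketch of that replacement loop (the script text and variables below are made up for illustration):

```python
# A miniature "task" with a #username# placeholder, as loaded from disk.
cmd = 'sudo useradd -m -s /bin/bash #username#\necho "Done"'
vars = {"username": "djangouser"}

# Replace every #key# placeholder with its value, exactly as runTask does.
for k, v in vars.items():
    cmd = cmd.replace(f'#{k}#', v)

print(cmd)
```

After the loop, every occurrence of the placeholder has been replaced by the literal value, so the string is ready to be sent over SSH.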
Putting it all together
So now we have a basic framework to build upon. To put this into action, we just need to set up a main.py as follows:
from core import connect, runTask

if __name__ == '__main__':
    client = connect("192.168.1.1")
    vars = {
        "username": "djangouser",
    }
    tasks = [
        "setupuser",
        "setupvenv",
    ]
    for t in tasks:
        result = runTask(client, t, vars)
        if result is False:
            print(f'Task: {t} did not execute correctly. Refusing to continue with next task.')
            break
    client.close()
We start off by creating a client; thereafter we set up our variables and define a list of tasks - these are the filenames of the Bash scripts we set up earlier. Finally, we loop through each task and call runTask, stopping as soon as any task fails.
The username in this case can be any valid Linux username.
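Since the username is interpolated straight into shell commands, it is worth sanity-checking before running the tasks. Here is a simple check (my own addition, not part of the module; the regex approximates useradd's default naming rules):

```python
import re

# Conservative approximation of a valid Linux username: a lowercase
# letter or underscore first, then letters, digits, underscores, or
# hyphens, up to 32 characters total.
USERNAME_RE = re.compile(r'^[a-z_][a-z0-9_-]{0,31}$')

def is_valid_username(name: str) -> bool:
    return bool(USERNAME_RE.match(name))

print(is_valid_username("djangouser"))  # a valid name
print(is_valid_username("Bad User!"))   # rejected: space, uppercase, punctuation
```

You could call this in main.py before the task loop and bail out early on a bad value.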
Conclusion
Wasn't that easy? You now have a simple yet powerful framework to build upon. Simply add Bash scripts to the tasks folder and update the tasks list to run through all your build steps.