How do you usually SSH to an AWS (Amazon Web Services) EC2 instance? If your answer is:
ssh -i <your pem file> <username>@<ip address of server>
Then you should read this tutorial.
The method above for connecting to an AWS EC2 instance, or any remote server, is absolutely correct. There is nothing wrong with it, and it is a highly secure way of connecting to a remote server. But imagine having to connect to 15 different servers almost every day (15 different IP addresses to remember), each with a different private key file (the pem file in the example above). Say on some of the servers you need to connect as user ubuntu and on others as user ec2-user, and so on. Also, let us say you want some port forwarding (more on this later) on some of those connections. Remembering all these configs for even a handful of servers can be a pain, and handling everything with the method above becomes a mess. Do you see the ugliness of it, the disarray? Would it not be much easier if you could just write the command:
$ ssh dev-server
$ ssh production-server
Imagine executing these commands from any directory, without having to remember the location of your pem files (private keys), the username you want to connect with, or the IP address of the server. This would make life so much better. That's exactly what an SSH config file is meant for. As its name suggests, it's a file where you provide all sorts of configuration options: the server IP address, the location of the private key file, the username, port forwarding, and so on. And here you give each server an easy-to-remember name like dev-server or production-server.
Now, do you see the beauty of it? The possibilities, the wonder? Well, if you do and you wish to learn how to explore these possibilities, then read on.
We will quickly go through a brief introduction to SSH and the concept of private and public keys. Then we will see how to SSH to an AWS instance without using a config file, and finally we will learn how to connect to the same instance using an SSH config file instead. So, this brings us to our first question:
SSH stands for Secure Shell. The Wikipedia definition says:
Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network
In very simple terms, it is a secure way of logging in to a remote server. It gives you a terminal on the remote server where you can execute shell commands.
When you connect to a remote server using SSH from your local machine, the local machine is the client and the remote server is the server. The client machine runs a process called an SSH client, whose task is to initiate SSH connection requests and participate in establishing the connection with the server. The remote server runs a process called an SSH server, whose task is to listen for SSH connection requests, authenticate them, and, on successful authentication, provide access to the remote server's shell. When we wish to connect to a remote server, we give the SSH client the server's IP address, the username we want to log in with, and a password or private key.
Typically, when we connect to a remote server via SSH, we do it using public-private key based authentication. Public and private keys are essentially base64-encoded strings stored in files, and they are generated in pairs. Think of them as two different keys that are needed together to open a lock, and think of establishing an SSH connection as the process of opening that lock. This process requires both keys of the same pair: a private key and its corresponding public key. We keep our private key file on our local machine, and the server stores our public key.
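As an aside, you can generate such a key pair yourself with ssh-keygen. A rough sketch (the key type, scratch path, and comment below are arbitrary choices for illustration):

```shell
# Generate an ed25519 key pair into a scratch directory (empty passphrase, demo only).
mkdir -p /tmp/ssh-demo
rm -f /tmp/ssh-demo/demo_key /tmp/ssh-demo/demo_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/ssh-demo/demo_key -C "demo key" -q

# demo_key is the private half (keep it to yourself);
# demo_key.pub is the public half (this is what the server stores).
ls /tmp/ssh-demo
```

The public key file is a single line of plain text, which is why a server can simply keep a list of them.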
Let us say we wish to log in to a hypothetical remote server with IP address 220.127.116.11 as user john.
We give our SSH client the address of the server (220.127.116.11), the username we want to log in with (john), and the private key file to use. The SSH client contacts the SSH server at that address and asks it to bring out the public key for user john in order to open the lock (that is, to authenticate john and give him access to the remote server via SSH).
The SSH server checks the list of public keys it has and brings out the public key for john. The SSH client and SSH server then insert their respective private and public keys into the common lock. If the keys belong to the same pair, the lock opens and the connection is established. If the SSH server does not have a public key for user john, the lock does not open and authentication fails.
The above analogy is an oversimplification. The actual process is somewhat more complex. If you wish to understand the details of how it actually works, I would recommend this article on DigitalOcean.
When we create a new AWS EC2 instance, for example using an Amazon Linux AMI or an Ubuntu Server AMI, at the last step we are asked whether to create a new key pair or choose an existing one. If you are doing this for the first time, you will need to create a new key pair and provide a name for it at this step. Let's say you name it MyKeyPair.
Before being able to proceed, you need to click the Download Key Pair button. This generates a public-private key pair and lets you download the private key as a MyKeyPair.pem file. When you then click the Launch Instance button, AWS automatically adds the public key of the pair to the newly created EC2 instance. Public keys live in the ~/.ssh/authorized_keys file of the user you log in as. So if you chose the Amazon Linux AMI while creating the EC2 instance, the key is added to /home/ec2-user/.ssh/authorized_keys; similarly, if you used the Ubuntu Linux AMI, the public key is added to /home/ubuntu/.ssh/authorized_keys.

The first thing you need to do is change the permissions of your private key file (MyKeyPair.pem). Navigate to the directory where the file is located and run the following command:
$ chmod 400 MyKeyPair.pem
This makes the private key file readable only by you; nobody else can read or write it. Now, in order to SSH to an EC2 instance, we would execute the following command:
$ ssh -i <path to MyKeyPair.pem> <username>@<ip address of the server>
So, for example, if the IP address of the server is 22.214.171.124 and we chose Ubuntu Linux while creating the EC2 instance, then the username is ubuntu, and our command becomes:
$ ssh -i MyKeyPair.pem ubuntu@22.214.171.124
This is assuming we are running this command from the directory containing our MyKeyPair.pem file. If we are executing this command from some other directory then we will need to provide the correct path of the MyKeyPair.pem file. Similarly, if we used Amazon Linux AMI while creating the EC2 instance, then username in that case becomes ec2-user.
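To make the server-side half of this concrete, here is a rough sketch of what AWS effectively does on the instance at launch: it appends your public key to the login user's authorized_keys file, with the permissions sshd expects. This sketch uses a scratch directory and a fabricated key line instead of a real home directory, purely for illustration:

```shell
# Pretend this is the home directory of the login user (e.g. /home/ubuntu).
HOME_DIR=/tmp/demo-home
mkdir -p "$HOME_DIR/.ssh"
chmod 700 "$HOME_DIR/.ssh"

# Append the public key; each line in authorized_keys is one allowed key.
echo "ssh-ed25519 AAAAC3...examplekey MyKeyPair" >> "$HOME_DIR/.ssh/authorized_keys"
chmod 600 "$HOME_DIR/.ssh/authorized_keys"

ls -ld "$HOME_DIR/.ssh" "$HOME_DIR/.ssh/authorized_keys"
```

The permission bits matter: sshd will typically refuse keys in an authorized_keys file that is writable by other users.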
So, this explains how AWS generates public-private key pairs when you create an EC2 instance and how you can use the private key to connect to an EC2 instance. Next we will learn how to do the same using an SSH config file.
We have already discussed what an SSH config file is. Now we will create one and use it to connect to the same EC2 instance we connected to earlier. The SSH config file needs to be in the ~/.ssh directory of the client machine, which in our case is our local machine. So, go to the ~/.ssh directory (create it if it does not exist), create a file named config, open it, and add the following contents:
Host <an easy to remember name for the server>
    HostName <IP address of the server>
    IdentityFile <full path of the private key file>
    User <username>
Replace the values in <> with the actual values for your setup. For example, if we used the Ubuntu Linux AMI, the IP address of the server is 126.96.36.199, and the private key file (MyKeyPair.pem) is located in the /home/mandeep/private_keys directory, then the contents of the config file become:

Host my-server
    HostName 126.96.36.199
    IdentityFile /home/mandeep/private_keys/MyKeyPair.pem
    User ubuntu
Let us see what each of these lines mean:
- Host: An easy-to-remember name for the server. This is only for your reference.
- HostName: The fully qualified domain name or IP address of the server. In our example we used an IP address, but it can also be a fully qualified domain name like api.example.com.
- IdentityFile: The absolute path of the private key file.
- User: The username to log in as. This user must exist on the server and have the public key in its ~/.ssh/authorized_keys file.
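A handy way to check that your config is being parsed the way you expect, without actually connecting, is ssh's -G flag, which prints the options ssh would use for a given host alias. A small sketch, writing the example config (with its placeholder IP and path) to a temporary file; the -F flag is only used here so the demo doesn't touch your real ~/.ssh/config, which ssh reads automatically:

```shell
# Write the example config to a temp file (placeholder IP and key path).
cat > /tmp/demo_ssh_config <<'EOF'
Host my-server
    HostName 126.96.36.199
    IdentityFile /home/mandeep/private_keys/MyKeyPair.pem
    User ubuntu
EOF

# -G prints the fully resolved configuration for the alias without connecting.
ssh -F /tmp/demo_ssh_config -G my-server | grep -E '^(hostname|user|identityfile) '
```

If the alias were misspelled or the config file malformed, the hostname line would fall back to the literal name you typed, which makes typos easy to spot.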
Once you save this file, you can easily connect to your EC2 instance by running the following command in the terminal:
$ ssh my-server
Here, it does not matter which directory you execute this command from. You can add as many configurations as you want to your config file. For example, if you also wish to connect to another server with IP address 184.108.40.206, private key MySecondKey.pem, and username ec2-user, then your config file should look like this:

Host my-server
    HostName 126.96.36.199
    IdentityFile /home/mandeep/private_keys/MyKeyPair.pem
    User ubuntu

Host my-second-server
    HostName 184.108.40.206
    IdentityFile /home/mandeep/private_keys/MySecondKey.pem
    User ec2-user
Now you can connect to my-second-server by running the command:
$ ssh my-second-server
That's it. That's how you create an SSH config file. Easy, isn't it? And once you start using it, it's hard to imagine living without it. It makes life so much better.
So, we know how to SSH to a remote server using config files. What next?
Well, there are plenty of configuration options one can provide in a config file, and discussing all of them is beyond the scope of this tutorial. You can refer to the documentation here for the complete list of options, but I will be discussing the two options that I usually find quite handy: LocalForward and ForwardAgent.
Let us discuss this with an example. Consider a scenario where you have a remote server with the domain name redis.mydomain.com, and we are running some process on this server that is not publicly accessible. For example, let us say we are running a Redis server on this remote server on port 6379, which can only be accessed after logging in to the remote server, not from outside. Now let's say we need to access this remote Redis server from a script running on our local machine. How do we do this?
SSH tunneling allows us to map a port on our local machine to an ip_address:port on the remote server. For example, we can map port 6389 on our local machine to the address localhost:6379 on the remote server. After doing this, our local machine behaves as if the Redis server (which is actually running on the remote server at localhost:6379) were running locally on port 6389. So, when you hit localhost:6389 on your local machine, you are actually hitting the Redis server running on the remote server on port 6379.
How do we do this using our SSH config file?
We just need to add an additional property, LocalForward. Here is an example:
Host Redis-Server
    HostName redis.mydomain.com
    IdentityFile /home/mandeep/private_keys/RedisServerKey.pem
    LocalForward 6389 localhost:6379
    User ubuntu
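For reference, the LocalForward directive is equivalent to passing -L 6389:localhost:6379 on the ssh command line. You can sanity-check how ssh parses the directive, again without opening a connection, using -G together with -o options (redis.mydomain.com is the placeholder host from the example above):

```shell
# Show the resolved forwarding and user, without connecting anywhere.
ssh -G \
    -o "LocalForward=6389 localhost:6379" \
    -o "User=ubuntu" \
    redis.mydomain.com | grep -E '^(user|localforward)'
```

After connecting for real with this config, a local client such as redis-cli would point at localhost:6389 and the tunnel carries the traffic to port 6379 on the server.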
This approach comes in quite handy when you want to access a server that is part of a VPC (Virtual Private Cloud) and not publicly accessible, for example an ElastiCache instance, an RDS instance, etc.
This property allows your SSH session to acquire the credentials of your local machine. Consider a scenario where you have a private Git repository on GitHub. You can access the repository either via HTTPS using a username and password, or via SSH using a private key. The username/password approach is less secure and not recommended. To access your repo via SSH, what we typically do is create a public-private key pair, stored in the ~/.ssh directory as id_rsa (private key) and id_rsa.pub (public key) files. Once we add our public key (id_rsa.pub) to our GitHub account, we can access our repository via SSH. This works well on our local machine.

Now consider a scenario where you need to SSH to a remote server and access the Git repository from that server. You have two options here. One is to copy your private key (id_rsa) file into the ~/.ssh directory on the remote server. This is a bad approach, since you are not supposed to share your private key. Another approach would be to generate a new key pair on the server and add its public key to the GitHub repo. There is a problem with both approaches: anyone who can SSH to the remote server will be able to access the Git repository. Let's say we don't want that. We only want developers who already have access to the repo through their own private keys to be able to access it; anybody else who can SSH to the remote server but does not have access to the repo should not be able to reach it from there. This is where the ForwardAgent property comes in quite handy. You can add it to your config file as shown below:
Host App-Server
    HostName app.mydomain.com
    IdentityFile /home/mandeep/private_keys/AppServerKey.pem
    User ubuntu
    ForwardAgent yes
After adding this property to your config file, when you SSH to the server using the following command:
$ ssh App-Server
Then the SSH session that opens acquires the credentials (the id_rsa file) from your local machine. Now, even if there is no ~/.ssh/id_rsa file on the remote server, any Git repository that you can access on your local machine can also be accessed from the remote server.
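One detail worth knowing: agent forwarding only forwards keys that your local ssh-agent is actually holding, so it's useful to check what the agent has loaded with ssh-add -l before connecting. A small sketch, using a throwaway agent and a freshly generated demo key rather than your real id_rsa:

```shell
# Start a throwaway agent and load a demo key into it.
eval "$(ssh-agent -s)" > /dev/null
rm -f /tmp/agent_demo_key /tmp/agent_demo_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/agent_demo_key -q
ssh-add /tmp/agent_demo_key 2> /dev/null

# List the identities the agent will offer (and forward with ForwardAgent yes).
ssh-add -l | tee /tmp/agent_demo_list

# Clean up the throwaway agent.
ssh-agent -k > /dev/null
```

If ssh-add -l reports "The agent has no identities", agent forwarding will not help until you ssh-add your key locally.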
With this tutorial we learned the importance of an SSH config file and saw how it can make our lives easier. If you found this tutorial helpful and believe that it can help others, please share it on social media using the social media sharing buttons below. If you like my tutorials and my writing style, follow me on twitter. If you feel I have made any mistakes or any information in this article is incorrect, feel free to mention those in the comments below. Thanks! Happy coding :-)