Robin Moffatt
SSH keys explained

Originally published at, reproduced here with permission.

What are SSH keys for?

You create a pair of keys using ssh-keygen. These are plain text
and can be copied and pasted as required. One is private
(e.g. id_rsa), and you need to protect this as you would any other
security artifact such as server passwords; you can optionally
secure it with a passphrase. The other is public (e.g. id_rsa.pub), and you can share it with anyone.

Your public key is placed by the server's administrator on any
server you need access to. It goes in a file called
authorized_keys, in the .ssh folder in the user's home folder. As
many public keys as need access can be placed in this file. Don't
forget the leading dot on .ssh.
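Done by hand, that installation looks something like this on the server. This is a sketch only; the throwaway key generated here just stands in for whatever public key you've actually been given:

```shell
# Sketch: install a public key on a server by hand. The throwaway key
# pair generated here stands in for the real public key you'd be given.
ssh-keygen -q -N "" -f /tmp/demo_id_rsa       # stand-in key pair
mkdir -p ~/.ssh && chmod 700 ~/.ssh           # don't forget the leading dot
cat /tmp/demo_id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys              # sshd insists on tight modes
```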

Why are SSH keys good?

  • You don't need a password to log in to a server, which is a big time saver and productivity booster.
  • Authentication becomes about "this is WHO may access something" rather than "here is the code to access it, we have no idea who knows it though".
  • It removes the need to share server passwords
    • Better security practice
    • Easier auditing of exactly who used a server
  • It enables the ability to grant temporary access to servers, and precisely control when it is revoked and from whom.
  • Private keys can be protected with a passphrase, without which they can't be used.
  • Using SSH keys to control server access is a lot more secure since you can disable server password login entirely, thus kiboshing any chance of brute force attacks
  • SSH keys can be used to support automatic connections between servers for backups, starting jobs, etc, without the need to store a password in plain text


  • SSH keys are just plain text, making them dead easy to backup in a Password Manager such as LastPass, KeePass, or 1Password.
  • SSH keys work just fine from Windows. Tools such as PuTTY and WinSCP support them, although you need to initially change the format of the private key to ppk using PuTTYGen, an ancillary PuTTY tool.
  • Whilst SSH keys reside by default in your user home .ssh folder, you
    can store them on a cloud service such as Dropbox and then use them
    from any machine you want.

    • To make an ssh connection using a key not in the default
      location, use the -i flag, for example

      ssh -i ~/Dropbox/ssh-keys/mykey user@host
  • To see more information about setting up SSH keys, type:

    man ssh
  • The authorized_keys file is space separated, and the last entry on
    each line can be a comment. This normally defaults to the user and
    host name where the key was generated, but can be freeform text to
    help identify the key more clearly if needed. See man sshd for the
    full spec of the file.
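For reference, a single authorized_keys entry looks something like the following (key material truncated here; the trailing comment is the freeform part):

```
ssh-rsa AAAAB3NzaC1yc2EAAA...rest-of-key-material...== robin@RNMMBP
```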


Setting up SSH keys

Working with SSH keys involves taking the public key from a pair, and
adding that to another machine in order to allow the owner of the pair's
private key to access that machine. What we're going to do here is
generate a unique key pair that will be used as the identity across the
cluster. So each node will have a copy of the private key, in order to
be able to authenticate to any other node, which will be holding a copy
of the public key (as well as, in turn, the same private key).

In this example I'm going to use my own client machine to connect to the
cluster. You could easily use any of the cluster nodes too if a local
machine would not be appropriate.

SSH key strategy

We've several ways we could implement the SSH keys. Because it's a
purely sandbox cluster, I could use the same SSH key pair that I
generate for the cluster on my machine too, so the same public/private
key pair is shared across the cluster and my client machine alike.

If we wanted a bit more security, a better approach might be to
distribute my personal SSH key's public key across the cluster too, and
leave the cluster's private key to truly identify cluster nodes alone.
An additional benefit of this approach is that the client does not
need to hold a copy of the cluster's SSH private key, instead just
continuing to use their own.

For completeness, the extreme version of the key strategy would be for
each machine to have its own ssh key pair (i.e. its own security
identity), with the corresponding public keys distributed to the other
nodes in the cluster.

But anyway, here we're using the second option - a unique keypair used
across the cluster and the client's public ssh key distributed across
the cluster too.

Generating the SSH key pair

First, we need to generate the key. I'm going to create a folder to hold
it first, because in a moment we're going to push it and a couple of
other files out to all the servers in the cluster and it's easiest to do
this from a single folder.

mkdir /tmp/rnmcluster02-ssh-keys

Note that in the ssh-keygen command below I'm specifying the target
path for the key with the -f argument; if you don't then watch out
that you don't accidentally overwrite your own key pair in the default
path of ~/.ssh.

The -q -N "" flags instruct the key generation to use no passphrase
for the key and to not prompt for it either. This is the lowest friction
approach (you don't need to unlock the ssh key with a passphrase before
use) but also the least secure. If you're setting up access to a machine
where security matters then bear in mind that without a passphrase on an
ssh key, anyone who obtains it can access any machine to which
the key has been granted access (i.e. on which its public key has been
placed).

ssh-keygen -f /tmp/rnmcluster02-ssh-keys/id_rsa -q -N ""

This generates in the tmp folder two files - the private and public
(.pub) keys of the pair:

robin@RNMMBP ~ $ ls -l /tmp/rnmcluster02-ssh-keys
total 16
-rw-------  1 robin  wheel  1675 30 Nov 17:28 id_rsa
-rw-r--r--  1 robin  wheel   400 30 Nov 17:28 id_rsa.pub
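Incidentally, if you do want the more secure variant with a passphrase, it's the same command with a non-empty -N (or omit -N entirely and type the passphrase at the prompt). A sketch, with an illustrative file name and passphrase:

```shell
# Sketch: generate a passphrase-protected key instead. The file name and
# passphrase here are purely illustrative; interactively you'd omit -N
# and type the passphrase at the prompt.
mkdir -p /tmp/rnmcluster02-ssh-keys
ssh-keygen -q -f /tmp/rnmcluster02-ssh-keys/id_rsa_demo -N "correct horse battery staple"
```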

Preparing the authorized_keys file

Now we'll prepare the authorized_keys file which is where the public
SSH key of any identity permitted to access the machine is stored. Note
that each user on a machine has their own authorized_keys file, in
~/.ssh/. So for example, the root user has the file in
/root/.ssh/authorized_keys and any public key listed in that file will
be able to connect to the server as the root user. Beware the
American [mis-]spelling of "authorized" -- spell it [correctly] as
"authorised" and you'll not get any obvious errors, but the ssh key
login won't work either.

So we're going to copy the public key of the unique pair that we just
created for the cluster into the authorized_keys file. In addition we
will copy in our own personal ssh key (and any other public key that we
want to give access to all the nodes in the cluster):

cp /tmp/rnmcluster02-ssh-keys/id_rsa.pub /tmp/rnmcluster02-ssh-keys/authorized_keys
# [optional] Now add any other keys (such as your own) into the authorized_keys file just created
cat ~/.ssh/id_rsa.pub >> /tmp/rnmcluster02-ssh-keys/authorized_keys
# NB make sure the previous step uses a double >> not a single >, since >> appends to the file and > overwrites it.
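A quick sanity check at this point is ssh-keygen -l, which prints the fingerprint of each key in a file, so you can confirm that both keys made it in. A self-contained sketch (throwaway key and file names, purely for illustration):

```shell
# Sketch: list the fingerprints of the keys in an authorized_keys file.
# The throwaway key and file names are illustrative only; against the
# walkthrough you'd point -lf at the real authorized_keys file.
ssh-keygen -q -N "" -f /tmp/fingerprint-demo-key
cp /tmp/fingerprint-demo-key.pub /tmp/fingerprint-demo-authorized_keys
ssh-keygen -lf /tmp/fingerprint-demo-authorized_keys
```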

Distributing the SSH artefacts

Now we're going to push this set of SSH files out to the .ssh folder
of the target user on each node, which in this case is the root user.
From a security point of view it's probably better to use a non-root
user for login and then sudo as required, but we're keeping things
simple (and less secure) to start with here. So the files in our
folder are:

  • id_rsa -- the private key of the key pair
  • id_rsa.pub -- the public key of the key pair. Strictly speaking this doesn't need distributing to all nodes, but it's conventional and handy to hold it alongside the private key.
  • authorized_keys -- this is the file that the sshd daemon on each node will look at to validate an incoming login request's offered private key, and so needs to hold the public key of anyone who is allowed to access the machine as this user.

To copy the files we'll use scp, but how you get them in place
doesn't really matter so much, so long as they get to the right place:

scp -r /tmp/rnmcluster02-ssh-keys root@rnmcluster02-node01:~/.ssh
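With several nodes it's easy to loop the copy. A sketch using this walkthrough's node names (an assumption; adjust to your own hosts) -- DRY_RUN=echo just prints the commands so you can eyeball them first, and setting DRY_RUN="" runs them for real:

```shell
# Sketch: push the SSH files to every node in one loop. With DRY_RUN=echo
# the scp commands are only printed; set DRY_RUN="" to actually run them.
DRY_RUN=echo
for node in rnmcluster02-node01 rnmcluster02-node02 rnmcluster02-node03 rnmcluster02-node04; do
  $DRY_RUN scp -r /tmp/rnmcluster02-ssh-keys "root@${node}:~/.ssh"
done
```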

At this point you'll need to enter the password for the target user, but
rejoice! This is the last time you'll need to enter it, as subsequent
logins will be authenticated using the ssh keys that you're now
distributing.

Run the scp for all nodes in the cluster. If you've four nodes in the
cluster your output should look something like this:

$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node01:~/.ssh
root@rnmcluster02-node01's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00
$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node02:~/.ssh
Warning: Permanently added the RSA host key for IP address '' to the list of known hosts.
root@rnmcluster02-node02's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00
$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node03:~/.ssh
root@rnmcluster02-node03's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00
$ scp -r /tmp/rnmcluster02-ssh-keys/ root@rnmcluster02-node04:~/.ssh
root@rnmcluster02-node04's password:
authorized_keys                                                  100%  781     0.8KB/s   00:00
id_rsa                                                           100% 1675     1.6KB/s   00:00
id_rsa.pub                                                       100%  400     0.4KB/s   00:00

Testing login authenticated through SSH keys

The moment of truth. From your client machine, try to ssh to each of the
cluster nodes. If you are prompted for a password, then something is not
right -- see the troubleshooting section below.

If you put your own public key in authorized_keys when you created it
then you don't need to specify which key to use when connecting because
it'll use your own private key by default:

robin@RNMMBP ~ $ ssh root@rnmcluster02-node01
Last login: Fri Nov 28 17:13:23 2014 from

[root@localhost ~]#

There we go -- logged in automagically with no password prompt. If
you're using the cluster's private key (rather than your own) you need
to specify it with -i when you connect.

robin@RNMMBP ~ $ ssh -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01
Last login: Fri Nov 28 17:13:23 2014 from

[root@localhost ~]#

Troubleshooting SSH key connections

SSH keys are one of the best things in a sysadmin's toolkit, but when
they don't work they can be a bit tricky to sort out. The first thing to
check is that on the target machine the authorized_keys file that does
all the magic (by listing the ssh keys that are permitted to connect
inbound on a host to the given user) is in place:

[root@localhost .ssh]# ls -l ~/.ssh/authorized_keys
-rw-r--r-- 1 root root 775 Nov 30 18:55 /root/.ssh/authorized_keys

If you get this:

[root@localhost .ssh]# ls -l ~/.ssh/authorized_keys
ls: cannot access /root/.ssh/authorized_keys: No such file or directory

then you have a problem.

One possible issue in this specific instance could be that the above
pre-canned scp assumes that the user's .ssh folder doesn't already
exist (since it doesn't, on brand new servers) and so specifies it as
the target name for the whole rnmcluster02-ssh-keys folder. However,
if it does already exist then it ends up copying the
rnmcluster02-ssh-keys folder into the .ssh folder:

[root@localhost .ssh]# ls -lR
total 12
-rw------- 1 root root 1675 Nov 22  2013 id_rsa
-rw-r--r-- 1 root root  394 Nov 22  2013 id_rsa.pub
drwxr-xr-x 2 root root 4096 Nov 30 18:49 rnmcluster02-ssh-keys

total 12
-rw-r--r-- 1 root root  775 Nov 30 18:49 authorized_keys
-rw------- 1 root root 1675 Nov 30 18:49 id_rsa
-rw-r--r-- 1 root root  394 Nov 30 18:49 id_rsa.pub
[root@localhost .ssh]#

To fix this simply move the authorized_keys from
rnmcluster02-ssh-keys back into .ssh:

[root@localhost .ssh]# mv ~/.ssh/rnmcluster02-ssh-keys/authorized_keys ~/.ssh/

Other frequent causes of problems are file/folder permissions that are
too lax on the target user's .ssh folder (which can be fixed with
chmod -R 700 ~/.ssh) or on the connecting user's ssh private key (fix:
chmod 600 id_rsa). The latter shows up very clearly on connection
attempts:

robin@RNMMBP ~ $ ssh -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01
Permissions 0777 for '/tmp/rnmcluster02-ssh-keys/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /tmp/rnmcluster02-ssh-keys/id_rsa
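The server-side fix boils down to the following sketch (the mkdir and touch are only there so the snippet is safe to run even where the folder or file doesn't exist yet):

```shell
# Sketch: reset the modes sshd checks on the server side.
mkdir -p ~/.ssh                       # ensure the folder exists
touch ~/.ssh/authorized_keys          # ensure the file exists
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ls -ld ~/.ssh ~/.ssh/authorized_keys  # verify: drwx------ and -rw-------
```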

Another one that has bitten me twice over time -- and that eludes the
troubleshooting I'll demonstrate in a moment -- is that SELinux gets
stroppy about root access using ssh. I always just take this as a
handy reminder to disable selinux (in /etc/selinux/config, set
SELINUX=disabled), having never had cause to leave it enabled. But if
you do need it enabled you'll need to hit the interwebs to check the
exact cause/solution for this problem.

So to troubleshoot ssh key problems in general, do two things. Firstly,
from the client side, specify verbosity (-v for a bit of verbosity,
-vvv for the most):

ssh -v -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01

You should observe ssh trying to use the private key; if the server
rejects it, it'll fall back to any other ssh private keys it can find,
and then to password authentication:

debug1: Offering RSA public key: /tmp/rnmcluster02-ssh-keys/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: password

Quite often the problem will be on the server side, so assuming that you
can still connect to the server (eg through the physical console, or
using password authentication) then go and check /var/log/secure where
you'll see all logs relating to attempted connections. Here's the log
file corresponding to the above client log, where ssh key authentication
is attempted but fails, and then password authentication is used to
successfully connect:

Nov 30 18:15:05 localhost sshd[13156]: Authentication refused: bad ownership or modes for file /root/.ssh/authorized_keys
Nov 30 18:15:15 localhost sshd[13156]: Accepted password for root from port 59305 ssh2
Nov 30 18:15:15 localhost sshd[13156]: pam_unix(sshd:session): session opened for user root by (uid=0)

Now we can see clearly what the problem is -- "bad ownership or modes
for file /root/.ssh/authorized_keys".

The last roll of the troubleshooting dice is to get sshd (the ssh daemon
that runs on the host we're trying to connect to) to issue more verbose
logs. You can either set LogLevel DEBUG1 (or DEBUG2, or DEBUG3) in
/etc/ssh/sshd_config and restart the ssh daemon
(service sshd restart), or you can actually run a (second) ssh daemon
from the host with specific logging. This would be appropriate on a
multi-user server where you can't just go changing sshd configuration.
To run a second instance of sshd you'd use:

/usr/sbin/sshd -D -d -p 2222

You have to run sshd from an absolute path (you'll get told this if
you don't). The -D flag stops it running as a daemon and instead
runs interactively, so we can see easily all the output from it. -d
specifies the debug logging (-dd or -ddd for greater levels of
verbosity), and -p 2222 tells sshd to listen on port 2222. Since we're
doing this on top of the existing sshd, we obviously can't use the
default ssh port (22) so pick another port that is available (and not
blocked by a firewall).

Now on the client retry the connection, but pointing to the port of the
interactive sshd instance:

ssh -v -p 2222 -i /tmp/rnmcluster02-ssh-keys/id_rsa root@rnmcluster02-node01

When you run the command on the client you should see both the client's
and the host machine's debug output go crackers for a second, giving you
plenty of diagnostics to pore through and analyse the ssh handshake etc.
to get to the root of the issue.

Top comments (4)

otteydw

The only thing I would say about this is that it looks like you have copied your private key to the remote machines as well, since you used a recursive copy. Since only the public key is needed on the remote side, copying the private key is less secure.

Robin Moffatt

Good point, thanks.

ant Kenworthy

Nice write up here; is there a reason you chose to use scp over ssh-copy-id?

Robin Moffatt

I wasn't aware of ssh-copy-id, I'll check it out - thanks!