Disclaimer: These are just some quick thoughts without diving too deep into it.
Recently my company's security department announced that all container images must be signed and that this signature has to be verified. But does that actually improve security?
Signing an image is, in general, not a bad thing. I would even argue that you should do it, so you can verify that an image actually originated from your CI or whatever your build process is. So yes, it probably improves the security of your app or operation. End of discussion... ?
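As an illustration (not necessarily what my company's setup uses), this is roughly what it looks like with Sigstore's cosign, assuming a classic key pair; the key paths and image name are hypothetical placeholders:

```bash
# In CI: sign the image you just pushed, using a private key
cosign sign --key cosign.key registry.example.com/your-app:1.1.0

# Before deploying: verify the signature with the matching public key
cosign verify --key cosign.pub registry.example.com/your-app:1.1.0
```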
But to some degree this is a false sense of security. A signed base image, like the official nodejs image on Docker Hub, does give you more confidence that you can actually trust it. However, what if the CI that produced it is compromised and someone is able to create rogue images that are properly signed? In the case of nodejs that would be pretty bad, but I would expect such manipulation to be identified rather quickly, simply because those images are so widely used and therefore monitored by a lot of parties.
That is probably not true for your own images. If your CI is compromised and replaces `your-app:1.1.0` with a new, signed image, you would probably not realize it until it is way too late. You could disallow overwriting existing image tags, but that does not stop an attacker from creating new, malicious tags. The only way at this point to sufficiently ensure that you have the correct image is to reference it by its digest. That can be annoying to handle, depending on your setup, but it gives you the confidence that you are deploying exactly what you intended, even without signing the image. (Of course, this only works as long as the underlying hash function is secure enough.)
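For illustration, pinning by digest instead of by tag could look like this; the image name is hypothetical and the digest is a made-up placeholder:

```bash
# Mutable: this tag can silently point to a different image tomorrow
docker pull registry.example.com/your-app:1.1.0

# Immutable: the digest identifies exactly one image
# (placeholder digest, not a real one)
docker pull registry.example.com/your-app@sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
```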
This sounds easy, but at this point it is a chicken-and-egg problem: where do you get the trustworthy digest from if you can only partially trust your CI? From building the image with your local Docker? Sadly, no. The reason is that `docker build` embeds timestamps into your images/layers. If you `docker build` your app locally multiple times in a row, caching just gives you the illusion that your build is reproducible; if your colleague or your CI builds the image, they will get a different digest. (The last time I checked this was a couple of months ago. If that is not true anymore, please correct me.)
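You can observe this yourself; a minimal sketch (the tag name is hypothetical), relying on the fact that the image ID printed by `docker inspect` is a digest over the image configuration, which includes the creation timestamps:

```bash
# Build the same Dockerfile twice, bypassing the cache
docker build --no-cache -t your-app:repro-test .
docker inspect --format '{{.Id}}' your-app:repro-test

docker build --no-cache -t your-app:repro-test .
docker inspect --format '{{.Id}}' your-app:repro-test

# The two printed IDs will differ, because the embedded
# creation timestamps changed between the builds.
```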
To arrive at the same, correct digest locally and everywhere else, you need to make your build actually reproducible, which is not an easy task depending on your tooling. For the container image creation part, Buildah is a powerful alternative to `docker build` that is able to strip the timestamps. But I am sure there are plenty of other tools out there.
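A minimal sketch with Buildah, assuming its `--timestamp` flag, which pins the created timestamps of the image and its layers to a fixed Unix epoch value (the tag name is hypothetical):

```bash
# Pin all image/layer timestamps to the Unix epoch (0)
buildah bud --timestamp 0 -t your-app:1.1.0 .

# Note: a fixed timestamp alone is not enough; the base image,
# dependencies, and every build step must be deterministic too.
```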
Skipping a few other details, your Git repo has now truly become your single source of truth (and of builds [and of vulnerabilities]). So what if some bad actor commits to your Git? (Here the circle closes.) Then you should probably only work with signed commits and code reviews. But the fact that you can actually review changes, check the signatures of your teammates, and talk to them makes this much easier to deal with. The decentralized nature of Git also helps to detect manipulation.
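With Git's built-in signing support, that could look like this (GPG-based here; SSH-key signing is another option in newer Git versions):

```bash
# Sign your own commits by default
git config commit.gpgsign true

# Verify the signature of a single commit
git verify-commit HEAD

# Show signature information while reviewing history
git log --show-signature
```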
There are obviously a lot of other attack vectors that you need to think of and address. But that is a never-ending story.
One last thought: having just a single, central CI is of course a single point of failure, and you simply have to trust that whatever falls out of it on the other end is correct and good. Maybe it is time for a decentralized network of CIs that build and check your artifacts independently?