
Gianluca Brindisi

Posted on • Edited on • Originally published at cloudberry.engineering

Stricter Access Control to Google Container Registry

Google Container Registry (GCR) is the Docker container registry offered by Google Cloud Platform (GCP). Under the hood it's an interface on top of Google Cloud Storage (GCS), and it's so thin that access control is entirely delegated to the storage layer.

There are no dedicated roles

In fact, there are no dedicated Identity and Access Management (IAM) roles to govern publishing and retrieval of container images: to push and pull, we must use role bindings that grant write (and read) access to the underlying bucket that GCR uses.

According to the docs, this bucket is artifacts.<PROJECT-ID>.appspot.com and the roles to use are roles/storage.admin (to push) and roles/storage.objectViewer (to pull); any of the broad primitive roles such as Owner, Editor, or Viewer will also work.

The role binding can be applied to the IAM policy of the Project, Folder, Organization or the bucket itself.
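As a sketch of the narrowest option, here is what binding directly on the bucket's IAM policy might look like with `gsutil` (the service account names and project ID are hypothetical placeholders):

```shell
# Grant pull (read-only) access on the GCR bucket itself, not on the Project.
# Service accounts and project ID below are hypothetical placeholders.
gsutil iam ch \
  serviceAccount:puller@my-project.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://artifacts.my-project.appspot.com

# Push access needs write on the bucket: roles/storage.admin per the docs.
gsutil iam ch \
  serviceAccount:pusher@my-project.iam.gserviceaccount.com:roles/storage.admin \
  gs://artifacts.my-project.appspot.com
```

Binding at the bucket level keeps the grant from spilling over to any other bucket in the Project.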

What's the risk

While binding the role on the bucket's IAM policy may be fine, binding on an IAM policy higher in the hierarchy results in a wider authorization grant affecting every bucket in scope.

Such a grant could have an impact on your compliance posture.
A common example: one of the buckets contains Personally Identifiable Information (PII) and your organization is subject to the GDPR.

IAM is tricky, and things get messier as the number of Projects we need to administer grows and a business case emerges for programmatic access to all GCR instances: for example, a centralized build system that needs to push container images, or a third-party container scanner that needs to be integrated.

In such cases, especially when a third party is involved, binding a Service Account with read/write permissions to the entire GCS layer is unacceptable, as it increases the blast radius of a potential attack.

How to mitigate

While we wait for Google to implement a set of dedicated Roles (see Artifact Registry), there are a couple of solutions we can adopt to minimize the authorization grant.

The first is organizational: minimize the number of GCR instances.
Ideally, with a single instance you can bind the role on the associated bucket's IAM policy.
A small number of instances can be managed the same way, but I'll let you decide what "small" means in your context.

The second solution is technical: leverage IAM Conditions, a feature of Cloud IAM that lets operators scope down role bindings, to restrict the grant to only the buckets used by GCR.

Luckily these buckets follow a common naming pattern, so we can set up a role binding that applies only when the bucket's name matches, like this:

{
    "expression": "resource.name.startsWith(\"projects/_/buckets/artifacts\")",
    "title": "GCR buckets only",
    "description": "Reduce the binding scope to affect only buckets used by GCR"
}
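A quick way to sanity-check a condition like this is to mimic CEL's `startsWith` locally. This is a plain Python sketch (the bucket and project names are hypothetical) showing which resource names the expression above would cover:

```python
# Mimic the CEL expression resource.name.startsWith("projects/_/buckets/artifacts")
# to sanity-check which bucket resource names the condition would cover.
# Bucket and project names below are hypothetical.

def condition_matches(resource_name: str) -> bool:
    """Return True if the IAM condition above would apply to this resource."""
    return resource_name.startswith("projects/_/buckets/artifacts")

buckets = [
    "projects/_/buckets/artifacts.my-project.appspot.com",     # default GCR bucket
    "projects/_/buckets/eu.artifacts.my-project.appspot.com",  # regional GCR bucket
    "projects/_/buckets/my-data-lake",                         # unrelated bucket
]

for name in buckets:
    print(name, "->", condition_matches(name))
```

Note that the regional bucket does not match the bare `artifacts` prefix, which matters if your registries use explicit storage regions.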

This solution is pragmatic and scales well with the number of Projects / Folders affected, as long as there are no other buckets whose names start with artifacts.

Keep in mind that the condition must use the full bucket identifier, and if GCR is configured to use explicit storage regions, the bucket name will be (eu|us|asia).artifacts.<PROJECT-ID>.appspot.com: the bare artifacts prefix will not match those names, so you need an additional startsWith clause per regional prefix.
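Putting it together, a Project-level binding scoped to the default and regional GCR buckets might look like this with `gcloud` (the member and project ID are hypothetical placeholders):

```shell
# Bind at the Project level, but use an IAM Condition to scope the grant
# to GCR buckets only, including the regional variants.
# Member and project ID below are hypothetical placeholders.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:scanner@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer" \
  --condition='title=GCR buckets only,description=Only buckets used by GCR,expression=resource.name.startsWith("projects/_/buckets/artifacts") || resource.name.startsWith("projects/_/buckets/eu.artifacts") || resource.name.startsWith("projects/_/buckets/us.artifacts") || resource.name.startsWith("projects/_/buckets/asia.artifacts")'
```

The same condition can be attached to a Folder- or Organization-level binding if the third party needs to reach GCR across many Projects.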


Did you find this post interesting? I’d love to hear your thoughts: hello AT cloudberry.engineering

I write about cloud security on my blog, you can subscribe to the RSS feed or to the newsletter.
