Kuba Bartel for Meteopress

Granular access control to Google Cloud Storage objects using IAM Conditions

At Meteopress we leverage Google Cloud Storage very intensively. Thanks to its serverless characteristics and unbeatable performance and reliability, it often forms a core part of our services. More traditionally, we also use it as unlimited data storage and for serving static website resources.

But one feature we deeply missed: the ability to limit access to "directories" (subpaths) within a Cloud Storage bucket. Although Cloud IAM permissions were already a very powerful mechanism, there was no way to bind each application (service account) to specific parts of a bucket.

Practicing the principle of least privilege, we see a lot of sense in controlling access to shared buckets, mainly in cases where production and experimental services meet on the same bucket resource.

This superpower comes with the brand-new Cloud IAM Conditions feature.

Satellite processing service

To demonstrate the capabilities of Cloud IAM Conditions, the following lines describe and simulate a simplified satellite image processing service. The service continuously reads data downloaded from our satellite dish and processes each file to render enhanced images. We need the service to be able to read all incoming data and write processed data to a specified Cloud Storage "directory". After each new file is processed, a manifest file is updated to contain the latest image metadata.

The service uses the iamconditionsdemo bucket, which was created with uniform bucket-level access enabled.
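
For reference, such a bucket can be created from the command line; a minimal sketch, assuming gsutil's -b on flag to enable uniform bucket-level access at creation time:

> gsutil mb -b on gs://iamconditionsdemo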

Reading data

In the beginning, the service cannot read any data.

> gsutil cp gs://iamconditionsdemo/satellite/raw/12_49.jpg .

ServiceException: 401 Anonymous caller does not have storage.objects.get access to iamconditionsdemo/satellite/raw/12_49.jpg.

The Cloud Storage bucket is not publicly accessible, so the service is going to use a service account to identify itself.

> gcloud auth activate-service-account --key-file gcp_credentials.json

Activated service account credentials for: [iamconditionstest@meteopress-radars.iam.gserviceaccount.com]

We assign the Storage Object Viewer role to the service account in the bucket's permissions panel.

[Screenshot: the bucket's permissions panel with the Storage Object Viewer role assigned to the service account]
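
The same grant can also be made from the command line; a minimal sketch using gsutil's iam ch subcommand with the service account shown above:

> gsutil iam ch serviceAccount:iamconditionstest@meteopress-radars.iam.gserviceaccount.com:roles/storage.objectViewer gs://iamconditionsdemo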

Now the service is able to read objects from the bucket, and it has access to all of them, regardless of their names.

> gsutil cp gs://iamconditionsdemo/satellite/raw/12_49.jpg .

Copying gs://iamconditionsdemo/satellite/raw/12_49.jpg...
- [1 files][835.5 KiB/835.5 KiB]
Operation completed over 1 objects/835.5 KiB.

Writing data

After our service processes a given file, it needs to store a new object under the satellite/processed/ path prefix. This operation is not allowed at the moment.

> gsutil cp 12_49_processed.jpg gs://iamconditionsdemo/satellite/processed/12_49.jpg

Copying file://12_49_processed.jpg [Content-Type=image/jpeg]...
AccessDeniedException: 403 iamconditionstest@meteopress-radars.iam.gserviceaccount.com does not have storage.objects.create access to iamconditionsdemo/satellite/processed/12_49.jpg.

To allow object writes, the service account can be granted the Storage Object Creator role. But that role allows writing any object in the whole bucket: a potentially dangerous write could, for example, contaminate our input dataset by creating an object whose name starts with satellite/raw/.

Cloud IAM Conditions are the right tool to prevent those unwanted writes. We can limit the permission to write objects to specified object paths only. This can be achieved by clicking Add condition beside the IAM role.

Using the Edit condition dialog, we set the Type to Cloud Storage object and use the resource Name to filter object names by prefix. According to the documentation, an object's resource name has the form projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_NAME].

[Screenshot: the Edit condition dialog with the resource Type and Name prefix filled in]

Alternatively, we can switch to the built-in text editor and write the rule in Common Expression Language (CEL).

[Screenshot: the condition editor showing the rule written in CEL]
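
Written out in CEL, a condition equivalent to the dialog above could look like the following sketch (using the demo bucket and prefix from this article):

resource.type == "storage.googleapis.com/Object" &&
resource.name.startsWith("projects/_/buckets/iamconditionsdemo/objects/satellite/processed/")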

The service is now able to write objects as required.

> gsutil cp 12_49_processed.jpg gs://iamconditionsdemo/satellite/processed/12_49.jpg

Copying file://12_49_processed.jpg [Content-Type=image/jpeg]...
\ [1 files][835.5 KiB/835.5 KiB]
Operation completed over 1 objects/835.5 KiB.

At the same time, it can't write any objects outside the satellite/processed/ "directory".

> gsutil cp 12_49_processed.jpg gs://iamconditionsdemo/satellite/raw/12_49_processed.jpg

Copying file://12_49_processed.jpg [Content-Type=image/jpeg]...
AccessDeniedException: 403 iamconditionstest@meteopress-radars.iam.gserviceaccount.com does not have storage.objects.create access to iamconditionsdemo/satellite/raw/12_49_processed.jpg.

Updating manifest

After each new image is processed, its metadata needs to be saved to the predefined manifest file satellite/manifest_processed.json. This object is not writable at the moment.

Since the Storage Object Creator role doesn't include the storage.objects.delete permission (which is necessary to overwrite an object), we are going to grant the Storage Object Admin role just for the satellite/manifest_processed.json object.

"Unfortunately" the iamconditionstest bucket was created with uniform bucket-level permissions so we can't simply update object's permissions (not even mentioning it doesn't exist yet). IAM Conditions can be used again.

[Screenshot: the condition limiting the Storage Object Admin role to the manifest object]
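
In CEL, such a condition could simply match the manifest object's full resource name; a sketch mirroring the screenshot above:

resource.type == "storage.googleapis.com/Object" &&
resource.name == "projects/_/buckets/iamconditionsdemo/objects/satellite/manifest_processed.json"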

Now the manifest file can be created or updated after each image is processed.

> gsutil cp manifest.json gs://iamconditionsdemo/satellite/manifest_processed.json

Copying file://manifest.json [Content-Type=application/json]...
/ [1 files][   22.0 B/   22.0 B]
Operation completed over 1 objects/22.0 B.

...manifest updated...

> gsutil cp manifest.json gs://iamconditionsdemo/satellite/manifest_processed.json

Copying file://manifest.json [Content-Type=application/json]...
/ [1 files][   44.0 B/   44.0 B]
Operation completed over 1 objects/44.0 B.

Conclusion

This illustrative use case demonstrates how Cloud IAM Conditions can be used to limit a service account's access at a granular, sub-bucket level. In reality, we use multiple buckets to better segregate raw and processed data, but the principles stay the same.

Cloud IAM Conditions are going to help us better control access to particular Cloud Storage resources, and we can ensure each service is not (re)writing data it shouldn't.

We can also use Cloud IAM Conditions with our customers' service accounts, which they use to read a bucket containing the data they are subscribed to. We no longer have to copy datasets into per-customer buckets; instead, we can grant the correct access with IAM Conditions based on each customer's subscription.
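
For illustration, a condition for a hypothetical customer subscribed to radar data could limit reads to that dataset's prefix (the bucket and prefix names here are made up):

resource.type == "storage.googleapis.com/Object" &&
resource.name.startsWith("projects/_/buckets/meteopress-data/objects/radar/")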

Cloud IAM Conditions are not limited just to the Cloud Storage resource type. The documentation describes other resources that can be controlled.

Unfortunately, there is no Terraform support yet.

What is your use case for Cloud IAM Conditions?