
Kyle Lexmond

A First Look at Google Cloud Platform

This got really long, so I broke it up into parts. Feel free to skip to the parts you're interested in.


I have a bunch of experience with AWS (Disclaimer: I worked there). My AWS account dates back to 2010, but I've only really started using AWS heavily in the last 2 years.

I've been speccing out and estimating costs for a new project recently, and with the introduction of the GCP free tier and the $300 of credit, I decided to look into some of the services GCP offers and see how they compare to AWS.

Google has a comparison between AWS and GCP, which is useful but pretty dry. I decided to just dive in and experiment - that $300 of credit means I'm pretty safe!

Registering for GCP

It was a matter of going to the GCP console and logging in with my Google account. I had to sign up for the free trial and provide my credit card details, but that was it. Compared to the AWS signup process, this was a lot simpler.

However, it's simple because Google has effectively split out the account verification steps - I used my Gmail account, which was already verified. A side effect of this is that created resources are associated with this account by default. An AWS account is trivial to transfer - update the email address and be done with it. My Google account? Less easily transferred, but that brings me to the first major difference.

AWS Accounts vs GCP Projects

Google doesn't really expound on the account/project difference, merely saying this in their comparison:

Cloud Platform groups your service usage by project rather than by account. In this model, you can create multiple, wholly separate projects under the same account. In an organizational setting, this model can be advantageous, allowing you to create project spaces for separate divisions or groups within your company.

In practice, it's an entirely different way of handling resources. If you wanted to run something in its own isolated silo in AWS, you would generally create an entirely separate account, and use consolidated billing/AWS Organizations (which is a whole other set of problems). In GCP, each project is its own little silo, with no communication between projects by default.

After getting used to AWS (and using AWS Organizations to handle the account-per-project approach), this is a very different way of thinking. To me, there are two main benefits.

The first is that switching ownership of resources is incredibly simple: assign a new project owner, remove the existing owner, and it's done. What's most impressive is that (as far as I can tell) the transfer happens without interrupting anything currently running. Compute Engine instances continue to run, and Cloud Storage buckets don't need to have their contents copied out, the bucket deleted, and then recreated in the new account while you hope no one steals the bucket name in the meantime.
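To make that concrete, here's a minimal sketch of an ownership handover via the Cloud Resource Manager API - the project ID and email addresses are made up, and I haven't run this exact snippet:

```python
import google.auth
from googleapiclient import discovery

# Hypothetical project and users - a sketch, not a tested recipe.
PROJECT_ID = 'my-project'

credentials, _ = google.auth.default()
crm = discovery.build('cloudresourcemanager', 'v1', credentials=credentials)

# Fetch the project's IAM policy, swap the owner, and write it back.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
for binding in policy['bindings']:
    if binding['role'] == 'roles/owner':
        binding['members'].append('user:new-owner@example.com')
        binding['members'].remove('user:old-owner@example.com')
crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={'policy': policy}).execute()
```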

The second benefit is that segmentation of projects is far easier. You don't have to have the equivalent of an AWS account per project if you want separation for security.

A nerfed IAM?

The downside of separation by project is that GCP doesn't seem to have an equivalent of AWS IAM's ability to restrict access to individual resources. The GCP documentation explicitly calls this out:

Permissions are project-wide, which means that developers in a project can modify and deploy the application and the other Cloud Platform services and resources associated with that project.

I am conflicted over this situation. Best practice says that accounts/roles should have the fewest permissions possible. I try to lock down my IAM policies to specific resources wherever possible. For example, a user can only interact with a single SQS queue because I restrict the attached IAM policy by queue name.
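For illustration, here's roughly what that looks like with boto3 - the account ID, queue name, and user name below are placeholders:

```python
import json

import boto3

# Placeholder account ID, queue, and user - the point is the Resource
# ARN pinning the policy to a single queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
        "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="queue-worker",
    PolicyName="single-queue-only",
    PolicyDocument=json.dumps(policy),
)
```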

On GCP, it's all or nothing within a project. I have to allow access to all PubSub topics if I want to allow access to one.

What actually happens is that people can and do liberally use * in their IAM policies in AWS. But the fact that the restrictions aren't built in by default is worrying, especially for large companies that do have the capability to manage IAM policies (and not operate accounts per service).

I think Google's realised this, and is extending IAM (still in Alpha) to allow permissions to be defined on individual resources where supported (eg PubSub, Datastore). It looks like it's possible to use the IAM API to define custom roles, but I haven't successfully done so. I just ended up using project isolation, which works, but feels bad.
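For reference, creating a custom role through the IAM API should look roughly like the following - but since I haven't gotten it working, treat this as an unverified sketch with a made-up project ID, role ID, and permission list:

```python
import google.auth
from googleapiclient import discovery

credentials, _ = google.auth.default()
iam = discovery.build('iam', 'v1', credentials=credentials)

# Hypothetical custom role limited to publishing to PubSub topics.
iam.projects().roles().create(
    parent='projects/my-project',
    body={
        'roleId': 'pubsubPublisherOnly',
        'role': {
            'title': 'PubSub Publisher Only',
            'includedPermissions': ['pubsub.topics.publish'],
            'stage': 'ALPHA',
        },
    },
).execute()
```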

Authentication

GCP has a greater variety of ways to authenticate with its APIs compared to AWS.

  • Compute Engine/App Engine instances get credentials via IAM, much like EC2's instance roles. These are limited to individual projects.
  • Developers using the gcloud CLI can authenticate using OAuth2, and switch between projects.
  • Non-interactive systems outside GCP use a service account that's tied to a specific project.

Using the SDK requires creating a service account, which generates a JSON file (or PKCS12, but let's ignore that). The easiest way is to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the location of the file and let the SDK handle everything.
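In Python, that looks something like this - the key path is a placeholder, and I'm using the Cloud Storage client purely as an example:

```python
import os

from google.cloud import storage

# Placeholder path to the service account key downloaded from the console.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account.json'

# The client finds the credentials on its own - no explicit auth code.
client = storage.Client()
for bucket in client.list_buckets():
    print(bucket.name)
```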

You can also define the file location in code, like with Boto. (And presumably the other AWS SDKs - I've only really used the Python version.)
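The Google Cloud Python clients have an equivalent - again, the key path is a placeholder:

```python
from google.cloud import storage

# Same placeholder key file, but located explicitly in code.
client = storage.Client.from_service_account_json('/path/to/service-account.json')
```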

You can also go full OAuth, provide a list of scopes and walk through the OAuth process, but ... no.

Compared to AWS's simple "IAM users get an Access Key & Secret Key", it seems rather overcomplicated. Thankfully, reasonable defaults are used.

Amusingly, GCP isn't as agnostic as AWS is - references to G Suite sometimes appear. Not everyone is designing applications to be run on G Suite, so I have a feeling it's just old documentation.

Top comments (2)

Erebos Manannán

You seem to have skipped mentioning all the things that are clearly superior in GCP vs AWS:

  • Instance names are hostnames: boot up a machine named "salt" and it will show up as "salt.c.project-name.internal" as well as "salt" on all your machines, making several things VERY easy compared to AWS. This internal DNS is very easy to expose over VPN etc. as well. Fully automated.

  • Startup scripts are project-wide. How is this not a thing in AWS in 2017?

  • You use YOUR SSH keys to connect to instances, and they can be set up project-wide if you so wish. You don't use some random generated keys that you then have to figure out how to store securely and distribute in addition to your already existing perfectly good SSH keys.

  • Since hostnames are correctly set up based on the instance name from the start, it's really easy to do instance matching in various tools, such as Salt Stack, without needing some random AWS integrations. Just match api-* for all API servers, etc.

  • Instance groups and templates make it easy to set up a scalable (including auto-scaling) group of machines that distributes instances across availability zones.

  • The load balancer is Google Load Balancer, not some random instance that will die when you get hit with a traffic spike. You turn on the LB in GCP and it will instantly handle a million requests per second.

  • Overall, the whole interface and every bit of their system has been built to be easier and more powerful to use than AWS.

  • There is no wondering about provisioned IOPS and all that nonsense. Size = performance in GCP. 500GB disk is 2x as fast as 250GB disk.

Long story short, having used GCP for the past 3-4 years or so, having to do a few things with AWS recently was a huge blast from the past, and it was hard to believe how bad things can still be in AWS.

GCP is easier to use, more powerful, less prone to human error, cheaper, and generally just superior in every single way.

Kyle Lexmond

Other than the SSH keys, I haven't touched any of this, so thanks for chiming in. :)

I'll probably be using some of that in future experiments!