Recently, AWS introduced a new feature in IAM that allows you to sign JWT tokens using a managed OIDC provider. On a per-AWS-account basis, you can enable it and receive a unique endpoint with managed JWKS keys to configure third-party identity pools. If you want to know more about how it functions, I have a blog post on how to host your own OIDC provider on CloudFront.
As I'm learning Google Cloud now, I decided to use GCP's Workload Identity Pool as the receiving end of the token issued by AWS IAM (even though this service also supports native AWS authentication). The goal for today is to make a Lambda function able to write to a Google Cloud Storage bucket and insert some rows into a BigQuery table. The whole setup is on the diagram below:
For each of the steps below, I assume you are already authenticated to both the AWS and Google Cloud CLIs and have the necessary permissions to create all resources.
You can find the repository with the whole code here: github.com/ppabis/aws-outbound-oidc-google-cloud.
Configuring AWS Account and GCP Project
For now this new feature is not yet available in Terraform, so you will need to use the latest AWS CLI and enable it with the command below. If you already have it enabled, use the second command to retrieve the issuer URL again. Run this in the root of your new project.
aws iam enable-outbound-web-identity-federation --output json > issuer.json
# Alternatively, if already enabled:
aws iam get-outbound-web-identity-federation-info --output json > issuer.json
This will create a file issuer.json with the OIDC endpoint that we will need later.
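The exact contents depend on your account, but the only field we rely on later in Terraform is IssuerIdentifier, so the file should look roughly like this (the endpoint value below is purely a placeholder, not a real URL):
$ cat issuer.json
{
    "IssuerIdentifier": "https://<your-unique-issuer-endpoint>"
}
Next, let's enable some services on Google Cloud: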
gcloud services enable sts.googleapis.com
gcloud services enable storage.googleapis.com
gcloud services enable bigquery.googleapis.com
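You can quickly confirm the APIs are enabled with a plain listing piped through grep - nothing here is specific to this project:
gcloud services list --enabled | grep -E 'sts|storage|bigquery'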
Terraform providers
Let's create a new provider.tf file with the Terraform providers required for this project. We will need AWS v6.x, Google v7.x and archive to upload the Lambda. Remember to select your preferred regions and put your GCP project ID into a terraform.tfvars file.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 7.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

variable "gcp_project_id" {
  type = string
}

provider "google" {
  project               = var.gcp_project_id
  region                = "europe-west1"
  user_project_override = true
}
Google Cloud Workload Identity Pool
Next, we will create an identity pool and an identity pool provider that will authenticate JWT tokens issued by AWS. We will use the JSON file we obtained earlier. First define the identity pool - I named mine my-aws-app. Then create a provider in this pool, which I named the same. Remember to add an attribute mapping so that AWS users and roles are reflected as GCP subjects - that way we can later configure grants in GCP IAM. In this step we could technically also choose which audiences we accept, but I want to use the default one created by Google. It is complex enough to be secure 😅.
locals {
  issuer = jsondecode(file("issuer.json")).IssuerIdentifier
}

resource "google_iam_workload_identity_pool" "my_aws_app" {
  workload_identity_pool_id = "my-aws-app"
  display_name              = "my-aws-app"
}

resource "google_iam_workload_identity_pool_provider" "my_aws_app" {
  workload_identity_pool_id          = google_iam_workload_identity_pool.my_aws_app.workload_identity_pool_id
  workload_identity_pool_provider_id = "my-aws-app"

  oidc {
    issuer_uri = local.issuer
  }

  attribute_mapping = {
    "google.subject" = "assertion.sub"
  }
}
Resources on GCP side
I will create some sample resources in GCP that we will write to from the Lambda function. I chose a storage bucket and a BigQuery table. You can experiment with other services as well - it's just a matter of imagination.
resource "random_id" "bucket_name" { byte_length = 8 }

resource "google_storage_bucket" "test_bucket" {
  name                        = "my-test-bucket-${random_id.bucket_name.hex}"
  location                    = "europe-west1"
  storage_class               = "STANDARD"
  uniform_bucket_level_access = true
  force_destroy               = true
}

resource "google_bigquery_dataset" "my_dataset" {
  dataset_id                 = "my_dataset"
  friendly_name              = "my_dataset"
  location                   = "europe-west1"
  delete_contents_on_destroy = true
}

resource "google_bigquery_table" "my_table" {
  dataset_id          = google_bigquery_dataset.my_dataset.dataset_id
  table_id            = "my_table"
  friendly_name       = "my_table"
  deletion_protection = false

  schema = jsonencode([
    {
      name = "timestamp"
      type = "TIMESTAMP"
      mode = "REQUIRED"
    },
    {
      name = "message"
      type = "STRING"
      mode = "NULLABLE"
    }
  ])
}
For now we can't yet assign any permissions on these resources to our AWS side, as we don't know the ARN of the role yet. Let's switch to the AWS side now.
Creating AWS IAM Role for Lambda
Our role will be a very simple one - it needs to be assumable by Lambda, and all it will do is get JWT OIDC tokens from STS. To follow security best practices, we will only allow it to create tokens for a specific audience - the one created by Google for our project. Unfortunately, this value is not exposed by any Terraform resource, so we have to construct the string ourselves. The condition key we will compare against is sts:IdentityTokenAudience. The audience on the GCP side looks like this: //iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID. This huge 300 character long string is correct, trust me 😂. It's not the last time we will use it.
data "google_project" "current" {}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "LambdaOutboundSTSRole" {
  name               = "LambdaOutboundSTSRole"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

resource "aws_iam_role_policy_attachment" "lambda_policy_attachment" {
  role       = aws_iam_role.LambdaOutboundSTSRole.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

data "aws_iam_policy_document" "lambda_policy" {
  statement {
    actions   = ["sts:GetWebIdentityToken"]
    effect    = "Allow"
    resources = ["*"]
    condition {
      test     = "StringEquals"
      variable = "sts:IdentityTokenAudience"
      values   = ["//iam.googleapis.com/projects/${data.google_project.current.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.my_aws_app.workload_identity_pool_id}/providers/${google_iam_workload_identity_pool_provider.my_aws_app.workload_identity_pool_provider_id}"]
    }
  }
}

resource "aws_iam_role_policy" "lambda_policy" {
  name   = "lambda_policy"
  role   = aws_iam_role.LambdaOutboundSTSRole.id
  policy = data.aws_iam_policy_document.lambda_policy.json
}
Apply the infrastructure so far to catch any issues early. In your GCP project, you should see a new workload identity pool, and in AWS you should see the IAM role. GCP should also have a new storage bucket and a BigQuery table. With that we can continue with permissions in GCP.
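If you haven't been applying along the way, a plain init and apply is all you need at this point. As an optional sanity check, you can also describe the new pool with gcloud, using the pool ID we chose above:
terraform init
terraform apply
gcloud iam workload-identity-pools describe my-aws-app --location=global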
Granting Permissions in GCP
As we now know which role will authenticate from AWS, we can grant it permissions on our new GCP resources. If you remember the audience string from before, this 250 character long member string shouldn't scare you anymore. The principal we have to grant access to has the following format: principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL/subject/ROLE_ARN. Notice the change: we use only the identity pool ID here (not the provider).
To follow security best practices, we will also limit the permissions using IAM conditions. Compare this to AWS: there you use the Resource key in the IAM policy to scope permissions, while in GCP you need to use a condition, which lives not in the policy but in the binding. The roles (policies in AWS terms) we will grant are roles/storage.objectAdmin for the bucket and roles/bigquery.dataEditor for the BigQuery dataset. We will limit them to only our single bucket and our single BigQuery dataset (all tables).
data "google_iam_role" "bucket_object_admin" { name = "roles/storage.objectAdmin" }
data "google_iam_role" "bigquery_data_editor" { name = "roles/bigquery.dataEditor" }
resource "google_project_iam_member" "aws_lambda_incoming_bucket_role_member" {
  project = var.gcp_project_id
  role    = data.google_iam_role.bucket_object_admin.name
  member  = "principal://iam.googleapis.com/projects/${data.google_project.current.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.my_aws_app.workload_identity_pool_id}/subject/${aws_iam_role.LambdaOutboundSTSRole.arn}"

  condition {
    title      = "RestrictToTestBucket"
    expression = "resource.name.startsWith(\"projects/_/buckets/${google_storage_bucket.test_bucket.name}/objects/\")"
  }
}

resource "google_project_iam_member" "aws_lambda_incoming_bigquery_role_member" {
  project = var.gcp_project_id
  role    = data.google_iam_role.bigquery_data_editor.name
  member  = "principal://iam.googleapis.com/projects/${data.google_project.current.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.my_aws_app.workload_identity_pool_id}/subject/${aws_iam_role.LambdaOutboundSTSRole.arn}"

  condition {
    title      = "RestrictToMyDataset"
    expression = "resource.name.startsWith(\"projects/${data.google_project.current.project_id}/datasets/${google_bigquery_dataset.my_dataset.dataset_id}\")"
  }
}
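If you want to confirm the bindings after applying, you can dump the project IAM policy and filter for the federated principal. The command below uses only standard gcloud output flags, so treat it as one possible way to check rather than the only one:
gcloud projects get-iam-policy "$(gcloud config get-value project)" \
--flatten="bindings[].members" \
--filter="bindings.members:principal://iam.googleapis.com" \
--format="table(bindings.role, bindings.members)"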
We are technically done with setting up the Google Cloud side. Let's move to the coding part now.
Lambda Function
To comfortably interact with Google Cloud services from our Lambda, we will install some requirements using a requirements.txt file. Create it in a new directory called aws_lambda. I have also included boto3, as the version bundled in the current Lambda runtime doesn't yet support the new get_web_identity_token call.
google-auth
google-cloud-iam
google-cloud-storage
google-cloud-bigquery
boto3
Install the requirements using pip directly into this directory. That way we will zip all the dependencies together with the Lambda code and push them to AWS.
pip install -r aws_lambda/requirements.txt -t aws_lambda/
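One caveat: some of these packages pull in compiled wheels, so if you are building on macOS or Windows, it may be safer to ask pip for Lambda-compatible Linux wheels. These are standard pip flags - adjust the Python version and architecture to match your function's runtime:
pip install -r aws_lambda/requirements.txt -t aws_lambda/ \
--platform manylinux2014_x86_64 \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: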
In order to exchange the OIDC token for an actual GCP token that will be recognized by GCP IAM, we need to use the Google STS service. The code below seems a bit overwhelming compared to AWS STS calls, but it's necessary: we need to provide all the grant types, scopes, etc. Save this into aws_lambda/auth.py.
from google.auth.transport.requests import Request
from google.oauth2 import sts
from google.oauth2.credentials import Credentials
import boto3

aws_sts_client = boto3.Session().client("sts", region_name="us-east-1")


def _exchange_aws_token_for_google_token(aws_token: str, audience: str) -> str:
    """Exchange AWS token for Google service account token via STS"""
    request = Request()
    google_sts_client = sts.Client(token_exchange_endpoint="https://sts.googleapis.com/v1/token")
    try:
        result = google_sts_client.exchange_token(
            request=request,
            grant_type="urn:ietf:params:oauth:grant-type:token-exchange",
            audience=audience,
            requested_token_type="urn:ietf:params:oauth:token-type:access_token",
            subject_token=aws_token,
            subject_token_type="urn:ietf:params:oauth:token-type:id_token",
            scopes=["https://www.googleapis.com/auth/cloud-platform"],
        )
        return result["access_token"]
    except Exception as e:
        print(f"Error exchanging token: {e}")
        raise


def authenticate_aws_oidc_to_google(audience: str) -> Credentials:
    """Authenticate AWS OIDC to Google using the AWS STS client."""
    aws_token = aws_sts_client.get_web_identity_token(
        Audience=[audience],
        DurationSeconds=3600,
        SigningAlgorithm='ES384'
    )['WebIdentityToken']
    google_token = _exchange_aws_token_for_google_token(aws_token, audience)
    return Credentials(token=google_token)
The function above returns a Google Credentials object that we can later pass to any client we need. To easily interface with GCP services, we will use Google's Python client libraries. I will also use some utilities like datetime and os. We also need to load the bucket name, dataset name, table name and audience from environment variables. Save the following as aws_lambda/main.py - it has to match the main.lambda_handler handler we will configure in Terraform.
import os, json
from datetime import datetime
from google.cloud import bigquery, storage
from auth import authenticate_aws_oidc_to_google
AUDIENCE = os.getenv("AUDIENCE")
GOOGLE_BUCKET = os.getenv("GOOGLE_BUCKET")
BQ_DATASET = os.getenv("GOOGLE_BQ_DATASET")
BQ_TABLE = os.getenv("GOOGLE_BQ_TABLE")
GOOGLE_PROJECT = os.getenv("GOOGLE_PROJECT_ID")
Let's define the operations to perform as separate functions for easier understanding. The first function will create an object in the Google Storage bucket. The object name will depend on the time, and we will also add Lambda's request ID to the content for uniqueness.
def create_object_in_bucket(creds, request_id: str) -> str:
    """Create a random object in the bucket under hello_<timestamp>.txt"""
    client = storage.Client(project=GOOGLE_PROJECT, credentials=creds)
    now = datetime.now()
    blob = client.bucket(GOOGLE_BUCKET).blob(f"hello_{now.timestamp():.0f}.txt")
    blob.upload_from_string(f"Hello, world! Current time: {now.isoformat()}, request_id: {request_id}")
    return blob.public_url
Another function will insert a row into our BigQuery table. We will populate the message field with some text, the current time and the request ID from Lambda.
def create_row_in_bigquery(creds, request_id: str) -> bool:
    """Create a row in the bigquery table with the current time and request_id"""
    client = bigquery.Client(project=GOOGLE_PROJECT, credentials=creds)
    table = f"{client.project}.{BQ_DATASET}.{BQ_TABLE}"
    row = {
        "timestamp": datetime.now().isoformat(),
        "message": f"Hello, world! Current time: {datetime.now().isoformat()}, request_id: {request_id}"
    }
    err = client.insert_rows_json(table, [row])
    if err:
        print(f"Error BQ: {err}")
        return False
    return True
Lastly, we can glue everything together in the main Lambda handler. It will first authenticate to Google using our AWS-issued JWT and then try to call both functions defined above. It will also log the result of each operation so that we can see it in the CloudWatch logs.
def lambda_handler(event, context):
    try:
        creds = authenticate_aws_oidc_to_google(AUDIENCE)
        url = create_object_in_bucket(creds, context.aws_request_id)
        success = create_row_in_bigquery(creds, context.aws_request_id)
        print(f"Object URL: {url}, BigQuery success: {success}")
        return {"statusCode": 200, "body": json.dumps({"message": f"Object URL: {url}, BigQuery success: {success}"})}
    except Exception as e:
        print(f"Error: {e}")
        return {"statusCode": 500, "body": json.dumps({"message": f"Error: {e}"})}
Packaging and Deploying Lambda
Now we need to package the Lambda code into a .zip file and deploy it to AWS. It is also very important to set the environment variables we use in the code.
data "archive_file" "lambda_function" {
  type        = "zip"
  source_dir  = "aws_lambda"
  output_path = "aws_lambda.zip"
}

resource "aws_lambda_function" "oidc_to_gcp" {
  function_name    = "LambdaOIDCtoGCP"
  role             = aws_iam_role.LambdaOutboundSTSRole.arn
  handler          = "main.lambda_handler"
  runtime          = "python3.12"
  timeout          = 30
  source_code_hash = data.archive_file.lambda_function.output_base64sha256
  filename         = data.archive_file.lambda_function.output_path

  environment {
    variables = {
      AUDIENCE          = "//iam.googleapis.com/projects/${data.google_project.current.number}/locations/global/workloadIdentityPools/${google_iam_workload_identity_pool.my_aws_app.workload_identity_pool_id}/providers/${google_iam_workload_identity_pool_provider.my_aws_app.workload_identity_pool_provider_id}"
      GOOGLE_BUCKET     = google_storage_bucket.test_bucket.name
      GOOGLE_BQ_DATASET = google_bigquery_dataset.my_dataset.dataset_id
      GOOGLE_BQ_TABLE   = google_bigquery_table.my_table.table_id
      GOOGLE_PROJECT_ID = data.google_project.current.project_id
    }
  }
}
Testing
Let's run the Lambda function and see if we can do stuff on the GCP side. You can use the AWS Console to trigger it or use the following CLI command (unfortunately, you have to store the response in a file for whatever reason 🙄).
aws lambda invoke \
--region eu-west-1 \
--function-name LambdaOIDCtoGCP \
--payload '{}' \
response.json
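The body of the response is exactly what our handler returns, so response.json should contain something along these lines (the bucket name, timestamp and request-specific values will obviously differ):
$ cat response.json
{"statusCode": 200, "body": "{\"message\": \"Object URL: https://storage.googleapis.com/<your-bucket>/hello_<timestamp>.txt, BigQuery success: True\"}"}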
We can now look into the CloudWatch logs for the print statements from our function and check that everything went well. You should get output similar to this:
STREAM=$(aws logs describe-log-streams \
--log-group-name /aws/lambda/LambdaOIDCtoGCP \
--order-by LastEventTime \
--descending \
--query 'logStreams[0].logStreamName' \
--output text)
aws logs get-log-events \
--log-group-name /aws/lambda/LambdaOIDCtoGCP \
--log-stream-name "$STREAM" \
--output text \
--query 'events[].[timestamp,message]'
1763923647520 START RequestId: a2d3bf7c-3bf5-4c6b-9f55-36febec82c96 Version: $LATEST
1763923648706 Object URL: https://storage.googleapis.com/my-test-bucket-fc436b0fe1b7c943/hello_1763923648.txt
1763923649085 BigQuery success: True
1763923649092 END RequestId: a2d3bf7c-3bf5-4c6b-9f55-36febec82c96
Take the object URL and replace https://storage.googleapis.com/ with gs://. Use gsutil to list the files in the bucket and show the contents of the created file. Compare the request ID in the object with the one in the logs.
$ gsutil ls gs://my-test-bucket-fc436b0fe1b7c943/
gs://my-test-bucket-fc436b0fe1b7c943/hello_1763923648.txt
$ gsutil cat gs://my-test-bucket-fc436b0fe1b7c943/hello_1763923648.txt
Hello, world! Current time: 2025-11-23T18:47:28.288871, request_id: a2d3bf7c-3bf5-4c6b-9f55-36febec82c96
We can also query the BigQuery table to see if the record was inserted as we wanted. Use the bq tool to run a simple SELECT statement.
$ bq query "SELECT * FROM my_dataset.my_table"
+---------------------+----------------------------------------------------------------------------------------------------------+
| timestamp | message |
+---------------------+----------------------------------------------------------------------------------------------------------+
| 2025-11-23 18:47:28 | Hello, world! Current time: 2025-11-23T18:47:28.706698, request_id: a2d3bf7c-3bf5-4c6b-9f55-36febec82c96 |
+---------------------+----------------------------------------------------------------------------------------------------------+
Checking if permissions are enforced
To verify that our IAM conditions in Google Cloud are working, I will create a second bucket without granting our AWS role any permissions on it, and run the Lambda again. It should error out when trying to upload the content to GCS.
resource "google_storage_bucket" "second_bucket" {
  name                        = "my-second-bucket-${random_id.bucket_name.hex}"
  location                    = "europe-west1"
  storage_class               = "STANDARD"
  uniform_bucket_level_access = true
  force_destroy               = true
}

# In Lambda environment variables, change GOOGLE_BUCKET to:
resource "aws_lambda_function" "oidc_to_gcp" {
  function_name = "LambdaOIDCtoGCP"
  # ...
  environment {
    variables = {
      # ...
      GOOGLE_BUCKET = google_storage_bucket.second_bucket.name
    }
  }
}
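Apply the change so that the second bucket exists and the function picks up the new environment variable:
terraform apply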
Invoke the function again and fetch the new log stream. You should see an error just like mine below.
aws lambda invoke \
--region eu-west-1 \
--function-name LambdaOIDCtoGCP \
--payload '{}' \
response.json
STREAM=$(aws logs describe-log-streams \
--log-group-name /aws/lambda/LambdaOIDCtoGCP \
--order-by LastEventTime \
--descending \
--query 'logStreams[0].logStreamName' \
--output text)
aws logs get-log-events \
--log-group-name /aws/lambda/LambdaOIDCtoGCP \
--log-stream-name "$STREAM" \
--output text \
--query 'events[].[timestamp,message]'
1763924479236 START RequestId: 6a7d8016-ba34-4075-bca0-77d3bcb2c7a7 Version: $LATEST
1763924480076 Creds: <google.oauth2.credentials.Credentials object at 0x7fef0787e9c0>
1763924480586 Error: 403 POST https://storage.googleapis.com/upload/storage/v1/b/my-second-bucket-fc436b0fe1b7c943/o?uploadType=multipart: {
1763924480586 "error": {
1763924480586 "code": 403,
1763924480586 "message": "Caller does not have storage.objects.create access to the Google Cloud Storage object. Permission 'storage.objects.create' denied on resource (or it may not exist).",
Conclusion
This new outbound OIDC feature in AWS IAM is really simple to use. As you have seen in this project, most of the complexity was on Google Cloud's side. We barely did any setup for AWS itself. With that, you can now integrate IAM directly with your applications if they support OIDC.
