
GCP Cloud Functions with a Static IP

Alvaro David ・ 4 min read

I always recommend going to the documentation first. However, some concepts are not equally clear to everyone; in this case, networking concepts for developers.

When we hear Serverless we forget almost everything about DevOps, networking, memory and so on, and just worry about the code, and that's OK.

But now we have a requirement: the client API only accepts requests from a whitelisted IP.

This is the diagram of a 'traditional' architecture:

[Diagram: 'traditional' architecture]

Cloud Functions will resolve most of the architecture, but as you can see, some resources from a 'traditional' architecture are still necessary to achieve our objective, and that's where I want to help.

[Diagram: serverless architecture]


Create a Simple HTTP Function

main.py

# This function will return the IP address for egress
import requests
import json

def test_ip(request):
    result = requests.get("https://api.ipify.org?format=json")
    return json.dumps(result.json())
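
The requests library is not part of the Python standard library, so the deploy step below also needs a requirements.txt next to main.py for Cloud Functions to install it. A minimal sketch:

requirements.txt

# Third-party HTTP client used by test_ip
requests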

deploy

gcloud functions deploy testIP \
    --runtime python37 \
    --entry-point test_ip \
    --trigger-http \
    --allow-unauthenticated

test

curl https://us-central1-your-project.cloudfunctions.net/testIP
# {"ip": "35.203.245.150"} (ephemeral: changes any time)

Networking

So this is the part that drives some devs crazy: we're going to use a VPC (Virtual Private Cloud), which provides networking functionality to our cloud-based services, in this case our Cloud Function.

VPC networks do not have any IP address ranges associated with them. IP ranges are defined for the subnets.

# Create VPC
gcloud services enable compute.googleapis.com

gcloud compute networks create my-vpc \
    --subnet-mode=custom \
    --bgp-routing-mode=regional
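
If you want to confirm the network was created before moving on, listing it is a quick check (a small sketch; the exact output columns may differ):

# Confirm the VPC exists
gcloud compute networks list --filter="name=my-vpc"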

Then we have to create a Serverless VPC Access connector, which allows Cloud Functions (and other serverless resources) to connect to a VPC.

# Create a Serverless VPC Access connector
gcloud services enable vpcaccess.googleapis.com

gcloud compute networks vpc-access connectors create functions-connector \
    --network my-vpc \
    --region us-central1 \
    --range 10.8.0.0/28
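
The connector can take a few minutes to provision. A quick way to check on it is to describe it; the output should include a state field that reads READY once the connector is usable:

# Check the connector status (look for state: READY)
gcloud compute networks vpc-access connectors describe functions-connector \
    --region us-central1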

Before we can use our functions-connector, we have to grant the appropriate permissions to the Cloud Functions service account, so that our Cloud Functions will be able to connect to it.

# Grant Permissions 
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export PROJECT_NUMBER=$(gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)")

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member=serviceAccount:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
    --role=roles/viewer

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member=serviceAccount:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
    --role=roles/compute.networkUser
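
If you want to verify that both bindings landed on the service account, you can filter the project's IAM policy (a sketch; the output format may differ in your project):

# List the roles granted to the Cloud Functions service account
gcloud projects get-iam-policy $PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
# Expect roles/viewer and roles/compute.networkUser in the output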

OK, we have the connector and the permissions; let's configure our Cloud Function to use the connector.

# Configure the function to use the connector
gcloud functions deploy testIP \
    --runtime python37 \
    --entry-point test_ip \
    --trigger-http \
    --allow-unauthenticated \
    --vpc-connector functions-connector \
    --egress-settings all
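
To confirm the function actually picked up the connector, describing it should show the connector reference (assuming the field is exposed as vpcConnector, which is how the Cloud Functions API names it):

# Check which VPC connector the function is using
gcloud functions describe testIP --format="value(vpcConnector)"
# projects/your-project/locations/us-central1/connectors/functions-connector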

If you make a request to our Cloud Function now, you will see this message: "Error: could not handle the request". That's because our VPC doesn't have any route out to the internet.

In order for our function's traffic to reach the outside world, we have to:

  • Reserve a static IP.

  • Configure a Cloud Router to route our network traffic.

  • Create a Cloud NAT to allow our instances without an external IP to send outbound packets to the internet and receive the corresponding established inbound response packets (i.e. via our static IP).

# Reserve static IP
gcloud compute addresses create functions-static-ip \
    --region=us-central1

gcloud compute addresses list
# NAME                 ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION       SUBNET  STATUS
# functions-static-ip  34.72.171.164  EXTERNAL                    us-central1          RESERVED

We have our static IP! 34.72.171.164

# Create the Cloud Router
gcloud compute routers create my-router \
    --network my-vpc \
    --region us-central1

# Create the Cloud NAT
gcloud compute routers nats create my-cloud-nat-config \
    --router=my-router \
    --nat-external-ip-pool=functions-static-ip \
    --nat-all-subnet-ip-ranges \
    --enable-logging
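
Cloud NAT configurations are attached to the router, so one way to verify the setup is to describe the router and look for the nats section:

# The NAT config lives on the router; verify it is attached
gcloud compute routers describe my-router \
    --region us-central1
# the output should include a nats entry named my-cloud-nat-config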

Awesome! Now let's try our Cloud Function with a new request:

curl https://us-central1-your-project.cloudfunctions.net/testIP
# {"ip": "34.72.171.164"} (our static IP!)

Yay! Everything is working :) A little recap:

  1. We have deployed a simple Cloud Function (HTTP).

  2. Created a VPC to provide networking functionalities to our Cloud Function.

  3. Created a Serverless VPC Access connector to allow our Cloud Function to use VPC functionalities (like use IPs for example).

  4. Granted permissions to the Cloud Functions service account to use network resources.

  5. Configured the Cloud Function to use the Serverless VPC Access connector and redirect all outbound requests through the VPC.

  6. Reserved a static IP.

  7. Created a Cloud Router to route our network traffic.

  8. And finally, created a Cloud NAT to communicate with the outside world.

Hope this post helps you and let me know if you have any questions or recommendations.

Source code: https://github.com/AlvarDev/functions-static-ip


Discussion

 

Hi Alvaro, Thanks for writing this post!

I'm trying to set up an existing Cloud Function that I previously deployed with python38. Deploying without a VPC worked with no problem, but when trying to set up the VPC connector on deployment, I get the following error:

(gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k

I tried to use the GCP UI to create the VPC and hook it up to the function's egress settings, but I get the same error when deploying the changes. Would you know how to overcome this?

 

Turns out it's an issue with Python 3.8. I used Python 3.7 and didn't run into this issue anymore.

 

Hi Alex! good to know that you resolved the issue.

Yep, Python 3.8 is in beta on Google Cloud Functions; here is the documentation for the supported runtimes: cloud.google.com/functions/docs/co...

Thanks for sharing that info Alvaro, that would be helpful.

Would you know if there is any way to find out whether my requests to these functions will now default to IPv4 instead of IPv6?

In the documentation it says "VPC networks only support IPv4 unicast traffic"; does that mean it's always going to be IPv4 unless I switch to a global Load Balancer?
cloud.google.com/vpc/docs/vpc#spec...

Hi Alex, sorry for the late response, I've been busy these days.

Let me see if I understood your question: do you want to know if you can call a function via IPv6?

In the post we set an IP for egress traffic; for ingress traffic we still use the URL.

The Load Balancer is a layer that you can use to centralize your different resources behind a single IP address (IPv4 or IPv6).

The HTTP(S) Load Balancer supports Serverless compute (aka Cloud Functions).

Check this post if you are interested on that:
cloud.google.com/blog/products/net...

 

Great post Alvaro!

Was setting this up for firebase using the console, had some issues that Alvaro was awesome enough to help me with (appreciate it a lot!).

Firstly, just want to mention that this solution also works for those using firebase (you can find the functions in the GCP console in case you were unaware, I know many firebase developers are primarily frontend) console.cloud.google.com/functions/

Secondly, if you are experiencing issues when setting everything up in the Cloud Console rather than the terminal, note that GCP automatically adds a subnet which it wants you to fill in. You can remove it and create the VPC network and it will work; if you do not know how to configure the subnet there properly, that might be the issue.

 

We are a community! Here to help each other.

 

Hi,
I am running into a problem that I am unable to resolve; I hope you can assist me.

I have a setup in GCP where we use a VPN to connect to an on-premises machine. The VPN tunnel we created uses the 'Policy-based' routing type, so it has no option to set up a Cloud Router etc. This VPN is working fine, as we can access the remote machine from a GCP instance (in the same VPC). BUT we are unable to access the HTTP service (on the remote machine) from within a Cloud Function. We have verified that the Cloud Function can access an HTTP service deployed on a local GCP instance via the Serverless VPC connector, but accessing the remote service (over the VPN) is somehow not working. We have verified that the firewall is open and the Cloud Function has proper access, but somehow the function is unable to access the service over the VPN.
As you mentioned we need a NAT, so how do we add one for the Cloud Function without a Cloud Router, and send the traffic to the VPN?

 

I've been struggling all day and I read this post and solved it in 30 minutes. Thank you so much.

 

Awesome! happy to help

 

firebase-functions version 3.11 has added an option to configure and deploy connectors. It is now possible to manage and deploy them more conveniently.

 

Good news! thanks for sharing