I always recommend going to the documentation first. However, some concepts are not equally clear to everyone; in this case, networking concepts for developers.
When we hear "serverless" we forget almost everything about DevOps, networking, memory and so on, and just worry about the code, and that's OK.
But now we have a requirement: the client API only accepts requests from a whitelisted IP.
This is the diagram for a 'traditional' architecture:
Cloud Functions will cover most of the architecture, but as you can see, some resources from a 'traditional' architecture are still necessary to achieve our objective, and that's where I want to help.
Create a Simple HTTP Function
main.py
# This function returns the IP address used for egress
import json

import requests

def test_ip(request):
    # api.ipify.org echoes back the caller's public IP
    result = requests.get("https://api.ipify.org?format=json", timeout=10)
    return json.dumps(result.json())
deploy
gcloud functions deploy testIP \
--runtime python37 \
--entry-point test_ip \
--trigger-http \
--allow-unauthenticated
test
curl https://us-central1-your-project.cloudfunctions.net/testIP
# {"ip": "35.203.245.150"} (ephemeral: it can change at any time)
Networking
So this is the part that drives some devs crazy: we're going to use a VPC (Virtual Private Cloud), which provides networking functionality to our cloud-based services, in this case our Cloud Function.
VPC networks do not have any IP address ranges associated with them. IP ranges are defined for the subnets.
# Create VPC
gcloud services enable compute.googleapis.com
gcloud compute networks create my-vpc \
--subnet-mode=custom \
--bgp-routing-mode=regional
Then we have to create a Serverless VPC Access connector, which allows Cloud Functions (and other serverless resources) to connect to a VPC.
# Create a Serverless VPC Access connector
gcloud services enable vpcaccess.googleapis.com
gcloud compute networks vpc-access connectors create functions-connector \
--network my-vpc \
--region us-central1 \
--range 10.8.0.0/28
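The connector needs its own /28 range that does not overlap any subnet already defined in the VPC. A quick sanity check with Python's ipaddress module (the "existing" subnet below is a hypothetical example, not part of the setup above):

```python
import ipaddress

# The connector range from the command above
connector = ipaddress.ip_network("10.8.0.0/28")

# Serverless VPC Access connectors require a /28 range (16 addresses)
assert connector.prefixlen == 28
assert connector.num_addresses == 16

# The range must not overlap any subnet already defined in the VPC;
# this "existing" subnet is for illustration only
existing = ipaddress.ip_network("10.0.0.0/24")
assert not connector.overlaps(existing)
```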
Before we can use our functions-connector, we have to grant the appropriate permissions to the Cloud Functions service account, so that Cloud Functions will be able to connect to our functions-connector.
# Grant Permissions
export PROJECT_ID=$(gcloud config list --format 'value(core.project)')
export PROJECT_NUMBER=$(gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)")
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
--role=roles/viewer
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
--role=roles/compute.networkUser
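The member string granted above follows the naming pattern of the Cloud Functions service agent, which is derived from the project number. A small sketch of how that string is built (the project number here is a made-up placeholder; yours comes from the gcloud command above):

```python
# Hypothetical project number, for illustration only
project_number = "123456789012"

# The Cloud Functions service agent follows this naming pattern
member = (
    f"serviceAccount:service-{project_number}"
    "@gcf-admin-robot.iam.gserviceaccount.com"
)
assert member.startswith("serviceAccount:service-")
assert member.endswith("@gcf-admin-robot.iam.gserviceaccount.com")
```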
OK, we have the connector and the permissions; let's configure our Cloud Function to use the connector.
# Configure the connector
gcloud functions deploy testIP \
--runtime python37 \
--entry-point test_ip \
--trigger-http \
--allow-unauthenticated \
--vpc-connector functions-connector \
--egress-settings all
If you make a request to our Cloud Function now, you will see this message: "Error: could not handle the request". That's because our VPC doesn't have any route out to the internet.
In order to reach the outside world, we have to:
Reserve a static IP.
Configure a Cloud Router to route our network traffic.
Create a Cloud NAT to allow our instances without external IPs to send outbound packets to the internet and receive the corresponding established inbound response packets (i.e., via the static IP).
# Reserve static IP
gcloud compute addresses create functions-static-ip \
--region=us-central1
gcloud compute addresses list
# NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
# functions-static-ip 34.72.171.164 EXTERNAL us-central1 RESERVED
We have our static IP! 34.72.171.164
# Creating the Cloud Router
gcloud compute routers create my-router \
--network my-vpc \
--region us-central1
# Creating the Cloud NAT
gcloud compute routers nats create my-cloud-nat-config \
--router=my-router \
--nat-external-ip-pool=functions-static-ip \
--nat-all-subnet-ip-ranges \
--enable-logging
Awesome! Now let's try our Cloud Function with a new request:
curl https://us-central1-your-project.cloudfunctions.net/testIP
# {"ip": "34.72.171.164"} (our static IP!)
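To double-check programmatically, you can compare the IP the function reports against the reserved address. An offline sketch, assuming the response body shown above:

```python
import json

# Response body returned by testIP after the NAT setup (from the curl above)
response_body = '{"ip": "34.72.171.164"}'

# The reserved static IP from `gcloud compute addresses list`
RESERVED_IP = "34.72.171.164"

# Egress now goes through Cloud NAT, so the two should match
egress_ip = json.loads(response_body)["ip"]
assert egress_ip == RESERVED_IP
```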
Yay! Everything is working :) A little recap:
We have deployed a simple Cloud Function (HTTP).
Created a VPC to provide networking functionalities to our Cloud Function.
Created a Serverless VPC Access connector to allow our Cloud Function to use VPC functionalities (like use IPs for example).
Granted permissions to the Cloud Functions service account to use network resources.
Configured the Cloud Function to use the Serverless VPC Access connector and redirect all outbound requests through the VPC.
Reserved a static IP.
Created a Cloud Router to route our network traffic.
And finally, created a Cloud NAT to communicate with the outside world.
Hope this post helps you and let me know if you have any questions or recommendations.
Source code: https://github.com/AlvarDev/functions-static-ip
Top comments (28)
Thanks for the great guide.
Seems that the NAT creation requires a --router-region parameter now. After adding it, it works.
Wow thanks for the update!
I didn't know about that flag; it doesn't even appear in the reference docs lol xD
cloud.google.com/sdk/gcloud/refere...
I will check that flag (this weekend, I hope u.u)
Awesome, thanks!
By the way, I converted the commands to a terraform config, just in case anyone's interested.
Cheers :)
Hello! I'm trying to set up a Cloud Function to geocode with an API key that's restricted to an IP address whitelist. When I make a request to ipify or curlmyip, the public static IP is as expected per this tutorial. However, when I make requests to the Google Maps API, it claims that I am making a request from an unauthorized IP.
Further, I am supposedly making the request from an IPv6 address that varies per request. Any ideas why this might be happening? Might there be an exception in how this setup handles requests to Google APIs?
This is an interesting case. Do you have any (public) repository or a way to replicate this issue, just to save time? Otherwise, please tell me what language and docs you are using :)
About the IPv6, I have no idea hahaha. I mean, I've practically never 'used' IPv6 yet, so I'll have to ask some friends about it.
Not at the moment, as the repo and GCP project are owned by my employer/organization, sorry. I've used both Python 3.7 and 3.8 as my Cloud Function runtime. I've used both the requests and googlemaps Python libraries to make my requests, although I doubt choosing one over the other makes any difference. As for the docs, I've referred to both your article and the GCP docs, primarily cloud.google.com/functions/docs/ne....
Hi Alvaro, Thanks for writing this post!
I'm trying to set up my existing Cloud Function, which I previously deployed with python38. Deploying without a VPC worked with no problem, but when trying to set up the VPC connector on deployment, I get the following error:
(gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
I tried to use the GCP UI to create the VPC and hook it up to the function's egress settings, but I get the same error when deploying the changes. Would you know how to overcome this?
Turns out it's an issue with Python 3.8. I used Python 3.7 and didn't run into the issue anymore.
Hi Alex! good to know that you resolved the issue.
Yep, Python 3.8 is in beta on Google Cloud Functions; here is the documentation for the supported runtimes: cloud.google.com/functions/docs/co...
Thanks for sharing that info Alvaro, that would be helpful.
Would you know if there is any way to find out whether my requests to these functions will now default to IPv4 instead of IPv6?
The documentation says "VPC networks only support IPv4 unicast traffic". Does that mean it's always going to be IPv4 unless I switch to a global Load Balancer?
cloud.google.com/vpc/docs/vpc#spec...
Hi Alex, sorry for the late response, I've been busy these days.
Let me see if I understood your question: do you want to know if you can call a function via IPv6?
In the post we set an IP for egress traffic; for ingress traffic we still use the URL.
The Load Balancer is a layer you can use to centralize your different resources behind a single IP address (IPv4 or IPv6).
The HTTP(S) Load Balancer supports serverless compute (including Cloud Functions).
Check this post if you are interested in that:
cloud.google.com/blog/products/net...
Hi,
I am running into a problem I'm unable to resolve; I hope you can assist me.
I have a setup in GCP where we use a VPN to connect to an on-premises machine. The VPN tunnel we created uses the 'Policy-based' routing type, so it has no option to set up a Cloud Router, etc. The VPN is working fine, as we can access the remote machine from within a GCP instance (in the same VPC). BUT we are unable to access the HTTP service (on the remote machine) from within a Cloud Function. We have verified that the Cloud Function can access an HTTP service deployed on a local GCP instance via the Serverless VPC connector, but accessing the remote service (over the VPN) is somehow not working. We have verified that the firewall is open and the Cloud Function has proper access, but the function is still unable to reach the service over the VPN.
As you mentioned, we need a NAT, so how do I add one for a Cloud Function without a Cloud Router, to send the traffic to the VPN?
So sorry nona-evva =(
I was too busy these weeks, did you resolve it?
Thanks for the article, Alvaro!
Please advise: according to the GC docs, static IPs used by Cloud NAT are free of charge, aren't they?
GC docs link, see External IP address pricing
It looks like a VPC Access connector will cost at least a couple dozen dollars a month, though. Will any other part of the solution cost as well?
Hi Yuri! Talking about pricing is a little bit complicated.
The architecture I used in the post is simplified; there are more pieces in a complete architecture (yes... the same answer u.u... it depends on your application's needs... again).
After you define your architecture, I can recommend:
Most of the resources have Pricing and Quota sections that give a good understanding of how that resource is billed.
Use the GCP Calculator; there you can get an estimate of the billing in your region (and currency, if available). Some resources have commitments, so it can be cheaper.
Hope it helps!
Alvaro, THANK YOU so much for doing this writeup. I've followed it to the letter, but still my function does not connect to the outside world. Can you help?
stackoverflow.com/questions/647115...
Hi daha2002!
Please follow back to enable a chat so we can check what is going on.
Great post Alvaro!
Was setting this up for Firebase using the console and had some issues that Alvaro was awesome enough to help me with (appreciate it a lot!).
Firstly, I just want to mention that this solution also works for those using Firebase (you can find the functions in the GCP console in case you were unaware; I know many Firebase developers are primarily frontend): console.cloud.google.com/functions/
Secondly, if you are experiencing issues when setting everything up in the cloud console rather than the terminal, note that GCP automatically adds a subnet which it wants you to fill in. You can remove it and create the VPC network, and it will work. If you do not know how to configure that subnet properly, that might be the issue.
We are a community, here to help each other!
I've been struggling all day and I read this post and solved it in 30 minutes. Thank you so much.
Awesome! happy to help
Hey Alvaro, I have set up a Cloud Function with all these steps, but the function is not responding (timeout). Please let me know what I'm missing.
Hi Shahzad Ali, sorry I didn't get the question, could you please give me more details? Thanks!
Thanks for responding. The Cloud NAT from the steps above is not working correctly after adding the VPC connector; the Cloud Function is not able to call any external service. Do you know where I'm making a mistake?
firebase-functions version 3.11 has added an option to configure and deploy connectors, so it is now possible to manage and deploy them more conveniently.
Good news! thanks for sharing