Dinko Osrecki

Deep Dive into Google Cloud SQL Connector for Node.js

Source: https://github.com/GoogleCloudPlatform/cloud-sql-nodejs-connector

Last week Google Cloud announced the general availability of Cloud SQL Node.js connector. This makes it the fourth language-specific connector after Java, Python, and Go. In this article, I will describe the features of the connector, show how it works under the hood, demonstrate how to use it in a Node.js application, and discuss the current support for the connector in the most popular Node.js database libraries.

Features

Cloud SQL Connector is an alternative, and now recommended, way of connecting to Google Cloud SQL instances from Node.js applications, instead of using the Cloud SQL Auth Proxy. It provides the following features, which do not differ from the ones provided by the proxy:

  1. IAM Authorization: Use IAM roles to define which user/service accounts can access your instances. Each account must have the Cloud SQL Client role for the instance.

  2. Secure Connection: The connector establishes a TLS v1.3 connection to your instance, and verifies the identities of both the client and the server. You do not need to manage TLS certificates or firewall rules.

  3. Automatic IAM DB Authentication: You can use the connector to seamlessly connect to your instances that use IAM database users. Each account must additionally have the Cloud SQL Instance User role for the instance.

It is important to note that Cloud SQL Connector does not provide a connection path to your instance. If your instance is on a VPC network behind a private IP, your application workloads must already have access to that VPC network.

How It Works

To start investigating how the connector works, let’s create a simple Cloud SQL instance with default settings (using public IP connectivity) in Google Cloud Console. It doesn’t matter which database engine we choose, since the connector works with all of them. For this article, I will use PostgreSQL.

Port 3307

Let’s start by scanning the ports of our Cloud SQL instance using Nmap. We will scan standard ports for MySQL, PostgreSQL, and SQL Server.

> nmap -Pn -p 1433,1434,3306,5432 $CLOUD_SQL_INSTANCE_PRIMARY_IP

PORT        STATE       SERVICE
1433/tcp    filtered    ms-sql-s
1434/tcp    filtered    ms-sql-m
3306/tcp    filtered    mysql
5432/tcp    filtered    postgresql

Standard database server ports are in a filtered state, which means that a firewall is blocking access. To connect to the Cloud SQL instance on these ports, we need to add our public IP address to the instance's authorized networks.

Find out your public IP address, and add it to your Cloud SQL instance’s authorized networks in Google Cloud Console. After that, scan the ports again.
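If you prefer the CLI to the console, the same can be done with gcloud. This is a sketch; my-instance is a placeholder, and the IP lookup service is an arbitrary choice:

```shell
# Find your public IP address (any "what is my IP" service works).
MY_IP=$(curl -s https://ifconfig.me)

# Add it to the instance's authorized networks. Note: this flag replaces
# the whole list, so include any networks you want to keep.
gcloud sql instances patch my-instance \
  --authorized-networks="${MY_IP}/32"
```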

> nmap -Pn -p 1433,1434,3306,5432 $CLOUD_SQL_INSTANCE_PRIMARY_IP

PORT        STATE       SERVICE
1433/tcp    filtered    ms-sql-s
1434/tcp    filtered    ms-sql-m
3306/tcp    filtered    mysql
5432/tcp    open        postgresql

Now, port 5432 is in an open state, which means that we can connect to the Cloud SQL instance using a standard PostgreSQL client.

However, as I already mentioned, one of the benefits of using a Cloud SQL Connector, is that we do not have to manage firewall rules. This is because the connector uses a special port to connect to the Cloud SQL instance - port 3307. Let’s scan this port.

> nmap -Pn -p 3307 $CLOUD_SQL_INSTANCE_PRIMARY_IP

PORT        STATE    SERVICE
3307/tcp    open     opsession-prxy

Port 3307 is always in an open state, which means that the server is accepting connections on that port without the need to maintain any firewall rules. Try removing your public IP address from the authorized networks, and you will still see port 3307 as open.

Ephemeral Certificates

Google Cloud SQL instance runs a proxy service on port 3307. This proxy service is responsible for establishing a secure connection to the database server. Google Cloud SQL Connector communicates with the proxy service using a TLS v1.3 connection. This connection is secured using an ephemeral X.509 certificate, which is generated using the Cloud SQL Admin API.

For this to work, you must enable the Cloud SQL Admin API in your Google Cloud project, and the service account that you use to connect to the Cloud SQL instance must have the Cloud SQL Client role.

Internally, the connector first generates an RSA-2048 private/public key pair. It then sends the public key to the Cloud SQL Admin API, which generates an ephemeral certificate that is signed by the private key that belongs to the Cloud SQL instance. This certificate, together with the generated private key, is then used to create a TLSSocket (duplex stream) to the primary IP address of the Cloud SQL instance on port 3307.
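The key-pair step can be illustrated with Node's built-in crypto module. This is a sketch of the idea, not the connector's actual code:

```javascript
import { generateKeyPairSync } from 'node:crypto'

// Generate an RSA-2048 key pair, as the connector does internally.
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
})

// The PEM-encoded public key is what gets sent to the Cloud SQL Admin API
// when requesting an ephemeral certificate; the private key never leaves
// the client.
console.log(publicKey.split('\n')[0]) // -----BEGIN PUBLIC KEY-----
```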

Ephemeral certificates are short-lived, and they expire after 1 hour. The connector automatically renews the certificate before it expires, as well as when there are connection errors.

If you use IAM database authentication, the connector will additionally obtain the access token, which will be included in the signed ephemeral certificate. For this to work, the service account that you use to connect to the Cloud SQL instance must additionally have the Cloud SQL Instance User role.

Usage

Since the connector already takes care of establishing a secure connection to the Cloud SQL instance, there must be a way for us to pass that connection to the database driver, so that it can use it instead of creating a new one. In fact, we will not be passing an already established connection, but a connection builder which can be used to maintain connection pools by the underlying database drivers.

The Cloud SQL Connector class exposes a getOptions method, which returns an object containing the connection builder: { stream: () => tls.TLSSocket }. The pg driver uses this builder to create new connections to the Cloud SQL instance. Similar implementations are available for mysql2 and tedious.

Let’s take a look at a simple example of using the connector with the pg driver to connect to our Cloud SQL instance over public IP, using built-in password authentication.

import { AuthTypes, Connector, IpAddressTypes } from '@google-cloud/cloud-sql-connector'
import pg from 'pg'

const connector = new Connector()
const options = await connector.getOptions({
  instanceConnectionName: 'my-project:region:my-instance',
  ipType: IpAddressTypes.PUBLIC,
  authType: AuthTypes.PASSWORD
})

const pool = new pg.Pool({
  user: 'my-user',
  password: 'my-password',
  database: 'my-database',
  max: 5,
  ...options
})

const { rows } = await pool.query('SELECT true AS is_connected')
console.table(rows)

await pool.end()
connector.close()

The code is pretty straightforward. We call the connector with our SQL instance connection name, specifying that we want to use public IP connectivity (instead of private IP) and password authentication (instead of IAM authentication). We then pass the stream builder into the pg Pool constructor so that it can be used whenever a new connection needs to be acquired. If we used IAM authentication, we would set authType to AuthTypes.IAM and omit the password option.

Double Encryption?!

Depending on the implementation, there is a possibility that the underlying database driver opens another TLS connection over the existing TLS connection that was established by the connector, which would have a certain performance impact.

It is therefore beneficial to disable such behavior when possible. For example, the tedious driver accepts an { encrypt: boolean = true } option, which should be set to false when using the connector. The Cloud SQL Connector class exposes an additional getTediousOptions method, which works like getOptions but also includes { encrypt: false } in its result, so that everything can be passed to the tedious driver together.

Keep in mind that in this case, you must uncheck the Allow only SSL connections option for your Cloud SQL for SQL Server instance. Otherwise, tedious will discover during the pre-login handshake that the server requires a TLS connection even though we have set encrypt = false, and an error will be thrown. There is an open issue for this in the Cloud SQL Connector repository. This is not an issue for PostgreSQL and MySQL instances, so you can keep the flag checked for them.
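Unchecking that option can also be done from the CLI. As an assumption about your gcloud version: older releases expose this as a --no-require-ssl flag, while newer ones use --ssl-mode instead (my-instance is a placeholder):

```shell
# Older gcloud releases:
gcloud sql instances patch my-instance --no-require-ssl

# Newer gcloud releases:
gcloud sql instances patch my-instance \
  --ssl-mode=ALLOW_UNENCRYPTED_AND_ENCRYPTED
```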

Let's take a look at how these options are passed into the mssql driver in the following code snippet:

import { Connector, IpAddressTypes } from '@google-cloud/cloud-sql-connector'
import mssql from 'mssql'

const connector = new Connector()
const options = await connector.getTediousOptions({
  instanceConnectionName: 'my-project:region:my-instance',
  ipType: IpAddressTypes.PUBLIC
})

const pool = await mssql.connect({
  server: 'REUQIRED-BUT-UNUSED',
  user: 'my-user',
  password: 'my-password',
  database: 'my-database',
  pool: {
    max: 5,
  },
  options
})

const { recordset } = await mssql.query`SELECT 'True' AS connected`
console.table(recordset)

await pool.close()
connector.close()

SQL Server does not support IAM database authentication, so we can only use built-in password authentication. Notice that we also have to pass a dummy server property when creating a connection pool. This is due to a bug in the tedious driver, which always requires the server property, even though it is not used when a custom connection builder is provided.

Supported Libraries

Cloud SQL Connector is supported in all major Node.js database drivers (pg, mysql2, and tedious), starting with the versions listed in the table below.

Database driver versions that introduced support for Cloud SQL Connector

I also investigated the support for the connector in the most popular high-level database libraries. The connector is mostly supported in the latest versions of these libraries, with some caveats. You can find examples of using the connector with all these libraries in my GitHub repository.

Database libraries’ support for Cloud SQL Connector

typeorm officially supports mssql@v9, but the support for the custom stream builder was added in mssql@v10. Since mssql is a peer dependency of typeorm, you can force override it and use the Cloud SQL Connector with typeorm. There is an open PR to add support for mssql@v10 in typeorm.
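A hypothetical package.json fragment forcing mssql@v10 under typeorm might look like this (assuming npm ≥ 8.3, which introduced the overrides field; yarn users would use resolutions instead):

```json
{
  "overrides": {
    "typeorm": {
      "mssql": "^10.0.0"
    }
  }
}
```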

prisma does not support custom connection builders; it only accepts connection strings. Therefore, you need a workaround if you wish to use the Cloud SQL Connector with prisma: create a local TCP proxy that forwards traffic to the stream created by the Cloud SQL Connector, and pass prisma a connection string that points to the local proxy.
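The idea behind that workaround can be sketched with Node's built-in net module. Here createLocalProxy is a hypothetical helper, and the PassThrough echo stream stands in for the TLSSocket that the connector's options.stream builder would return:

```javascript
import net from 'node:net'
import { once } from 'node:events'
import { PassThrough } from 'node:stream'

// Hypothetical helper: a local TCP proxy that pipes each client
// connection into a fresh duplex stream from `streamFactory`
// (with the real connector, that factory would be `options.stream`).
function createLocalProxy(streamFactory) {
  return net.createServer((client) => {
    const upstream = streamFactory()
    client.pipe(upstream).pipe(client)
  })
}

// Demo with an echo stream instead of a real Cloud SQL connection.
const proxy = createLocalProxy(() => new PassThrough())
proxy.listen(0) // random free port
await once(proxy, 'listening')

// A client (prisma, via its connection string) talks to the local port.
const client = net.connect(proxy.address().port)
client.end('ping')
const [reply] = await once(client, 'data')
proxy.close()

console.log(reply.toString()) // ping
```

With the real connector you would listen on a fixed local port and point prisma's connection string at 127.0.0.1 on that port.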

Let’s take a look at using the connector with one of the most popular high-level database libraries — knex:

import { AuthTypes, Connector, IpAddressTypes } from '@google-cloud/cloud-sql-connector'
import knex from 'knex'

const connector = new Connector()
const options = await connector.getOptions({
  instanceConnectionName: 'my-project:region:my-instance',
  ipType: IpAddressTypes.PUBLIC,
  authType: AuthTypes.PASSWORD
})

const database = knex({
  client: 'pg',
  connection: {
    user: 'my-user',
    password: 'my-password',
    database: 'my-database',
    ...options
  },
})

There is nothing special going on there — the code snippet is very similar to the one where we used the connector with the pg driver directly.

Conclusion

Cloud SQL Node.js Connector is a great alternative to the Cloud SQL Auth Proxy. It provides similar features but without the need to run a separate Auth Proxy process (VM instance, Kubernetes Deployment, Kubernetes Pod sidecar, …​), which can be very beneficial if you run your application workloads in a serverless environment (e.g. Cloud Run).

The connector is supported in all Node.js database drivers and the most popular high-level database libraries. However, if you are using prisma, you most likely want to stay with the Auth Proxy for now, since the workaround to use the connector with prisma is not very elegant.

For examples of using the connector with all the aforementioned database drivers and high-level libraries, check out my GitHub repository.
