After a week that reminded everyone how much trust sits in every extra layer, here's the production pattern we use to keep Cloud Run + Cloud SQL auth simpler — and the 1-hour IAM token trap most examples miss.
This week was a good reminder that infrastructure is never just infrastructure.
Vercel disclosed a security incident that it said originated from a compromised third-party AI tool and ultimately exposed access to some internal environments and a limited subset of customer-related data. At the same time, the broader developer tooling world is debating where "public by design" ends and security expectations begin.
That is exactly why boring decisions matter.
If you are using Cloud SQL from Cloud Run in Node.js, most examples push you toward adding more app-layer machinery. Sometimes that is the right call. But if you are already on Cloud Run, already mounting the Cloud SQL socket, and already using Postgres.js, you may not need to add the connector package inside your app at all.
That is the pattern we use in production at Dropfile.
The important caveat: this is manual IAM database auth, not Google's recommended automatic connector-based flow. That distinction matters because IAM DB auth uses short-lived OAuth2 access tokens as the database password. Those tokens expire after one hour, and that is exactly where many examples quietly break.
This post shows the leaner pattern, where it fits, and the token-refresh detail you need to get right if you do it manually.
Why this matters now
Recent incidents across developer tooling have made one thing clearer: every extra layer in your stack is also an extra trust boundary.
This article is not arguing that connector packages are bad. It is arguing that if your platform already provides the plumbing you need, adding another dependency to app code should be a deliberate choice — not just something copied from the nearest tutorial.
The setup
- Cloud Run service with `--add-cloudsql-instances`
- Postgres.js (`postgres`, not `pg`)
- Drizzle ORM on top
- IAM database auth, with the service account as the DB user
- No static password in production
To be clear: we are not removing Cloud Run's Cloud SQL integration. We are just not duplicating that plumbing inside the Node app when Cloud Run is already doing the job.
The gotcha everyone hits
Most examples fetch the OAuth2 access token once at startup and pass it as password: token.
That works fine until your process has been warm for over an hour. The token expires, and every new connection starts failing. Existing connections may survive a bit longer. New ones are where the failures show up first.
The examples weren't wrong. They were just incomplete for long-lived processes.
The fix: pass a function, not a string
Postgres.js lets password be an async function. It gets resolved when a new connection is created, not once at boot.
On Cloud Run, the GCP metadata server is always available, so the token fetch is tiny:
```typescript
async function fetchIAMToken(): Promise<string> {
  const res = await fetch(
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token",
    { headers: { "Metadata-Flavor": "Google" } }
  );
  if (!res.ok) {
    throw new Error(`Metadata token fetch failed: ${res.status}`);
  }
  const { access_token } = await res.json();
  return access_token;
}
```
Then wire up the client:
```typescript
import postgres from "postgres";

const client = postgres({
  host: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`,
  user: process.env.DB_USER, // e.g. my-sa@my-project.iam
  database: process.env.DB_NAME,
  password: fetchIAMToken, // function reference, not fetchIAMToken()
  max: 7,
  idle_timeout: 20,
  connect_timeout: 10,
  max_lifetime: 45 * 60, // recycle connections before the 1-hour token expiry
});
```
Two details matter:
- `password` is the function reference, not `fetchIAMToken()`
- `max_lifetime: 45 * 60` recycles pooled connections before they drift too close to the 1-hour token expiry
That combination is the whole trick.
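To make the contract concrete, here is a toy model of what the pool does with `password` — this is not Postgres.js internals, just the string-or-function behavior the docs describe, with `resolvePassword` as our illustrative name:

```typescript
// Toy model of the pool's contract: `password` may be a fixed string or
// an async function. A string is the same forever; a function is invoked
// each time a new connection is opened, so it can return a fresh token.
type Password = string | (() => Promise<string>);

async function resolvePassword(password: Password): Promise<string> {
  return typeof password === "function" ? password() : password;
}
```

A static string resolves to the same value on every connection, which is exactly why the fetch-once-at-startup pattern dies after an hour; a function gets a fresh token each time.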
Local dev fallback
```typescript
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import * as schema from "./schema";

function createClient() {
  if (process.env.INSTANCE_CONNECTION_NAME) {
    return drizzle(
      postgres({
        host: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`,
        user: process.env.DB_USER,
        database: process.env.DB_NAME,
        password: fetchIAMToken,
        max: 7,
        idle_timeout: 20,
        connect_timeout: 10,
        max_lifetime: 45 * 60,
      }),
      { schema }
    );
  }
  return drizzle(postgres(process.env.DATABASE_URL!), { schema });
}

export const db = createClient();
```
Production uses IAM auth over the mounted socket. Local dev uses `DATABASE_URL`. Nothing fancy.
Deploy command
```shell
gcloud run deploy my-service \
  --add-cloudsql-instances "my-project:us-central1:my-instance" \
  --service-account "my-sa@my-project.iam.gserviceaccount.com" \
  --set-env-vars "INSTANCE_CONNECTION_NAME=my-project:us-central1:my-instance,DB_USER=my-sa@my-project.iam,DB_NAME=mydb"
```
Notes:
- `--add-cloudsql-instances` mounts the Unix socket path
- `DB_USER` for an IAM service-account user is the service account email without `.gserviceaccount.com`
- The Cloud Run service account and the IAM DB user should be the same identity
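That `DB_USER` rule is mechanical enough to encode instead of hand-editing env vars. A sketch; `toIamDbUser` is our hypothetical helper, not a Google SDK function:

```typescript
// For an IAM service-account database user, Cloud SQL expects the
// service account email with the ".gserviceaccount.com" suffix dropped.
function toIamDbUser(serviceAccountEmail: string): string {
  return serviceAccountEmail.replace(/\.gserviceaccount\.com$/, "");
}
```

So `my-sa@my-project.iam.gserviceaccount.com` becomes `my-sa@my-project.iam`, matching the `DB_USER` in the deploy command above.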
What you need in GCP
- Cloud SQL instance with IAM database authentication enabled (the `cloudsql.iam_authentication` flag)
- Service account added as a Cloud SQL IAM service-account user
- Service account granted both `roles/cloudsql.client` and `roles/cloudsql.instanceUser`
- Normal PostgreSQL grants for that user, because IAM login and schema access are separate
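That last point trips people up: IAM gets you logged in, but the new user owns nothing. The exact grants depend entirely on your schema; a minimal read/write sketch, assuming the `public` schema and the service-account user from above:

```sql
-- IAM handles authentication; these grants handle authorization.
-- Adjust schema and privileges to your actual application needs.
GRANT USAGE ON SCHEMA public TO "my-sa@my-project.iam";
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "my-sa@my-project.iam";
```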
Why we like this approach
- No static DB password in production
- No extra package in the app
- Token refresh is tied to connection creation, not process startup
- Works cleanly with Postgres.js and Drizzle
It's not the most abstracted setup.
That's exactly why we like it.
The real takeaway is bigger than Cloud SQL.
In 2026, a lot of teams are rediscovering the cost of hidden trust boundaries. If your platform already gives you the secure transport layer, it is worth asking whether your app really needs another package, another auth path, and another thing to maintain. Sometimes the better production pattern is not more abstraction. It is fewer moving parts, with the sharp edges understood.
Curious if anyone else is doing Cloud SQL IAM auth this way, or if you stuck with the connector package.