Auth0 has 14 cipher suites scheduled for removal at the end of H1 2026. If any of your clients — your backend, a reverse proxy, an older mobile binary — negotiates one of them today, your next Auth0 call after the cutoff will fail with handshake_failure and no further explanation.
There's no JSON error body. No HTTP status code. No entry in the Auth0 tenant log, because the connection never gets far enough to hit the application layer. The client just gets a TLS alert from the edge and a stack trace that points at your HTTP library, not at Auth0.
That's a bad failure mode to debug under pressure, so it's worth checking now.
The exact suites being removed
Auth0's support article lists 14 CBC-mode and RSA-keyed suites as deprecated:
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
Two flavors to notice: all CBC-mode AES suites regardless of key exchange, plus every static-RSA suite even when it's using GCM. GCM alone isn't enough — the RSA key exchange is the problem, because it provides no forward secrecy.
The Canadian region (CA-1) already removed these suites. If you have traffic split across regions and something works in US/EU but not CA, this is a strong lead.
What the failure looks like in your stack
The observable symptom depends on your HTTP client. The common shapes:
Node (undici / node-fetch):
Error: write EPROTO ... SSL alert number 40 ... handshake failure
Python (requests / urllib3):
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure
Go (net/http):
remote error: tls: handshake failure
Java (Apache HttpClient):
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
None of those mention Auth0. None include a cipher name. None suggest a fix. A developer Googling the exact error string mostly lands on Stack Overflow answers about self-signed certificates and clock skew, which is the wrong diagnosis.
The actually useful signal is which endpoint fails. If every call to *.auth0.com or your tenant's custom login domain starts alerting at the same moment, TLS negotiation is the story.
How to check whether you're affected
The handshake is a client-side concern, so you check at the client. Three approaches, cheapest first.
1. Ask OpenSSL directly
Pick one of the deprecated suites and force the handshake against your tenant. If the handshake succeeds, Auth0's edge still accepts that suite today and your OpenSSL build can still offer it — any client limited to these suites will break at the cutoff:
openssl s_client \
-connect YOUR-TENANT.auth0.com:443 \
-tls1_2 \
-cipher 'ECDHE-RSA-AES128-SHA' \
< /dev/null 2>&1 | grep -E 'Cipher|Verify'
If you see Cipher : ECDHE-RSA-AES128-SHA in the output, your OpenSSL build negotiates a deprecated suite and nothing stopped it. Run the same probe from the environment your code actually runs in — a container, a serverless function, a legacy VM — not just your laptop.
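If you'd rather sweep all fourteen suites in one pass than run that probe by hand, the same check can be scripted. A minimal sketch in Python's stdlib, under two assumptions: YOUR-TENANT is a placeholder for your tenant or custom domain, and your local OpenSSL still has these suites compiled in (the names below are the OpenSSL-style equivalents of the IANA names above):

```python
import socket
import ssl

# OpenSSL-style names for the 14 deprecated suites listed above
DEPRECATED = [
    "ECDHE-ECDSA-AES128-SHA", "ECDHE-ECDSA-AES256-SHA",
    "ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES256-SHA384",
    "ECDHE-RSA-AES128-SHA", "ECDHE-RSA-AES256-SHA",
    "ECDHE-RSA-AES128-SHA256", "ECDHE-RSA-AES256-SHA384",
    "AES128-GCM-SHA256", "AES128-SHA", "AES256-GCM-SHA384",
    "AES256-SHA", "AES128-SHA256", "AES256-SHA256",
]

HOST = "YOUR-TENANT.auth0.com"  # placeholder

for name in DEPRECATED:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_default_certs()
    # Pin TLS 1.2 so a TLS 1.3 handshake can't mask the result
    ctx.minimum_version = ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    try:
        ctx.set_ciphers(name)  # raises if the local OpenSSL no longer ships this suite
    except ssl.SSLError:
        print(f"{name:32} not available in this OpenSSL build")
        continue
    try:
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{name:32} still negotiated ({tls.version()})")
    except (ssl.SSLError, OSError):
        print(f"{name:32} rejected by the server")
```

Any "still negotiated" line is a suite the edge accepts today and will refuse after the cutoff.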
2. Check what your runtime offers
The handshake is a negotiation between what your client offers and what Auth0 accepts. If your client's offered list includes modern suites, you're fine; the removal only bites when the client has only deprecated options on the table.
Old Android devices, Java 8 builds without recent TLS patches, Alpine containers with stripped OpenSSL, and vendored binaries from 2019-era SDKs are the usual culprits.
# Node
node -e 'console.log(require("tls").DEFAULT_CIPHERS)'
# Python
python3 -c "import ssl; ctx=ssl.create_default_context(); print(ctx.get_ciphers())"
# Java
# keytool only lists certificates, not cipher suites; print the JVM's own list instead,
# e.g. javax.net.ssl.SSLContext.getDefault().getDefaultSSLParameters().getCipherSuites()
Look for whether modern suites — TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-GCM-SHA256 — are in the list. If they are, you keep working after the cutoff. If only the deprecated suites are present, you break.
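If you want the Python one-liner above to give a pass/fail answer rather than a wall of suite names, here's a short sketch; MODERN is an illustrative subset, not an exhaustive list of what survives the cutoff:

```python
import ssl

# Illustrative post-cutoff suites: TLS 1.3 names plus OpenSSL-style TLS 1.2 names
MODERN = {
    "TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256",
    "ECDHE-ECDSA-AES128-GCM-SHA256", "ECDHE-RSA-AES128-GCM-SHA256",
}

offered = {c["name"] for c in ssl.create_default_context().get_ciphers()}
survivors = sorted(MODERN & offered)
print("modern suites offered:", survivors or "NONE (this runtime breaks at the cutoff)")
```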
3. Point staging at the Canadian region
Auth0's CA-1 region has already removed the weak suites. If you can temporarily route staging traffic at a Canadian tenant (or any service that's enforced the same restrictions), anything that fails there is what would have failed post-cutoff in your primary region. This is the closest thing to a dress rehearsal.
What to fix
For most callers the fix is a runtime upgrade, not a code change.
- Node 18+, Python 3.10+, Go 1.17+, Java 11+ all ship with acceptable defaults. If you're on those, you're probably fine — verify with the probes above.
- OpenSSL 1.1.1+ on Linux, LibreSSL on macOS 12+, or Schannel on Windows Server 2022+ are the modern baselines. Older TLS stacks need rebuilding or replacing.
- Self-managed certificate deployments (Auth0's custom domain setup with your own cert) are the most exposed — you control the termination layer and therefore the negotiated cipher. Anything in front of that path (nginx, Envoy, a CDN) is also yours to audit.
- Reverse proxies in your path must also negotiate modern suites. A proxy with TLS 1.2 and old suites hard-coded will break exactly the same way, even if the process behind it is on a current runtime.
If you can't upgrade — there's always a legacy binary someone can't rebuild — the fallback is to pin an explicit cipher list that includes at least one GCM or ChaCha20 suite both sides support. That buys you time, not permanence: forward secrecy is on track to become a floor requirement, and static-RSA key exchange can never provide it.
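What pinning looks like depends on the stack; here's a sketch with Python's stdlib, assuming the legacy side can still speak at least one ECDHE+GCM or ChaCha20 suite. The suite string and the discovery URL are illustrative, and YOUR-TENANT is a placeholder:

```python
import ssl
import urllib.request

# Pin a short, forward-secret cipher list (OpenSSL-style names); adjust to
# whatever both your client and Auth0's edge actually support.
ctx = ssl.create_default_context()
ctx.set_ciphers(
    "ECDHE-ECDSA-AES128-GCM-SHA256:"
    "ECDHE-RSA-AES128-GCM-SHA256:"
    "ECDHE-RSA-CHACHA20-POLY1305"
)

# Any tenant endpoint works as a smoke test; OIDC discovery is a cheap one.
url = "https://YOUR-TENANT.auth0.com/.well-known/openid-configuration"
with urllib.request.urlopen(url, context=ctx) as resp:
    print(resp.status, resp.headers.get("content-type"))
```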
The broader pattern
Vendor-side TLS changes are a specific case of the general schema-drift problem: the vendor updates their endpoint, your code doesn't change, and the contract between them silently shifts.
The ways this usually bites:
- No status code to assert on. Integration tests that check response.status === 200 can't fire for a handshake that never completes.
- Not in the vendor's tenant log. Auth0's dashboard shows nothing because the request never reached the application layer.
- Not in your APM. Most APMs start the span at the HTTP request; a TLS-layer failure looks like a one-line network error at best.
The way to catch this before your users do is to make a small, boring background check against each third-party endpoint you depend on — not just that it returns 200, but that the connection itself is healthy from your runtime. A synthetic handshake from inside your infrastructure is cheap, runs on a schedule, and produces an alert that points at the right vendor instead of at a generic network error.
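A sketch of that check in Python's stdlib: it exercises the handshake itself rather than an HTTP status, and exits non-zero so whatever scheduler runs it can alert. The tenant hostname is a placeholder:

```python
import socket
import ssl
import sys

HOST = "YOUR-TENANT.auth0.com"  # placeholder: your tenant or custom domain

def tls_healthcheck(host: str, port: int = 443, timeout: float = 10.0) -> str:
    """Complete a default-configuration TLS handshake and report what was negotiated."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, tls_version, _ = tls.cipher()
            return f"{host}: {tls_version} {cipher_name}"

if __name__ == "__main__":
    try:
        print("ok:", tls_healthcheck(HOST))
    except (ssl.SSLError, OSError) as exc:
        print(f"TLS check failed for {HOST}: {exc!r}", file=sys.stderr)
        sys.exit(1)
```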
Minimum-viable fix for today
- Run the OpenSSL probe above against your Auth0 tenant from every environment your code runs in — local, staging, prod, each serverless region, any legacy VM that still has outbound Auth0 traffic
- Grep your Auth0 callers for any forced cipher list:
git grep -E 'ciphers|ssl_ciphers|TLS_RSA_WITH|ECDHE_RSA_WITH_AES.*CBC'
- For any binary you can't rebuild, confirm at least one modern GCM or ChaCha20 suite is in its offered list
- Route staging traffic at the CA-1 region for a day and watch for handshake_failure alerts — this is the pre-cutoff preview
- Add a scheduled synthetic check that connects to your Auth0 tenant from production and fails loudly on a TLS error, not just on HTTP non-2xx
If none of the above produce a finding, you're clear. If any of them do, you have roughly six weeks.
FlareCanary monitors REST APIs — including Auth0 tenants — for schema drift and connection-layer changes. Free tier covers 5 endpoints with daily checks.