<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yuri</title>
    <description>The latest articles on DEV Community by Yuri (@yuricodesbot).</description>
    <link>https://dev.to/yuricodesbot</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1100896%2Fd4a05664-35ab-4af3-a742-abdf5b7a266c.jpg</url>
      <title>DEV Community: Yuri</title>
      <link>https://dev.to/yuricodesbot</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuricodesbot"/>
    <language>en</language>
    <item>
      <title>Dev update - [March, 2026]</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Mon, 09 Mar 2026 19:28:02 +0000</pubDate>
      <link>https://dev.to/supabase/dev-update-march-2026-34ib</link>
      <guid>https://dev.to/supabase/dev-update-march-2026-34ib</guid>
      <description>&lt;p&gt;Here’s everything that happened with Supabase in the last month:&lt;/p&gt;

&lt;h2&gt;Webinar: Ship Fast, Stay Safe&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2F9509cf35-07fa-4275-9d48-5f4a1c35d688" class="article-body-image-wrapper"&gt;&lt;img width="2400" height="1350" alt="agencywebinar" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2F9509cf35-07fa-4275-9d48-5f4a1c35d688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn how top agencies balance velocity with control when using AI coding tools to build production applications on Supabase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events/agency-webinar-ai-prototyping-production" rel="noopener noreferrer"&gt;Register&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Log Drains on Pro&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2F9675896f-f6f0-4f02-9878-d50de657433c" class="article-body-image-wrapper"&gt;&lt;img width="1200" height="630" alt="logdrainsonproog" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2F9675896f-f6f0-4f02-9878-d50de657433c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log Drains are now available on Pro. Send your Postgres, Auth, Storage, Edge Functions, and Realtime logs to Datadog, Grafana Loki, Sentry, Axiom, S3, or your own endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/log-drains-now-available-on-pro" rel="noopener noreferrer"&gt;Blog Post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Docs now export to Markdown for AI tools&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiqhmon18pgiyqsox130.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiqhmon18pgiyqsox130.png" alt="docsog" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every guide on &lt;a href="http://docs.supabase.com/" rel="noopener noreferrer"&gt;docs.supabase.com&lt;/a&gt; now has a "Copy as Markdown" option, plus direct links to ask ChatGPT and Claude. Copy any page into your agent or tool of choice with one click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/docs/guides/database/overview" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Storage: major performance and security overhaul&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2Fb006f02a-6bff-4d35-abda-b4b4ce1a2244" class="article-body-image-wrapper"&gt;&lt;img width="1200" height="630" alt="storagethumb" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2Fb006f02a-6bff-4d35-abda-b4b4ce1a2244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Object listing is up to 14.8x faster on 60M+ row datasets. The prefixes table and its 6 triggers are gone, replaced with a hybrid skip-scan algorithm and cursor-based pagination. Security fixes close a path traversal vulnerability and prevent orphan objects from direct SQL deletes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/supabase-storage-performance-security-reliability-updates" rel="noopener noreferrer"&gt;Blog Post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Edge Functions dashboard for self-hosted and CLI&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2F69bd2d15-a50b-4c60-b053-27c98b36b15c" class="article-body-image-wrapper"&gt;&lt;img width="1200" height="630" alt="edgefunctionsog2" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2F69bd2d15-a50b-4c60-b053-27c98b36b15c"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;List and search your functions, view details, test directly from the dashboard, and download as &lt;code&gt;.zip&lt;/code&gt;. No longer cloud-only.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/kiwicopple/status/2026264137505087826" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Multigres Postgres parser: 2.5x faster than the cgo alternative&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2Fde2868f2-33fb-4eeb-98be-dfda0f6d0bad" class="article-body-image-wrapper"&gt;&lt;img width="1200" height="630" alt="postgresparserog" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fuser-attachments%2Fassets%2Fde2868f2-33fb-4eeb-98be-dfda0f6d0bad"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built in 8 weeks using Claude Code. A comparable MySQL parser took over a year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://multigres.com/blog/ai-parser-engineering" rel="noopener noreferrer"&gt;Blog Post&lt;/a&gt; &lt;a href="https://x.com/kiwicopple/status/2019694010244428104" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Quick Product Announcements&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;⚠️ Action Required: OpenAPI spec access via anon key deprecated March 11. The &lt;code&gt;/rest/v1/&lt;/code&gt; schema endpoint will only be accessible via service role or secret API keys after this date. Existing data API usage is unaffected. &lt;a href="https://github.com/supabase/supabase/discussions/42949" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Observability Overview page is rolling out. &lt;a href="https://x.com/kiwicopple/status/2017196489798201752" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Table filters now use AI. Describe what you want to find and the dashboard applies the right Postgres filters. Available under Feature Previews. &lt;a href="https://x.com/kiwicopple/status/2019370504944341361" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; &lt;a href="https://github.com/orgs/supabase/discussions/42461" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Queue table operations in the Table Editor. Stage inserts, edits, and deletes, review in Diff View, then commit with &lt;code&gt;cmd + s&lt;/code&gt;. &lt;a href="https://x.com/kiwicopple/status/2019339958281203769" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Supabase plugin for Cursor is live. &lt;a href="https://x.com/kiwicopple/status/2025954320492171409" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Copy AI prompts from the dashboard. The same prompts powering the Supabase AI Assistant are now exportable for use in your local agent or tool of choice. &lt;a href="https://x.com/kiwicopple/status/2021886071563268188" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Inline SQL Editor saves SQL Snippets. Create and update snippets from Studio. Share via git in the &lt;code&gt;supabase/snippets&lt;/code&gt; folder. &lt;a href="https://github.com/supabase/supabase/discussions/42031" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Command Menu gets Create and Search shortcuts. Hit &lt;code&gt;cmd + k&lt;/code&gt; to create tables, RLS policies, Edge Functions, and Storage buckets — or jump directly to an existing one. &lt;a href="https://x.com/kiwicopple/status/2020922098311684461" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read replicas now managed from the database replication page. Rolling out gradually — if you manage read replicas, look in Database settings.&lt;/li&gt;
&lt;li&gt;Receipt downloads now available. Download receipts from the Invoices section in your org billing page.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Made with Supabase&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A purpose-built tool for running powerful affiliate and referral campaigns. &lt;a href="https://winwinkit.com/" rel="noopener noreferrer"&gt;Website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Supabase x YCombinator Hackathon winner: An AI Agent Personal Trainer &lt;a href="https://proximafitness.com/" rel="noopener noreferrer"&gt;Website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AI video production for professionals &lt;a href="https://www.martini.film/" rel="noopener noreferrer"&gt;Website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Generate APA citation-ready references from a URL or DOI in seconds &lt;a href="https://apacitationgenerator.online/" rel="noopener noreferrer"&gt;Website&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;SupaClaw - A basic version of OpenClaw built entirely on Supabase's built-in features &lt;a href="https://github.com/vincenzodomina/supaclaw" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Community Highlights&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Supabase is sponsoring Postgres Conference 2026. Deepthi and Sugu are speaking on Multigres: horizontal scalability and intelligent sharding for Postgres. April 21-23 in San Jose. Use code &lt;code&gt;2026_SUPABASE20&lt;/code&gt; for 20% off. &lt;a href="https://postgresconf.org/conferences/postgresconf_2026" rel="noopener noreferrer"&gt;Register&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Codepup AI launched Supabase integration. Build a complete web app with a real Supabase backend — auto-generated, tested, and fixed by AI in under 30 minutes. &lt;a href="https://www.codepup.ai/blog/codepup-ai-supabase-integration" rel="noopener noreferrer"&gt;Blog Post&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;BKND joins Supabase. Dennis Senn, creator of BKND, is joining to build a Lite offering for agentic workloads. BKND stays open source. &lt;a href="https://supabase.com/blog/bknd-joins-supabase" rel="noopener noreferrer"&gt;Blog Post&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Hydra joins Supabase. Joe Sciarrino, co-creator of Hydra, is joining to build Supabase Warehouse: an open data warehouse architecture for developers. Hydra co-developed &lt;code&gt;pg_duckdb&lt;/code&gt;, which accelerates analytics queries on Postgres by over 600x. &lt;a href="https://supabase.com/blog/hydra-joins-supabase" rel="noopener noreferrer"&gt;Blog Post&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Getting Started with Supabase - Official Guide &lt;a href="https://www.youtube.com/watch?v=i_bPeTZVlg0&amp;amp;pp=ygUIc3VwYWJhc2U%3D" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Supabase on Observable Flutter - Episode &lt;a href="https://www.youtube.com/live/4dCdCamVBHk?si=yqQuahnuiKPbpJn5" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Inside Supabase Edge Functions: How Serverless Magic Actually Works &lt;a href="https://dev.to/krish_kakadiya_5f0eaf6342/inside-supabase-edge-functions-how-serverless-magic-actually-works-2m1p"&gt;Blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Unlocking Scalable Backend Development: Why Supabase and Node.js are Revolutionizing Modern Applications in 2026 &lt;a href="https://medium.com/%40muhammadfaizanali18/unlocking-scalable-backend-development-why-supabase-and-node-js-c2bd2ab7eb36" rel="noopener noreferrer"&gt;Blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Adding GitHub, Google, and X Login to Next.js 15 with Supabase Auth &lt;a href="https://dev.to/mukitaro/adding-github-google-and-x-login-to-nextjs-15-with-supabase-auth-2mj8"&gt;Blog&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Baking hot meme zone ⚠️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvqsrlb74qhraesibufj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvqsrlb74qhraesibufj.png" alt="meme" width="741" height="822"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>opensource</category>
      <category>database</category>
      <category>devops</category>
    </item>
    <item>
      <title>Build "Sign in with Your App" using Supabase Auth</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Wed, 24 Dec 2025 15:45:00 +0000</pubDate>
      <link>https://dev.to/supabase/build-sign-in-with-your-app-using-supabase-auth-cc6</link>
      <guid>https://dev.to/supabase/build-sign-in-with-your-app-using-supabase-auth-cc6</guid>
      <description>&lt;p&gt;You've used "Sign in with Google" and "Sign in with GitHub" countless times. But what if your Supabase project could be the identity provider? Today, we're adding OAuth 2.1 and OpenID Connect server capabilities to Supabase Auth, turning your project into a full-fledged identity provider.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://www.youtube.com/embed/tXpk8XUSguE"&gt;&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;This opens up powerful new possibilities: AI agents authenticating through your app via the Model Context Protocol (MCP), third-party developers building on your platform, partner integrations accessing your APIs securely, and enterprise single sign-on. All using the same battle-tested auth infrastructure you already rely on.&lt;/p&gt;

&lt;h2&gt;Why We Built This&lt;/h2&gt;

&lt;p&gt;The immediate catalyst? &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; authentication. As AI agents and LLM tools become ubiquitous, they need a standardized way to authenticate with services. MCP has emerged as that standard, and it's built on OAuth 2.1. Your Supabase project can now be the identity provider these AI tools authenticate against.&lt;/p&gt;

&lt;p&gt;But the applications extend far beyond AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Third-party developer ecosystems&lt;/strong&gt; - Let partners build apps that integrate with your platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partner API access&lt;/strong&gt; - Grant secure access to external services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Powered by [Your App]"&lt;/strong&gt; - Enable users to use their existing account on your platform to sign into partner applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise SSO&lt;/strong&gt; - Full OpenID Connect support with ID tokens, UserInfo endpoint, and organizational single sign-on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building a platform where other developers or services need secure access to user data, OAuth 2.1 server capabilities are now baked into your Supabase project.&lt;/p&gt;

&lt;h2&gt;What You Can Build&lt;/h2&gt;

&lt;p&gt;With Supabase Auth as an OAuth 2.1 provider, you can:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For AI and Automation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MCP servers that authenticate users through your Supabase project&lt;/li&gt;
&lt;li&gt;AI agents that securely access user data with proper authorization&lt;/li&gt;
&lt;li&gt;LLM tools integrated into your application ecosystem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Developer Platforms:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Third-party apps offering "Sign in with [Your App]"&lt;/li&gt;
&lt;li&gt;Partner integrations with granular access control&lt;/li&gt;
&lt;li&gt;Developer API access with OAuth tokens&lt;/li&gt;
&lt;li&gt;Marketplace apps built on your platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Enterprise:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenID Connect single sign-on (SSO) with ID tokens and UserInfo endpoint&lt;/li&gt;
&lt;li&gt;Centralized identity management across services&lt;/li&gt;
&lt;li&gt;Standards-compliant enterprise authentication&lt;/li&gt;
&lt;li&gt;Compliance-friendly audit trails&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How It Works: The Big Picture&lt;/h2&gt;

&lt;p&gt;Supabase Auth implements &lt;strong&gt;OAuth 2.1 with OpenID Connect&lt;/strong&gt; (OIDC), the modern, secure standards for authentication and identity. At its core is the authorization code flow with PKCE (Proof Key for Code Exchange).&lt;/p&gt;
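&lt;p&gt;To make the PKCE step concrete, here is a minimal sketch (not Supabase-specific code; any OAuth client library performs the equivalent): the client generates a random &lt;code&gt;code_verifier&lt;/code&gt;, derives the S256 &lt;code&gt;code_challenge&lt;/code&gt; it sends with the authorization request, and later proves possession by sending the verifier to the token endpoint.&lt;/p&gt;

```python
import base64
import hashlib
import secrets

def generate_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).

    The challenge goes with the initial authorization request; the
    verifier is sent later with the token request.
    """
    # 32 random bytes -> 43-char base64url verifier (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = generate_pkce_pair()
print(len(verifier), len(challenge))  # 43 43
```

&lt;p&gt;Because only the hash travels with the authorization request, an attacker who intercepts the authorization code cannot redeem it without the original verifier.&lt;/p&gt;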

&lt;p&gt;The implementation uses the authorization code flow, the most secure OAuth flow for server-side apps and native applications. PKCE protects against authorization code interception attacks. Access tokens are JWTs containing standard Supabase claims (&lt;code&gt;user_id&lt;/code&gt;, &lt;code&gt;role&lt;/code&gt;) plus OAuth-specific claims like &lt;code&gt;client_id&lt;/code&gt;. For OpenID Connect flows, clients also receive ID tokens, standardized identity tokens with user profile information, and can access the UserInfo endpoint to retrieve user data. Refresh tokens enable long-lived sessions without re-authentication, while the JWKS endpoint provides public key infrastructure for third parties to validate tokens.&lt;/p&gt;
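&lt;p&gt;The claim layout can be illustrated by decoding a token's payload segment. This is a sketch using a fabricated, unsigned sample token with placeholder values; real tokens must be signature-verified (via the JWKS endpoint) before any claim is trusted.&lt;/p&gt;

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the base64url payload segment of a JWT without verifying it.

    For illustration only: always verify the signature before trusting claims.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated sample with the claims described above (all values are placeholders)
claims = {"user_id": "some-user-uuid", "role": "authenticated", "client_id": "mobile-app-client-id"}
segment = lambda raw: base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
sample_token = ".".join([
    segment(b'{"alg":"RS256","typ":"JWT"}'),
    segment(json.dumps(claims).encode()),
    "fake-signature",
])

print(decode_jwt_payload(sample_token)["client_id"])  # mobile-app-client-id
```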

&lt;p&gt;The best part? Your existing Supabase security model extends naturally to OAuth: &lt;strong&gt;Row Level Security (RLS) policies apply to OAuth access tokens&lt;/strong&gt; just like they do to regular session tokens.&lt;/p&gt;

&lt;h2&gt;Works with Your Existing Auth Stack&lt;/h2&gt;

&lt;p&gt;One of the most powerful aspects of this implementation is how seamlessly it integrates with Supabase Auth features you're already using. When users authenticate through the OAuth flow, you can use all of Supabase Auth's existing methods: password authentication, magic links, social providers (Google, GitHub, etc.), multi-factor authentication (MFA), and phone authentication. Your third-party integrations get the benefit of your existing authentication security without you having to rebuild anything.&lt;/p&gt;

&lt;p&gt;Already using &lt;a href="https://supabase.com/docs/guides/auth/auth-hooks/custom-access-token-hook" rel="noopener noreferrer"&gt;Custom Access Token Hooks&lt;/a&gt; to add custom claims to user tokens? They work with OAuth tokens too. You can inject client-specific claims, add custom permissions, or implement any token customization logic you need. The flexibility you have with regular auth tokens extends to OAuth.&lt;/p&gt;

&lt;p&gt;Your RLS policies automatically apply to OAuth access tokens. The tokens include the standard &lt;code&gt;user_id&lt;/code&gt; and &lt;code&gt;role&lt;/code&gt; claims you're used to, plus a &lt;code&gt;client_id&lt;/code&gt; claim that identifies which OAuth client is making the request.&lt;/p&gt;

&lt;p&gt;This means you can grant different OAuth clients access to different subsets of user data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Grant your mobile app access to user profiles&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Mobile app can read profiles"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;profiles&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt;
  &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'client_id'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'mobile-app-client-id'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Grant a third-party analytics dashboard read-only access to metrics&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="nv"&gt;"Analytics dashboard can read metrics"&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;user_metrics&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;AND&lt;/span&gt;
  &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'client_id'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'analytics-dashboard-client-id'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;MCP Authentication&lt;/h2&gt;

&lt;p&gt;Supabase Auth fully complies with the Model Context Protocol's OAuth 2.1 authentication spec. Your Supabase project exposes standard OAuth authorization server metadata at &lt;code&gt;/.well-known/oauth-authorization-server&lt;/code&gt;, enabling automatic discovery of your authorization endpoints, token endpoints, and capabilities. MCP clients can register themselves dynamically using OAuth 2.1 dynamic client registration (no manual configuration required).&lt;/p&gt;

&lt;p&gt;Here's what this means in practice: point an MCP-compatible AI tool at your Supabase project's auth URL, and it handles the rest. The tool discovers your endpoints, registers itself as a client, initiates the OAuth flow, and obtains tokens. The AI agent authenticates as the user, with all your RLS policies enforced automatically. Users see your consent screen, approve access, and the AI tool operates on their behalf, with exactly the permissions you've defined. No passwords exposed, no custom API wrappers needed.&lt;/p&gt;
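&lt;p&gt;The discovery step above can be sketched as follows. The field names come from the standard OAuth authorization server metadata document (RFC 8414); the URL paths shown are hypothetical placeholders, not the exact paths your project exposes.&lt;/p&gt;

```python
import json

# Trimmed example of an /.well-known/oauth-authorization-server response.
# Illustrative values only; a real client fetches this from the project's auth URL.
metadata_json = """
{
  "issuer": "https://your-project.supabase.co/auth/v1",
  "authorization_endpoint": "https://your-project.supabase.co/auth/v1/oauth/authorize",
  "token_endpoint": "https://your-project.supabase.co/auth/v1/oauth/token",
  "registration_endpoint": "https://your-project.supabase.co/auth/v1/oauth/clients/register",
  "jwks_uri": "https://your-project.supabase.co/auth/v1/.well-known/jwks.json",
  "code_challenge_methods_supported": ["S256"]
}
"""

metadata = json.loads(metadata_json)
# An MCP client reads whatever endpoints it needs from the document:
print(metadata["token_endpoint"])
print("S256" in metadata["code_challenge_methods_supported"])  # True
```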

&lt;p&gt;We're just getting started with MCP. &lt;strong&gt;We're working on making it even easier to build MCP servers directly in Supabase&lt;/strong&gt;, bringing the same developer experience you love to AI agent integrations.&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;Setting up OAuth 2.1 in your Supabase project starts with registering OAuth clients through the Supabase dashboard or Management API. You'll configure their allowed redirect URIs and receive a &lt;code&gt;client_id&lt;/code&gt;. Then you'll build your authorization flow, an endpoint that receives OAuth authorization requests, authenticates users (using existing Supabase Auth methods), presents a consent UI, and confirms approvals with Supabase Auth.&lt;/p&gt;

&lt;p&gt;Update your Row Level Security policies to handle OAuth clients appropriately, deciding which data third-party apps can access and what remains user-only. Third-party apps validate tokens using your public JWKS endpoint, no shared secrets required. They can verify tokens asymmetrically using standard OAuth 2.1 libraries.&lt;/p&gt;
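&lt;p&gt;As a sketch of that validation handshake (key IDs and JWKS values below are fabricated): a third party reads the &lt;code&gt;kid&lt;/code&gt; from the token header and matches it against the keys published at the JWKS endpoint; an RSA/EC-capable JWT library then verifies the signature with the matched key.&lt;/p&gt;

```python
import base64
import json
from typing import Optional

def find_signing_key(token: str, jwks: dict) -> Optional[dict]:
    """Match a JWT's `kid` header against a JWKS document.

    Signature verification itself would then be done with an RSA/EC-capable
    JWT library using the returned key; this sketch covers only the lookup.
    """
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    return next((k for k in jwks["keys"] if k.get("kid") == header.get("kid")), None)

# Fabricated JWKS and token header for illustration
jwks = {"keys": [{"kty": "RSA", "kid": "key-1", "alg": "RS256", "n": "...", "e": "AQAB"}]}
sample_header = base64.urlsafe_b64encode(b'{"alg":"RS256","kid":"key-1"}').rstrip(b"=").decode()
token = f"{sample_header}.payload.signature"

print(find_signing_key(token, jwks)["kid"])  # key-1
```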

&lt;p&gt;Complete documentation with code examples is available in our &lt;a href="https://supabase.com/docs/guides/auth/oauth-server" rel="noopener noreferrer"&gt;OAuth 2.1 guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;OpenID Connect Support&lt;/h2&gt;

&lt;p&gt;Beyond OAuth 2.1, Supabase Auth now includes full OpenID Connect (OIDC) support, making it perfect for enterprise single sign-on and standardized identity integrations.&lt;/p&gt;

&lt;p&gt;When authenticating with OIDC, clients receive an ID token alongside the access token. This standardized JWT contains user profile information and is signed by your Supabase project, allowing third parties to verify user identity without additional API calls. Your project also exposes the standard OIDC UserInfo endpoint, providing a secure way for clients to retrieve user profile information using their access token, enabling seamless integration with enterprise identity systems and standard OIDC libraries.&lt;/p&gt;

&lt;p&gt;Your project automatically exposes an OIDC discovery endpoint at &lt;code&gt;/.well-known/openid-configuration&lt;/code&gt;, making integration with enterprise tools and standard OIDC clients straightforward. Point an enterprise SSO system at your Supabase project, and it discovers everything it needs to integrate. This makes Supabase Auth a complete identity provider solution, compatible with any OIDC-compliant application or service.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;We're continuing to expand OAuth capabilities. Granular scopes are coming soon, allowing clients to request specific permissions (e.g. &lt;code&gt;scope=read:profile read:metrics&lt;/code&gt;) rather than full user access. We're making it even easier to build and deploy MCP servers directly in Supabase, bringing AI agent authentication into the same seamless developer experience you already know.&lt;/p&gt;

&lt;p&gt;We're building this in the open. The &lt;a href="https://github.com/orgs/supabase/discussions/38022" rel="noopener noreferrer"&gt;GitHub discussion&lt;/a&gt; is active: share your use cases and help shape the roadmap.&lt;/p&gt;

&lt;h2&gt;Try It Today&lt;/h2&gt;

&lt;p&gt;OAuth 2.1 and OpenID Connect capabilities are now available in Supabase Auth. Get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/docs/guides/auth/oauth-server" rel="noopener noreferrer"&gt;Read the documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/orgs/supabase/discussions/38022" rel="noopener noreferrer"&gt;View the GitHub discussion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://discord.supabase.com/" rel="noopener noreferrer"&gt;Join the Discord&lt;/a&gt; to share what you're building&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you're building an MCP server for AI agents, implementing enterprise SSO with OpenID Connect, creating a developer platform, or just want to offer "Sign in with [Your App]", Supabase Auth now has you covered.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>javascript</category>
      <category>productivity</category>
      <category>ai</category>
    </item>
    <item>
      <title>The new Supabase power for Kiro</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Tue, 23 Dec 2025 14:18:00 +0000</pubDate>
      <link>https://dev.to/supabase/the-new-supabase-power-for-kiro-51df</link>
      <guid>https://dev.to/supabase/the-new-supabase-power-for-kiro-51df</guid>
      <description>&lt;p&gt;We are announcing new Supabase powers for &lt;a href="https://www.kiro.dev/" rel="noopener noreferrer"&gt;Amazon's Kiro IDE&lt;/a&gt;. With these powers, you can build full-stack applications faster by giving Kiro deep knowledge of your Supabase project, best practices for database migrations, edge functions, and security policies.&lt;/p&gt;

&lt;h2&gt;What are Kiro powers?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kiro.dev/blog/introducing-powers/" rel="noopener noreferrer"&gt;Kiro powers&lt;/a&gt; bundles MCP tools and steering files into a single install, giving your agent specialized knowledge without overwhelming it with context. Traditional MCP servers load all their tools into context immediately, overwhelming the agent. Powers load only when relevant.&lt;/p&gt;

&lt;p&gt;When you install a Supabase power in Kiro, the AI assistant gets immediate access to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your Supabase database schema&lt;/li&gt;
&lt;li&gt;Best practices for writing edge functions&lt;/li&gt;
&lt;li&gt;Security guidelines for Row Level Security (RLS) policies&lt;/li&gt;
&lt;li&gt;SQL syntax optimized for Postgres&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What you can do with the Supabase powers&lt;/h2&gt;

&lt;p&gt;The Supabase powers connect Kiro to your Supabase projects and give you one-click access to common development tasks. We're releasing two powers: one for developing on hosted projects, and one for developing with the local Supabase stack through the Supabase CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manage database schemas&lt;/strong&gt;&lt;br&gt;
Kiro can read your database schema, suggest migrations, and help you structure tables following Postgres best practices. The powers use Supabase's CLI and MCP server to understand your current schema and make informed suggestions about changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapgz6mgitkwn6vyzn1la.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapgz6mgitkwn6vyzn1la.webp" alt="dashboard example" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Review edge functions&lt;/h2&gt;

&lt;p&gt;Click a button and Kiro will review your edge functions for common issues, performance problems, and best practices. The powers include specific guidance on how Supabase edge functions work, including the Deno runtime, environment variables, and local testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ny3m64tbo90ehhk1mdr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ny3m64tbo90ehhk1mdr.webp" alt="example dashboard" width="740" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Check security and performance
&lt;/h2&gt;

&lt;p&gt;The powers include advisor automations that scan your local project for security issues and performance bottlenecks. Kiro can identify missing RLS policies, inefficient queries, and common security mistakes before you deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyx4pnxcj3sbrw8srjm7e.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyx4pnxcj3sbrw8srjm7e.webp" alt="security and performance" width="800" height="1052"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Write better SQL
&lt;/h2&gt;

&lt;p&gt;Kiro gives you context-aware help for writing SQL that works well with Postgres. The powers include guidance on formatting queries, naming conventions, writing secure database functions, and structuring complex queries with CTEs. Kiro helps ensure your SQL follows Postgres standards and Supabase best practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu4hsbxxstg6ux57aejo.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu4hsbxxstg6ux57aejo.webp" alt="context aware" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Kiro powers use the Model Context Protocol (MCP) to connect AI assistants to external tools and knowledge. The Supabase power bundles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP server configuration&lt;/strong&gt; that connects to your hosted or local Supabase project&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Steering files&lt;/strong&gt; that tell Kiro how to use Supabase tools correctly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual triggers&lt;/strong&gt; that give you one-click access to reviews and checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic context&lt;/strong&gt; that loads only when you're working on database, security, or edge function tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you ask Kiro to help with a database migration or edge function, it automatically loads the right context without cluttering the conversation with unnecessary information. Kiro also unloads the context when the power is no longer needed.&lt;/p&gt;
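&lt;p&gt;As a rough sketch, an MCP server configuration of this kind typically looks like the following. This is illustrative rather than the powers' exact file: the file location and flags may differ, and &lt;code&gt;your-project-ref&lt;/code&gt; and &lt;code&gt;your-access-token&lt;/code&gt; are placeholders.&lt;/p&gt;

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest", "--project-ref=your-project-ref"],
      "env": { "SUPABASE_ACCESS_TOKEN": "your-access-token" }
    }
  }
}
```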

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;To use the Supabase powers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://kiro.dev/" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt; and add the Supabase power for local or hosted development&lt;/li&gt;
&lt;li&gt;Ask Kiro to set up your Supabase project&lt;/li&gt;
&lt;li&gt;Start building&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Built on open standards
&lt;/h2&gt;

&lt;p&gt;Kiro powers use the same Supabase MCP server that other AI coding assistants use, so your app development experience in Kiro will feel familiar.&lt;/p&gt;

&lt;p&gt;We're committed to delivering a great Supabase development experience across every tool. These Kiro powers give developers instant access to best practices, security checks, and deep platform knowledge without leaving their editor.&lt;/p&gt;

&lt;p&gt;Try the Supabase powers today and let us know what you build.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>backend</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Introducing Supabase ETL</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Mon, 22 Dec 2025 14:09:00 +0000</pubDate>
      <link>https://dev.to/supabase/introducing-supabase-etl-ohl</link>
      <guid>https://dev.to/supabase/introducing-supabase-etl-ohl</guid>
      <description>&lt;p&gt;We're introducing &lt;strong&gt;Supabase ETL&lt;/strong&gt;: a change-data-capture pipeline that replicates your Postgres tables to analytical destinations in near real time.&lt;/p&gt;

&lt;p&gt;Supabase ETL reads changes from your Postgres database and writes them to external destinations. It uses logical replication to capture inserts, updates, deletes and truncates as they happen. &lt;strong&gt;Setup takes minutes in the Supabase Dashboard.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first supported destinations are &lt;a href="https://supabase.com/blog/introducing-analytics-buckets" rel="noopener noreferrer"&gt;Analytics Buckets&lt;/a&gt; (powered by Iceberg) and BigQuery.&lt;/p&gt;

&lt;p&gt;Supabase ETL is open source. You can find the code on GitHub at &lt;a href="http://github.com/supabase/etl" rel="noopener noreferrer"&gt;github.com/supabase/etl&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/NqtPnND32sE"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Why separate OLTP and OLAP?
&lt;/h2&gt;

&lt;p&gt;Postgres is excellent for transactional workloads like reading a single user record or inserting an order. But when you need to scan millions of rows for analytics, Postgres slows down.&lt;/p&gt;

&lt;p&gt;Column-oriented systems like BigQuery, or those built on open formats like Apache Iceberg, are designed for this. They can aggregate massive datasets &lt;strong&gt;orders of magnitude faster&lt;/strong&gt;, compress data more efficiently, and handle complex analytical queries that would choke a transactional database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase ETL gives you the best of both worlds:&lt;/strong&gt; keep your app fast on Postgres while unlocking powerful analytics on purpose-built systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Supabase ETL captures every change in your Postgres database and delivers it to your analytics destination in near real time.&lt;/p&gt;

&lt;p&gt;Here's how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You create a Postgres publication that defines which tables to replicate&lt;/li&gt;
&lt;li&gt;You configure ETL to connect a publication to a destination&lt;/li&gt;
&lt;li&gt;ETL reads changes from the publication through a logical replication slot&lt;/li&gt;
&lt;li&gt;Changes are batched and written to your destination&lt;/li&gt;
&lt;li&gt;Your data is available for querying in the destination&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The pipeline starts with an initial copy of your selected tables, then switches to streaming mode. &lt;strong&gt;Your analytics stay fresh with latency measured in milliseconds to seconds&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up ETL
&lt;/h2&gt;

&lt;p&gt;You configure ETL entirely through the Supabase Dashboard. &lt;strong&gt;No code required.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a publication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A publication defines which tables to replicate. You create it with SQL or via the UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Replicate specific tables&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="n"&gt;publication&lt;/span&gt; &lt;span class="n"&gt;analytics_pub&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Or replicate all tables in a schema&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="n"&gt;publication&lt;/span&gt; &lt;span class="n"&gt;analytics_pub&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;tables&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;schema&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Enable replication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate to &lt;code&gt;Database&lt;/code&gt; in your Supabase Dashboard. Select the &lt;code&gt;Replication&lt;/code&gt; tab and click &lt;code&gt;Enable Replication&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Add a destination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Add Destination&lt;/code&gt; and choose your destination type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: For Analytics Buckets, you will need to create an analytics bucket first in the Storage section.&lt;/p&gt;

&lt;p&gt;Configure the destination with your bucket credentials and select your publication. Click &lt;code&gt;Create&lt;/code&gt; and then &lt;code&gt;Start&lt;/code&gt; to begin replication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Monitor your pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Dashboard shows pipeline status and lag. You can start, stop, restart, or delete pipelines from the actions menu.&lt;/p&gt;

&lt;h2&gt;
  
  
  Available destinations
&lt;/h2&gt;

&lt;p&gt;Our goal with Supabase ETL is to let you connect your existing data systems to Supabase. We're actively expanding the list of supported destinations. Right now, the official destinations are Analytics Buckets and BigQuery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analytics Buckets
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://supabase.com/docs/guides/storage/analytics/introduction" rel="noopener noreferrer"&gt;Analytics Buckets&lt;/a&gt; are specialized storage buckets built on Apache Iceberg, an open table format designed for large analytical datasets. Your data is stored in Parquet files on S3.&lt;/p&gt;

&lt;p&gt;When you replicate to Analytics Buckets, your tables are created with a changelog structure. Each row includes a &lt;code&gt;cdc_operation&lt;/code&gt; column indicating whether the change was an &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, or &lt;code&gt;DELETE&lt;/code&gt;. &lt;strong&gt;This append-only format preserves the complete history of all changes&lt;/strong&gt;.&lt;/p&gt;
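&lt;p&gt;To make the changelog structure concrete, here is a small sketch (not part of the product) of replaying exported &lt;code&gt;cdc_operation&lt;/code&gt; rows into current row state; the &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;total&lt;/code&gt; fields are hypothetical:&lt;/p&gt;

```javascript
// Replay an append-only changelog into the current state of each row.
// Rows carry the cdc_operation column described above ('INSERT',
// 'UPDATE', or 'DELETE'); the other field names here are hypothetical.
function replayChangelog(rows) {
  const state = new Map()
  for (const row of rows) {
    if (row.cdc_operation === 'DELETE') {
      state.delete(row.id)
    } else {
      // INSERT and UPDATE both upsert the latest version of the row
      state.set(row.id, row)
    }
  }
  return [...state.values()]
}
```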

&lt;p&gt;You can query Analytics Buckets from PyIceberg, Apache Spark, DuckDB, Amazon Athena, or any tool that supports the Iceberg REST Catalog API.&lt;/p&gt;

&lt;h2&gt;
  
  
  BigQuery
&lt;/h2&gt;

&lt;p&gt;BigQuery is Google's serverless data warehouse, built for large-scale analytics. It handles petabytes of data and integrates well with existing BI tools and data pipelines.&lt;/p&gt;

&lt;p&gt;When you replicate to BigQuery, Supabase ETL creates a view for each table and uses an underlying versioned table to support all operations efficiently. You query the view, and ETL handles the rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding and removing tables
&lt;/h2&gt;

&lt;p&gt;You can modify which tables are replicated after your pipeline is running.&lt;/p&gt;

&lt;p&gt;To add a table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;alter&lt;/span&gt; &lt;span class="n"&gt;publication&lt;/span&gt; &lt;span class="n"&gt;analytics_pub&lt;/span&gt; &lt;span class="k"&gt;add&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;products&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To remove a table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;alter&lt;/span&gt; &lt;span class="n"&gt;publication&lt;/span&gt; &lt;span class="n"&gt;analytics_pub&lt;/span&gt; &lt;span class="k"&gt;drop&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After changing your publication, restart the pipeline from the Dashboard actions menu for the changes to take effect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: ETL does not remove data from your destination when you remove a table from a publication. This is by design to prevent accidental data loss.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use ETL vs read replicas
&lt;/h2&gt;

&lt;p&gt;Read replicas and ETL solve different problems.&lt;/p&gt;

&lt;p&gt;Read replicas help when you need to scale concurrent queries, but they're still Postgres. They don't make analytics faster.&lt;/p&gt;

&lt;p&gt;ETL moves your data to systems built for analytics. You get &lt;strong&gt;faster queries on large datasets, lower storage costs&lt;/strong&gt; through compression, and &lt;strong&gt;complete separation&lt;/strong&gt; between your production workload and analytics.&lt;/p&gt;

&lt;p&gt;You can use both: read replicas for application read scaling, ETL for analytics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to know
&lt;/h2&gt;

&lt;p&gt;Replication with Supabase ETL has a few constraints to be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tables must have primary keys (this is a Postgres logical replication requirement)&lt;/li&gt;
&lt;li&gt;Generated columns are not supported&lt;/li&gt;
&lt;li&gt;Custom data types are replicated as strings&lt;/li&gt;
&lt;li&gt;Schema changes are not automatically propagated to destinations&lt;/li&gt;
&lt;li&gt;Data is replicated as-is, without transformation&lt;/li&gt;
&lt;li&gt;During the initial copy phase, changes accumulate in the WAL and are replayed once streaming begins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're working on schema change support and additional destinations, and evaluating different streaming techniques to improve flexibility and performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;Supabase ETL is usage-based:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;$25 per connector per month&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$15 per GB&lt;/strong&gt; of change data processed after the initial sync&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Initial copy is free&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
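&lt;p&gt;As a worked example of the pricing above (our own illustrative arithmetic, not an official calculator): one connector processing 4 GB of change data in a month costs $25 + 4 &amp;times; $15 = $85.&lt;/p&gt;

```javascript
// Estimate a monthly Supabase ETL bill from the usage-based pricing above:
// $25 per connector per month, plus $15 per GB of change data processed
// after the initial sync (the initial copy itself is free).
function etlMonthlyCost(connectors, changeDataGb) {
  return connectors * 25 + changeDataGb * 15
}
```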

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Supabase ETL is in private alpha.&lt;/strong&gt; To request access, contact your account manager or fill out the form in the Dashboard.&lt;/p&gt;

&lt;p&gt;If you want to dive into the code, &lt;strong&gt;the ETL framework is open source&lt;/strong&gt; and written in Rust. Check out the repository at &lt;a href="http://github.com/supabase/etl" rel="noopener noreferrer"&gt;github.com/supabase/etl&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>vectordatabase</category>
      <category>ai</category>
      <category>programming</category>
      <category>database</category>
    </item>
    <item>
      <title>Introducing Analytics Buckets</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Fri, 19 Dec 2025 14:45:00 +0000</pubDate>
      <link>https://dev.to/supabase/introducing-analytics-buckets-ikp</link>
      <guid>https://dev.to/supabase/introducing-analytics-buckets-ikp</guid>
      <description>&lt;p&gt;Supabase is introducing Analytics Buckets, which you can use to store huge sets of data in Supabase Storage. Postgres is great for your app. But Postgres isn't designed for analytical workloads.&lt;/p&gt;

&lt;p&gt;Analytics Buckets are a specialized storage type in Supabase designed for analytical workloads and built on &lt;a href="https://iceberg.apache.org/" rel="noopener noreferrer"&gt;Apache Iceberg&lt;/a&gt; and Amazon S3. They store data in columnar Parquet format, which is optimized for scans, aggregations, and time-series queries.&lt;/p&gt;

&lt;p&gt;Think of them as cold storage for your data, with a query engine attached.&lt;/p&gt;

&lt;p&gt;Your hot transactional data stays in Postgres. Your historical data and analytical workloads live in Analytics Buckets. You query both using familiar tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do they do?
&lt;/h2&gt;

&lt;p&gt;Analytics Buckets give you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost-effective storage.&lt;/strong&gt; S3 pricing instead of database storage. Documented savings of 30-90% on storage costs for large datasets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open table format.&lt;/strong&gt; Apache Iceberg means no vendor lock-in. Query your data from any compatible tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema evolution.&lt;/strong&gt; Change your table schema without rewriting data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time travel.&lt;/strong&gt; Query historical snapshots of your data. See what a table looked like at any point in time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full audit history.&lt;/strong&gt; Every change is preserved. Track what changed, when, and how.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to use Analytics Buckets vs Postgres
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Analytics Buckets and Postgres are complementary. They serve different workloads.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep data in Postgres when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need low-latency reads for your application&lt;/li&gt;
&lt;li&gt;Data changes frequently and consistency matters&lt;/li&gt;
&lt;li&gt;Your dataset is small to medium size&lt;/li&gt;
&lt;li&gt;You need real-time access from your app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Analytics Buckets when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are storing millions or billions of rows&lt;/li&gt;
&lt;li&gt;You run heavy analytical queries that scan large tables&lt;/li&gt;
&lt;li&gt;You need long-term retention at low cost&lt;/li&gt;
&lt;li&gt;You want to query data from multiple tools&lt;/li&gt;
&lt;li&gt;You need complete audit history and time travel&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many teams use both. Keep the last 90 days in Postgres. Archive everything to Analytics Buckets. Query historical data when needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How they work
&lt;/h2&gt;

&lt;p&gt;Analytics Buckets use &lt;a href="https://iceberg.apache.org/" rel="noopener noreferrer"&gt;Apache Iceberg&lt;/a&gt;, an open table format created for large analytical datasets. Here is what happens under the hood:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data is stored in Parquet files on S3&lt;/li&gt;
&lt;li&gt;Iceberg manages metadata including schema, partitions, and snapshots&lt;/li&gt;
&lt;li&gt;An Iceberg REST Catalog provides the interface for querying&lt;/li&gt;
&lt;li&gt;You connect using any Iceberg-compatible tool&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The separation of compute and storage means you can scale each independently. Store petabytes of data and query only what you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an Analytics Bucket
&lt;/h2&gt;

&lt;p&gt;You can create an Analytics Bucket from the Dashboard or using the SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using the Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to Storage in your Supabase Dashboard&lt;/li&gt;
&lt;li&gt;Click Create Bucket&lt;/li&gt;
&lt;li&gt;Enter a name for your bucket&lt;/li&gt;
&lt;li&gt;Select Analytics Bucket as the bucket type&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can use the Supabase Dashboard to define columns and set data types, including complex types like decimal with precision and scale. The foreign data wrapper schema will automatically be configured for you. Once your table is created, you can manage Analytics Bucket tables in the same way that you manage your Postgres tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using the SDK&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@supabase/supabase-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://your-project.supabase.co&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-service-key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createBucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-analytics-bucket&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ANALYTICS&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connecting to Analytics Buckets
&lt;/h2&gt;

&lt;p&gt;Analytics Buckets require authentication with two services:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iceberg REST Catalog&lt;/strong&gt; manages metadata for your tables. It handles schema, partitions, and snapshots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3-Compatible Storage&lt;/strong&gt; stores the actual data in Parquet format.&lt;/p&gt;

&lt;p&gt;You authenticate using your Supabase service key for the catalog and S3 credentials for storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streaming data with Supabase ETL
&lt;/h2&gt;

&lt;p&gt;Analytics Buckets work hand in hand with &lt;a href="https://supabase.com/blog/introducing-supabase-etl" rel="noopener noreferrer"&gt;Supabase ETL&lt;/a&gt;. ETL captures changes from your Postgres database and streams them to Analytics Buckets in near real time.&lt;/p&gt;

&lt;p&gt;This gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic replication of your Postgres tables&lt;/li&gt;
&lt;li&gt;Near real-time data in your analytics bucket&lt;/li&gt;
&lt;li&gt;Complete changelog with every insert, update, and delete&lt;/li&gt;
&lt;li&gt;No manual data movement or scheduled jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To set up replication, create a Postgres publication for the tables you want to replicate, then add an Analytics Buckets destination in the Replication section of the Dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Querying from Postgres
&lt;/h2&gt;

&lt;p&gt;You can query Analytics Buckets directly from Postgres using Foreign Data Wrappers. This lets you join hot data in Postgres with historical data in Analytics Buckets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Create a foreign server for your Iceberg data&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="n"&gt;iceberg_server&lt;/span&gt;
&lt;span class="k"&gt;foreign&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="n"&gt;wrapper&lt;/span&gt; &lt;span class="n"&gt;iceberg_wrapper&lt;/span&gt;
&lt;span class="k"&gt;options&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;aws_access_key_id&lt;/span&gt; &lt;span class="s1"&gt;'your-access-key'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;aws_secret_access_key&lt;/span&gt; &lt;span class="s1"&gt;'your-secret-key'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;region_name&lt;/span&gt; &lt;span class="s1"&gt;'us-east-1'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Import tables from your analytics bucket&lt;/span&gt;
&lt;span class="n"&gt;import&lt;/span&gt; &lt;span class="k"&gt;foreign&lt;/span&gt; &lt;span class="k"&gt;schema&lt;/span&gt; &lt;span class="nv"&gt;"analytics"&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="n"&gt;iceberg_server&lt;/span&gt;
&lt;span class="k"&gt;into&lt;/span&gt; &lt;span class="n"&gt;iceberg&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Query historical data&lt;/span&gt;
&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;iceberg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;events&lt;/span&gt;
&lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;event_timestamp&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'2024-01-01'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Data tiering pattern
&lt;/h2&gt;

&lt;p&gt;A common architecture is data tiering: keeping recent data in Postgres and archiving history to Analytics Buckets.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Partition tables by time in Postgres, keeping a rolling window like the last 90 days&lt;/li&gt;
&lt;li&gt;Stream all data to Analytics Buckets using Supabase ETL&lt;/li&gt;
&lt;li&gt;Drop old partitions from Postgres&lt;/li&gt;
&lt;li&gt;Query recent data from Postgres, historical data from Analytics Buckets&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This keeps your Postgres database small and fast. Storage costs drop. Analytics queries run on data optimized for scans.&lt;/p&gt;
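&lt;p&gt;Step 3 of the tiering pattern can be sketched as follows. This is an illustrative helper, not a Supabase API; the partition names and the &lt;code&gt;upperBound&lt;/code&gt; field are hypothetical:&lt;/p&gt;

```javascript
// Pick the time partitions whose data falls entirely outside the rolling
// retention window, so they can be dropped from Postgres once the same
// rows have been streamed to an Analytics Bucket.
function partitionsToDrop(partitions, now, retentionDays = 90) {
  const cutoff = new Date(now.getTime() - retentionDays * 24 * 60 * 60 * 1000)
  return partitions
    .filter((p) => p.upperBound <= cutoff) // whole partition is older than the window
    .map((p) => p.name)
}
```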

&lt;h2&gt;
  
  
  Compatible tools
&lt;/h2&gt;

&lt;p&gt;Analytics Buckets work with any tool that supports the Iceberg REST Catalog API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PyIceberg&lt;/li&gt;
&lt;li&gt;Apache Spark&lt;/li&gt;
&lt;li&gt;DuckDB&lt;/li&gt;
&lt;li&gt;Amazon Athena&lt;/li&gt;
&lt;li&gt;Trino&lt;/li&gt;
&lt;li&gt;Apache Flink&lt;/li&gt;
&lt;li&gt;Snowflake (via external tables)&lt;/li&gt;
&lt;li&gt;BigQuery (via BigLake)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pricing and availability
&lt;/h2&gt;

&lt;p&gt;Analytics Buckets are free during the Private Alpha. Standard egress charges apply when you move data out of the region.&lt;/p&gt;

&lt;p&gt;To request access, fill out the form at &lt;a href="http://forms.supabase.com/analytics-buckets" rel="noopener noreferrer"&gt;forms.supabase.com/analytics-buckets&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Request access to the Private Alpha&lt;/li&gt;
&lt;li&gt;Create an Analytics Bucket in the Dashboard&lt;/li&gt;
&lt;li&gt;Connect using PyIceberg, Spark, or your tool of choice&lt;/li&gt;
&lt;li&gt;Set up ETL to stream data from Postgres automatically&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Separate your transactional and analytical workloads. Keep Postgres fast. Store history at S3 prices. Query from any tool.&lt;/p&gt;

&lt;p&gt;We are excited to see what you build.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>analytics</category>
      <category>backend</category>
      <category>ai</category>
    </item>
    <item>
      <title>Introducing Vector Buckets</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Thu, 18 Dec 2025 17:30:51 +0000</pubDate>
      <link>https://dev.to/supabase/introducing-vector-buckets-3134</link>
      <guid>https://dev.to/supabase/introducing-vector-buckets-3134</guid>
      <description>&lt;p&gt;We're introducing &lt;a href="https://supabase.com/docs/guides/storage/vector/introduction" rel="noopener noreferrer"&gt;Vector Buckets&lt;/a&gt;, a new storage option that gives you the durability and cost efficiency of Amazon S3 with built-in similarity search.&lt;/p&gt;

&lt;p&gt;Vector search is becoming a core primitive for modern apps: semantic search, recommendations, RAG, image and audio similarity, and more.&lt;/p&gt;

&lt;p&gt;Supabase already gives you powerful tools for vectors, such as &lt;code&gt;pgvector&lt;/code&gt; in Postgres. With Vector Buckets, you now have more options for how you store vectors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use pgvector for smaller, latency-sensitive datasets that belong tightly in your database.&lt;/li&gt;
&lt;li&gt;Use Vector Buckets when you need to store a large number of vectors&amp;mdash;up to tens of millions&amp;mdash;on a durable storage layer with similarity search built in.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are Vector Buckets?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Vector Buckets&lt;/strong&gt; are a new bucket type in Supabase Storage.&lt;/p&gt;

&lt;p&gt;Conceptually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;Vector Bucket&lt;/strong&gt; is where your vector indexes live.&lt;/li&gt;
&lt;li&gt;Inside each bucket, you define one or more &lt;strong&gt;vector indexes&lt;/strong&gt; (for example: &lt;code&gt;documents-openai&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Each index stores high-dimensional vectors plus optional metadata.&lt;/li&gt;
&lt;li&gt;You query those indexes using Supabase clients or directly from Postgres via a foreign data wrapper.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What do Vector Buckets bring to the table?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scalable vector storage for large datasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Embeddings add up quickly: thousands of floats per vector, multiplied by millions of items.&lt;/p&gt;

&lt;p&gt;Instead of putting everything in Postgres, Vector Buckets store your embeddings in S3-backed object storage, which gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capacity for tens of millions of vectors per index&lt;/li&gt;
&lt;li&gt;A storage layer designed for large, durable datasets&lt;/li&gt;
&lt;li&gt;Room to keep full archives of vectors without over-optimizing your Postgres schema or worrying about table bloat&lt;/li&gt;
&lt;li&gt;The ability to keep vectors in a storage layer built for large datasets while still querying them through Postgres&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Built-in similarity search
&lt;/h2&gt;

&lt;p&gt;Vector Buckets are not just blobs of float arrays. Each index supports similarity search out of the box.&lt;/p&gt;

&lt;p&gt;Similarity search lets you find items that are conceptually related based on their vector representations, not just exact keyword matches. That’s what powers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic document search (“find content about this topic, even if the keywords differ”)&lt;/li&gt;
&lt;li&gt;Product and content recommendations (“find items similar to this one”)&lt;/li&gt;
&lt;li&gt;Image, audio, or video similarity (“find assets that look or sound like this”)&lt;/li&gt;
&lt;li&gt;De-duplication and near-duplicate detection across large media libraries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;With Vector Buckets, you can:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Insert vectors with a key, a float32 vector, and metadata&lt;/li&gt;
&lt;li&gt;Run k-NN queries (for example, “return the 20 closest vectors to this embedding”)&lt;/li&gt;
&lt;li&gt;Use a familiar distance metric such as cosine similarity&lt;/li&gt;
&lt;li&gt;Ask for distances and metadata along with the results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No extra vector database to run, no new query language. Just vector indexes with search, available from the same Supabase SDKs you already use or directly via Postgres.&lt;/p&gt;
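&lt;p&gt;For intuition, a k-NN query over cosine similarity can be sketched in plain JavaScript. This mirrors what the index computes server-side; it is not the Vector Buckets API itself, and the item shape is hypothetical:&lt;/p&gt;

```javascript
// Cosine similarity between two float vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Return the k items closest to the query embedding, with their scores.
function topK(query, items, k) {
  return items
    .map((item) => ({ ...item, score: cosineSimilarity(query, item.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
}
```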

&lt;h2&gt;
  
  
  Performance that fits most app workflows
&lt;/h2&gt;

&lt;p&gt;Vector Buckets are designed to provide sub-second similarity search over large datasets, which is more than enough for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backend workflows and batch processing&lt;/li&gt;
&lt;li&gt;AI agents and background jobs&lt;/li&gt;
&lt;li&gt;Dashboards and internal tools&lt;/li&gt;
&lt;li&gt;Many user-facing features where “fast” means hundreds of milliseconds, not single-digit milliseconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re chasing ultra-low latency at very high QPS, &lt;code&gt;pgvector&lt;/code&gt; in a tuned Postgres cluster (or a dedicated vector database) remains the best place to push performance. Vector Buckets focus on simple, scalable similarity search at large scale, not on being the absolute fastest option.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metadata filtering
&lt;/h2&gt;

&lt;p&gt;Each vector can include an arbitrary metadata object, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Getting started with Vector Buckets&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;doc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;language&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;en&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1234&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filter by metadata during similarity search (e.g. &lt;code&gt;type = 'doc' AND language = 'en'&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Query through Postgres and join the results with your relational tables&lt;/li&gt;
&lt;li&gt;Build multi-tenant or multi-project search just by encoding tenant/project IDs into metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes it easy to build domain-aware, tenant-aware semantic search.&lt;/p&gt;
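&lt;p&gt;As a sketch, a tenant-scoped search could pass a metadata filter alongside the query vector. The &lt;code&gt;filter&lt;/code&gt; parameter name and the helper below are assumptions of this sketch, not confirmed API:&lt;/p&gt;

```javascript
// Hypothetical helper: builds a metadata filter that scopes a search
// to one project's English docs. Field names match the metadata object above.
function docFilter(projectId) {
  return { type: 'doc', language: 'en', project_id: projectId }
}

// Hypothetical tenant-aware search. The `filter` option passed to
// queryVectors is an assumption of this sketch; the rest mirrors the
// queryVectors call shown later in this post.
async function searchDocs(supabase, embedding, projectId) {
  const index = supabase.storage.vectors.from('embeddings').index('documents-openai')
  const { data, error } = await index.queryVectors({
    queryVector: { float32: embedding },
    topK: 20,
    filter: docFilter(projectId), // assumed parameter name
    returnDistance: true,
    returnMetadata: true,
  })
  if (error) throw error
  return data
}
```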

&lt;h2&gt;
  
  
  When should you use Vector Buckets vs &lt;code&gt;pgvector&lt;/code&gt;?
&lt;/h2&gt;

&lt;p&gt;Vector Buckets and &lt;code&gt;pgvector&lt;/code&gt; are complementary. They serve different roles and work best together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use &lt;code&gt;pgvector&lt;/code&gt; when…
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You’re optimizing for &lt;strong&gt;lowest possible latency&lt;/strong&gt; on user-facing queries&lt;/li&gt;
&lt;li&gt;Vectors are &lt;strong&gt;part of your core relational model&lt;/strong&gt; (for example, a column on &lt;code&gt;documents&lt;/code&gt; or &lt;code&gt;products&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;You want &lt;strong&gt;transactional guarantees&lt;/strong&gt; (data and embeddings written together)&lt;/li&gt;
&lt;li&gt;Your vector dataset is &lt;strong&gt;small to medium&lt;/strong&gt; and you’re comfortable scaling Postgres specifically for vector workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Vector Buckets when…
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You want &lt;strong&gt;S3-style durability and scale&lt;/strong&gt; for embeddings&lt;/li&gt;
&lt;li&gt;You’re dealing with a &lt;strong&gt;large number of vectors&lt;/strong&gt; (up to tens of millions) that you don’t want sitting in Postgres&lt;/li&gt;
&lt;li&gt;You’re building &lt;strong&gt;AI-heavy Supabase apps&lt;/strong&gt; (semantic search, recommendations, RAG, media similarity) and want a managed vector storage tier&lt;/li&gt;
&lt;li&gt;You prefer a clear split between:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Hot vectors&lt;/strong&gt; in &lt;code&gt;pgvector&lt;/code&gt; for the highest-traffic, most latency-sensitive queries&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Warm or cold vectors&lt;/strong&gt; in Vector Buckets for everything else&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, many apps will use both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep your most frequently queried vectors (for example, current content, top products) in &lt;code&gt;pgvector&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Store the full archive (older content, long tail SKUs, historical embeddings, large media corpora) in Vector Buckets.&lt;/li&gt;
&lt;/ul&gt;
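&lt;p&gt;A hot/warm split like this could be sketched as a two-tier search: query &lt;code&gt;pgvector&lt;/code&gt; first, and fall back to the Vector Bucket archive when the hot tier returns too few rows. The &lt;code&gt;match_documents&lt;/code&gt; RPC and the result shapes below are hypothetical:&lt;/p&gt;

```javascript
// Decide whether the hot-tier (pgvector) result set is good enough,
// or whether we should also search the Vector Bucket archive.
function needsFallback(rows, wanted) {
  return !rows || rows.length < wanted
}

// Hypothetical two-tier search. `match_documents` is an assumed pgvector
// RPC in your database, and the shape of the Vector Bucket response is
// also an assumption; the queryVectors call mirrors the one in this post.
async function tieredSearch(supabase, embedding, wanted = 20) {
  const { data: hot } = await supabase.rpc('match_documents', {
    query_embedding: embedding,
    match_count: wanted,
  })
  if (!needsFallback(hot, wanted)) return hot

  const { data: archive } = await supabase.storage.vectors
    .from('embeddings')
    .index('documents-openai')
    .queryVectors({
      queryVector: { float32: embedding },
      topK: wanted,
      returnDistance: true,
      returnMetadata: true,
    })
  // Merge hot and archive hits, keeping at most `wanted` results.
  return [...(hot ?? []), ...(archive?.vectors ?? [])].slice(0, wanted)
}
```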

&lt;h2&gt;
  
  
  How do Vector Buckets work?
&lt;/h2&gt;

&lt;p&gt;At a high level, here’s what happens under the hood:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Vector Bucket in Supabase Storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You create a bucket of type Vector Bucket in the Dashboard or via API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@supabase/supabase-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://your-project.supabase.co&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your-service-key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createBucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;embeddings&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create Vector indexes inside the bucket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inside the Vector Bucket, you create one or more indexes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Create an index in that bucket&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;embeddings&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;createIndex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;documents-openai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;dimension&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;distanceMetric&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cosine&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Store vectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can store vectors directly from the SDK, an Edge Function, or Postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Postgres&lt;/span&gt;
&lt;span class="nx"&gt;INSERT&lt;/span&gt; &lt;span class="nx"&gt;INTO&lt;/span&gt; &lt;span class="nx"&gt;s3_vectors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;documents_openai &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;VALUES&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;doc-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[0.1, 0.2, 0.3, /* ... rest of embedding ... */]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nx"&gt;embd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;{"title": "Getting Started with Vector Buckets", "source": "documentation"}&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nx"&gt;jsonb&lt;/span&gt;
  &lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;doc-2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[0.4, 0.5, 0.6, /* ... rest of embedding ... */]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nx"&gt;embd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;{"title": "Advanced Vector Search", "source": "blog"}&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nx"&gt;jsonb&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// JS-SDK (server only)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectors&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;embeddings&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;documents-openai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;putVectors&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;vectors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;doc-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;float32&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt; &lt;span class="cm"&gt;/* ... */&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Getting started with Vector Buckets&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;doc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;language&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;en&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Query vectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can run similarity search queries against your indexes, either via the SDK or Postgres.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Postgres&lt;/span&gt;
&lt;span class="nx"&gt;SELECT&lt;/span&gt;
  &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;title&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nf"&gt;embd_distance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;distance&lt;/span&gt;
&lt;span class="nx"&gt;FROM&lt;/span&gt; &lt;span class="nx"&gt;s3_vectors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;documents_openai&lt;/span&gt;
&lt;span class="nx"&gt;WHERE&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;==&amp;gt;&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[0.1, 0.2, 0.3, /* ... embedding ... */]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nx"&gt;embd&lt;/span&gt;
&lt;span class="nx"&gt;ORDER&lt;/span&gt; &lt;span class="nx"&gt;BY&lt;/span&gt; &lt;span class="nf"&gt;embd_distance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;ASC&lt;/span&gt;
&lt;span class="nx"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// JS-SDK (Server only)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectors&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;embeddings&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;documents-openai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// Query with a vector embedding&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;queryVectors&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;queryVector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;float32&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt; &lt;span class="cm"&gt;/* ... embedding of 1536 dimensions ... */&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;topK&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;returnDistance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;returnMetadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Designed for workloads up to tens of millions of vectors
&lt;/h2&gt;

&lt;p&gt;Vector Buckets can currently handle large (but not unlimited) workloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each vector index supports up to &lt;strong&gt;tens of millions of vectors&lt;/strong&gt; (50M per index today).&lt;/li&gt;
&lt;li&gt;You can create multiple indexes per bucket (for tenants, models, or domains).&lt;/li&gt;
&lt;/ul&gt;
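&lt;p&gt;For example, a multi-tenant app might keep one index per tenant and embedding model inside a single bucket. The naming scheme below is a hypothetical sketch built on the &lt;code&gt;createIndex&lt;/code&gt; call from the walkthrough:&lt;/p&gt;

```javascript
// Hypothetical naming scheme: one index per tenant and embedding model,
// all inside a single 'embeddings' Vector Bucket.
function tenantIndexName(tenantId, model) {
  return `tenant-${tenantId}-${model}`
}

// Uses the createIndex call shown in the walkthrough; dimension 1536 and
// cosine distance match the OpenAI text-embedding example in this post.
async function ensureTenantIndex(supabase, tenantId) {
  await supabase.storage.vectors.from('embeddings').createIndex(
    tenantIndexName(tenantId, 'openai-1536'),
    { dimension: 1536, distanceMetric: 'cosine' },
  )
}
```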

&lt;p&gt;That makes Vector Buckets a great fit for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-tenant SaaS apps&lt;/li&gt;
&lt;li&gt;Documentation and content libraries&lt;/li&gt;
&lt;li&gt;Product catalogues and recommendation systems&lt;/li&gt;
&lt;li&gt;Media libraries and image/video/audio similarity search&lt;/li&gt;
&lt;li&gt;AI builders who want semantic search without running their own vector infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example scenarios
&lt;/h2&gt;

&lt;p&gt;A few concrete ways to put Vector Buckets to work:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;AI documentation search&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store all your documentation (including old versions, drafts, and translations) as embeddings in a Vector Bucket.&lt;/li&gt;
&lt;li&gt;Keep the most recent / highest-traffic docs in &lt;code&gt;pgvector&lt;/code&gt; for instant in-app search.&lt;/li&gt;
&lt;li&gt;Implement a search endpoint that queries &lt;code&gt;pgvector&lt;/code&gt; first and falls back to Vector Buckets when needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. &lt;strong&gt;Long-tail product search and recommendations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vectorise your entire catalogue and store it in a Vector Bucket.&lt;/li&gt;
&lt;li&gt;Include metadata for category, brand, stock status, and region.&lt;/li&gt;
&lt;li&gt;Use metadata filters to refine search (e.g. “in stock, in this region, same category”).&lt;/li&gt;
&lt;li&gt;Let recommendation jobs and AI agents work against the full set of products without bloating Postgres.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. &lt;strong&gt;Media similarity and de-duplication&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store embeddings for images, audio or video frames in a Vector Bucket.&lt;/li&gt;
&lt;li&gt;Use similarity search to:

&lt;ul&gt;
&lt;li&gt;Find visually similar assets for content discovery or recommendations&lt;/li&gt;
&lt;li&gt;Detect possible copyright issues by finding near-duplicate content&lt;/li&gt;
&lt;li&gt;Clean up your library by removing duplicate or near-duplicate media&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Availability
&lt;/h2&gt;

&lt;p&gt;Vector Buckets are currently available in &lt;strong&gt;Public Alpha&lt;/strong&gt; for Pro projects and above.&lt;/p&gt;

&lt;p&gt;Currently supported in the following regions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;us-east-1&lt;/li&gt;
&lt;li&gt;us-east-2&lt;/li&gt;
&lt;li&gt;us-west-2&lt;/li&gt;
&lt;li&gt;eu-central-1&lt;/li&gt;
&lt;li&gt;ap-southeast-2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More regions will be added in the near future.&lt;/p&gt;

&lt;p&gt;We’re using this phase to refine the APIs, scaling behaviour, and search experience based on real workloads. Limits may evolve as we learn from how you use the feature in production.&lt;/p&gt;

&lt;p&gt;Vector Buckets are &lt;strong&gt;free to use (fair use policy applies)&lt;/strong&gt; during Public Alpha. Egress costs still apply.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;You can try Vector Buckets in your project today:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Create a Vector Bucket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dashboard → &lt;strong&gt;Storage → Create bucket → Vector Bucket.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Create an index&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pick a dimension that matches your embedding model and choose a distance metric.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Store vectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Supabase clients to upsert vectors with metadata.&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;Query vectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Build endpoints for semantic search, recommendations, or retrieval-augmented generation.&lt;/p&gt;

&lt;p&gt;5. &lt;strong&gt;Layer with &lt;code&gt;pgvector&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Keep your hottest, most latency-sensitive vectors in &lt;code&gt;pgvector&lt;/code&gt;, and store large archives and media-heavy datasets in Vector Buckets.&lt;/p&gt;

&lt;p&gt;We’re excited to see what you build with this new vector storage tier.&lt;/p&gt;

&lt;p&gt;As you try Vector Buckets during the Public Alpha, please send feedback—what works, what’s confusing, and what you’d like to see next will directly shape where we take this feature.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>vectordatabase</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Top 10 Launches of Launch Week 15</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Fri, 18 Jul 2025 22:40:00 +0000</pubDate>
      <link>https://dev.to/supabase/top-10-launches-of-launch-week-15-225n</link>
      <guid>https://dev.to/supabase/top-10-launches-of-launch-week-15-225n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lj7wor6aiuwu06ht4y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lj7wor6aiuwu06ht4y7.png" alt="decorative" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the top 10 launches from the past week. They're all very exciting so make sure to check out every single one.&lt;/p&gt;

&lt;h2&gt;
  
  
  #1: New API Keys + JWT Signing Keys
&lt;/h2&gt;

&lt;p&gt;Supabase Platform released new API keys, Publishable and Secret, and Supabase Auth now supports asymmetric JWTs with Elliptic Curve and RSA cryptographic algorithms. These changes improve the performance, reliability, and security of your Supabase projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/jwt-signing-keys" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #2: Analytics Buckets with Apache Iceberg Support
&lt;/h2&gt;

&lt;p&gt;We launched Supabase Analytics Buckets in Private Alpha—storage buckets optimized for analytics with built-in support for Apache Iceberg. We’ve coupled this with the new Supabase Iceberg Wrapper to make it easier for you to query your analytical data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #3: OpenTelemetry Support
&lt;/h2&gt;

&lt;p&gt;We’ve added support for OpenTelemetry (OTel) across our services so you can soon send logs, metrics, and traces to any OTel-compatible tooling. We’ve also unified logs under a single interface in our Dashboard and added new capabilities to our AI Assistant to improve the debugging experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/new-observability-features-in-supabase" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #4: Build with Figma Make and Supabase
&lt;/h2&gt;

&lt;p&gt;We’ve partnered with Figma so you can hook up a Supabase backend to your Figma Make project, enabling you to persist data and tap into the suite of Supabase products to help you build prototypes quickly and scale them when you gain traction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/figma-make-support-for-supabase" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #5: Storage: 500 GB Uploads and Cheaper Cached Egress
&lt;/h2&gt;

&lt;p&gt;You can now upload files as large as 500 GB (up from 50 GB), enjoy much cheaper cached egress pricing at $0.03/GB (down from $0.09/GB), and benefit from an increased egress quota that doubles your free egress before you have to start paying.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/storage-500gb-uploads-cheaper-egress-pricing" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #6: Edge Functions: Deno 2, 97% Faster Boot Times, and Persistent File Storage
&lt;/h2&gt;

&lt;p&gt;Edge Functions now support Deno 2.1, persistent file storage (mount any S3-compatible storage and read and write to it inside your functions), up to 97% faster boot times, and Deno’s Sync APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/persistent-storage-for-faster-edge-functions" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #7: Branching 2.0: GitHub Optional
&lt;/h2&gt;

&lt;p&gt;You can now spin up, view diffs, and merge your branches directly from the Supabase Dashboard without having to connect to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/branching-2-0" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #8: Supabase UI: Platform Kit
&lt;/h2&gt;

&lt;p&gt;We’ve built out several UI components to make it easy for you to feature the core of Supabase Dashboard inside your own app so you or your users can interact with Supabase projects natively with a customizable interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/supabase-ui-platform-kit" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #9: Stripe-To-Postgres Sync Engine as an NPM Package
&lt;/h2&gt;

&lt;p&gt;Now you can conveniently sync your Stripe data to your Supabase database by importing the npm package @supabase/stripe-sync-engine, whether in your Node.js app or in a Supabase Edge Function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/stripe-engine-as-sync-library" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  #10: Algolia Connector for Supabase
&lt;/h2&gt;

&lt;p&gt;We’ve been collaborating closely with Algolia to bring you a connector for Supabase so you can easily index your data and enable world-class search experiences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/algolia-connector-for-supabase" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch Week Continues
&lt;/h2&gt;

&lt;p&gt;There are always more activities for you to get involved with:&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Week 15: Meetups
&lt;/h3&gt;

&lt;p&gt;Our community is hosting more meetups around the world. This is your chance to engage with others building with Supabase in a city near you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events?category=meetup" rel="noopener noreferrer"&gt;See events&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Launch Week 15: Hackathon
&lt;/h3&gt;

&lt;p&gt;We've got another hackathon that you wouldn't want to miss! Now's your chance to vibe code something amazing, show it off to the community, and win some limited edition Supabase swag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/lw15-hackathon" rel="noopener noreferrer"&gt;Read more&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Supabase Launch Week 15 Hackathon</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Fri, 18 Jul 2025 19:11:00 +0000</pubDate>
      <link>https://dev.to/supabase/supabase-launch-week-15-hackathon-2ajk</link>
      <guid>https://dev.to/supabase/supabase-launch-week-15-hackathon-2ajk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt3msyqamrqwj8u473p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt3msyqamrqwj8u473p.png" alt="decorative"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have just concluded &lt;a href="https://supabase.com/launch-week" rel="noopener noreferrer"&gt;Launch Week 15 with so many new updates&lt;/a&gt;, but no launch week is complete without a hackathon! The Supabase Launch Week 15 Hackathon begins now! Open your favorite IDE or AI agent and start building!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/launch-week" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⚡️ More on Launch Week&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;As of the time of publishing this blog post, the hackathon has begun and will conclude on Sunday, July 27th, at 11:59 pm PT. You could win extremely limited edition Supabase swag and add your name to the Supabase Hackathon Hall of Fame.&lt;/p&gt;

&lt;p&gt;For some inspiration, check out all the &lt;a href="https://supabase.com/blog/tags/hackathon" rel="noopener noreferrer"&gt;winners from previous hackathons&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is the perfect excuse to “Build in a weekend, scale to millions.” Since you retain all the rights to your submissions, you can use the hackathon as a launch pad for your new startup ideas, side projects, or indie hacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Facts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You have 10 days to build a new &lt;em&gt;open-source&lt;/em&gt; project using Supabase in some capacity

&lt;ul&gt;
&lt;li&gt;Starting 10:00 am PT Friday, July 18th, 2025&lt;/li&gt;
&lt;li&gt;The submission deadline is 11:59 pm PT on Sunday, July 27th, 2025&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enter as an individual or as a team of up to 4 people&lt;/li&gt;

&lt;li&gt;Build whatever you want - a project, app, tool, or library. Anything.&lt;/li&gt;

&lt;li&gt;Record a 1-minute video containing the following:

&lt;ul&gt;
&lt;li&gt;Name of the project&lt;/li&gt;
&lt;li&gt;Demonstration of the project&lt;/li&gt;
&lt;li&gt;How Supabase is used within the project&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://youtu.be/KaWJQzTTx5k" rel="noopener noreferrer"&gt;Here is an example video&lt;/a&gt;. We do not assess the quality of the video itself. Remember to keep it concise.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prizes
&lt;/h2&gt;

&lt;p&gt;There are 5 categories, with prizes for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best overall project&lt;/li&gt;
&lt;li&gt;Best use of AI&lt;/li&gt;
&lt;li&gt;Most fun / best easter egg&lt;/li&gt;
&lt;li&gt;Most technically impressive&lt;/li&gt;
&lt;li&gt;Most visually pleasing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There will be a winner and a runner-up prize for each category. Every team member on winning/runner-up teams gets a Supabase Launch Week swag kit, and the winner of the best overall project will get a cool mechanical keyboard as well!&lt;/p&gt;

&lt;h2&gt;
  
  
  Submission
&lt;/h2&gt;

&lt;p&gt;Submit your project via the submission form before 11:59 pm PT on Sunday, July 27th, 2025. The submission form will be added to this article before the deadline. Come back in about a week to find it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Judges
&lt;/h2&gt;

&lt;p&gt;The Supabase team will judge the winners for each category.&lt;br&gt;
We will be looking for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creativity/inventiveness&lt;/li&gt;
&lt;li&gt;Correct, smooth functioning&lt;/li&gt;
&lt;li&gt;Visual appeal&lt;/li&gt;
&lt;li&gt;Technical impressiveness&lt;/li&gt;
&lt;li&gt;Use of Supabase features&lt;/li&gt;
&lt;li&gt;FUN! 😃&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Rules
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Team size 1-4 (all team members on winning teams will receive a prize)&lt;/li&gt;
&lt;li&gt;You cannot be on multiple teams&lt;/li&gt;
&lt;li&gt;One submission per team&lt;/li&gt;
&lt;li&gt;It's not a requirement to use AI&lt;/li&gt;
&lt;li&gt;All design elements, code, etc., for your project must be created &lt;strong&gt;during&lt;/strong&gt; the event

&lt;ul&gt;
&lt;li&gt;Using frameworks/libraries is fine&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;All entries must be Open Source (link to source code required in entry)&lt;/li&gt;

&lt;li&gt;Must use Supabase in some capacity&lt;/li&gt;

&lt;li&gt;Can be any language or framework&lt;/li&gt;

&lt;li&gt;You must submit before the deadline (no late entries)&lt;/li&gt;

&lt;li&gt;Include a link to a 1-minute demo video&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Info
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Any intellectual property developed during the hackathon will belong to the team that developed it. We expect that each team will have an agreement between themselves regarding the IP, but this is not required.&lt;/li&gt;
&lt;li&gt;By making a submission, you grant Supabase permission to use screenshots, code snippets, and/or links to your project or content of your README on our Twitter, blog, website, email updates, and in the Supabase discord server. Supabase does not make any claims over your IP.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Launch Week 15
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Main Stage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/jwt-signing-keys" rel="noopener noreferrer"&gt;Day 1 - Introducing JWT Signing Keys&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Day 2 - Introducing Supabase Analytics Buckets with Iceberg Support&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/branching-2-0" rel="noopener noreferrer"&gt;Day 3 - Introducing Branching 2.0&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/new-observability-features-in-supabase" rel="noopener noreferrer"&gt;Day 4 - Introducing New Observability Features in Supabase&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/persistent-storage-for-faster-edge-functions" rel="noopener noreferrer"&gt;Day 5 - Introducing Persistent Storage for Edge Functions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/supabase-ui-platform-kit" rel="noopener noreferrer"&gt;01 - Supabase UI: Platform Kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/figma-make-support-for-supabase" rel="noopener noreferrer"&gt;02- Create a Supabase backend using Figma Make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/stripe-engine-as-sync-library" rel="noopener noreferrer"&gt;03- Introducing stripe-sync-engine npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/improved-security-controls" rel="noopener noreferrer"&gt;04 - Improved Security Controls and A New Home for Security&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/algolia-connector-for-supabase" rel="noopener noreferrer"&gt;05 - Algolia Connector for Supabase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/storage-500gb-uploads-cheaper-egress-pricing" rel="noopener noreferrer"&gt;06 - Storage: 10x Larger Uploads, 3x Cheaper Cached Egress &amp;amp; 2x Egress Quota&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events" rel="noopener noreferrer"&gt;Worldwide Community Meetups&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>hackathon</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>Storage: 10x Larger Uploads, 3x Cheaper Cached Egress, and 2x Egress Quota</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Fri, 18 Jul 2025 17:58:27 +0000</pubDate>
      <link>https://dev.to/supabase/storage-10x-larger-uploads-3x-cheaper-cached-egress-and-2x-egress-quota-46ed</link>
      <guid>https://dev.to/supabase/storage-10x-larger-uploads-3x-cheaper-cached-egress-and-2x-egress-quota-46ed</guid>
      <description>&lt;p&gt;&lt;a href="https://supabase.com/launch-week" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⚡️ More on Launch Week&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86687ey9aul4utpuj1x6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86687ey9aul4utpuj1x6.png" alt="decorative"&gt;&lt;/a&gt;&lt;br&gt;
We're very excited to announce &lt;a href="https://supabase.com/storage"&gt;Supabase Storage&lt;/a&gt; is getting better for everyone. We are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increasing the maximum file size to 500 GB, up from 50 GB&lt;/li&gt;
&lt;li&gt;Reducing egress costs for requests cached by our API Gateway to $0.03/GB, down from $0.09/GB&lt;/li&gt;
&lt;li&gt;Doubling egress quotas: free plans get 5 GB of cached egress in addition to 5 GB of uncached egress, and all paid plans get 250 GB of cached egress plus 250 GB of uncached egress, bundled in&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 500 GB limit for individual files is available for all paid plans starting next week. Lower cached egress pricing and increased quotas for cached egress will roll out gradually to all users over the next few weeks and will take effect at the end of your current billing cycle. The net effect is a Storage price reduction for all users.&lt;/p&gt;

&lt;h2&gt;
  
  
  10x Larger Uploads
&lt;/h2&gt;

&lt;p&gt;Our community has asked for better support for increasingly large files, from high-resolution video platforms and media-heavy applications to SaaS platforms handling user-generated data, 3D models, and data archives.&lt;/p&gt;

&lt;p&gt;We have made several optimizations to our platform infrastructure and API gateway to ensure reliable handling of very large files, allowing us to increase the limit from 50 GB to 500 GB for all paid plans.&lt;/p&gt;

&lt;p&gt;Once it's released next week, you can take advantage of this feature by setting the new upload size limit &lt;a href="https://supabase.com/dashboard/project/_/settings/storage"&gt;here&lt;/a&gt; and using the new storage-specific hostname for your uploads. To do this, add &lt;code&gt;storage&lt;/code&gt; after your project ref in the standard Supabase URL: replace &lt;code&gt;project-ref.supabase.co&lt;/code&gt; with &lt;code&gt;project-ref.storage.supabase.co&lt;/code&gt;. The older URL format will continue to work.&lt;/p&gt;
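
&lt;p&gt;The hostname swap is a one-line string change. Here is a minimal sketch (illustrative string handling only, not a Supabase client API; &lt;code&gt;project-ref&lt;/code&gt; is a placeholder):&lt;/p&gt;

```typescript
// Derive the storage-specific hostname from a standard Supabase project URL.
// "project-ref" is a placeholder; substitute your own project ref.
function storageHost(url: string): string {
  return url.replace(".supabase.co", ".storage.supabase.co");
}

console.log(storageHost("https://project-ref.supabase.co"));
// https://project-ref.storage.supabase.co
```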

&lt;p&gt;For uploading large files, we recommend using one of our multipart upload options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/docs/guides/storage/uploads/resumable-uploads"&gt;&lt;strong&gt;Resumable uploads using TUS&lt;/strong&gt;&lt;/a&gt; - Perfect for cases where network interruptions might occur, allowing uploads to resume from where they left off&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/docs/guides/storage/uploads/s3-uploads"&gt;&lt;strong&gt;S3 protocol multipart uploads&lt;/strong&gt;&lt;/a&gt; - Ideal for applications that need S3-compatible upload workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both approaches automatically handle breaking large files into manageable chunks during upload while presenting them as single objects for download.&lt;/p&gt;

&lt;h2&gt;
  
  
  3x Cheaper Cached Egress
&lt;/h2&gt;

&lt;p&gt;All Supabase traffic flows through our API Gateway, which also functions as a content delivery network (CDN). When an asset is cached at the edge (and frequently accessed storage objects typically are), the CDN delivers it immediately. If it isn't cached, the request is forwarded to the region hosting your Supabase project before returning to the user.&lt;/p&gt;
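
&lt;p&gt;The routing above can be pictured as a simple cache-aside lookup. This is a toy model for intuition, not the actual gateway code:&lt;/p&gt;

```typescript
// Toy model of the CDN check: serve from the edge cache when the object
// is present, otherwise fetch from the origin region and cache the result.
const edgeCache = new Map<string, string>();

function fetchFromOrigin(key: string): string {
  // Stand-in for a request forwarded to the project's hosting region.
  return `object:${key}`;
}

function serve(key: string): { body: string; cached: boolean } {
  const hit = edgeCache.get(key);
  if (hit !== undefined) return { body: hit, cached: true };
  const body = fetchFromOrigin(key);
  edgeCache.set(key, body);
  return { body, cached: false };
}

console.log(serve("avatar.png").cached); // false: first request goes to origin
console.log(serve("avatar.png").cached); // true: second request is an edge hit
```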

&lt;p&gt;Initially, we leaned towards keeping our pricing model simple instead of reflecting regional and cache-status variations in egress costs. This unfortunately meant that customers with very high cached storage bandwidth couldn't benefit from our lower cached egress rates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshx1g92ys3k9s8bif9zt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshx1g92ys3k9s8bif9zt.png" alt="cached egress"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzh36merk4c8ai66wlh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzh36merk4c8ai66wlh7.png" alt="uncached egress"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, we are introducing a new pricing line item and are able to offer cached egress at a much lower rate of $0.03/GB. Combined with the &lt;a href="https://supabase.com/docs/guides/storage/cdn/smart-cdn"&gt;Smart CDN for storage&lt;/a&gt;, which significantly increases the cache hit rate, this should substantially reduce the egress bill for our largest Storage users.&lt;/p&gt;

&lt;h2&gt;
  
  
  2x Egress Quota
&lt;/h2&gt;

&lt;p&gt;Paid plans previously included 250 GB of unified egress. We've now split that into 250 GB of cached egress and 250 GB of uncached egress, so customers with high cache hit rates effectively get twice the free egress. Free plans now include 5 GB of cached egress alongside 5 GB of uncached egress.&lt;/p&gt;
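
&lt;p&gt;To make the effect concrete, here is a small sketch that estimates a monthly egress bill using the rates and quotas above. The function name and traffic numbers are illustrative only:&lt;/p&gt;

```typescript
// Rates and included quotas as quoted in this post (paid plans).
const CACHED_RATE = 0.03;      // $/GB for cached egress after the change
const UNCACHED_RATE = 0.09;    // $/GB for uncached egress, unchanged
const CACHED_QUOTA_GB = 250;   // included cached egress
const UNCACHED_QUOTA_GB = 250; // included uncached egress

// Estimate the bill for a month of egress, charging only usage above quota.
function egressBill(cachedGB: number, uncachedGB: number): number {
  const billableCached = Math.max(0, cachedGB - CACHED_QUOTA_GB);
  const billableUncached = Math.max(0, uncachedGB - UNCACHED_QUOTA_GB);
  return billableCached * CACHED_RATE + billableUncached * UNCACHED_RATE;
}

// A project serving 1 TB of egress at a 90% cache hit rate:
// 650 billable cached GB and 0 billable uncached GB.
console.log(egressBill(900, 100).toFixed(2)); // 19.50
```

For comparison, under the previous unified model the same 1 TB workload at $0.09/GB with a single 250 GB quota would have billed (1000 − 250) × $0.09 = $67.50.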

&lt;h2&gt;
  
  
  What Will You Build?
&lt;/h2&gt;

&lt;p&gt;Check out &lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Analytics Buckets&lt;/a&gt;, the other Storage launch this Launch Week, and how we built persistent file storage for Edge Functions with Storage &lt;a href="https://supabase.com/blog/persistent-storage-for-faster-edge-functions" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you have any requests for improving Supabase Storage, &lt;a href="https://x.com/supabase" rel="noopener noreferrer"&gt;let us know&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch Week 15
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Main Stage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/jwt-signing-keys" rel="noopener noreferrer"&gt;Day 1 - Introducing JWT Signing Keys&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Day 2 - Introducing Supabase Analytics Buckets with Iceberg Support&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/branching-2-0" rel="noopener noreferrer"&gt;Day 3 - Introducing Branching 2.0&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/new-observability-features-in-supabase" rel="noopener noreferrer"&gt;Day 4 - Introducing New Observability Features in Supabase&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/persistent-storage-for-faster-edge-functions" rel="noopener noreferrer"&gt;Day 5 - Introducing Persistent Storage for Edge Functions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/supabase-ui-platform-kit" rel="noopener noreferrer"&gt;01 - Supabase UI: Platform Kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/figma-make-support-for-supabase" rel="noopener noreferrer"&gt;02- Create a Supabase backend using Figma Make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/stripe-engine-as-sync-library" rel="noopener noreferrer"&gt;03- Introducing stripe-sync-engine npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/improved-security-controls" rel="noopener noreferrer"&gt;04 - Improved Security Controls and A New Home for Security&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/algolia-connector-for-supabase" rel="noopener noreferrer"&gt;05 - Algolia Connector for Supabase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/storage-500gb-uploads-cheaper-egress-pricing" rel="noopener noreferrer"&gt;06 - Storage: 10x Larger Uploads, 3x Cheaper Cached Egress &amp;amp; 2x Egress Quota&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/lw15-hackathon" rel="noopener noreferrer"&gt;Supabase Launch Week 15 Hackathon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events" rel="noopener noreferrer"&gt;Worldwide Community Meetups&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Persistent Storage and 97% Faster Cold Starts for Edge Functions</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Fri, 18 Jul 2025 15:30:10 +0000</pubDate>
      <link>https://dev.to/supabase/persistent-storage-and-97-faster-cold-starts-for-edge-functions-516f</link>
      <guid>https://dev.to/supabase/persistent-storage-and-97-faster-cold-starts-for-edge-functions-516f</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xpbl0etesl3u8m3gplt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xpbl0etesl3u8m3gplt.png" alt="example code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, we are introducing Persistent Storage and up to 97% faster cold start times for Edge Functions. Previously, Edge Functions only supported ephemeral file storage by writing to the &lt;code&gt;/tmp&lt;/code&gt; directory. Many common libraries for tasks such as zipping/unzipping files and image transformations are built to work with persistent file storage, so making them work with Edge Functions required extra steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/launch-week" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⚡️ More on Launch Week&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;The persistent storage option is built on top of the S3 protocol. It allows you to mount any &lt;a href="https://supabase.com/docs/guides/storage/s3/compatibility" rel="noopener noreferrer"&gt;S3-compatible bucket&lt;/a&gt;, including &lt;a href="https://supabase.com/docs/guides/storage" rel="noopener noreferrer"&gt;Supabase Storage Buckets&lt;/a&gt;, as a directory for your Edge Functions. You can perform operations such as reading and writing files to the mounted buckets as you would in a POSIX file system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// read from S3 bucket&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/s3/my-bucket/results.csv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// make a directory&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mkdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/s3/my-bucket/sub-dir&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// write to S3 bucket&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeTextFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/s3/my-bucket/demo.txt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello world&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/h3mQrDC4g14"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  How to configure
&lt;/h2&gt;

&lt;p&gt;To access an S3 bucket from Edge Functions, you must set the following as environment variables in Edge Function Secrets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;S3FS_ENDPOINT_URL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;S3FS_REGION&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;S3FS_ACCESS_KEY_ID&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;S3FS_SECRET_ACCESS_KEY&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are using Supabase Storage, &lt;a href="https://supabase.com/docs/guides/storage/s3/authentication" rel="noopener noreferrer"&gt;follow this guide&lt;/a&gt; to enable the S3 protocol and create an access key ID and secret access key.&lt;/p&gt;
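
&lt;p&gt;For example, these can be set with the Supabase CLI's &lt;code&gt;secrets set&lt;/code&gt; command. The values below are placeholders; use your own project's S3 endpoint, region, and credentials:&lt;/p&gt;

```shell
# Set the S3 File System secrets for Edge Functions (placeholder values).
supabase secrets set \
  S3FS_ENDPOINT_URL=https://project-ref.supabase.co/storage/v1/s3 \
  S3FS_REGION=us-east-1 \
  S3FS_ACCESS_KEY_ID=your-access-key-id \
  S3FS_SECRET_ACCESS_KEY=your-secret-access-key
```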

&lt;h2&gt;
  
  
  Use Case: SQLite in Edge Functions
&lt;/h2&gt;

&lt;p&gt;The S3 File System simplifies workflows that involve reading and transforming data stored in an S3 bucket.&lt;/p&gt;

&lt;p&gt;For example, imagine you are building an IoT app where a device backs up its SQLite database to S3. You can set up a scheduled Edge Function to read this data and push it to your primary Postgres database for aggregates and reporting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Following example is simplified for readability&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;DB&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://deno.land/x/sqlite@v3.9.1/mod.ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../shared/client.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;today&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;T&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;backupDBPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`backups/backup-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;today&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.db`&lt;/span&gt;

&lt;span class="c1"&gt;// Use S3 FS to read the Sqlite DB&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`/s3/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;backupDBPath&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Create an in-memory SQLite from the data downloaded from S3&lt;/span&gt;
&lt;span class="c1"&gt;// This is faster than directly reading from S3&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DB&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;deserialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;calculateStats&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IoTData&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="nx"&gt;date&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;StatsSummary&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ....&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Assuming IoT data is stored in a table called 'sensor_data'&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;queryEntries&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;IoTData&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`
    SELECT * FROM sensor_data
    WHERE date(timestamp) = date('now', 'localtime')
  `&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;// Calculate statistics&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;calculateStats&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;today&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="c1"&gt;// Insert stats into Supabase&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;iot_daily_stats&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;OK);
});

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  97% Faster Function Boot Times, Even Under Load
&lt;/h2&gt;

&lt;p&gt;Previously, Edge Functions with large dependencies or preparation work at startup (e.g., parsing/loading configs, initializing AI models) would incur a noticeable boot delay. Sometimes these slow neighbors could impact other functions running on the same machine. All JavaScript &lt;em&gt;workers&lt;/em&gt; in the Supabase Edge Functions Runtime were cooperatively scheduled on the same &lt;a href="https://github.com/tokio-rs/tokio" rel="noopener noreferrer"&gt;&lt;strong&gt;Tokio thread pool&lt;/strong&gt;&lt;/a&gt;. If one worker had heavy startup logic, such as parsing JavaScript modules or running synchronous operations, it could delay every worker scheduled after it. This led to occasional long‑tail latency spikes in high-traffic projects.&lt;br&gt;
To address this, we moved workers that are still performing initial script evaluation onto a dedicated blocking pool. This prevents heavy initialization tasks from blocking the Tokio threads, significantly reducing boot time spikes for other functions.&lt;/p&gt;
&lt;h3&gt;
  
  
  The result
&lt;/h3&gt;

&lt;p&gt;Boot times are now more predictable, and cold starts are much faster. Here are the results of a &lt;a href="https://github.com/supabase/edge-runtime/blob/develop/k6/specs/mixed.ts" rel="noopener noreferrer"&gt;benchmark&lt;/a&gt; we ran comparing boot times before and after these changes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Avg&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;870 ms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;42 ms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;95 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;P95&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8,502 ms&lt;/td&gt;
&lt;td&gt;86 ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;99 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;P99&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;15,069 ms&lt;/td&gt;
&lt;td&gt;460 ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;97 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worst&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;24,300 ms&lt;/td&gt;
&lt;td&gt;1,630 ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;93 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Spikes &amp;gt; 1 s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;47 %&lt;/td&gt;
&lt;td&gt;4 %&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;43 pp&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Support for Synchronous APIs
&lt;/h2&gt;

&lt;p&gt;By offloading expensive compute at function boot time onto a separate pool, we were able to enable the use of synchronous File APIs during function boot. Some libraries only support synchronous File APIs (e.g., SQLite), and this allows you to set them up in an Edge Function before it starts processing requests.&lt;br&gt;
You can now safely use the following synchronous Deno APIs (and their Node counterparts) &lt;em&gt;during&lt;/em&gt; initial script evaluation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Deno.statSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.removeSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.writeFileSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.writeTextFileSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.readFileSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.readTextFileSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.mkdirSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.makeTempDirSync&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Deno.readDirSync&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Keep in mind&lt;/strong&gt; that the sync APIs are available only during initial script evaluation and aren’t supported in callbacks like HTTP handlers or &lt;code&gt;setTimeout&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;statSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// ✅&lt;/span&gt;

&lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;statSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 💣 ERROR! Deno.statSync is blocklisted on the current context&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;statSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 💣 ERROR! Deno.statSync is blocklisted on the current context&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Try it on Preview Today
&lt;/h2&gt;

&lt;p&gt;These changes will be rolled out along with the Deno 2 upgrade to all clusters within the next two weeks. In the meantime, you can use the Preview cluster to try them out today. See &lt;a href="https://github.com/orgs/supabase/discussions/36814" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; for how to test your functions in the Preview cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch Week 15
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Main Stage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/jwt-signing-keys" rel="noopener noreferrer"&gt;Day 1 - Introducing JWT Signing Keys&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Day 2 - Introducing Supabase Analytics Buckets with Iceberg Support&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/branching-2-0" rel="noopener noreferrer"&gt;Day 3 - Introducing Branching 2.0&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/new-observability-features-in-supabase" rel="noopener noreferrer"&gt;Day 4 - Introducing New Observability Features in Supabase&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/persistent-storage-for-faster-edge-functions" rel="noopener noreferrer"&gt;Day 5 - Introducing Persistent Storage for Edge Functions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/supabase-ui-platform-kit" rel="noopener noreferrer"&gt;01 - Supabase UI: Platform Kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/figma-make-support-for-supabase" rel="noopener noreferrer"&gt;02- Create a Supabase backend using Figma Make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/stripe-engine-as-sync-library" rel="noopener noreferrer"&gt;03- Introducing stripe-sync-engine npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://supabase.com/blog/improved-security-controls" rel="noopener noreferrer"&gt;04 - Improved Security Controls and A New Home for Security&lt;/a&gt;
-&lt;a href="https://supabase.com/blog/algolia-connector-for-supabase" rel="noopener noreferrer"&gt;05 - Algolia Connector for Supabase&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events" rel="noopener noreferrer"&gt;Worldwide Community Meetups&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Algolia Connector for Supabase</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Thu, 17 Jul 2025 19:25:33 +0000</pubDate>
      <link>https://dev.to/supabase/algolia-connector-for-supabase-2nk</link>
      <guid>https://dev.to/supabase/algolia-connector-for-supabase-2nk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yd2qdryd1sauuv0lq1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yd2qdryd1sauuv0lq1x.png" alt="decorative"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, Algolia is launching a new Supabase Connector, making it easier than ever to index your Postgres data and power world-class search experiences without writing a single line of code.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/sLr6-K7_Av8"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;With just a few clicks, you can connect your Supabase database to Algolia, select the tables you want to sync, and configure how often the data updates. Algolia handles the rest. You get a fast, reliable, scalable search index, and your team gets to focus on building.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/launch-week" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⚡️ More on Launch Week&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Partners Integrating with Supabase
&lt;/h2&gt;

&lt;p&gt;Supabase is more than a backend. It is a growing ecosystem of tools that work well together so developers can build faster, scale more easily, and stay focused on their product.&lt;/p&gt;

&lt;p&gt;Partners like Algolia bring best-in-class functionality (in Algolia’s case, fast and flexible search) directly into the Supabase workflow. For developers, that means fewer workarounds, no glue code, and a smoother path from idea to production.&lt;/p&gt;

&lt;p&gt;For partners, integrating with Supabase means more than technical compatibility. It means product visibility to tens of thousands of active projects. Supabase regularly features integrations in our docs, Launch Weeks, blog, and community programs. Developers discover and adopt your product in the context where they’re already building.&lt;/p&gt;

&lt;p&gt;Read on to see how the Algolia Connector for Supabase works.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Algolia Connector for Supabase
&lt;/h2&gt;

&lt;p&gt;To get started with Algolia’s connector, prepare the data in your Supabase database, add Supabase as a source in Algolia’s dashboard, set up your Algolia index, and configure your sync job. Here’s how you can &lt;a href="https://www.algolia.com/doc/guides/sending-and-managing-data/send-and-update-your-data/connectors/supabase" rel="noopener noreferrer"&gt;get started&lt;/a&gt; in just a few minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Prepare your data in Supabase
&lt;/h3&gt;

&lt;p&gt;Before you connect to Algolia, you will want to ensure all the fields you want to make searchable are in one place. If the fields you want to index live in more than one table, you can stitch them together in a &lt;a href="https://dev.to/docs/guides/graphql/views"&gt;Postgres View&lt;/a&gt;, allowing Algolia’s connector to get all the data you want to index.&lt;/p&gt;

&lt;p&gt;For example, imagine you’re creating an app that allows you to easily find a movie to watch. You want to search across movie titles, genres, rating and actors. However, movies and actors are in two separate tables. You can create a view (e.g., &lt;code&gt;movies_view&lt;/code&gt;) that combines the columns you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;view&lt;/span&gt; &lt;span class="n"&gt;movies_view&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt;
  &lt;span class="k"&gt;select&lt;/span&gt;
    &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;objectID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;-- Algolia’s unique key&lt;/span&gt;
    &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;array_agg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;distinct&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;actor_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;actor_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;genre&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rating&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vote_count&lt;/span&gt;
  &lt;span class="k"&gt;from&lt;/span&gt;
    &lt;span class="n"&gt;movies&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;
    &lt;span class="k"&gt;left&lt;/span&gt; &lt;span class="k"&gt;join&lt;/span&gt; &lt;span class="n"&gt;movie_cast&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="k"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;movie_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;
  &lt;span class="k"&gt;group&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rating&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vote_count&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later in the Algolia dashboard, you will be able to pick exactly which columns you want to index.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Go to Algolia dashboard
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In Algolia, go to &lt;a href="https://dashboard.algolia.com/connectors" rel="noopener noreferrer"&gt;Data Sources → Connectors&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Find "Supabase" in the list and click &lt;a href="https://dashboard.algolia.com/connectors/supabase/create" rel="noopener noreferrer"&gt;Connect&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3. Configure your data source
&lt;/h3&gt;

&lt;p&gt;First, you will need to fill in your Supabase connection info. From the Supabase dashboard:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;a href="https://dev.to/dashboard/project/_?showConnect=true"&gt;Connect&lt;/a&gt; button at the top of the dashboard header&lt;/li&gt;
&lt;li&gt;Scroll down to &lt;strong&gt;Connection Info → Transaction Pooler&lt;/strong&gt; and copy &lt;strong&gt;host&lt;/strong&gt;, &lt;strong&gt;port&lt;/strong&gt;, &lt;strong&gt;database name&lt;/strong&gt;, and &lt;strong&gt;username&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Paste the database credentials into the Algolia setup screen&lt;/li&gt;
&lt;li&gt;Enter your Supabase database &lt;strong&gt;password&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select your &lt;strong&gt;schema&lt;/strong&gt; (usually &lt;code&gt;public&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Give your source a name like &lt;code&gt;supabase_movies&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Algolia will check the connection and confirm your credentials&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  4. Configure your destination
&lt;/h3&gt;

&lt;p&gt;Once you create Supabase as a data source, you'll need to tell Algolia where to index your data.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select an existing Algolia index or create a new one (e.g. &lt;code&gt;supabase_movies_index&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Add Index Credentials to this destination by clicking &lt;strong&gt;Create one for me&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create destination&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  5. Configure your task and run your sync job
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Choose how often you want it to sync your data (e.g. every 6 hours)&lt;/li&gt;
&lt;li&gt;Pick whether to do full syncs or partial updates&lt;/li&gt;
&lt;li&gt;Select the table or view you want to index. We recommend selecting only one table or view for each index&lt;/li&gt;
&lt;li&gt;Choose your &lt;a href="https://www.algolia.com/doc/guides/sending-and-managing-data/prepare-your-data/in-depth/what-is-in-a-record/#unique-record-identifier" rel="noopener noreferrer"&gt;objectID&lt;/a&gt; (usually your primary key)&lt;/li&gt;
&lt;/ol&gt;
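&lt;p&gt;Conceptually, each synced row becomes one Algolia record whose unique key is &lt;code&gt;objectID&lt;/code&gt;. A minimal TypeScript sketch of that mapping (the &lt;code&gt;MovieRow&lt;/code&gt; type and &lt;code&gt;toAlgoliaRecord&lt;/code&gt; helper are illustrative only; the connector performs this shaping for you):&lt;/p&gt;

```typescript
// Illustrative sketch of the row-to-record shaping the sync task performs:
// every Algolia record needs a unique `objectID`, filled here from the
// column you picked as the primary key. `MovieRow` is a hypothetical type
// mirroring the movies_view columns above.
interface MovieRow {
  id: number;
  title: string;
  actor_name: string[];
  genre: string;
  rating: number;
  vote_count: number;
}

type AlgoliaRecord = Omit<MovieRow, "id"> & { objectID: string };

function toAlgoliaRecord(row: MovieRow): AlgoliaRecord {
  // Drop the raw `id` and re-expose it as Algolia's `objectID`.
  const { id, ...fields } = row;
  return { objectID: String(id), ...fields };
}

const record = toAlgoliaRecord({
  id: 603,
  title: "The Matrix",
  actor_name: ["Keanu Reeves", "Carrie-Anne Moss"],
  genre: "Sci-Fi",
  rating: 8.7,
  vote_count: 1800000,
});
// record.objectID is "603"; the record no longer carries a bare `id`.
```

&lt;p&gt;Casting the primary key to a string keeps the record’s identity stable across syncs, so repeated runs update records in place instead of duplicating them.&lt;/p&gt;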

&lt;p&gt;Once configured, create the task. Algolia will start syncing records from Supabase into your search index (in the YouTube demo above, 8,800+ movie records were synced in under a minute).&lt;/p&gt;

&lt;p&gt;You can now instantly search your Supabase data using Algolia's lightning-fast API.&lt;/p&gt;

&lt;h2&gt;
  
  
  No more data pipelines. Just fast search.
&lt;/h2&gt;

&lt;p&gt;With the Algolia + Supabase connector, you don’t need to build or maintain custom data pipelines, and you don’t need to scale your own search infrastructure. Using Algolia’s API clients, you just connect and go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/dashboard"&gt;Supabase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dashboard.algolia.com/users/sign_up" rel="noopener noreferrer"&gt;Algolia&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Launch Week 15
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Main Stage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/jwt-signing-keys" rel="noopener noreferrer"&gt;Day 1 - Introducing JWT Signing Keys&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Day 2 - Introducing Supabase Analytics Buckets with Iceberg Support&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/branching-2-0" rel="noopener noreferrer"&gt;Day 3 - Introducing Branching 2.0&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/new-observability-features-in-supabase" rel="noopener noreferrer"&gt;Day 4 - Introducing New Observability Features in Supabase&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/supabase-ui-platform-kit" rel="noopener noreferrer"&gt;01 - Supabase UI: Platform Kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/figma-make-support-for-supabase" rel="noopener noreferrer"&gt;02- Create a Supabase backend using Figma Make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/stripe-engine-as-sync-library" rel="noopener noreferrer"&gt;03- Introducing stripe-sync-engine npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://supabase.com/blog/improved-security-controls" rel="noopener noreferrer"&gt;04 - Improved Security Controls and A New Home for Security&lt;/a&gt;
-&lt;a href="https://supabase.com/blog/algolia-connector-for-supabase" rel="noopener noreferrer"&gt;05 - Algolia Connector for Supabase&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events" rel="noopener noreferrer"&gt;Worldwide Community Meetups&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>New Observability Features in Supabase</title>
      <dc:creator>Yuri</dc:creator>
      <pubDate>Thu, 17 Jul 2025 14:37:34 +0000</pubDate>
      <link>https://dev.to/supabase/new-observability-features-in-supabase-ep3</link>
      <guid>https://dev.to/supabase/new-observability-features-in-supabase-ep3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92bdqjqpqd6dk0rh7k9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92bdqjqpqd6dk0rh7k9r.png" alt="decorative"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are starting to add OpenTelemetry support to &lt;a href="https://github.com/supabase/storage/pull/494" rel="noopener noreferrer"&gt;all&lt;/a&gt; &lt;a href="https://github.com/supabase/auth/pull/679" rel="noopener noreferrer"&gt;our&lt;/a&gt; &lt;a href="https://github.com/supabase/edge-runtime/pull/554" rel="noopener noreferrer"&gt;core&lt;/a&gt; &lt;a href="https://github.com/supabase/realtime/commit/c9683f3f5f94bd2e37494f02c1f4415551e96e5b" rel="noopener noreferrer"&gt;products&lt;/a&gt; and &lt;a href="https://github.com/Logflare/logflare/pulls?q=is%3Apr+otel+sort%3Acreated-asc" rel="noopener noreferrer"&gt;our Telemetry server&lt;/a&gt;. OpenTelemetry (OTel) standardizes logs, metrics, and traces in a vendor-agnostic format, so you can ingest data into tools like Datadog, Honeycomb, or any monitoring solution you already use. While you'll still have the freedom to bring your own observability stack, we're preparing to surface this data natively in the Supabase dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/launch-week" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;⚡️ More on Launch Week&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Today we are launching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A preview of our new logging interface&lt;/li&gt;
&lt;li&gt;Advanced Product Reports&lt;/li&gt;
&lt;li&gt;Supabase AI Assistant with debugging capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates mark the first step toward unified, end-to-end observability. You won't get the full OTel visualization just yet, but with these foundations in place, you'll soon be able to trace, analyze errors and performance issues, and troubleshoot your entire stack without leaving Supabase.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/pLto2PD4-O8"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  New Logging Interface
&lt;/h2&gt;

&lt;p&gt;Supabase is a collection of seamlessly integrated services. Storage talks to Postgres via the dedicated connection pooler. Edge Functions can talk to Auth and Realtime. If storage uploads fail, you must determine whether the problem lies with the storage server, the dedicated connection pooler, or the database. Until now, pinpointing the root cause meant jumping between multiple log streams.&lt;/p&gt;

&lt;p&gt;Starting today, there is one interleaved stream of logs across all services. You can trace a single request across the entire Supabase stack. No more jumping between tabs to diagnose errors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1mg3w4lqa0r3d0ilk8w.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1mg3w4lqa0r3d0ilk8w.webp" alt="unified logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have also added contextual log views. You can now jump from a function's invocation log directly into its execution logs. What used to require two disconnected sources is now stitched together in one view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mjbteqvs9wib7ql67dh.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mjbteqvs9wib7ql67dh.webp" alt="contextual logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new interface also supports filtering logs by request status code, method, path, log level, and the auth user associated with the request. This means you can find all PostgREST 500 errors, or all requests made by a specific user, in a few clicks.&lt;/p&gt;

&lt;p&gt;Shoutout to &lt;a href="http://openstatus.dev" rel="noopener noreferrer"&gt;openstatus.dev&lt;/a&gt; for providing the inspiration for some of our Log components.&lt;/p&gt;

&lt;p&gt;The new logging interface is available as a feature preview today, which you can enable from the dashboard &lt;a href="https://supabase.com/dashboard/project/_?featurePreviewModal=supabase-ui-preview-unified-logs" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The new interface currently supports API Gateway logs and Postgres logs, with logs for the other products coming soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Product Reports
&lt;/h2&gt;

&lt;p&gt;Beyond improving our logs, we have also revamped the metrics exposed in our product reports. Previously, you had to host your own &lt;a href="https://github.com/supabase/supabase-grafana" rel="noopener noreferrer"&gt;Grafana dashboard&lt;/a&gt; to access some of these advanced metrics. We are bringing some of them directly into the dashboard, so you can access them without any additional setup or maintaining production-ready monitoring infrastructure of your own.&lt;/p&gt;

&lt;p&gt;Each product has its own dedicated &lt;a href="https://supabase.com/dashboard/project/_/reports/api-overview" rel="noopener noreferrer"&gt;report&lt;/a&gt; with a common set of metrics like number of requests, egress, and response time, along with product specific metrics like “Realtime connected clients”.&lt;/p&gt;

&lt;p&gt;Additionally, you can drill into a specific time frame and filter by various request and response parameters across all reports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5reinlld5m7p5khw3hh.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5reinlld5m7p5khw3hh.webp" alt="reports"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Free users get a basic set of metrics for all products, while some of the advanced metrics (like p99 response time) are available to all paid customers.&lt;/p&gt;

&lt;p&gt;Try out the new reports &lt;a href="https://supabase.com/dashboard/project/_/reports/api-overview" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supabase AI Assistant with debugging capabilities
&lt;/h2&gt;

&lt;p&gt;The Supabase AI Assistant now offers powerful new debugging capabilities, making it easier to identify and resolve issues across your stack.&lt;/p&gt;

&lt;p&gt;You can now ask the Assistant to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieve logs for any Supabase product&lt;/li&gt;
&lt;li&gt;Analyze log volume over time to identify spikes&lt;/li&gt;
&lt;li&gt;Drill into specific time windows to investigate anomalies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means you can go from "something looks off" to concrete answers, without leaving the chat.&lt;/p&gt;

&lt;p&gt;The Assistant also comes with several quality-of-life upgrades:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic chat renaming&lt;/strong&gt; based on your queries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branch diff reviews&lt;/strong&gt;, perfect for projects using branching environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A more capable model&lt;/strong&gt; with additional controls for data privacy and security improvements built in&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's the fastest way to get answers for your project, whether you're debugging a failing function, reviewing changes between branches, or just trying to understand how your app is behaving in production.&lt;/p&gt;

&lt;p&gt;This is an example of how the Assistant can analyze logs to identify issues:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjbt0jsdg9b9utz4chc9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjbt0jsdg9b9utz4chc9.webp" alt="ai debugging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It can also provide recommendations for fixes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26l369u8u3362onndb6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26l369u8u3362onndb6t.png" alt="ai recommendations"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We'll keep adding more metrics across our reports&lt;/li&gt;
&lt;li&gt;We're adding logs from the remaining products to the new logging interface&lt;/li&gt;
&lt;li&gt;We plan to make the new logging interface the default experience for all projects soon&lt;/li&gt;
&lt;li&gt;We'll expose OpenTelemetry trace information in the logging interface&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Launch Week 15
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Main Stage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://supabase.com/blog/jwt-signing-keys" rel="noopener noreferrer"&gt;Day 1 - Introducing JWT Signing Keys&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/analytics-buckets" rel="noopener noreferrer"&gt;Day 2 - Introducing Supabase Analytics Buckets with Iceberg Support&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/branching-2-0" rel="noopener noreferrer"&gt;Day 3 - Introducing Branching 2.0&lt;/a&gt;&lt;br&gt;
&lt;a href="https://supabase.com/blog/new-observability-features-in-supabase" rel="noopener noreferrer"&gt;Day 4 - Introducing New Observability Features in Supabase&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Stage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/supabase-ui-platform-kit" rel="noopener noreferrer"&gt;01 - Supabase UI: Platform Kit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/figma-make-support-for-supabase" rel="noopener noreferrer"&gt;02- Create a Supabase backend using Figma Make&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/stripe-engine-as-sync-library" rel="noopener noreferrer"&gt;03- Introducing stripe-sync-engine npm package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://supabase.com/blog/improved-security-controls" rel="noopener noreferrer"&gt;04 - Improved Security Controls and A New Home for Security&lt;/a&gt;
-&lt;a href="https://supabase.com/blog/algolia-connector-for-supabase" rel="noopener noreferrer"&gt;05 - Algolia Connector for Supabase&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://supabase.com/events" rel="noopener noreferrer"&gt;Worldwide Community Meetups&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
