<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Outerbase</title>
    <description>The latest articles on DEV Community by Outerbase (@outerbase).</description>
    <link>https://dev.to/outerbase</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6863%2Fa2c20375-7433-457a-aa72-bd18c255ee08.jpeg</url>
      <title>DEV Community: Outerbase</title>
      <link>https://dev.to/outerbase</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/outerbase"/>
    <language>en</language>
    <item>
      <title>5 AI-Powered Developer Tools Changing the Way We Work in 2025</title>
      <dc:creator>brandon</dc:creator>
      <pubDate>Sat, 18 Jan 2025 16:58:17 +0000</pubDate>
      <link>https://dev.to/outerbase/5-ai-powered-developer-tools-changing-the-way-we-work-in-2025-17ff</link>
      <guid>https://dev.to/outerbase/5-ai-powered-developer-tools-changing-the-way-we-work-in-2025-17ff</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Whether you are excited about it or not, AI is not a passing trend; it is here to stay. AI now helps developers write code, design interfaces, analyze data, create documentation, and even review code. With almost every tool labeled “AI-powered,” figuring out which ones can truly help you feels daunting. In this article, I explore five dependable AI tools and suggest a few alternatives, so you can pick the best option for your workflow and improve your efficiency.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz4461ckoiisfykls9c2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftz4461ckoiisfykls9c2.png" alt="Cursor AI on Outerbase Studio Repo" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. AI Code Editor: Cursor
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Replaces: VS Code&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Cursor is probably the most popular AI code editor available. It works with you throughout the entire coding process: it helps with autocompletion and code generation, and with its new agent workflow it can build and navigate entire apps for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alternative to consider:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windsurf&lt;/strong&gt; – A new code editor that rivals established tools by offering powerful AI capabilities.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoszzefrh4cgdfjb65wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoszzefrh4cgdfjb65wj.png" alt="v0 AI generating Outerbase to do list" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. AI Design Tool: v0
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Replaces: Figma&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;v0 uses AI to streamline your design process. It offers automated layout suggestions, quick prototypes, and an easy-to-use chat interface, and even generates the code for you so you spend less time setting up and more time improving your designs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Another design tool:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Magic Patterns&lt;/strong&gt; – Use AI to create custom components and pages with minimal effort.
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9y8qw0n9la1405zx6iw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9y8qw0n9la1405zx6iw.png" alt="Outerbase Dashboard Exploration" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. AI Data Platform: Outerbase
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Replaces: DataGrip, Tableau&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Outerbase combines database management with data visualization. Its user-friendly interface makes querying and analyzing data straightforward, helping teams see insights faster and make better decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out this tool as well:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ThoughtSpot&lt;/strong&gt; – A BI solution that uses AI for in-depth analytics.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqwxehx0cxeu5x6z1cej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqwxehx0cxeu5x6z1cej.png" alt="Mintlify Documentation" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. AI Documentation Platform: Mintlify
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Replaces: Docusaurus&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Mintlify lets you create clear, SEO-friendly documentation without a heavy setup. It offers version control, full-text search, and simple customization, making it easy to keep your docs organized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other documentation solution:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Notion AI&lt;/strong&gt; – Helps you write and organize documentation quickly.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6cjswyzw8qrbffhxloa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6cjswyzw8qrbffhxloa.png" alt="Code Rabbit PR Reviewer" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. AI Code Review Tool: CodeRabbit
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Replaces: Traditional Pull Request Reviews&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;CodeRabbit adds AI insights to the code review process. It scans for possible errors, offers suggestions for improvements, and keeps your codebase tidy. It also helps reviews go faster while still maintaining quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Another tool to consider:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Greptile&lt;/strong&gt; – Uses AI to understand your codebase, review pull requests, detect bugs, and streamline tasks.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I truly believe these five AI-powered developer tools are changing the way developers work. Whether you’re writing code, designing user interfaces, analyzing data, documenting projects, or reviewing code, there’s already an AI tool ready to speed things up.&lt;/p&gt;

&lt;p&gt;Try them out, compare features, and pick the ones that fit your workflow best.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>database</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Postgres vs. MySQL</title>
      <dc:creator>brandon</dc:creator>
      <pubDate>Thu, 16 Jan 2025 17:38:23 +0000</pubDate>
      <link>https://dev.to/outerbase/postgres-vs-mysql-14cp</link>
      <guid>https://dev.to/outerbase/postgres-vs-mysql-14cp</guid>
      <description>&lt;p&gt;Hello, I'm Brandon, CEO &amp;amp; co-founder of &lt;a href="https://www.outerbase.com" rel="noopener noreferrer"&gt;Outerbase&lt;/a&gt;, where we are building a modern data platform. We work with thousands of developers managing their data every day, and I’ve seen firsthand how PostgreSQL and MySQL stand out as two of the most popular (and powerful!) databases. In this article, I’ll compare both, covering their strengths, weaknesses, and nuanced differences, so you can decide which one suits your needs best.&lt;/p&gt;

&lt;p&gt;Relational databases have powered countless applications for decades, and they remain the backbone of many modern systems. When it comes to production-ready options, two stand out as the most widely used: &lt;strong&gt;PostgreSQL&lt;/strong&gt; and &lt;strong&gt;MySQL&lt;/strong&gt;. Both deliver solid performance, reliability, and community support, but there are notable differences in the way they handle data, their feature sets, and how easy they are to configure. Understanding these nuances can help you pick the right one for your specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR – When Should You Use PostgreSQL or MySQL?
&lt;/h2&gt;

&lt;p&gt;The table below summarizes some of the biggest differences at a glance:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Criterion&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;MySQL&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Advanced (schemas, custom types, JSON)&lt;/td&gt;
&lt;td&gt;Simpler (distinct databases)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Complex Queries&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Excellent (window functions, CTE)&lt;/td&gt;
&lt;td&gt;Adequate, but fewer advanced capabilities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Strong in complex writes, concurrency&lt;/td&gt;
&lt;td&gt;Strong in read-heavy workloads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Extensibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Highly extensible (custom functions)&lt;/td&gt;
&lt;td&gt;More limited, but large ecosystem&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Licensing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;PostgreSQL license (BSD/MIT-like)&lt;/td&gt;
&lt;td&gt;GPL + commercial license from Oracle&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  Feature Overview
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;&lt;br&gt;
PostgreSQL uses schemas to organize data within a single database, giving teams fine-grained control over permissions and logical data partitions. It also supports a wide range of data types, including JSON, arrays, ranges, and even custom-defined types, making it attractive for applications that handle complex or semi-structured data. The database uses Multi-Version Concurrency Control (MVCC) to reduce lock contention, so it typically excels at heavy write loads and complex queries that benefit from features like window functions and Common Table Expressions (CTEs). Another key strength is extensibility: you can add custom functions, operators, or extensions such as PostGIS for geospatial data—handy if your application requires specialized capabilities.&lt;/p&gt;
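
&lt;p&gt;As a rough sketch of how these pieces fit together (the schema, type, and table names below are illustrative, not from any real project):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Organize related tables into a schema within one database
CREATE SCHEMA billing;

-- Define a custom type
CREATE TYPE billing.invoice_status AS ENUM ('draft', 'sent', 'paid');

-- Mix structured columns with semi-structured JSONB data
CREATE TABLE billing.invoices (
    id         bigserial PRIMARY KEY,
    status     billing.invoice_status NOT NULL DEFAULT 'draft',
    line_items jsonb NOT NULL
);
&lt;/code&gt;&lt;/pre&gt;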

&lt;p&gt;&lt;strong&gt;MySQL&lt;/strong&gt;&lt;br&gt;
MySQL, on the other hand, organizes data more simply, using distinct databases rather than schemas. This can make life easier for smaller projects or teams that want to keep data isolated by simply spinning up a new database. One of MySQL’s biggest selling points is its strong performance in read-heavy scenarios, especially when the InnoDB engine is paired with proper indexing and caching. It’s also known for straightforward replication, which many high-traffic websites use to distribute read operations across multiple servers and deliver faster responses to users around the globe. MySQL is generally easy to set up and has a vast knowledge base, which is appealing if you need to get a project off the ground quickly or if your team is already familiar with the MySQL ecosystem.&lt;/p&gt;
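
&lt;p&gt;For illustration, the classic source/replica setup boils down to a few statements (MySQL 8.0.23+ syntax; host names and credentials are placeholders, and a real setup also needs binary logging enabled and an initial data snapshot):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- On the source: create a user the replica can connect as
CREATE USER 'repl'@'%' IDENTIFIED BY 'change-me';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica: point at the source and start replicating
CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'source.example.com',
    SOURCE_USER = 'repl',
    SOURCE_PASSWORD = 'change-me';
START REPLICA;
&lt;/code&gt;&lt;/pre&gt;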

&lt;h3&gt;
  
  
  Database Details
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Read/Write Throughput&lt;/strong&gt;&lt;br&gt;
MySQL typically shines in handling read-intensive workloads, provided that indexes and caching layers are properly tuned. Some large-scale users, such as Uber, have found success with MySQL even for hefty write loads, once the database is carefully configured. For straightforward inserts and updates, MySQL can match PostgreSQL in many benchmarks. However, PostgreSQL often takes the lead with more complex writes and intricate queries. Its concurrency features, enhanced by MVCC, reduce lock contention and allow it to maintain high performance in scenarios that involve numerous transactions simultaneously. With proper tuning, PostgreSQL can match or exceed MySQL’s performance in typical OLTP or analytical workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;br&gt;
Both databases scale well, but they do so differently. PostgreSQL responds favorably to vertical scaling—adding more CPU, RAM, or faster storage often yields significant benefits. Horizontal scaling is a bit more involved; tools like PgBouncer for connection pooling and logical replication can help, and large platforms like Instagram and Notion have demonstrated that it can support vast user bases. MySQL has long been praised for its straightforward replication (master-replica), making it easy to offload read traffic and distribute those queries across multiple servers. This built-in replication setup is often enough for many use cases where global read scalability is paramount.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indexing and Query Optimization&lt;/strong&gt;&lt;br&gt;
PostgreSQL provides diverse index types, such as B-tree, GiST, GIN, and BRIN, which cater to specific types of queries and can significantly enhance performance. It also has sophisticated JSON indexing and full-text search capabilities, though you may need to enable certain extensions. MySQL’s InnoDB engine primarily relies on B-tree indexes, suitable for most common query patterns, and it has some full-text indexing capability—though not as extensive as PostgreSQL’s.&lt;/p&gt;
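
&lt;p&gt;A quick sketch of how those index types are declared in PostgreSQL (the table and column names are made up for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- B-tree: the default, good for equality and range lookups
CREATE INDEX idx_orders_created ON orders (created_at);

-- GIN: speeds up containment queries over JSONB documents
CREATE INDEX idx_orders_payload ON orders USING GIN (payload);

-- BRIN: a very compact index for large, naturally ordered tables
CREATE INDEX idx_orders_created_brin ON orders USING BRIN (created_at);
&lt;/code&gt;&lt;/pre&gt;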

&lt;p&gt;&lt;strong&gt;Performance Tuning&lt;/strong&gt;&lt;br&gt;
Both PostgreSQL and MySQL require tuning parameters (e.g., buffer sizes, caching, checkpoint intervals) to optimize performance. PostgreSQL can be more involved, especially for new users, but with well-designed indexes and queries, either database can scale effectively in most production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recent Trends and Recognition&lt;/strong&gt;&lt;br&gt;
In recent years, PostgreSQL has been gaining popularity at a rapid pace, earning accolades like DBMS of the Year and making strides in developer surveys. Its permissive license and modern feature set continue to draw new users. Nonetheless, MySQL remains the most installed open-source relational database worldwide, fueled by Oracle’s backing and an enormous community. Its stability, simplicity, and ecosystem of hosting providers and tools ensure its continued dominance in many scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License Considerations&lt;/strong&gt;&lt;br&gt;
MySQL’s Community Edition is GPL licensed, which can be restrictive if you want to keep your own code proprietary. In that case, a commercial license from Oracle might be necessary. PostgreSQL’s license is similar to BSD/MIT, carrying fewer restrictions and no requirement to disclose your source code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Comparison&lt;/strong&gt;&lt;br&gt;
PostgreSQL’s object hierarchy is structured as Databases → Schemas → Tables, whereas MySQL uses Databases → Tables. PostgreSQL is fully ACID-compliant and can handle DML and DDL transactions; MySQL is also ACID-compliant through the InnoDB engine, and supports atomic DDL in version 8.0+. On the security front, PostgreSQL provides Row Level Security (RLS) out of the box, whereas MySQL requires workarounds such as views or stored procedures to mimic similar functionality.&lt;/p&gt;
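
&lt;p&gt;To make the RLS point concrete, a minimal PostgreSQL policy looks like this (the table and column names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Restrict each database role to the rows it owns
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_owner ON accounts
    USING (owner = current_user);
&lt;/code&gt;&lt;/pre&gt;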

&lt;p&gt;In terms of replication, PostgreSQL supports both physical (WAL-based) and logical (pub/sub) methods. MySQL uses the binary log to facilitate logical replication and is commonly configured for read scaling with master-replica setups. JSON handling is more comprehensive in PostgreSQL, thanks to its robust indexing and array of functions. While MySQL also includes JSON features in version 8.0+, its indexing for JSON data is somewhat limited. PostgreSQL’s window functions and CTEs are more mature, although MySQL has caught up by adding these features recently. If you value extensibility, PostgreSQL offers a wide array of extensions—PostGIS for geospatial use cases, pg_stat_statements for detailed query insights, and the ability to define custom data types—while MySQL’s customization options focus on stored procedures and plugins.&lt;/p&gt;
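
&lt;p&gt;As a small taste of the CTE and window-function features mentioned above, the query below ranks each customer's orders by amount (PostgreSQL syntax against a hypothetical table; MySQL 8.0+ supports the same constructs with minor syntax differences):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- A CTE feeding a window function
WITH recent_orders AS (
    SELECT customer_id, amount
    FROM orders
    WHERE created_at &gt; now() - interval '30 days'
)
SELECT customer_id,
       amount,
       rank() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
FROM recent_orders;
&lt;/code&gt;&lt;/pre&gt;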




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnfm9e6m3vt9gfvjok6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnfm9e6m3vt9gfvjok6i.png" alt="Postgres vs MySQL Disk Usage" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Postgres vs MySQL Performance
&lt;/h3&gt;

&lt;p&gt;In tests using Go clients with similar configurations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Insert (Write) Test&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup&lt;/strong&gt;: Multiple virtual clients continuously insert randomized records.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;PostgreSQL hovered around 19,000 inserts/second on a 4-CPU server with an SSD, versus MySQL’s 10,000.
&lt;/li&gt;
&lt;li&gt;PostgreSQL showed lower latency at the 99th percentile and used CPU, disk, and memory more efficiently.
&lt;/li&gt;
&lt;li&gt;MySQL’s performance dropped off around 5,500 queries/second, with higher CPU usage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Select (Read) Test&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup&lt;/strong&gt;: Queries involved a random event ID joined against a ~70-million-row customer table.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Results&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;PostgreSQL again displayed lower latency, scaling nicely up to ~32,000 queries/second.
&lt;/li&gt;
&lt;li&gt;MySQL started showing latency spikes closer to 18,000 queries/second, tied to rising CPU usage.
&lt;/li&gt;
&lt;li&gt;Both eventually reached CPU saturation, but PostgreSQL stretched further before hitting a wall.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Write Efficiency&lt;/strong&gt;: PostgreSQL handled heavy insert loads with less resource usage.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read Performance&lt;/strong&gt;: MySQL did well initially but dropped off sooner under high concurrency.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Utilization&lt;/strong&gt;: PostgreSQL generally used fewer system resources at equivalent loads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real-world performance will vary depending on hardware, indexing strategy, query patterns, and configuration. Always test in an environment that reflects your production setup before making a final choice.&lt;/p&gt;

&lt;p&gt;To simplify testing and working with both Postgres and MySQL, Outerbase offers a powerful interface for exploring, querying, and visualizing your databases. Whether you're comparing benchmarks or managing production workloads, &lt;a href="https://www.outerbase.com" rel="noopener noreferrer"&gt;Outerbase&lt;/a&gt; can help streamline your process.&lt;/p&gt;




&lt;h2&gt;
  
  
  So, Postgres vs MySQL: Which Is Better?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Consider PostgreSQL If&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need advanced features like window functions, CTEs, custom data types, or PostGIS for geospatial queries.
&lt;/li&gt;
&lt;li&gt;You expect complex or highly concurrent workloads.
&lt;/li&gt;
&lt;li&gt;You want a more permissive license with fewer restrictions.
&lt;/li&gt;
&lt;li&gt;You’re eager to tap into a rapidly expanding ecosystem and community.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Consider MySQL If&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your primary focus is read-heavy workloads with straightforward queries.
&lt;/li&gt;
&lt;li&gt;You want something quick and simple to deploy, backed by a massive knowledge base.
&lt;/li&gt;
&lt;li&gt;Your team already knows MySQL, or your hosting environment is optimized for it.
&lt;/li&gt;
&lt;li&gt;You prefer easy replication for horizontal scaling.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The best approach is often to &lt;strong&gt;test both&lt;/strong&gt;. Spin up a few instances, replicate your real-world workload, and see how each performs. You might discover one database naturally suits your data and query patterns better, especially once you factor in how comfortable your team is with each technology.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You might favor PostgreSQL if you need advanced features like window functions, CTEs, custom data types, or PostGIS for geospatial work. It also excels with heavier concurrency or complex workloads, and its permissive license won’t impose many restrictions on your own code. Meanwhile, MySQL remains a compelling choice if your application is read-heavy and you want something quick to deploy, especially if your team is already familiar with MySQL or your environment is optimized for it. Its simpler replication mechanisms are convenient for those who need to scale out reads.&lt;/p&gt;

&lt;p&gt;In the end, the best approach is to test both databases in an environment that mirrors your production setup. Examine how they perform with your actual data, queries, and concurrency levels. The “better” option often comes down to factors like feature requirements, workload profiles, operational familiarity, licensing, and long-term scalability goals. While PostgreSQL’s feature set is attracting a fast-growing user base, MySQL’s proven track record and massive community ensure it will remain a mainstay for years to come.&lt;/p&gt;

&lt;p&gt;If you need an easy way to test both Postgres and MySQL please check out our open-source repo &lt;a href="https://github.com/outerbase/studio" rel="noopener noreferrer"&gt;Outerbase Studio&lt;/a&gt; which gives you the ability to view, edit, query and even deploy them both.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Thanks for Reading!&lt;/strong&gt; If you have any further suggestions or want to see additional metrics, don’t hesitate to reach out.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Shout out to Anton P for the &lt;a href="https://www.youtube.com/watch?v=R7jBtnrUmYI" rel="noopener noreferrer"&gt;benchmarking&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>mysql</category>
      <category>database</category>
    </item>
    <item>
      <title>The evolution of Serverless Postgres</title>
      <dc:creator>brandon</dc:creator>
      <pubDate>Thu, 30 May 2024 16:21:07 +0000</pubDate>
      <link>https://dev.to/outerbase/the-evolution-of-serverless-postgres-4i5l</link>
      <guid>https://dev.to/outerbase/the-evolution-of-serverless-postgres-4i5l</guid>
      <description>&lt;p&gt;Among the many options available for running managed Postgres, Amazon Aurora Serverless initially stood out as unique when it was announced. It promised to introduce the concepts of scaling to zero, autoscaling, and usage-based pricing to Postgres. &lt;/p&gt;

&lt;p&gt;A lot has changed since then,&lt;a href="https://www.reddit.com/r/aws/comments/18sx0i6/aurora_serverless_v1_eol_december_31_2024/" rel="noopener noreferrer"&gt; including AWS's decision to deprecate scale-to-zero in Aurora. &lt;/a&gt;Today, developers have other options for running serverless Postgres, such as &lt;a href="https://neon.tech" rel="noopener noreferrer"&gt;Neon&lt;/a&gt;. In this comparison, we'll examine the key differences between Aurora and Neon, focusing on their serverless capabilities and pricing models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Navigating the Amazon Aurora universe
&lt;/h3&gt;

&lt;p&gt;Let’s start by clarifying terminology. When developers refer to “Amazon Aurora”, they might be referring to &lt;em&gt;three&lt;/em&gt; different products: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/User_DBInstanceBilling.html" rel="noopener noreferrer"&gt;Amazon Aurora provisioned&lt;/a&gt;&lt;/strong&gt; is the “traditional” version of Amazon Aurora, where you provision database instances with a fixed capacity. You have to specify the instance size upfront, and you are billed based on the allocated resources regardless of usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://aws.amazon.com/rds/aurora/serverless/" rel="noopener noreferrer"&gt;Amazon Aurora Serverless v1&lt;/a&gt;&lt;/strong&gt; came next as the first serverless version of Amazon Aurora. The two core functionalities it introduced were scale to zero and autoscaling: Aurora Serverless v1 instances automatically start up, shut down, and scale capacity up or down based on your application's needs. It's positioned as a more optimal choice for applications with intermittent, unpredictable, or variable workloads. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://aws.amazon.com/rds/aurora/serverless/" rel="noopener noreferrer"&gt;Amazon Aurora Serverless v2&lt;/a&gt;&lt;/strong&gt; aimed to address the limitations of v1. It claimed to offer more fine-grained scaling, improved performance, and the same high availability and durability as the provisioned instances. But these improvements came at a high price: *&lt;em&gt;losing scale to zero. *&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Aurora Serverless v1, if there were no database connections or activity for a period of time, the database could automatically pause and reduce capacity to zero, effectively eliminating costs during idle periods. This capability was essential to the claim of serverless as a way to reduce costs for applications with “infrequent usage patterns” running in Aurora. &lt;/p&gt;

&lt;p&gt;In contrast, Aurora Serverless v2 maintains a minimum capacity of 0.5 ACUs (Aurora Capacity Units) even when there is no database activity. As we’ll explore later in the post, this means that there are always some costs incurred, regardless of usage. This approach was taken to ensure instant provisioning and &lt;a href="https://neon.tech/blog/aurora-serverless-v1-to-neon#:~:text=Faster%20cold%20starts%20%E2%80%93%20500ms%20P95%20start%20time%20on%20Neon%2C%20vs%2020%2D60s%20on%20V1" rel="noopener noreferrer"&gt;to eliminate the latency associated with cold starts&lt;/a&gt;, but it came with a trade-off in costs for users. &lt;/p&gt;

&lt;h3&gt;
  
  
  Now, meet Neon
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://neon.tech/blog/architecture-decisions-in-neon" rel="noopener noreferrer"&gt;Neon's architecture&lt;/a&gt; is inspired by Amazon Aurora and its separation of compute and storage. But Neon takes this design one step further by adopting &lt;a href="https://neon.tech/blog/what-you-get-when-you-think-of-postgres-storage-as-a-transaction-journal" rel="noopener noreferrer"&gt;a custom-built storage engine that keeps a history&lt;/a&gt; of Postgres transactions. This enables Neon not only to offer a truly serverless experience with scale to zero, but also to focus on &lt;a href="https://neon.tech/flow" rel="noopener noreferrer"&gt;improving development workflows by offering features like database branching&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Neon &lt;a href="https://neon.tech/docs/introduction/auto-suspend" rel="noopener noreferrer"&gt;automatically scales compute instances to zero&lt;/a&gt; when inactive for a specified period (5 minutes by default). Similar to Aurora Serverless, it includes &lt;a href="https://neon.tech/docs/introduction/autoscaling" rel="noopener noreferrer"&gt;autoscaling&lt;/a&gt; to dynamically adjust compute resources based on the current load within user-defined limits. Unlike Aurora, Neon comes with a free tier. &lt;/p&gt;

&lt;h3&gt;
  
  
  Features: Neon vs Aurora Serverless v2
&lt;/h3&gt;

&lt;p&gt;Let’s dig deeper into how Neon compares to Amazon Aurora Serverless in terms of features.  This table gives you the high-level view: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Feature&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Amazon Aurora Serverless v2&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Neon&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Scale-to-zero
   &lt;/td&gt;
   &lt;td&gt;No, it maintains a minimum capacity of 0.5 ACU at all times*
   &lt;/td&gt;
   &lt;td&gt;Yes, instances can be configured to automatically suspend after a period of inactivity  
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Instant provisioning 
   &lt;/td&gt;
   &lt;td&gt;No, new instances take up to 20 minutes 
   &lt;/td&gt;
   &lt;td&gt;Yes, rapid provisioning of new instances (&amp;lt;500ms)
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Compute autoscaling
   &lt;/td&gt;
   &lt;td&gt;Yes, by 0.5 ACU increments 
   &lt;/td&gt;
   &lt;td&gt;Yes, based on real-time load 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;On-demand storage 
   &lt;/td&gt;
   &lt;td&gt;Yes, by 10 GB increments 
   &lt;/td&gt;
   &lt;td&gt;Yes, by 2-10 GB increments depending on plan
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Database branching
   &lt;/td&gt;
   &lt;td&gt;No
   &lt;/td&gt;
   &lt;td&gt;Yes, with data and schema via copy-on-write 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Multi-AZ replicas
   &lt;/td&gt;
   &lt;td&gt;Yes
   &lt;/td&gt;
   &lt;td&gt;No, under development 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Read replicas 
   &lt;/td&gt;
   &lt;td&gt;Yes, using separate instances  
   &lt;/td&gt;
   &lt;td&gt;Yes, without storage redundancy (compute-only)
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Point-in-time recovery
   &lt;/td&gt;
   &lt;td&gt;Yes, via backups and transaction logs (takes from minutes to hours)
   &lt;/td&gt;
   &lt;td&gt;Yes, via database branching (instant)
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;API support 
   &lt;/td&gt;
   &lt;td&gt;Yes, via RDS API 
   &lt;/td&gt;
   &lt;td&gt;Yes, via Neon API 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;CLI support 
   &lt;/td&gt;
   &lt;td&gt;Yes, via AWS CLI
   &lt;/td&gt;
   &lt;td&gt;Yes, via Neon CLI 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Postgres extensions 
   &lt;/td&gt;
   &lt;td&gt;Limited 
   &lt;/td&gt;
   &lt;td&gt;Extensive (200+)
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Custom extensions
   &lt;/td&gt;
   &lt;td&gt;Not supported  
   &lt;/td&gt;
   &lt;td&gt;Supports custom extensions 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Connection pooling 
   &lt;/td&gt;
   &lt;td&gt;Yes, using RDS Proxy (for a fee) 
   &lt;/td&gt;
   &lt;td&gt;Yes, integrated within Neon’s architecture 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;IP Allowlist 
   &lt;/td&gt;
   &lt;td&gt;Yes, via security groups
   &lt;/td&gt;
   &lt;td&gt;Yes, via customizable access control 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Organization accounts
   &lt;/td&gt;
   &lt;td&gt;Yes, via AWS IAM and AWS Organizations 
   &lt;/td&gt;
   &lt;td&gt;Yes, natively supported 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Integrations 
   &lt;/td&gt;
   &lt;td&gt;Limited outside AWS ecosystem 
   &lt;/td&gt;
   &lt;td&gt;Yes, for CI/CD workflows 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Support 
   &lt;/td&gt;
   &lt;td&gt;Yes, at extra cost 
   &lt;/td&gt;
   &lt;td&gt;Yes, included with plan 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Free tier
   &lt;/td&gt;
   &lt;td&gt;No
   &lt;/td&gt;
   &lt;td&gt;Yes 
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;*If you’re wondering what the heck an ACU is, see the next section.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing model: Aurora Serverless
&lt;/h3&gt;

&lt;p&gt;When it’s time to evaluate pricing for Aurora Serverless, you’ll very quickly be confronted with what seems like an easy question to answer: &lt;strong&gt;what is an ACU?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ohdwe7jjwkb6js03njl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ohdwe7jjwkb6js03njl.png" alt="Unsolved Mysteries: What are ACUs?" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s backtrack. In Aurora Serverless, ACUs (Aurora Capacity Units) are the units of measure used to define the capacity of database instances. When running an instance as a user, you’ll define a minimum and maximum ACU limit. Aurora will scale up and down automatically between these minimum and maximum limits, in 0.5 ACU increments.&lt;/p&gt;

&lt;p&gt;The minimum number of ACUs varies between Aurora Serverless v1 and v2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Aurora Serverless v1, you can define 0 as the minimum limit (v1 scales to zero).&lt;/li&gt;
&lt;li&gt;In Aurora Serverless v2, the minimum possible limit is 0.5 ACU. We’ll break down what this implies cost-wise in the next section.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, to size an Aurora Serverless instance, the first thing you’d want to know is how much CPU and memory each ACU contains. Since you’re billed by the ACUs you consume in a month, this is highly relevant: if you suspect that 1 vCPU would be enough to handle your peak load, you’d want to set your maximum ACU limit at the corresponding capacity.&lt;/p&gt;

&lt;p&gt;Unfortunately, AWS doesn’t clearly disclose this information, which makes Aurora pricing quite uncertain. &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html" rel="noopener noreferrer"&gt;According to the Aurora docs&lt;/a&gt;, an ACU is “a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking”.&lt;/p&gt;

&lt;p&gt;We’re as confused as you are about what this means. &lt;a href="https://www.reddit.com/r/aws/comments/uswz6h/aurora_serverless_v2_in_production/" rel="noopener noreferrer"&gt;Some folks online have experimented with this&lt;/a&gt; and concluded that probably 1 ACU = 0.25 vCPU, 2 GiB memory. But we can’t know for sure.&lt;/p&gt;

&lt;p&gt;ACU mysteries aside, your monthly Aurora Serverless bill will be calculated as the sum of a few elements, included below. If you avoid the I/O charges by using I/O optimized storage (highly recommended), compute and database storage will most likely be the main elements contributing to your monthly costs.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Billing component in Aurora Serverless&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Description  &lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Compute 
   &lt;/td&gt;
   &lt;td&gt;Billed per ACU-hour based on the capacity used, with a minimum of 0.5 ACU. 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Database storage 
   &lt;/td&gt;
   &lt;td&gt;Billed per GB-month in 10 GB increments with a &lt;a href="https://aws.amazon.com/rds/aurora/faqs/#:~:text=The%20minimum%20storage%20is%2010,no%20impact%20to%20database%20performance." rel="noopener noreferrer"&gt;minimum of 10 GB&lt;/a&gt;.  
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;I/O requests 
   &lt;/td&gt;
   &lt;td&gt;Only applicable to standard storage (&lt;a href="https://aws.amazon.com/rds/aurora/pricing/" rel="noopener noreferrer"&gt;included for I/O optimized&lt;/a&gt;).  Billed per million requests. 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Backup storage 
   &lt;/td&gt;
   &lt;td&gt;Automated backups up to the size of your database are free. Additional backup storage is billed per GB-month.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Data transfer costs 
   &lt;/td&gt;
   &lt;td&gt;Data transfer within the same AWS region is free. Cross-region and outbound data transfer is billed per GB.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Multi-AZ deployments 
   &lt;/td&gt;
   &lt;td&gt;Additional costs for the resources used in the additional AZ. 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Read replicas 
   &lt;/td&gt;
   &lt;td&gt;Billed for ACU usage, storage, and I/O operations for each read replica. Cross-region replication incurs additional data transfer charges.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Backtrack 
   &lt;/td&gt;
   &lt;td&gt;When you “rewind” an Aurora database without restoring from backup. Billed per GB-month for the change records stored.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;API 
   &lt;/td&gt;
   &lt;td&gt;Charges for using certain APIs provided by Aurora. Billed per million API requests.
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Snapshot or cluster export 
   &lt;/td&gt;
   &lt;td&gt;Charges for exporting snapshots or clusters to S3. Billed per GB of snapshot or cluster exported.
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Since Aurora Serverless v2 doesn’t have a free tier and has minimum requirements for both compute and storage, we can estimate the minimum costs for the smallest database possible running 24/7: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://calculator.aws/#/createCalculator/AuroraPostgreSQL" rel="noopener noreferrer"&gt;Minimum monthly cost: $65.65 USD.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the absolute monthly minimum for a database running in us-east-1, assuming we’re using I/O optimized storage to avoid extra I/O charges.&lt;/p&gt;

&lt;p&gt;This calculation assumes that you’re using 0.5 ACU (the minimum) at all times. In practice, though, you’ll be forced to pick an ACU range, and the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html" rel="noopener noreferrer"&gt;lowest maximum ACU possible is 1 ACU&lt;/a&gt;. So a better expectation is that, in the previous example, costs would oscillate roughly between $60 and $120 USD.&lt;/p&gt;
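&lt;p&gt;As a rough sanity check on that calculator figure, here’s a minimal sketch of the compute-plus-storage floor. The $0.16 per ACU-hour and $0.225 per GB-month rates are our assumptions for us-east-1 with I/O-Optimized storage; the calculator’s exact result also depends on other line items, so treat this as an approximation:&lt;/p&gt;

```python
# Rough monthly floor for an always-on Aurora Serverless v2 instance.
# The rates below are illustrative us-east-1 I/O-Optimized figures,
# not authoritative -- check the AWS pricing page for current numbers.
ACU_HOUR_RATE = 0.16    # USD per ACU-hour (assumed)
STORAGE_RATE = 0.225    # USD per GB-month (assumed)
HOURS_PER_MONTH = 730

def aurora_monthly_floor(min_acu=0.5, storage_gb=10):
    """Compute plus storage only; backups, data transfer, etc. excluded."""
    compute = min_acu * HOURS_PER_MONTH * ACU_HOUR_RATE
    storage = storage_gb * STORAGE_RATE
    return round(compute + storage, 2)

print(aurora_monthly_floor())  # 60.65 with these assumed rates
```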

&lt;h3&gt;
  
  
  Pricing model: Neon
&lt;/h3&gt;

&lt;p&gt;Compute in Neon &lt;a href="https://neon.tech/docs/introduction/autoscaling" rel="noopener noreferrer"&gt;with autoscaling enabled&lt;/a&gt; works similarly to Aurora Serverless, but without the opacity. In Neon, 1 CU = 1 vCPU and 4 GiB of memory. You’ll be able to set up minimum and maximum autoscaling limits (with the minimum being able to scale to zero if you wish), and your compute consumption will be billed in CU-hours at the end of the month.&lt;/p&gt;
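&lt;p&gt;To make the scale-to-zero effect concrete, here’s a minimal sketch of how CU-hours accumulate when a compute endpoint suspends while idle (a hypothetical helper for illustration, not Neon’s own billing code):&lt;/p&gt;

```python
HOURS_PER_MONTH = 730

def neon_cu_hours(cu, active_fraction):
    """CU-hours consumed when the endpoint suspends while idle."""
    return cu * HOURS_PER_MONTH * active_fraction

# A 1 CU endpoint that is active 20% of the month:
print(neon_cu_hours(1, 0.2))   # 146.0 CU-hours
# Without scale-to-zero, the same endpoint bills the full month:
print(neon_cu_hours(1, 1.0))   # 730.0 CU-hours
```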

&lt;p&gt;In terms of billing components, Neon offers three &lt;a href="https://neon.tech/pricing" rel="noopener noreferrer"&gt;different pricing plans&lt;/a&gt;. Your monthly bill will account for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The monthly fee corresponding to each plan ($0, $19, or $69)&lt;/li&gt;
&lt;li&gt;Any additional compute or storage usage over what is included within each plan&lt;/li&gt;
&lt;li&gt;Charges for additional projects (a project is the logical equivalent of an instance in Neon)&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Billing component in Neon&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Free&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Launch&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Scale&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Monthly fee
   &lt;/td&gt;
   &lt;td&gt;0 \

   &lt;/td&gt;
   &lt;td&gt;19 USD
   &lt;/td&gt;
   &lt;td&gt;69 USD 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Additional compute usage
   &lt;/td&gt;
   &lt;td&gt;N/A - Includes capacity for 24/7 usage with 0.5 CU 
   &lt;/td&gt;
   &lt;td&gt;300 CU-hours included with monthly fee. Additional charges after that. 
   &lt;/td&gt;
   &lt;td&gt;750 CU-hours included with monthly fee. Additional charges after that. 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Additional storage
   &lt;/td&gt;
   &lt;td&gt;N/A - Includes 0.5 GB 
   &lt;/td&gt;
   &lt;td&gt;10 GB included with monthly fee. Additional charges after that. 
   &lt;/td&gt;
   &lt;td&gt;50 GB included with monthly fee. Additional charges after that. 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Additional projects 
   &lt;/td&gt;
   &lt;td&gt;N/A - Includes 1 project 
   &lt;/td&gt;
   &lt;td&gt;10 projects included with monthly fee. Additional charges after that. 
   &lt;/td&gt;
   &lt;td&gt;50 projects included with monthly fee. Additional charges after that. 
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Comparing compute costs: Neon vs Aurora Serverless v2
&lt;/h3&gt;

&lt;p&gt;Estimating compute costs is often the hardest part of evaluating serverless databases. To bring some clarity to this, let’s work through some example workloads that teams might see for serverless applications.&lt;/p&gt;

&lt;p&gt;For this exercise, we’ll use the following equivalence: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 ACU in Aurora = 0.5 CU in Neon = 0.5 vCPU, 2 GiB memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Taking into consideration that AWS discloses that 1 ACU equals 2 GiB of memory, and that 1 CU in Neon equals 4 GiB of memory, this equivalence seems like a fair assumption, but note that it is approximate. We have reasons to believe that ACUs are even smaller than that CPU-wise, so your &lt;strong&gt;real workload may require higher ACU limits&lt;/strong&gt; (and therefore higher costs) than estimated here.&lt;/p&gt;
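&lt;p&gt;Since the working equivalence is just a factor of two, it can be written down explicitly. This is a hypothetical helper encoding our assumption, not anything published by either vendor:&lt;/p&gt;

```python
# Assumed working equivalence from above: 1 Neon CU = 2 Aurora ACUs
# (1 CU = 1 vCPU / 4 GiB; 1 ACU = 0.5 vCPU / 2 GiB). Approximate only.
def cu_to_acu(cu):
    return cu * 2

def acu_to_cu(acu):
    return acu / 2

print(cu_to_acu(2))    # 4 ACUs for a 2 CU Neon endpoint
print(acu_to_cu(0.5))  # 0.25 CU for Aurora's minimum capacity
```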

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Example workload &lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Compute costs in Neon&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Compute costs in Aurora Serverless v2&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Low compute (testing) 
   &lt;/td&gt;
   &lt;td&gt;41 USD 
   &lt;/td&gt;
   &lt;td&gt;701 USD 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;Medium compute (analytics) 
   &lt;/td&gt;
   &lt;td&gt;69 USD 
   &lt;/td&gt;
   &lt;td&gt;467 USD 
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;High compute (application) 
   &lt;/td&gt;
&lt;td&gt;1,981 USD 
   &lt;/td&gt;
   &lt;td&gt;4,064 USD 
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Low compute, testing workloads
&lt;/h4&gt;

&lt;p&gt;Imagine a small team working on a new feature. They need multiple dev, staging, and testing environments, but each environment has minimal traffic and data storage needs. These databases are often idle for extended periods and only need to be active during specific testing windows. &lt;/p&gt;

&lt;p&gt;Using &lt;a href="https://neon.tech/docs/introduction/branching" rel="noopener noreferrer"&gt;database branches&lt;/a&gt;, we could do this on the Neon free tier. But if this workload requires multiple projects, we can use the Launch tier instead. Let’s say we’re using 1 vCPU (1 CU) for each of our three projects (dev, staging, and testing), but overall they are idle 80% of the time. So this becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 CU * 3 Projects * 730 * 0.2 = 438 CU-hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is over the 300 compute hours included in the Launch tier, so we’ll also have to pay $0.16 per extra compute hour: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$19 + (138 compute hours * $0.16) = $41.08&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, just over $40 monthly for this testing workload with Neon. For Aurora Serverless v2, it is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 ACU * 3 instances * 730 = 4,380 ACU-hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This consumes many more compute hours because Aurora has no scale to zero&lt;/strong&gt;. Now, using the standard configuration pricing of $0.16 per ACU-hour for I/O optimized instances:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4,380 ACU-hours * $0.16 = $700.8&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Aurora compute cost would be 17x more than the Neon cost for this scenario, mainly due to Neon’s ability to scale to zero. &lt;/p&gt;
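&lt;p&gt;The whole low-compute comparison can be reproduced end to end. Rates and plan limits ($19 Launch fee, 300 included CU-hours, $0.16 per extra CU-hour or per ACU-hour) are the figures used in this section:&lt;/p&gt;

```python
RATE = 0.16    # USD per CU-hour (Neon overage) and per ACU-hour (Aurora, I/O optimized)
HOURS = 730

# Neon, Launch tier: 3 projects at 1 CU each, idle 80% of the time
neon_cu_hours = 1 * 3 * HOURS * 0.2                  # 438 CU-hours
neon_cost = 19 + max(0, neon_cu_hours - 300) * RATE  # $19 fee, 300 CU-hours included

# Aurora Serverless v2: 2 ACUs (our 1 CU equivalent), 3 instances, no scale-to-zero
aurora_acu_hours = 2 * 3 * HOURS                     # 4,380 ACU-hours
aurora_cost = aurora_acu_hours * RATE

print(round(neon_cost, 2), round(aurora_cost, 2))    # 41.08 700.8
print(round(aurora_cost / neon_cost))                # 17
```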

&lt;h4&gt;
  
  
  Medium compute, analytics workloads
&lt;/h4&gt;

&lt;p&gt;Here, the team might need to batch-run analytics queries and generate reports to gain insights into user behavior and application performance.&lt;/p&gt;

&lt;p&gt;Let’s do Neon first, assuming we’re still on the Launch tier, using 2 vCPUs (CUs) and 1 project. Again, these analytics runs aren’t constant: we’ll assume they run 50% of the time and sit idle the rest. With Neon, this looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 CUs * 1 Project * 730 * 0.5 = 730 CU-hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we stayed on the Launch tier, we’d have to pay for 430 extra CU-hours, so the monthly cost would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$19 + (430 CU-hours * $0.16) = $87.80&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;We could lower this cost by upgrading to the Scale tier, which includes 750 CU-hours within the $69 monthly fee.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With Aurora Serverless v2, we get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 ACUs * 1 Instance * 730 = 2,920 ACU-hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, assuming I/O-optimized storage &lt;a href="https://aws.amazon.com/blogs/database/planning-i-o-in-amazon-aurora/" rel="noopener noreferrer"&gt;so we’re not stung by I/O costs&lt;/a&gt;, the monthly price would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2,920 ACU-hours* $0.16 = $467.2&lt;/li&gt;
&lt;/ul&gt;
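&lt;p&gt;The same arithmetic for this analytics workload, assuming Neon’s Scale tier ($69/month with 750 CU-hours included) and Aurora’s I/O-optimized rate of $0.16 per ACU-hour:&lt;/p&gt;

```python
RATE = 0.16   # USD per CU-hour / ACU-hour (assumed, as above)
HOURS = 730

# Neon: 2 CUs, 1 project, running 50% of the time
cu_hours = 2 * 1 * HOURS * 0.5                        # 730 CU-hours
neon_scale_cost = 69 + max(0, cu_hours - 750) * RATE  # fits within the included hours

# Aurora: 4 ACUs (our 2 CU equivalent), one instance, always on
aurora_cost = 4 * 1 * HOURS * RATE                    # 2,920 ACU-hours

print(neon_scale_cost, round(aurora_cost, 2))         # 69.0 467.2
```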

&lt;h4&gt;
  
  
  High compute, application workloads
&lt;/h4&gt;

&lt;p&gt;A high compute, application workload would be a production environment with significant traffic and low-latency requirements.&lt;/p&gt;

&lt;p&gt;Here, we’ll use a variable workload, with 8 vCPUs used during working hours (180 hours / month) and 2 vCPUs during off-peak hours (550 hours / month). We’ll assume 5 instances/projects. &lt;strong&gt;In this scenario, there is no idle time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the Scale tier on Neon, this works out as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;((8 CUs * 180) + (2 CUs * 550)) * 5 Projects = 12,700 CU-hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have 750 CU-hours included in the Scale tier, so the cost for this would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$69 + (11,950 CU-hours * $0.16) = $1,981&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Aurora Serverless: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;((16 ACUs * 180) + (4 ACUs * 550)) * 5 Instances = 25,400 ACU-hours &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the monthly price: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;25,400 ACU-hours * $0.16 = $4,064&lt;/li&gt;
&lt;/ul&gt;
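&lt;p&gt;For reference, the Aurora side of this estimate spelled out in full (16 ACUs at peak, 4 ACUs off-peak, 5 instances, $0.16 per ACU-hour):&lt;/p&gt;

```python
RATE = 0.16                           # USD per ACU-hour (I/O optimized, assumed)
PEAK_HOURS, OFFPEAK_HOURS = 180, 550  # working hours vs off-peak hours per month
INSTANCES = 5

acu_hours = ((16 * PEAK_HOURS) + (4 * OFFPEAK_HOURS)) * INSTANCES
cost = acu_hours * RATE

print(acu_hours, round(cost, 2))      # 25400 4064.0
```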

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Both Amazon Aurora and Neon offer serverless options for running managed Postgres instances. While Aurora provides robust scalability and a rich feature set, Neon stands out with some clear advantages: the capacity to scale to zero and a simpler, more transparent pricing structure with a free tier. This makes it a more attractive choice for startups and mid-sized businesses. &lt;/p&gt;

</description>
      <category>postgres</category>
      <category>serverless</category>
      <category>neon</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
