<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kirk Kirkconnell</title>
    <description>The latest articles on DEV Community by Kirk Kirkconnell (@nosqlknowhow).</description>
    <link>https://dev.to/nosqlknowhow</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F237920%2F2644d873-fc0f-4258-88d6-86a32119db99.jpeg</url>
      <title>DEV Community: Kirk Kirkconnell</title>
      <link>https://dev.to/nosqlknowhow</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nosqlknowhow"/>
    <language>en</language>
    <item>
      <title>Train it or feed it? Teaching LLMs your data the smart way</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Fri, 03 Oct 2025 04:00:00 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/train-it-or-feed-it-teaching-llms-your-data-the-smart-way-n2a</link>
      <guid>https://dev.to/nosqlknowhow/train-it-or-feed-it-teaching-llms-your-data-the-smart-way-n2a</guid>
      <description>&lt;p&gt;I received an interesting question during a webinar yesterday, and I wanted to do some research to explore the topic further, without getting too detailed. The question was, and I am paraphrasing, "Which is best, training or fine-tuning an LLM with specific data to create a custom LLM, or using Retrieval-Augmented Generation (RAG) with your application?" If you’re not familiar with RAG, it refers to the process of sending data from an external source, such as a database (e.g., MongoDB), external API, etc., to an LLM to augment its existing knowledge. The data is then used by the LLM to generate its response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training or Fine-Tuning with Specific Data to Create a Custom LLM
&lt;/h2&gt;

&lt;p&gt;Using this method, you either train a model from scratch with your data (which is rare due to the cost and level of effort) or fine-tune an existing LLM with your data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fine-tuning an LLM
&lt;/h3&gt;

&lt;p&gt;The specifics of fine-tuning are beyond the scope of this post, but essentially, you are supplying an existing LLM with knowledge it wouldn’t have otherwise and nudging it, via various settings, so that its predictions better align with the patterns in your data. With that, you get an LLM customized for your needs. As with everything, there are some trade-offs, so let’s get into those.&lt;/p&gt;

&lt;h3&gt;
  
  
  Training an LLM
&lt;/h3&gt;

&lt;p&gt;If you go this route, it’s all on you: picking the LLM’s purpose, choosing exactly what data it learns from, preparing that data, setting every parameter, and owning all of the processing, compute, and performance characteristics. There is a ton more to it, but it reminds me of the days of doing custom builds of Apache Web Server to include exactly the capabilities we needed and nothing more. There’s a reason this approach is less common, but if you require complete control and precision for your use case, it’s the method to consider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Preparation
&lt;/h3&gt;

&lt;p&gt;Regardless of which method you use, you are responsible for data preparation, and this can be an intensive task. This includes, but is not limited to, formatting, cleaning, labeling (if supervised), chunking, and tokenization of the data.&lt;/p&gt;
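
&lt;p&gt;To make "chunking" concrete, here is one naive way to do it. The character-based sizes and overlap are arbitrary choices for illustration; real pipelines usually chunk by tokens or semantic boundaries.&lt;/p&gt;

```javascript
// Split text into fixed-size chunks that overlap, so context spanning a
// boundary isn't lost. Sizes are in characters for simplicity; production
// pipelines typically measure in tokens.
function chunkText(text, chunkSize = 200, overlap = 50) {
  const chunks = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}
```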

&lt;h3&gt;
  
  
  Data Timeliness
&lt;/h3&gt;

&lt;p&gt;A custom LLM is only as up-to-date as the last time you trained or fine-tuned it. If that was last week and you have new data, the LLM knows nothing about the new data. Iterating is also slow: every time you make updates, you have to retrain, test, tune, and ensure there are no regressions, among other steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to choose a custom model
&lt;/h3&gt;

&lt;p&gt;Either method, fine-tuning or training from scratch, is appropriate when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your data is stable, closed-domain, and you require tight control over output behavior (e.g., legal, medical, scientific text). An example is the &lt;a href="https://blog.voyageai.com/2024/04/15/domain-specific-embeddings-and-retrieval-legal-edition-voyage-law-2/" rel="noopener noreferrer"&gt;voyage-law-2 model by Voyage AI&lt;/a&gt;, which is optimized for legal topics.&lt;/li&gt;
&lt;li&gt;You need low-latency responses or offline capability.&lt;/li&gt;
&lt;li&gt;You're building a productized LLM with predictable behavior and no reliance on real-time external sources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No need for external retrieval at runtime.&lt;/li&gt;
&lt;li&gt;Tighter integration of domain-specific language or tone.&lt;/li&gt;
&lt;li&gt;Potential for lower latency and cost once deployed.&lt;/li&gt;
&lt;li&gt;Deployable in a software product with no external data sources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Expensive, time-consuming, hard to update.&lt;/li&gt;
&lt;li&gt;Prone to “hallucination” if the domain shifts.&lt;/li&gt;
&lt;li&gt;Risk of catastrophic forgetting during fine-tuning if not managed carefully.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Retrieval-Augmented Generation (RAG)
&lt;/h2&gt;

&lt;p&gt;In this case, you use an "off-the-shelf" LLM, but your app augments the LLM’s response with data the model wasn’t trained on. The app retrieves data from an API, a database, or any other source, and that data becomes part of the LLM prompt. For example, say you have an LLM that doesn’t have access to non-public knowledge base articles. A user asks a question, and the app searches for semantically matching articles using MongoDB Atlas Vector Search. The app injects the retrieved articles into the prompt, enabling the LLM to generate a better response to the user.&lt;/p&gt;
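
&lt;p&gt;"Semantically matching" comes down to comparing embedding vectors. A toy ranking by cosine similarity looks like this; the two-dimensional vectors are made up for illustration, whereas a real system gets high-dimensional vectors from an embedding model.&lt;/p&gt;

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored article embeddings against a query embedding, best match first.
function rankBySimilarity(queryVec, articles) {
  return [...articles].sort(
    (x, y) =>
      cosineSimilarity(queryVec, y.embedding) -
      cosineSimilarity(queryVec, x.embedding)
  );
}
```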

&lt;p&gt;Additionally, this method is significantly faster to set up, as it eliminates the need for training time associated with custom models. It can be built and tuned in hours to days, rather than days to weeks.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to choose it
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You need dynamic access to evolving or non-public data.&lt;/li&gt;
&lt;li&gt;You don’t have the resources, or the need, to fine-tune.&lt;/li&gt;
&lt;li&gt;You want to prototype quickly and iterate based on user feedback.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cheaper and faster to build and maintain.&lt;/li&gt;
&lt;li&gt;Updatable in real time, and no retraining is needed when documents change.&lt;/li&gt;
&lt;li&gt;Works well with off-the-shelf models, such as OpenAI GPT-5, Anthropic Claude, or open-source LLMs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Context length limits can be a bottleneck.&lt;/li&gt;
&lt;li&gt;Needs a robust retrieval system, or you'll get garbage in, garbage out.&lt;/li&gt;
&lt;li&gt;Output quality depends heavily on chunking strategy, embedding quality, and retrieval relevance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Which one should you choose? TL;DR
&lt;/h2&gt;

&lt;p&gt;Use RAG if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to quickly prototype and integrate with existing LLMs.&lt;/li&gt;
&lt;li&gt;Your data changes often, is usually private, or includes long documents.&lt;/li&gt;
&lt;li&gt;You value flexibility, lower cost, and ease of iteration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use a custom LLM if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your application demands fast, self-contained models with no reliance on external systems.&lt;/li&gt;
&lt;li&gt;You have highly specialized content or tone that general LLMs can't reproduce.&lt;/li&gt;
&lt;li&gt;You’re delivering a product where latency, control, and data privacy are paramount.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Simplify serverless scaling and data management with Fauna and Cloudflare Workers</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Wed, 13 Nov 2024 19:07:47 +0000</pubDate>
      <link>https://dev.to/fauna/simplify-scaling-and-data-management-with-fauna-and-cloudflare-workers-lkf</link>
      <guid>https://dev.to/fauna/simplify-scaling-and-data-management-with-fauna-and-cloudflare-workers-lkf</guid>
      <description>&lt;p&gt;In a world where we demand speed, flexibility, and seamless scalability, serverless architecture has emerged as a new gold standard. For many developers, achieving a truly serverless experience—where every element of scaling, replication, and data consistency is managed without additional infrastructure headaches—can still seem elusive. Fauna, in partnership with Cloudflare, has made True Serverless possible. In this post, I cover how &lt;a href="https://docs.fauna.com/fauna/current/build/integration/cloudflare/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;Fauna and Cloudflare Workers&lt;/a&gt; empower you to build, deploy, and scale globally without compromising on true serverless simplicity, easy global scaling, and the ability to evolve applications without downtime.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50gk3j8q75kdly25r3q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50gk3j8q75kdly25r3q4.png" alt="an image showing the global distribution and connectivity of Cloudflare Workers and Fauna together." width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But don't just take my word for it. As &lt;a href="https://fauna.com/blog/how-connexin-delivers-world-class-broadband-and-smart-city-services?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;Cameron Bell, Head of Software, Systems, &amp;amp; Data at Connexin, shared&lt;/a&gt;: "Fauna is a true serverless database. This is very important for applications like ours where much of the infrastructure – like Cloudflare Workers – is connectionless and requires solutions that can meet the demands of real-time, operational data. The serverless attributes of Cloudflare and Fauna mean we can genuinely do more, faster."&lt;/p&gt;

&lt;h2&gt;
  
  
  True serverless: No provisioning, replication, or clustering
&lt;/h2&gt;

&lt;p&gt;One of the core tenets of a serverless architecture is zero operational overhead. Fauna and Cloudflare’s integration lets you forget about provisioning servers, configuring clusters, or handling complex replication. Fauna’s &lt;a href="https://fauna.com/product/document-relational?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;document-relational model&lt;/a&gt; natively supports &lt;a href="https://fauna.com/product/distributed-transaction-engine?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;data consistency across multiple regions&lt;/a&gt;, and Cloudflare’s global edge network seamlessly distributes workloads. This combination allows your application to scale automatically based on usage, giving you all the benefits of global performance without needing to manage infrastructure. In other words, the architecture scales to your application’s needs—without requiring you to manage the backend setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ultra-fast multi-region and edge workloads with strong consistency
&lt;/h2&gt;

&lt;p&gt;When it comes to applications that serve a global audience, speed and data consistency are essential. Traditional databases struggle to provide both, often sacrificing one for the other. Fauna’s distributed transaction engine provides data consistency across all regions in a region group without the lag of traditional replication methods. In addition, your data is always strongly consistent across those regions. Paired with &lt;a href="https://workers.cloudflare.com/" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt; and their edge network, your application can support ultra-low-latency requests around the globe, and your app always connects to the closest Fauna replica. This setup not only ensures fast responses but also guarantees consistency, letting users access the most up-to-date data with low latency, no matter where they’re located.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rapid time-to-market with simplified deployments
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://docs.fauna.com/fauna/current/build/integration/cloudflare/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;Fauna’s native integration with Cloudflare Workers&lt;/a&gt;, app development is accelerated by eliminating development and infrastructure management tasks. Fauna’s database enables instant global access through its API, which communicates over HTTPS, so your app can connect securely using Fauna’s client libraries or make direct HTTPS requests. This native integration means Cloudflare Workers can interact directly with Fauna without needing traditional database connection pools, &lt;a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping" rel="noopener noreferrer"&gt;ORMs&lt;/a&gt;, middleware, or complex backend setups. This makes data handling within your Cloudflare Workers faster, more efficient, and easier to manage over time.&lt;/p&gt;
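
&lt;p&gt;To illustrate, here is what a direct HTTPS call from a Worker can look like, with no driver or connection pool involved. Treat this as a sketch: the endpoint and payload shape follow Fauna's v10 HTTP query API as I understand it, so verify both against the current docs before relying on them.&lt;/p&gt;

```javascript
// Build a plain-HTTPS request to Fauna's query endpoint. No connection pool
// or heavyweight driver is needed, which is why this fits Cloudflare Workers.
// Endpoint and payload shape follow Fauna's v10 HTTP API; double-check the
// docs, since details like region-group hostnames vary.
function buildFaunaQueryRequest(secret, fql) {
  return {
    url: "https://db.fauna.com/query/1",
    method: "POST",
    headers: {
      Authorization: `Bearer ${secret}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query: fql }),
  };
}

// Inside a Worker's fetch handler this becomes roughly:
//   const r = buildFaunaQueryRequest(env.FAUNA_SECRET, "Product.all()");
//   const resp = await fetch(r.url, { method: r.method, headers: r.headers, body: r.body });
```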

&lt;p&gt;By reducing development steps from start to deployment, Fauna and Cloudflare enable you to skip backend infrastructure management, allowing you to focus entirely on building features. The streamlined connection keeps your code lean and responsive, helping you to bring products to market faster, while delivering a scalable, high-performance experience to your users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep applications lightweight, performant, and easy to maintain
&lt;/h2&gt;

&lt;p&gt;Complex app logic often makes apps slower to execute, tricky to test, and difficult to manage over time. &lt;a href="https://docs.fauna.com/fauna/current/learn/query/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;Fauna’s powerful query language&lt;/a&gt;, FQL, enables you to build complex data relationships without writing extensive application code. With &lt;a href="https://docs.fauna.com/fauna/current/learn/schema/user-defined-functions/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;Fauna’s UDFs (user-defined functions)&lt;/a&gt;, you can embed FQL code to run in the database. This streamlined data management means that, within Cloudflare Workers, your logic remains lean and focused. Fauna handles the heavy lifting of relational data and consistency, enabling your application to remain lightweight and performant—even as it scales to meet global demand. In addition, testing new application code is easier and more repeatable as the heavy lifting can be done in UDFs.&lt;/p&gt;

&lt;p&gt;Fauna also has &lt;a href="https://docs.fauna.com/fauna/current/learn/schema/#schema?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna#schema-migrations" rel="noopener noreferrer"&gt;native zero-downtime migrations&lt;/a&gt; for &lt;a href="https://fauna.com/product/database-schema?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;database schema&lt;/a&gt; changes, which allows you to evolve your data model without taking your application offline or disrupting the user experience. As your application grows and requires changes to its data structure, you can roll out updates smoothly, maintaining high availability and minimizing the need for extensive backend reconfiguration. This enables you and your team to iterate quickly, delivering new features and products to market faster but in a controlled fashion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By combining Fauna’s flexible, distributed, and strongly consistent database with Cloudflare Workers’ edge-first architecture, you unlock a fully managed serverless stack that lets you focus on building your application, not on managing infrastructure. This partnership enables you to scale without limits, ensures ultra-fast, globally consistent data, and keeps your codebase lean—all key to getting your products to market faster and your customers happier.&lt;/p&gt;

&lt;p&gt;If you’re ready to embrace true serverless architecture, check out &lt;a href="https://docs.fauna.com/fauna/current/build/workshops/cloudflare/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.cloudflare-workers-plus-fauna" rel="noopener noreferrer"&gt;this workshop&lt;/a&gt; to see just how easy it is to get started. &lt;/p&gt;

</description>
      <category>serverless</category>
      <category>database</category>
      <category>fauna</category>
      <category>cloudflare</category>
    </item>
    <item>
      <title>Overcoming MongoDB Limitations with Fauna</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Wed, 16 Oct 2024 16:56:39 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/overcoming-mongodb-limitations-with-fauna-32on</link>
      <guid>https://dev.to/nosqlknowhow/overcoming-mongodb-limitations-with-fauna-32on</guid>
      <description>&lt;p&gt;Developers are constantly seeking solutions that offer simplicity, scalability, and reliability. For many, MongoDB has long been a go-to when needing a NoSQL database with flexibility in data storage, but as applications grow in complexity, MongoDB’s limitations become harder to ignore. From managing schema validation inconsistencies to navigating its verbose Aggregation Framework, developers often write more code to maintain data integrity and optimize queries. Enter Fauna, a fully managed, &lt;a href="https://fauna.com/product/serverless-database?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;truly serverless&lt;/a&gt; database designed to eliminate these pain points. With strict serializability, a &lt;a href="https://fauna.com/product/fql?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;more intuitive and powerful query language with native relational features&lt;/a&gt;, JSON documents with relationships like a relational database, and stronger built-in data consistency, Fauna offers a modern alternative. It helps reduce code bloat, speeds up developments, and lowers operational overhead. In this post, we’ll dive into five key differences between MongoDB and Fauna, as well as explore why Fauna might just be the database you didn’t know you needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  MongoDB doesn’t have traversable relationships
&lt;/h2&gt;

&lt;p&gt;MongoDB’s document-oriented data model doesn’t natively support complex traversable relationships (like joins in relational databases). While MongoDB provides manual alternatives such as embedding documents (denormalization) or performing multiple queries and handling relationships in the application logic, these approaches become complex and inefficient, especially for highly normalized data. The lack of native relationships often forces developers to rely on data duplication or quasi-join-like operations through MongoDB’s complex aggregation framework, which can be slow and cumbersome for complex queries. To put it differently, to get decent performance and scalability, you must sacrifice a data model that makes sense.&lt;/p&gt;

&lt;p&gt;Yes, the Mongoose ORM can help with this, but if there are 50 documents related to one document, even though this appears as one request in Mongoose, there are still 51 separate queries to MongoDB. Mongoose masks these queries from you, but your users will experience the inefficiencies.&lt;/p&gt;

&lt;p&gt;Fauna, on the other hand, offers &lt;a href="https://fauna.com/product/document-relational?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;native support for relational (normalized)&lt;/a&gt; JSON documents, and the ability to traverse those relationships like a foreign key makes it easier to query deeply connected data sets without the need for complex joins, aggregation pipelines, or ORMs. In Fauna, if you have the same 50 documents related to one document I mentioned before, instead of 51 queries, it’s one query in Fauna. Fauna traverses document relationships to get the necessary information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema validation approaches
&lt;/h2&gt;

&lt;p&gt;MongoDB offers optional schema validation that's limited in utility. With validation enabled, MongoDB's schema validation system makes no guarantees about the structure of existing documents created or modified before the schema was added to the database. This forces developers to implement type-checking in their application code, increasing complexity. Moreover, MongoDB lacks built-in tooling to migrate from schemaless or an existing schema to the target schema with validation, much less migrate the data that doesn’t comply with the target schema. This leaves developers to handle schema changes and data migration on their own. Because MongoDB's schema validation system is not reliably consistent with the underlying data and lacks real support for migration, developers are no better off using it than before.&lt;/p&gt;
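
&lt;p&gt;For reference, MongoDB's validation is expressed as a &lt;code&gt;$jsonSchema&lt;/code&gt; document attached to a collection. The collection and field names below are made up; the important caveat is that the validator only applies to writes made after it exists.&lt;/p&gt;

```javascript
// A MongoDB $jsonSchema validator. It rejects non-conforming writes going
// forward, but never re-checks documents that predate it -- the gap
// described above. Collection and field names are hypothetical.
const userValidator = {
  $jsonSchema: {
    bsonType: "object",
    required: ["email", "createdAt"],
    properties: {
      email: { bsonType: "string", description: "must be a string" },
      createdAt: { bsonType: "date" },
    },
  },
};
// Typical usage: db.createCollection("users", { validator: userValidator });
```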

&lt;p&gt;In contrast, Fauna’s schema validation, when enabled, is strictly enforced for the fields you define but allows some fields to be schemaless if desired. Documents with fields that don’t conform to the defined schema are not accepted, eliminating the need for additional checks in the application code. This ensures clean, consistent data and reduces code bloat, allowing developers to focus on application logic rather than data integrity management. In addition, Fauna offers native migration functionality to ensure the schema and the data in the database adhere to that new schema. A migration runs inside the database, is a background process, and completes while the database is serving production traffic with zero downtime. If an inbound transaction includes a document that has not been migrated yet, since Fauna is aware of the migration, it automatically migrates the document(s) to the new schema and then does the inbound transaction.&lt;/p&gt;

&lt;h2&gt;
  
  
  MongoDB’s querying is limiting and complex
&lt;/h2&gt;

&lt;p&gt;MongoDB’s query language and Aggregation Framework are powerful, but they can become overly complex, verbose, and even limiting, especially for advanced queries. The framework requires chaining multiple stages ($match, $group, $project, etc.), which leads to long queries that are difficult to work with. This verbosity makes writing and maintaining complex queries more time-consuming, with a steep learning curve for advanced operations.&lt;/p&gt;
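
&lt;p&gt;As a concrete example of that multi-stage chaining, here is a typical pipeline ("total order value per customer, top five"); the collection and field names are invented for illustration.&lt;/p&gt;

```javascript
// A typical aggregation pipeline: total order value per customer, top five.
// Each stage feeds the next; collection and field names are invented.
const pipeline = [
  { $match: { status: "complete" } },
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
  { $sort: { total: -1 } },
  { $limit: 5 },
  { $project: { _id: 0, customerId: "$_id", total: 1 } },
];
// Typical usage: db.orders.aggregate(pipeline);
```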

&lt;p&gt;While many developers turn to the Mongoose ORM to avoid MongoDB’s query tools, this approach introduces its own challenges. Mongoose adds a layer of obfuscation that can lead to performance overhead and increased difficulty in debugging. Although it simplifies basic CRUD operations, developers still write raw MongoDB queries for more complex operations, undermining many of the benefits of using the ORM. Additionally, Mongoose does not solve MongoDB’s lack of global consistency, traversable relationships between documents, or strict serializability. It’s putting lipstick on a pig.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://fauna.com/product/fql?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;Fauna Query Language (FQL)&lt;/a&gt; offers a more concise, expressive, and functional syntax. Unlike MongoDB’s multi-stage aggregation model or Mongoose, FQL allows queries to be written in a composable, declarative way, making it easier to construct and maintain. FQL’s approach leads to cleaner, more readable code, especially for complex operations. If you’ve coded JavaScript or TypeScript, you’ll pick up FQL very quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  MongoDB is not a fully managed or truly serverless database
&lt;/h2&gt;

&lt;p&gt;While MongoDB Atlas is a managed service, it still requires manual configuration for scaling, backups, and infrastructure management. You must decide on instance sizes, regions, and clusters, which adds operational complexity. While MongoDB also has Atlas Serverless and automates some tasks, it is not a truly serverless solution because you still need to manage some of the underlying infrastructure, such as provisioning resources and handling scaling decisions. Further, &lt;a href="https://fauna.com/blog/mongodb-data-api-and-https-endpoints-deprecation-exploring-your-options?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;MongoDB no longer supports&lt;/a&gt; native HTTPS connectivity, so for serverless or edge architectures, developers must implement additional management, configuration, and middleware to establish stateless communication with functions like &lt;a href="https://fauna.com/blog/announcing-faunas-native-integration-with-cloudflare-workers?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt; or AWS Lambda, increasing operational burden and potentially impacting performance.&lt;/p&gt;

&lt;p&gt;Fauna is a fully managed, truly serverless database out of the box. Developers don’t need to worry about server management, scaling, backups, or provisioning. Fauna automatically scales based on demand and handles infrastructure management behind the scenes. This means no manual sharding, replication, or instance selection—everything is abstracted away. Fauna’s serverless nature allows it to scale effortlessly with zero operational overhead, making it ideal for dynamic workloads that require elastic scaling without intervention. In addition, unlike MongoDB, Fauna’s connectivity is a native HTTP-based API that is ideal for serverless apps, especially edge apps. You can use a lightweight language-specific client driver or plain HTTP to transact with Fauna without worrying about connection pooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strict serializability out of the box
&lt;/h2&gt;

&lt;p&gt;MongoDB provides eventual consistency by default and only offers stricter consistency guarantees at the cost of performance. To achieve full ACID transactions and strict consistency, developers must explicitly configure MongoDB, which can result in added complexity and slower write performance. Even then, MongoDB’s consistency can vary depending on how it’s set up, and global data distribution can introduce latency or consistency trade-offs.&lt;/p&gt;

&lt;p&gt;Fauna, by contrast, provides &lt;a href="https://fauna.com/blog/serializability-vs-strict-serializability-the-dirty-secret-of-database-isolation-levels?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.mongodb-limitations" rel="noopener noreferrer"&gt;strict serializability by default&lt;/a&gt;, meaning all transactions are executed in a globally consistent manner, ensuring that every read reflects the most recent write regardless of where that read originated geographically. This makes Fauna ideal for globally distributed applications, with no complex configuration or performance trade-offs to worry about. Fauna’s unique consistency model provides global ACID transactions without sacrificing performance, making it simpler and more reliable for developers who need strong consistency across distributed data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It’s clear that while MongoDB offers flexibility, it comes with trade-offs that require additional effort in code and operations. From schema validation inconsistencies to the complexities of its Aggregation Framework, MongoDB often requires extra tools and workarounds, like the Mongoose ORM, to attempt to maintain data integrity and performance. Fauna, on the other hand, provides a truly serverless experience with strict serializability, an intuitive query language, bonafide relationships between document data, and built-in global consistency, reducing the burden on developers. By eliminating the need for manual schema migrations, complex aggregations, and operational overhead, Fauna helps you build faster, more reliable applications with less effort. If you’re looking for a database that scales with your needs while simplifying your workflow, Fauna is the choice for modern development.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>fauna</category>
      <category>database</category>
      <category>nosql</category>
    </item>
    <item>
      <title>Five DynamoDB limitations you should know before using it</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Wed, 09 Oct 2024 16:55:52 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/five-dynamodb-limitations-you-should-know-before-using-it-9ng</link>
      <guid>https://dev.to/nosqlknowhow/five-dynamodb-limitations-you-should-know-before-using-it-9ng</guid>
      <description>&lt;p&gt;Amazon DynamoDB is a NoSQL database known for its scalability, but it comes with several limitations that can hinder developers and applications in certain scenarios. In this article, we’ll dive into five specific areas where &lt;a href="https://fauna.com/compare/dynamodb?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;DynamoDB falls short&lt;/a&gt; and explain how Fauna solves these challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  The trouble with transactions
&lt;/h2&gt;

&lt;p&gt;One of the more frustrating limitations of DynamoDB’s transactions is that you can’t read a value and then use that value in a subsequent write or update within the same transaction. This can be a major hindrance when implementing workflows or business logic that requires reading data and then immediately performing updates based on that data all while keeping data strongly consistent.&lt;/p&gt;

&lt;p&gt;Furthermore, DynamoDB transactions lack support for complex logic, such as branching conditions or multi-step decision processes. You’re limited to basic read, write, and update operations, without the ability to incorporate logic based on prior reads. This forces developers to handle workflows outside the database, bloating the application code and increasing the risk of data consistency issues, performance impacts, and maintenance overhead. This is especially problematic for serverless and edge applications.&lt;/p&gt;
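
&lt;p&gt;The usual workaround is to move the read-modify-write loop into application code with optimistic locking: read the item and a version number, compute the new state, then write with a condition on that version and retry on conflict. This sketch mimics DynamoDB's &lt;code&gt;ConditionExpression&lt;/code&gt; semantics with an in-memory table to show the pattern, and the code bloat it adds.&lt;/p&gt;

```javascript
// Client-side optimistic locking, the common workaround for DynamoDB's
// read-then-write gap. The in-memory "table" stands in for DynamoDB; a real
// conditional write would use ConditionExpression: "version = :expected".
function conditionalPut(table, key, item, expectedVersion) {
  const current = table.get(key);
  const currentVersion = current ? current.version : 0;
  if (currentVersion !== expectedVersion) {
    return false; // DynamoDB would raise ConditionalCheckFailedException
  }
  table.set(key, { ...item, version: currentVersion + 1 });
  return true;
}

// Read, compute in app code, conditionally write, retry on conflict.
function incrementBalance(table, key, amount, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const current = table.get(key) ?? { balance: 0, version: 0 };
    const updated = { balance: current.balance + amount };
    if (conditionalPut(table, key, updated, current.version)) {
      return table.get(key);
    }
  }
  throw new Error("contention: retries exhausted");
}
```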

&lt;p&gt;Contrast that with Fauna, where transactions are &lt;a href="https://fauna.com/product/fql?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;far more powerful and flexible&lt;/a&gt;. Fauna allows you to read a set of documents based on a query, apply complex logic to those results, and update multiple documents with each operation, all within the same transaction. This seamless read-modify-write capability ensures that all operations are executed atomically and consistently, with no need for complex client-side logic. Additionally, Fauna’s transactions maintain &lt;a href="https://fauna.com/product/distributed-transaction-engine?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;strong consistency across all regions by default&lt;/a&gt;, ensuring that data is always in sync, regardless of where it’s accessed. For example, &lt;a href="https://fauna.com/blog/how-skylark-unlocked-differentiated-feature-development-with-fauna?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;after migrating from DynamoDB to Fauna&lt;/a&gt;, Skylark reduced its application code by 80%, thanks to Fauna’s ability to handle complex operations natively within the database. This global, distributed consistency model eliminates the need to worry about data conflicts or eventual consistency problems, making Fauna the better choice for applications that demand reliable, multi-region operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-region misfortunes
&lt;/h2&gt;

&lt;p&gt;While DynamoDB is built for extreme scale, high availability, and low-latency operations, it cannot do multi-region replication with strong consistency. DynamoDB’s replication model, when using the Global Tables feature, is eventually consistent across AWS regions. This works for many use cases but introduces challenges when strong consistency is required. For global applications where data integrity must be maintained across multiple regions, this lack of cross-region strong consistency guarantees becomes a significant drawback.&lt;/p&gt;
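A toy simulation makes the failure mode concrete. Assuming a simple last-writer-wins reconciliation (a common strategy for eventually consistent replication; the region names and timestamps below are invented for illustration), concurrent writes in two regions converge, but one update is silently discarded:

```python
# Two regions each accept a write locally; asynchronous replication then
# reconciles with last-writer-wins, keeping only the newer timestamp.

region_a = {"item": {"value": 0, "ts": 0}}
region_b = {"item": {"value": 0, "ts": 0}}

def local_write(region, value, ts):
    region["item"] = {"value": value, "ts": ts}

def replicate(src, dst):
    # Last-writer-wins: keep whichever copy carries the newer timestamp.
    if src["item"]["ts"] > dst["item"]["ts"]:
        dst["item"] = dict(src["item"])

# Concurrent writes land in different regions before replication runs.
local_write(region_a, "alice's update", ts=1)
local_write(region_b, "bob's update", ts=2)

replicate(region_a, region_b)
replicate(region_b, region_a)

# Both regions now agree, but alice's update was silently lost.
```

With strongly consistent multi-region transactions, the second write would instead have been serialized after the first, so neither update disappears.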

&lt;p&gt;Fauna, on the other hand, offers strong consistency across all regions by default. Thanks to Fauna’s &lt;a href="https://fauna.com/product/distributed-transaction-engine?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;unique, globally distributed, multi-region architecture&lt;/a&gt;, you get strong consistency without sacrificing performance. This means that no matter where your application reads or writes data, Fauna ensures that every transaction is immediately visible everywhere. Unlike DynamoDB, Fauna handles all the complexity of synchronizing data across multiple regions automatically as part of every transaction and still performs well. Developers don’t need to worry about data consistency issues or working around complex replication mechanisms.&lt;/p&gt;

&lt;p&gt;With Fauna, you can confidently build globally distributed applications that require real-time access to strongly consistent data without the headaches of managing eventual consistency. This advantage translates into simpler code, fewer bugs, and better user experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sub-optimal API/query language
&lt;/h2&gt;

&lt;p&gt;DynamoDB’s API and query language can be difficult to work with, particularly for developers who are used to the power and flexibility of relational databases. DynamoDB uses a basic query model with limited functionality, requiring you to plan your application’s data access patterns up front and model your data in convoluted ways. Queries are limited to primary key lookups or specific secondary indexes, and more complex filtering must often be done client-side, which can lead to inefficiencies and higher costs. If you get access patterns or data models wrong, it can quickly lead to higher costs, lower performance, or worse yet, limit how you can evolve your application.&lt;/p&gt;
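The cost of post-read filtering can be sketched in a few lines of Python. This in-memory stand-in is not the DynamoDB API, but it mirrors the billing model: a filter applied after the read still consumes read capacity for every item examined, not just the ones returned:

```python
# Filtering happens AFTER items are read, so read capacity is charged
# for everything examined. The table and counter are in-memory stand-ins.

items = [{"pk": f"user#{i}",
          "status": "active" if i % 10 == 0 else "inactive"}
         for i in range(100)]

read_units = 0

def scan_with_filter(predicate):
    # Every item is read (and billed) before the filter discards it.
    global read_units
    results = []
    for item in items:
        read_units += 1
        if predicate(item):
            results.append(item)
    return results

active = scan_with_filter(lambda it: it["status"] == "active")
# 10 items come back, but 100 items' worth of read capacity was consumed.
```

This is why access patterns that aren't backed by a key or index get expensive fast: the filter saves network transfer, not read cost.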

&lt;p&gt;Fauna offers a more flexible and powerful approach to querying data. Fauna uses &lt;a href="https://fauna.com/product/fql?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;FQL (Fauna Query Language)&lt;/a&gt;, a fully expressive, server-side query language that supports various operations, including joins, filtering, and complex data relationships. Unlike DynamoDB, Fauna doesn’t limit you to predefined access patterns. Instead, you can query your data dynamically, based on whatever criteria you need, with support for relational and document-based data structures. FQL also allows you to compose complex, nested queries that run efficiently on the server side, eliminating the need for additional client-side logic, especially when made into server-side functions. &lt;a href="https://fauna.com/blog/supporting-insights-ggs-100k-global-daily-active-users-with-fauna-cloudflare?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;Fauna customer Insights.gg&lt;/a&gt; reduced their home page rendering time from 4 seconds to 100 milliseconds, going from 50 database round trips to 1. In addition, they didn't have to aggregate the response data into a JSON document, as Fauna’s server-side functions did all the heavy lifting.&lt;/p&gt;

&lt;p&gt;This flexibility not only simplifies development but also enables more sophisticated use cases that would be hard to implement in DynamoDB. Whether it’s querying across relationships or performing complex transformations, Fauna’s query language provides the power and versatility that DynamoDB’s API lacks. Developers can write concise, powerful queries that reduce application complexity and improve performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema definitions and enforcement snafus
&lt;/h2&gt;

&lt;p&gt;DynamoDB’s schemaless design offers flexibility, but it also introduces risks. Without schema enforcement, developers can easily introduce inconsistencies into the data model, particularly in large teams, multiple applications/functions, or rapidly evolving projects. Over time, this lack of structure and control can lead to messy data, unexpected application errors, and additional maintenance headaches. Furthermore, DynamoDB offers no built-in tools to help with table migrations or evolving your schema over time. If you need to change something as fundamental as the primary key of your base table, you’re often left managing a complex and error-prone migration process entirely on your own, which can be disruptive and costly.&lt;/p&gt;

&lt;p&gt;Fauna strikes a &lt;a href="https://fauna.com/product/database-schema?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;balance between flexibility and control&lt;/a&gt;, allowing you to start schemaless while gradually introducing schema definitions as your application evolves. With Fauna’s built-in &lt;a href="https://docs.fauna.com/fauna/current/learn/schema/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations#schema-migrations" rel="noopener noreferrer"&gt;zero-downtime migration&lt;/a&gt; functionality, you can safely evolve your schema over time without disrupting your application. This means you can iterate quickly in the early stages, and as your data model grows more complex, you can enforce constraints and relationships without the risk of downtime or performance degradation. Quickly iterating on and migrating data models to add new application features was a key challenge for Skylark when they used DynamoDB. By migrating to Fauna, they can adapt their data model as they add new customer features without operational disruptions.&lt;/p&gt;

&lt;p&gt;Fauna’s &lt;a href="https://fauna.com/product/document-relational?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;document-relational&lt;/a&gt; approach enforces rules and relationships within your data model, ensuring data integrity while retaining the agility of a NoSQL approach. The ability to evolve your schema over time gives you the flexibility to adapt to changing requirements without facing the operational challenges that come with DynamoDB’s primary key + schemaless design. This combination of control and adaptability in Fauna helps developers maintain long-term data consistency, manage complex data models, and scale their applications more easily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Indexing impediments
&lt;/h2&gt;

&lt;p&gt;One of DynamoDB’s significant limitations is its inability to index nested values within documents. While you can create secondary indexes on top-level attributes of a base table, you’re out of luck regarding nested data fields. This makes efficient querying or filtering based on nested data impossible inside the database and forces developers to flatten their data models, duplicate data, or add additional logic at the application layer to work around this impediment. The lack of support for indexing nested values can make DynamoDB a poor choice for applications that rely on complex, hierarchical data structures.&lt;/p&gt;
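The usual workaround is to flatten the nested paths you need to query into duplicated top-level attributes, which can then back a secondary index. A minimal, illustrative Python sketch (the document shape and attribute names are hypothetical):

```python
# Flatten nested paths into dotted top-level keys so each path you need
# to query can be promoted to an indexable top-level attribute.

def flatten(doc, prefix=""):
    # Recursively turns {"a": {"b": 1}} into {"a.b": 1}.
    flat = {}
    for key, value in doc.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

profile = {"user": {"address": {"city": "Seattle"}, "plan": "pro"}}
flattened = flatten(profile)
# {"user.address.city": "Seattle", "user.plan": "pro"}
```

The cost of this workaround is exactly the one described above: the data is duplicated, and keeping the flattened copies in sync with the nested originals becomes your application's job.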

&lt;p&gt;Fauna handles this more effectively by allowing you to index any field, including nested values within documents. Fauna’s flexible, document-relational-based model supports deeply nested structures, and you can create indexes on any level of the data hierarchy. This means you can query and filter your data efficiently, no matter how complex or nested. Unlike DynamoDB, Fauna enables you to build sophisticated queries and maintain performance without needing to restructure your data or write custom code to handle nested data.&lt;/p&gt;

&lt;p&gt;Fauna’s powerful indexing system also integrates with its relational and document-based data model, allowing you to work with rich, structured data without sacrificing query performance. This capability makes Fauna an ideal choice for use cases where you need to store and query deeply nested data—such as in applications handling complex documents, configurations, or user profiles—without the limitations that come with DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While DynamoDB is a powerful NoSQL database for handling large-scale workloads, its limitations in transactional logic, multi-region consistency, querying, schema evolution, and indexing make it less suitable for complex, dynamic applications. Fauna, by comparison, addresses these challenges with ease. With Fauna’s support for flexible, powerful transactions, globally distributed strong consistency, expressive query language, seamless schema evolution, and robust indexing of nested data, developers can build sophisticated applications without the operational headaches that come with DynamoDB.&lt;/p&gt;

&lt;p&gt;By migrating from DynamoDB to Fauna, many companies have not only simplified their codebases but also unlocked higher performance and scalability, all while reducing the complexity of their data management. If your application requires advanced data handling, real-time consistency, or more efficient querying, Fauna offers a more robust, developer-friendly alternative that can help you innovate faster and with fewer constraints.&lt;/p&gt;

&lt;p&gt;Interested in trying Fauna to see for yourself? Explore the &lt;a href="https://docs.fauna.com/fauna/current/?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;Fauna docs&lt;/a&gt; and &lt;a href="https://dashboard.fauna.com/register?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;get started for free&lt;/a&gt;. Don't hesitate to contact me or &lt;a href="https://fauna.com/contact-us?utm_medium=organicsocial&amp;amp;utm_source=devto&amp;amp;utm_campaign=ch.organicsocial_tgt.developers_con.dynamodb-limitations" rel="noopener noreferrer"&gt;Fauna&lt;/a&gt; with any questions!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>fauna</category>
      <category>database</category>
    </item>
    <item>
      <title>Introduction to serverless databases</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Mon, 16 Sep 2024 07:00:00 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/introduction-to-true-serverless-databases-2hl4</link>
      <guid>https://dev.to/nosqlknowhow/introduction-to-true-serverless-databases-2hl4</guid>
      <description>&lt;p&gt;Serverless computing allows developers to build and run applications without managing or worrying about the underlying infrastructure. Resources are automatically available based on demand, so you only pay for what you use. This means you can focus entirely on writing code and delivering features while the service provider handles all the operational details.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://fauna.com/product/serverless-database" rel="noopener noreferrer"&gt;serverless database&lt;/a&gt; combines the power of a database with the agility and simplicity of a serverless architecture. Using a serverless database should eliminate the headaches of complex database management and enable you to interact seamlessly with your data through a straightforward cloud-based API. Many databases claim to be serverless or have a serverless option, but are they truly serverless? How does one judge whether these services are truly serverless databases?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a serverless database?
&lt;/h2&gt;

&lt;p&gt;A true serverless database must abstract every element of &lt;strong&gt;deployment&lt;/strong&gt; and &lt;strong&gt;infrastructure&lt;/strong&gt; management, from provisioning to scaling, allowing developers to focus entirely on building their applications. It offers infinite scalability without capacity planning, automatically adjusting resources based on demand. Consumption is usage-based, meaning you only pay for the database operations performed and storage used, with no costs for idle capacity. A truly serverless database can be deployed instantly with just a single API call, and there is no planned downtime, ensuring continuous availability. This makes it an ideal choice for modern applications that require simplicity, scalability, and reliability.&lt;/p&gt;

&lt;p&gt;This benefits teams by freeing developers from the complexities of managing database infrastructure, allowing them to focus entirely on writing and optimizing application code. The simplicity of starting with a single API call and the assurance of no planned downtime accelerate development cycles. These attributes ensure high availability, making building and deploying robust applications easier. With automatic scaling and usage-based pricing, there’s no need to worry about over-provisioning or under-provisioning, so there is no paying for unused capacity. You simply use capacity or don’t, as there is no ramping up or down.&lt;/p&gt;

&lt;h2&gt;
  
  
  A litmus test for true serverless databases
&lt;/h2&gt;

&lt;p&gt;Below is a litmus test to use when evaluating databases claiming to be serverless. A true serverless database must have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nothing to provision or manage&lt;/li&gt;
&lt;li&gt;Zero capacity planning, with infinite scale&lt;/li&gt;
&lt;li&gt;Usage-based consumption model&lt;/li&gt;
&lt;li&gt;Ready with a single API call&lt;/li&gt;
&lt;li&gt;No planned downtime&lt;/li&gt;
&lt;/ul&gt;
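For illustration, the five-point test above can be expressed as a tiny pass/fail evaluator: a database must pass every criterion, since a single miss means it isn't truly serverless. The scoring code is just a sketch of how to apply the checklist to a candidate:

```python
# The serverless-database litmus test as a checklist evaluator.
# Criterion strings mirror the list above; the answers are hypothetical.

CRITERIA = [
    "nothing to provision or manage",
    "zero capacity planning, with infinite scale",
    "usage-based consumption model",
    "ready with a single API call",
    "no planned downtime",
]

def is_truly_serverless(answers):
    # Pass/fail: every criterion must hold; one miss fails the test.
    return all(answers.get(c, False) for c in CRITERIA)

# A candidate that makes you pick instance sizes fails on capacity planning.
candidate = {c: True for c in CRITERIA}
candidate["zero capacity planning, with infinite scale"] = False
```

Applied this way, the test is deliberately unforgiving: there is no partial credit for "mostly serverless."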

&lt;p&gt;&lt;small&gt;&lt;strong&gt;Note:&lt;/strong&gt; This list was modified expressly for serverless databases but is based on the more general &lt;a href="https://www.gomomento.com/blog/fighting-off-fake-serverless-bandits-with-the-true-definition-of-serverless/?utm=kirk" rel="noopener noreferrer"&gt;Serverless Litmus Test&lt;/a&gt; that Khawaja Shams and I co-authored, to give credit where credit is due.&lt;/small&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive into each of these tests a bit deeper and see what to look for.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes a truly serverless database?
&lt;/h2&gt;

&lt;p&gt;Let’s review each item in the litmus test to see what makes a serverless database and what fake serverless databases do and say.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nothing to provision, nothing to manage
&lt;/h3&gt;

&lt;p&gt;A serverless database has no networks, servers, instances, clusters, or storage to specify, spin up, or manage. Sign up for the service, create a database with your data model (tables, collections, functions, roles, etc.), and start using it. You should configure backups for a production database, but that’s about it for the infrastructure you need to worry about.&lt;/p&gt;

&lt;p&gt;Fake serverless databases may entail selecting instance types, creating or managing a cluster, picking node types, determining which Availability Zone the cluster nodes are in, and so on. That’s not serverless. It’s someone taking a database that isn’t serverless, putting lipstick on it, and unleashing their marketing and sales departments. Don’t fall for it!&lt;/p&gt;

&lt;h3&gt;
  
  
  Zero capacity planning with infinite scale
&lt;/h3&gt;

&lt;p&gt;With a true serverless database, there is no scaling up or down. There is no capacity to add that could take minutes to provision in a peak situation. In addition, there are no limits to scaling. The capacity is just there to be used or not.&lt;/p&gt;

&lt;p&gt;Many fake serverless databases require you to pick instance types or the number of instances in a cluster, so you must actively add and remove capacity. The problem is you have to guess initially, then monitor and manage this over time. Choose poorly, and either you pay too much money, or you don’t have enough capacity to run, and you’re doing somersaults to swap to larger instance sizes. If you have enabled cross-region replication, the burden of which nodes go where and what the database sharding looks like is all on you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ready with a single API call
&lt;/h3&gt;

&lt;p&gt;A true serverless database is just there, always ready and willing. You make an API call, and it answers immediately. There is nothing to spin up, no actions to compensate for cold starts, and no connection pools. Even better is when that API call can be made over HTTPS, as you can call it from serverless edge functions (e.g., AWS Lambda@Edge, Cloudflare Workers, Vercel Functions, Netlify Functions, and more) with any modern programming language.&lt;/p&gt;

&lt;p&gt;Fake serverless databases may still require things like connection pooling and fat clients. This makes your app code bigger and more complicated, takes more developer time, and is sub-optimal for most serverless functions, especially serverless edge functions. The larger size and connection pooling mean that regardless of where your code is, the function needs more resources than it might with a genuine serverless database.&lt;/p&gt;
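The stateless model can be sketched by constructing, without sending, a plain HTTPS request: each query is an independent, self-describing call that carries its own auth, so there is no driver, handshake, or warm connection pool to maintain. The endpoint and header values below are hypothetical:

```python
# Build (but do not send) a stateless HTTPS query request: everything the
# server needs travels with each call, so no session state is kept warm.

import json
import urllib.request

def build_query_request(query):
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        "https://db.example.com/query",          # hypothetical endpoint
        data=body,
        headers={
            "Authorization": "Bearer <secret>",  # per-request auth, no session
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_query_request("Product.all()")
```

Because the request is self-contained, it works equally well from a long-running server or from an edge function that lives for milliseconds.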

&lt;h3&gt;
  
  
  Usage-based consumption model
&lt;/h3&gt;

&lt;p&gt;True serverless databases should not force upfront provisioned capacity. Mandating a predetermined capacity commitment places an undue burden on developers. A serverless database bases ongoing pricing on what you use. To put it another way, resource consumption is metered. With any other style of pricing, you risk the too-much-or-too-little capacity problem I mentioned earlier, which is sub-optimal.&lt;/p&gt;

&lt;p&gt;Many fake serverless databases require predetermined capacity commitments because, despite appearing as a SaaS offering, they are often just provisioned single-user systems on the back end. Without knowing your exact needs, these systems are often provisioned large, and those inflated costs get passed on to you through dramatically higher per-usage rates. Even if capacity commitments aren’t mandatory, they’re often presented as a tempting option for discounts. In contrast, genuine serverless databases operate on a multi-tenant system where the underlying cluster is already provisioned, meaning nothing needs to be set up expressly for you, and you only pay for the resources you use.&lt;/p&gt;
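A back-of-the-envelope Python sketch shows why metered billing wins for bursty workloads, even at a higher unit price. Every number below is invented purely for illustration:

```python
# Compare provisioned-for-peak pricing against usage-based metering for a
# bursty day: quiet traffic with a short spike. All rates are made up.

hourly_ops = [1_000] * 22 + [100_000] * 2   # quiet day, 2-hour spike

# Provisioned: you must size for the peak and pay for it all 24 hours.
peak = max(hourly_ops)
provisioned_cost = peak * 24 * 0.000001     # $ per provisioned op-hour

# Usage-based: billed only for operations actually executed,
# even at double the unit rate.
usage_cost = sum(hourly_ops) * 0.000002     # $ per executed op
```

Here the provisioned bill is dominated by idle peak capacity (22 quiet hours billed at spike size), while the metered bill tracks actual work, so the higher unit price still comes out far cheaper.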

&lt;h3&gt;
  
  
  No planned downtime
&lt;/h3&gt;

&lt;p&gt;A true serverless database implements changes without planned downtime. One of the main points of serverless is that the database must be available the moment it’s called, and it can’t be if it takes planned downtime. Any patch or new feature is applied to the service transparently; the first a customer should hear of it is the announcement of that feature, or they simply notice things working better.&lt;/p&gt;

&lt;p&gt;Some fake serverless databases require downtime to roll out a new version, feature, or patch. You get messages like, “Hey, your database will be unavailable for a software patch for an hour this Saturday night.” Just…no.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examples of serverless databases
&lt;/h2&gt;

&lt;p&gt;Below is a list of a few databases that present as serverless. We will leave it to you to use the litmus test to know whether they are genuinely serverless.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fauna is a truly serverless, &lt;a href="https://fauna.com/product/document-relational" rel="noopener noreferrer"&gt;document-relational&lt;/a&gt; database designed for modern applications, providing developers with &lt;a href="https://fauna.com/product/distributed-transaction-engine" rel="noopener noreferrer"&gt;effortless scalability and global availability&lt;/a&gt; without infrastructure management. It offers a flexible, developer-friendly API that supports &lt;a href="https://fauna.com/product/fql" rel="noopener noreferrer"&gt;complex queries and ACID transactions&lt;/a&gt;, ensuring data consistency and reliability. With support for both structured and semi-structured data, Fauna offers a robust operational database that supports a wide range of use cases and can evolve with your application. Multi-Active reads and writes and automatic distribution enable your application to scale without the complexity of manual sharding or replication.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/rds/aurora/serverless/" rel="noopener noreferrer"&gt;Amazon Aurora Serverless&lt;/a&gt; is a proprietary AWS service compatible with Postgres and MySQL, which means you can connect to your Aurora database as if you’re connecting to Postgres or MySQL. It is optimized for and only runs in AWS.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.mongodb.com/products/platform/atlas-database" rel="noopener noreferrer"&gt;MongoDB Atlas Serverless&lt;/a&gt; is a managed, cloud-based database service that simplifies the deployment and scaling of MongoDB clusters. It offers built-in features like automated backups, real-time performance monitoring, and advanced security. For a comparison between Mongo and Fauna, visit &lt;a href="https://fauna.com/compare/mongodb" rel="noopener noreferrer"&gt;this page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloud.google.com/firestore" rel="noopener noreferrer"&gt;Google Firestore&lt;/a&gt; is a serverless document database providing direct web, IoT, and mobile app development access. It’s highly scalable with no maintenance window and zero downtime.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB&lt;/a&gt; is a managed key-value NoSQL database service built with scale in mind. DynamoDB integrates well with other AWS services, such as Lambda, EventBridge, API Gateway, and Step Functions. For a comparison between DynamoDB and Fauna, visit &lt;a href="https://fauna.com/compare/dynamodb" rel="noopener noreferrer"&gt;this page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cockroachlabs.com/lp/serverless-22/" rel="noopener noreferrer"&gt;CockroachDB&lt;/a&gt;, offers several deployment options of its relational database, including a serverless version. It offers an elastic and robust data architecture distributed globally to help developers rapidly develop apps. It is a single Postgres instance in many aspects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Discover the key differences between popular serverless databases with our &lt;a href="https://fauna.com/blog/comparison-of-serverless-databases" rel="noopener noreferrer"&gt;comparison guide&lt;/a&gt;. Furthermore, understand how &lt;a href="https://fauna.com/product/serverless-database" rel="noopener noreferrer"&gt;Fauna offers a true serverless database&lt;/a&gt; choice by exploring its unique features and benefits. For more in-depth comparisons of Fauna vs other serverless databases, check out &lt;a href="https://fauna.com/compare/dynamodb" rel="noopener noreferrer"&gt;Fauna vs. DynamoDB&lt;/a&gt;, &lt;a href="https://fauna.com/compare/mongodb" rel="noopener noreferrer"&gt;Fauna vs. MongoDB Atlas&lt;/a&gt;, and &lt;a href="https://fauna.com/blog/compare-aws-aurora-serverless-v2-architecture-features-pricing-vs-fauna" rel="noopener noreferrer"&gt;Fauna vs. AWS Aurora Serverless v2&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>nosql</category>
      <category>serverless</category>
      <category>database</category>
      <category>fauna</category>
    </item>
    <item>
      <title>Am I already getting AI fatigue?</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Thu, 25 Jul 2024 16:34:04 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/am-i-already-getting-ai-fatigue-64</link>
      <guid>https://dev.to/nosqlknowhow/am-i-already-getting-ai-fatigue-64</guid>
      <description>&lt;p&gt;While I use and like many things being done with AI these days, I also find it a bit overwhelming in recent times. Perhaps because I have been using and following it for many years, I might be getting AI fatigue already. Also, some of it is starting to be overused, pushed for everything, or even used in dangerous ways.&lt;/p&gt;

&lt;p&gt;I am not one to stifle innovation, cast aside new things, or be a Luddite, but I am starting to adopt an Ian Malcolm from "Jurassic Park" opinion about some uses of AI I see lately.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are already starting to see companies and governments using AI for activities some would initially see as harmless, but when you really look at them, they are straight out of the "1984" or "Brave New World" playbooks. They're being used behind seemingly innocuous things. "Don't you want cameras in your neighborhood to track who is coming and going to keep your neighborhood and kids secure?" Meanwhile, that video is being used by a company to feed AI-filtered information about you and your neighbors to the government. This is happening today. The company in question even has cameras on FedEx delivery trucks now "For driver protection." Meanwhile, data about every house, vehicle, etc., is fed into its data banks and to government agencies. It's like having the &lt;a href="https://en.wikipedia.org/wiki/Gestapo" rel="noopener noreferrer"&gt;Gestapo&lt;/a&gt; right there in your neighborhood that you invited in. People will say, "If you're doing nothing wrong, you don't have to be worried about it." Right up until it does affect you or someone you know or love. Until there is so much of this that we live in a dystopian present like the worlds of the books I mentioned, both of which should be mandatory reading.&lt;/p&gt;

&lt;p&gt;We as a society are taking a dangerous path, and AI can be used for good and/or evil. If you work for a company that makes AI products, be careful who you let use those tools. Now is the time to speak up and stop these things or at least guide them. Try to do no harm.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Flexibility Meets Structure: Evolving Document Database Schemas with Fauna</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Fri, 21 Jun 2024 00:00:02 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/flexibility-meets-structure-evolving-document-database-schemas-with-fauna-5bif</link>
      <guid>https://dev.to/nosqlknowhow/flexibility-meets-structure-evolving-document-database-schemas-with-fauna-5bif</guid>
      <description>&lt;p&gt;The debate over utilizing a more strict schema definition and enforcement versus going schemaless with NoSQL databases often sparks passionate discussions. For the longest time, I was in the camp of “I hate the word schemaless,” when it came to NoSQL databases…and I am not someone who uses the term hate lightly. I was squarely in the “you must have a schema” camp. “Know your access patterns!” And while, ultimately, I still think you should have a schema and data model for every production app using NoSQL for it to perform well and be cost-effective, I have softened my “I hate schemaless” ideology. Why? It depends on where you and your team are in the development or application lifecycle and what kind of data you have. Early on, you may not know all your data access patterns or how data relates. Over time, that likely changes and the database schema and data model need to change with you. In addition, I have softened my stance because features in NoSQL databases evolved over the years. This is especially true recently, but more on that in a bit.&lt;/p&gt;

&lt;p&gt;Strict schemas offer data integrity, static typing, computed fields, and predictability, which are highly valued by many but not usually associated with NoSQL databases. On the other end of the spectrum, schemaless design provides flexibility and time efficiency, allowing unstructured data to be easily added. While this can work in some cases, most apps need more structure and controls for long-term cost-effectiveness and performance, but also data integrity.&lt;/p&gt;

&lt;p&gt;I will give you an example. I looked at a former coworker’s data model a few years ago and was surprised. He was simply dumping JSON into the database. For the app he was working on, it worked…for the moment. If he needed to scale to even a thousand or more ops/sec, he would have had problems in both performance and hard costs. I almost presented him with a better data model, but he was hesitant to change anything. Changing the data model or schema in the database on his platform would have been a major task, and that platform lacked controls to maintain data integrity, given his coworkers’ involvement. It also offered no help for migrations either.&lt;/p&gt;

&lt;p&gt;I have heard this from developers hundreds of times in my years with NoSQL databases. “What if I get my shard key wrong?” “What if I choose the wrong partition key?” Most databases give you the freedom to design a data model but then punish you for making incorrect decisions or just needing to change things when an app design changes. “You’re on your own,” is what most databases essentially say, as they don’t make fixing the issue easy.&lt;/p&gt;

&lt;p&gt;Fauna’s latest additions to its &lt;a href="https://fauna.com/product/database-schema?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;Schema features&lt;/a&gt; change all of this. It introduces &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#type-enforcement?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;document type enforcement&lt;/a&gt;, including &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#field-definitions?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;field definitions&lt;/a&gt; and &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#wildcard-constraint?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;wildcard constraints&lt;/a&gt;, as well as &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#schema-migrations?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;zero-downtime schema migrations&lt;/a&gt;. 
These features, along with the previously released &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#check-constraints?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;check constraints&lt;/a&gt; and &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#computed-fields?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;computed fields&lt;/a&gt;, change how we can approach schemas and data modeling in a NoSQL document database. The beauty of this release is that you now have strict schema control and enforcement tools, but you don’t have to make those potentially difficult decisions upfront. Even better, zero-downtime migrations remove the anxiety of “Did I get this data model right?” The new features allow you to start completely schemaless and add a stricter schema and enforcement over time as your application evolves. You can migrate from your existing schema, or lack thereof, to your new schema in a controlled, methodical, and scripted fashion. There’s a reason Fauna is called a document-relational database.&lt;/p&gt;

&lt;p&gt;Anyhow, let’s jump into the release features and see exactly what’s here and why it matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Document types
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.fauna.com/fauna/current/learn/schema#document-type-definitions?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;Document types&lt;/a&gt; enable you to codify and enforce the shape of the data you want for a collection. Things like what fields can a document in this collection have, what values those fields can have, whether they are optional, can a document have fields not part of the required fields, and so on. To put it another way and use an example, you create a collection named Product and define what the product documents in that collection must look like structure-wise, or else non-conforming write and update operations are rejected.&lt;/p&gt;

&lt;p&gt;Whether you stay schemaless, add some field definitions alongside a wildcard constraint so ad-hoc fields are still allowed, or go fully strict and allow only a finite list of fields, Fauna will enforce what you define as the schema for that collection.&lt;/p&gt;
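&lt;p&gt;As a quick sketch of what a fully strict definition could look like (the field names here are my own illustration, not from the release), a Product collection with no wildcard constraint might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;collection Product {
  name: String
  price: Number
  description: String?
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because there is no wildcard constraint, a document with any field outside these three would be rejected.&lt;/p&gt;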

&lt;h2&gt;
  
  
  Field definitions and schema enforcement
&lt;/h2&gt;

&lt;p&gt;First up is field definitions. With these, you can define a field for documents in a collection as one or more &lt;a href="https://docs.fauna.com/fauna/current/learn/data_model/documents#document-type?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;data types&lt;/a&gt;, a reference to another document, enumerated values, or a wildcard constraint. You can even specify whether the listed fields in JSON documents for this collection are required or optional. Prior to this latest release, you could already set a unique constraint on a single field or a combination of fields.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;collection Order {
  user: Ref&amp;lt;User&amp;gt;
  cart: Array&amp;lt;Ref&amp;lt;Product&amp;gt;&amp;gt;
  address: String | Ref&amp;lt;Address&amp;gt;
  name: String?
  status: "in-progress" | "completed" | "error" = "in-progress"
  *: Any
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I define a collection named Order, and it has five fields plus a wildcard constraint:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user field must be present and be a reference to a document in the User collection.&lt;/li&gt;
&lt;li&gt;The cart field must be present and be an array of references to documents in the Product collection.&lt;/li&gt;
&lt;li&gt;The address field must be present, but it can be either of type String or a reference to a document in the Address collection.&lt;/li&gt;
&lt;li&gt;The name field is optional and can be null, but if it is present, it must be of type String.&lt;/li&gt;
&lt;li&gt;The status field is not nullable, must be one of the enumerated values, and if not present, defaults to “in-progress.”&lt;/li&gt;
&lt;li&gt;A wildcard constraint, but more on that shortly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once this schema is in place, if you try to write or update a document in the Order collection and the new document violates this structure, that transaction is rejected by the database. You could also make this collection have a strict schema where documents must have these fields and only these fields. If the document has additional fields, the transaction is rejected.&lt;/p&gt;
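&lt;p&gt;For illustration (the document ID here is made up), a create like the following would be rejected under the Order schema above because “pending” is not one of the enumerated status values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Rejected: status must be "in-progress", "completed", or "error"
Order.create({
  user: User.byId("1234"),
  cart: [],
  address: "123 Main St",
  status: "pending"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;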

&lt;h2&gt;
  
  
  Wildcard constraints to keep some schema flexibility
&lt;/h2&gt;

&lt;p&gt;Now about that wildcard constraint in the example above…&lt;/p&gt;

&lt;p&gt;&lt;code&gt;*: Any&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There are three ways to think about and work with wildcard constraints:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you have it along with other fields defined in a collection definition, it tells Fauna that it’s ok for incoming documents in this collection to be flexible. The document must adhere to the defined schema for this collection, but the wildcard constraint allows additional ad-hoc fields in that document.&lt;/li&gt;
&lt;li&gt;If a collection definition has no field definitions, that is an implied wildcard constraint. You could put it in explicitly, but it’s not necessary.&lt;/li&gt;
&lt;li&gt;If you omit the wildcard constraint line from a collection definition with defined fields, you have a strict schema for this collection. The documents in the example Order collection must adhere to the schema provided, and they cannot have ad-hoc fields.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To be clear: with the wildcard constraint, any document in the Order collection example above can have additional fields not listed in the schema; those fields are accepted but not checked by Fauna. You get the best of both worlds: schema control and enforcement where you need it, plus flexibility and extensibility in the same document.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero-Downtime Migrations
&lt;/h2&gt;

&lt;p&gt;While the benefits of field definitions and document types are great, it’s migrations that truly tie everything together and make this work. Migrations let you seamlessly and systematically update each collection’s schema as your needs evolve. As mentioned in my example with a former coworker, most databases do not make altering your schema easy. Even an RDBMS typically applies schema changes synchronously and holds locks, creating downtime. In most cases, when you make changes, you have to do a ton of heavy lifting to write and test code that runs the migration outside of the database in order to read, transform, and move data to the new schema. I have written hundreds of these in my years working on databases, and they can be a major pain.&lt;/p&gt;

&lt;p&gt;Fauna solves this with additions to the Fauna Schema Language (FSL). FSL existed before this release, but it can now incorporate instructions on how to migrate your existing schema to its next iteration in a controlled fashion. FSL files can also be versioned alongside your code with tools like Git and be part of your CI/CD pipelines. Best of all, FSL runs inside the database. No dragging data to and from a client. You transmit the instructions on how to change the schema, and Fauna takes care of all the heavy lifting.&lt;/p&gt;

&lt;p&gt;For instance, say I began developing my app with a schemaless User collection for user profiles. I didn’t know what the schema would ultimately look like, but now that I am a few days in, I know a few fields that must be present in every user document going forward.&lt;/p&gt;

&lt;p&gt;My existing collection definition in FSL looks like this, perhaps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;collection User {
  *: Any
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: I added the wildcard constraint explicitly for illustration. If that line is omitted, the wildcard constraint is implied.&lt;/p&gt;

&lt;p&gt;I want to make sure that every document in the User collection has a first name, a last name, and an email address, but more fields can be added if you want. Here’s what the schema definition looks like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;collection User {
  firstName: String
  lastName: String
  emailAddr: String
  *: Any
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My dev process is progressing, and I want to specify the fields I know must be in the User document type and have Fauna enforce that, while staying flexible about adding more fields as needed. If I didn’t have any data in the User collection, I could stop here. But I do have data and I don’t want to delete it, so I need to do a migration. Fauna will not assume anything for migrations; you have to give it explicit instructions on what to do.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;collection User {
  firstName: String
  lastName: String
  emailAddr: String
  conflicts: { *:Any }?
  *: Any

  migrations {
    add .firstName
    add .lastName
    add .emailAddr
    add .conflicts
    move_conflicts .conflicts
    backfill .emailAddr = "unknown"
    backfill .firstName = "unknown"
    backfill .lastName = "unknown"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the collection definition, I have the structure I showed before, but I added a conflicts field for the migration process in the event there is a data type conflict. In the migration section, I am telling Fauna to add the four new fields and to move any field with a conflicting data type into the object in the conflicts field. For example, say there is one document with a value in firstName, but it is a number, not a string, as I have defined firstName to be. That is a conflict, and the migration will move that field into conflicts. The document will still have a firstName field, but it will have a value of “unknown” because of the backfill instructions. Those backfills are needed because the collection definition says these fields cannot be null, so there has to be something there. In this case, I used “unknown,” but it could be whatever you want. Your application could then look for that value and handle it, e.g., prompt the user to fill it in with valid data.&lt;/p&gt;
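&lt;p&gt;Once the migration runs, your application could find the backfilled documents with a query along these lines (a sketch based on the User schema above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Find users whose required fields were backfilled with "unknown"
User.where(.firstName == "unknown" || .lastName == "unknown" || .emailAddr == "unknown")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;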

&lt;p&gt;This is a simple overview, and there is a lot more to &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#schema-migrations?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;migrations&lt;/a&gt;, as you can imagine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, the evolution of NoSQL databases and schema, particularly with Fauna’s latest release, bridges the gap between the flexibility of schemaless design and the structure of strict schemas. As a document-relational database, Fauna combines the best aspects of both document and relational schema design, offering features like field definitions, document type enforcement, and seamless migrations using the Fauna Schema Language. These advancements enable developers to start with a schemaless approach and gradually incorporate structure as their application evolves, an approach Fauna calls “gradual typing.” This not only ensures long-term performance and cost-effectiveness but also maintains data integrity and adaptability. With these features, Fauna advances how we approach schemas and data modeling in NoSQL databases, making it easier than ever to adapt and scale your database to meet your evolving needs.&lt;/p&gt;

&lt;p&gt;For more information about any of these topics, the documentation on &lt;a href="https://docs.fauna.com/fauna/current/learn/schema#collection-schema?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;collections&lt;/a&gt; and &lt;a href="https://docs.fauna.com/fauna/current/learn/data_model/documents?utm_source=devto&amp;amp;utm_medium=organicsocial&amp;amp;utm_campaign=ch.organicsocial_tgt.page-followers_con.flexibility-meets-structure-blog"&gt;documents&lt;/a&gt; are your best resource.&lt;/p&gt;

</description>
      <category>database</category>
      <category>nosql</category>
      <category>serverless</category>
      <category>devops</category>
    </item>
    <item>
      <title>What’s the difference between RBAC and ABAC in Fauna?</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Thu, 20 Jun 2024 23:41:47 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/whats-the-difference-between-rbac-and-abac-in-fauna-53ih</link>
      <guid>https://dev.to/nosqlknowhow/whats-the-difference-between-rbac-and-abac-in-fauna-53ih</guid>
      <description>&lt;p&gt;ABAC (Attribute-Based Access Control) is not an extension of RBAC (Role-Based Access Control), but rather a distinct model that can be considered a superset regarding flexibility and granularity. They both answer the question, “Does this operation have access,” but use very different mechanisms to determine the answer. Here’s how they compare and relate:&lt;/p&gt;

&lt;h2&gt;
  
  
  Role-Based Access Control (RBAC):
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Role-centric: Access decisions are primarily based on the static roles assigned to users. Each role has predefined permissions that determine what the bearer of that role can access.&lt;/li&gt;
&lt;li&gt;Simplicity and manageability: RBAC is generally simpler to implement and manage because it categorizes permissions by broad roles, which can be easily assigned to users.&lt;/li&gt;
&lt;li&gt;Static: The rules are static and don't typically consider the context of a request or the attributes of the resources being accessed.&lt;/li&gt;
&lt;/ul&gt;
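&lt;p&gt;In Fauna terms, a purely role-based setup is a role with a static set of privileges. Here is a hedged sketch (the role and collection names are made up for illustration) of an FSL role whose permissions never vary with context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;role supportAgent {
  privileges Ticket {
    create
    read
    write
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Anyone holding this role gets exactly these permissions, regardless of time, data values, or any other attribute.&lt;/p&gt;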

&lt;h2&gt;
  
  
  Attribute-Based Access Control (ABAC) in Fauna:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Attribute-centric: ABAC uses a variety of attributes (user attributes, resource attributes, action attributes, and contextual attributes) to make access decisions. These attributes can encompass various data points pertinent to enforcing access control policies. This includes personal user information such as age and location, organizational roles assigned to the user, and broader system-level conditions like the time of day or the device being used for access. Each attribute can be dynamically assessed to make real-time decisions about the user’s permissions within the system.&lt;/li&gt;
&lt;li&gt;Dynamic and granular: Policies in ABAC can be very granular and context-sensitive, allowing for more precise control over who can access what, when, and under what conditions.&lt;/li&gt;
&lt;li&gt;Flexibility: Due to its reliance on multiple attributes for making decisions, ABAC can accommodate more complex scenarios than RBAC. It can adapt to a range of changing conditions, which would be more difficult or cumbersome to manage in a purely role-based model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Relationship between RBAC and ABAC:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;While RBAC is focused on user roles, ABAC uses roles as just one of the many attributes it uses for access control. This means ABAC can implement all the policies that RBAC can, plus additional policies that are too specific or dynamic for RBAC to handle effectively.&lt;/li&gt;
&lt;li&gt;Thus, ABAC can be seen as a superset of RBAC in terms of capability. It offers everything RBAC does, with additional flexibility to incorporate a broader range of criteria into access decisions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, ABAC offers a more flexible and comprehensive approach to access control compared to RBAC, capable of handling complex, dynamic environments by leveraging a wide range of attributes, whereas RBAC offers a simpler, more straightforward approach that might be sufficient for environments with fixed access control requirements based on well-defined roles.&lt;/p&gt;

</description>
      <category>security</category>
      <category>database</category>
      <category>nosql</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Decoding Fauna: ABAC vs. RBAC Explained</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Thu, 09 May 2024 19:42:06 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/decoding-fauna-abac-vs-rbac-explained-4pj1</link>
      <guid>https://dev.to/nosqlknowhow/decoding-fauna-abac-vs-rbac-explained-4pj1</guid>
      <description>&lt;p&gt;ABAC (Attribute-Based Access Control) is not so much an extension of RBAC (Role-Based Access Control), but rather a superset regarding flexibility and granularity. They both answer the question, “Does this incoming operation have access to do XYZ,” but each uses different mechanisms to determine the answer.&lt;/p&gt;

&lt;p&gt;In addition to RBAC's roles, ABAC takes this to the next level and enables the real-time evaluation of dynamic information, such as data values in documents, time, and environmental factors like IP addresses, among other things, without requiring extensive coding in applications or middleware.&lt;/p&gt;

&lt;p&gt;Here’s how they compare and relate:&lt;/p&gt;

&lt;h2&gt;
  
  
  Role-Based Access Control (RBAC):
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Role-centric: Access decisions are primarily based on the static roles assigned to users. Each role has predefined permissions that determine what the bearer of that role can access.&lt;/li&gt;
&lt;li&gt;Simplicity and manageability: RBAC is generally simple to implement and manage because it categorizes permissions by broad roles, which can be easily assigned to users.&lt;/li&gt;
&lt;li&gt;Static: The rules are static and don't typically consider the context of a request or the attributes of the resources being accessed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Attribute-Based Access Control (ABAC) in Fauna:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Attribute-centric: ABAC uses a variety of attributes (user attributes, resource attributes, action attributes, and contextual attributes) to make access decisions. These attributes can encompass various data points pertinent to enforcing access control policies. This includes personal user information such as age and location, organizational roles assigned to the user, and broader system-level conditions like the time of day or the device used for access. Each attribute can be dynamically assessed to make real-time decisions about the user’s permissions within the system.&lt;/li&gt;
&lt;li&gt;Dynamic and granular: Policies in ABAC can be very granular and context-sensitive, allowing for more precise control over who can access what, when, and under what conditions.&lt;/li&gt;
&lt;li&gt;Flexibility: ABAC can accommodate more complex scenarios than RBAC due to its reliance on multiple attributes for making decisions. It can adapt to changing conditions, which would be more difficult or cumbersome to manage in a purely role-based model. Fauna Query Language (FQL) is used to query data to inform decisions. If you can query data using FQL, you can incorporate that query into security decisions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Relationship between RBAC and ABAC:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;While RBAC is focused on user roles, ABAC uses roles as just one of the many attributes it uses for access control. This means ABAC can implement all the policies that RBAC can, plus additional policies that are too specific or dynamic for RBAC to handle effectively.&lt;/li&gt;
&lt;li&gt;Thus, ABAC can be seen as a superset of RBAC regarding capability. It offers everything RBAC does, with additional flexibility to incorporate a broader range of criteria into access decisions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  An example of ABAC in action in Fauna
&lt;/h2&gt;

&lt;p&gt;Below is an example of some simple ABAC permissions in Fauna. This code creates a role named authUser. Membership in the role is determined dynamically based on the schedule in the caller’s Fauna "identity document," and if/when they are eligible, they can create, read, write, and delete documents in the "workspaces" collection. Via FQL, it queries for the identity of the calling token, known as the "identity document." It then checks whether the projectContributor array in that document includes the project attribute of the document you are trying to access. Essentially, if you’re on the project, you can interact with the documents related to that project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;role authUser
  membership {
    predicate (self=&amp;gt; {
      Time.now().hour &amp;gt;= self?.schedule[0] &amp;amp;&amp;amp;
      Time.now().hour  &amp;lt; self?.schedule[1]
    })
  }
 privileges workspaces {
    create {
      predicate(data=&amp;gt;{
        Query.identity()!.projectContributor.includes(data.project)
      })
    }
    read {
      predicate (data=&amp;gt;{
        Query.identity()!.projectContributor.includes(data.project)
      })
    }
    write {
      predicate ((a,b)=&amp;gt;{
        Query.identity()!.projectContributor.includes(a.project)
        &amp;amp;&amp;amp; Query.identity()!.projectContributor.includes(b.project)
      })
    }
    delete { 
      predicate (data=&amp;gt;{
        Query.identity()!.projectContributor.includes(data.project) 
      })
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cool thing is that if someone is added to or removed from a project, or if it is outside their work hours (the membership predicate), their permissions shift immediately and without manual intervention! The data is secured at the source: the database.&lt;/p&gt;

&lt;p&gt;In summary, &lt;a href="https://docs.fauna.com/fauna/current/learn/security_model/abac"&gt;ABAC&lt;/a&gt; in &lt;a href="https://docs.fauna.com/fauna/current/learn/security_model/"&gt;Fauna offers a more flexible&lt;/a&gt; and comprehensive approach to access control compared to RBAC, capable of handling complex, dynamic environments by leveraging a wide range of attributes, whereas RBAC offers a rudimentary approach that might be sufficient for environments with fixed access control requirements based on well-defined roles.&lt;/p&gt;

</description>
      <category>security</category>
      <category>database</category>
      <category>nosql</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Multi-region, strongly consistent databases matter</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Wed, 13 Mar 2024 17:29:48 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/multi-region-strongly-consistent-databases-matter-43pc</link>
      <guid>https://dev.to/nosqlknowhow/multi-region-strongly-consistent-databases-matter-43pc</guid>
      <description>&lt;p&gt;Having a multi-region database that offers strong consistency is a big deal in the realm of distributed systems and modern app development. It is especially critical for global applications for several reasons. Few have it, e.g. &lt;a href="https://fauna.com?utm="&gt;Fauna&lt;/a&gt;, and many want it. It's not easy to get right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scalability
&lt;/h2&gt;

&lt;p&gt;A multi-region, strongly consistent database architecture allows organizations to scale their applications globally without worrying about the complexities of data consistency across regions. This scalability is essential for businesses aiming to expand their reach or manage large volumes of traffic across different parts of the world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Global Accessibility with Low Latency
&lt;/h2&gt;

&lt;p&gt;In a multi-region setup, data is replicated across various geographical locations, ensuring users can access the data from a nearby region. This reduces latency, as requests are served from the closest data center, cloud region, or even cloud provider, improving the speed and responsiveness of applications for users worldwide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strong Consistency Guarantees
&lt;/h2&gt;

&lt;p&gt;Strong consistency means that any read operation retrieves the most recent write for a given piece of data, regardless of the geographical location of the data request. This is crucial for applications that require real-time data accuracy, such as financial services, e-commerce transactions, and collaborative tools. With strong consistency, developers can ensure that users see the latest data &lt;em&gt;without compromising data integrity or experiencing stale reads&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  High Availability and Disaster Recovery
&lt;/h2&gt;

&lt;p&gt;A multi-region database architecture enhances the availability of applications by providing redundancy. If one region experiences downtime due to technical failures or other disruptions, front-end traffic can be rerouted to another active region, ensuring the application remains available to users. This redundancy is essential for maintaining high availability and implementing effective disaster recovery strategies. There's no need for an old-school "failback" procedure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplified Application Logic
&lt;/h2&gt;

&lt;p&gt;When using a database that ensures strong consistency across regions, developers don't have to implement complex workarounds or logic in their applications to handle potential consistency issues. This simplification reduces the development and maintenance overhead, allowing teams to focus on building features and improving the application.&lt;/p&gt;

&lt;p&gt;Fauna's database offers exactly this solution. That combination of multi-region replication and strong consistency provides you with a tool to build global, scalable, and reliable applications.&lt;/p&gt;

</description>
      <category>database</category>
      <category>nosql</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Simple deletes in Fauna Query Language - (FQL)</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Fri, 08 Mar 2024 18:13:27 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/deleting-data-with-fauna-query-language-fql-v10-2bi2</link>
      <guid>https://dev.to/nosqlknowhow/deleting-data-with-fauna-query-language-fql-v10-2bi2</guid>
      <description>&lt;p&gt;Now that you have &lt;a href="https://dev.to/nosqlknowhow/simple-write-operations-in-fauna-query-language-fql-v10-3lee"&gt;written some data&lt;/a&gt; and &lt;a href="https://dev.to/nosqlknowhow/simple-update-operations-in-fauna-query-language-fql-v10-2bh6"&gt;updated some data&lt;/a&gt; in Fauna, in this how to let's delete some data!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/nosqlknowhow/simple-update-operations-in-fauna-query-language-fql-v10-2bh6"&gt;In my last post&lt;/a&gt;, you saw getting a result set from the byStatus index and using the &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/set/foreach?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;&lt;code&gt;.forEach()&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/set/map?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;&lt;code&gt;.map()&lt;/code&gt;&lt;/a&gt; functions to iterate through the results, deletes are easy. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deleting data from a Collection in Fauna
&lt;/h2&gt;

&lt;p&gt;To delete, we just call &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/document/delete?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;the &lt;code&gt;.delete()&lt;/code&gt; function&lt;/a&gt; inside &lt;code&gt;.forEach()&lt;/code&gt;, and as it iterates through the documents, it deletes each one of them. Here are two styles of using forEach to do this; you can choose whichever you prefer for deletes or other operations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Order.byStatus("shipped", "20220").forEach(.delete())
Order.byStatus("shipped", "20220").forEach(x =&amp;gt; x.delete())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both forms do exactly the same thing; choosing between them is purely a matter of style.&lt;/p&gt;

&lt;p&gt;One side note: I am using the &lt;code&gt;.forEach()&lt;/code&gt; function because I am deleting documents and do not need anything returned. Since the &lt;code&gt;.delete()&lt;/code&gt; function returns nothing and &lt;code&gt;.forEach()&lt;/code&gt; returns no data, this is more efficient than using &lt;code&gt;.map()&lt;/code&gt;.&lt;/p&gt;
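&lt;p&gt;For completeness, deleting a single document works the same way. This sketch assumes a made-up document ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Delete one document by its ID; the ! asserts the document exists
Order.byId("12345")!.delete()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;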




&lt;p&gt;Fauna is a serverless, globally distributed, strongly consistent database designed for modern application development. It offers the flexibility of NoSQL with the safety and ease of traditional relational databases. It provides seamless scalability, strong consistency, and multi-model access capabilities, making it an ideal choice for developers seeking to build fast, reliable applications without the operational overhead of traditional database management. Fauna's support for ACID transactions and its temporal database features further enhance its utility, allowing for efficient data retrieval, updates, and historical data analysis.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>nosql</category>
      <category>database</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to do simple updates in Fauna Query Language (FQL)</title>
      <dc:creator>Kirk Kirkconnell</dc:creator>
      <pubDate>Fri, 08 Mar 2024 17:59:30 +0000</pubDate>
      <link>https://dev.to/nosqlknowhow/simple-update-operations-in-fauna-query-language-fql-v10-2bh6</link>
      <guid>https://dev.to/nosqlknowhow/simple-update-operations-in-fauna-query-language-fql-v10-2bh6</guid>
      <description>&lt;p&gt;Now that we have &lt;a href="https://dev.to/nosqlknowhow/simple-write-operations-in-fauna-query-language-fql-v10-3lee"&gt;written some data to the database&lt;/a&gt;, let’s look at updating existing data.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/nosqlknowhow/simple-write-operations-in-fauna-query-language-fql-v10-3lee"&gt;the previous example&lt;/a&gt;, we created a generic array to use, but in this example, we will use the demo data you can load when creating a new database in Fauna’s dashboard. Let’s update some of the order documents in the Order collection. More specifically, we want to update the status field to “shipped” for every order in our system that currently has a status of “processing” and a delivery address with the zip code “20220.” In this example, I am querying an index I created on the Order collection to get a set of documents.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Order.byStatus("processing", "20220").toArray().map(doc =&amp;gt; doc.update({status: "shipped"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The byStatus &lt;a href="https://docs.fauna.com/fauna/current/learn/data_model/indexes"&gt;index&lt;/a&gt; is defined with two index terms: the status field and the zipCode field in the deliveryAddress object nested in the Order documents. I use &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/set/toarray?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;FQL’s &lt;code&gt;.toArray()&lt;/code&gt; function&lt;/a&gt; to &lt;em&gt;materialize&lt;/em&gt; the Set I get from reading the index into an array of documents. If I call &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/set/map?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;the &lt;code&gt;.map()&lt;/code&gt; function&lt;/a&gt; without &lt;code&gt;.toArray()&lt;/code&gt;, the call will fail: in rare cases, that pattern could create undesirable data consistency issues, so FQL prohibits it. Anyhow, I want to update the documents all at the same time, and I know there are not that many of them. Therefore, &lt;code&gt;toArray()&lt;/code&gt; &lt;em&gt;materializes&lt;/em&gt; the result set into one big array, and map iterates through that array to run the update function on each document, changing the status to “shipped.” &lt;/p&gt;

&lt;p&gt;One more nuance worth noting: I am using &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/set/map?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;the &lt;code&gt;.map()&lt;/code&gt; function&lt;/a&gt; instead of &lt;a href="https://docs.fauna.com/fauna/current/reference/reference/schema_entities/set/foreach?utm_source=dev.to&amp;amp;utm_medium=blog&amp;amp;utm_campaign=awareness"&gt;the &lt;code&gt;.forEach()&lt;/code&gt; function&lt;/a&gt; you saw earlier for a specific reason. &lt;code&gt;.forEach()&lt;/code&gt; can update the documents too, but it does not return the updated documents written to the database, whereas &lt;code&gt;.map()&lt;/code&gt; does. This is an important distinction between the two functions, so use the one that fits what your app needs. To be clear, these two functions are for more than just write operations in FQL. They control flow and iterate through the result of any operation on a Set, letting you run other FQL functions or simply return the results. For example, here is a read operation against that same index, but this time using &lt;code&gt;.map()&lt;/code&gt; to project just the customer field of the documents returned from the index.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Order.byStatus("shipped", "20220").map(doc =&amp;gt; "#{doc.customer}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason I am pointing this out is to reinforce that functions in FQL can have many uses, and you’ll use them often within Fauna.&lt;/p&gt;
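&lt;p&gt;To make the &lt;code&gt;.map()&lt;/code&gt; versus &lt;code&gt;.forEach()&lt;/code&gt; contrast concrete, here is a sketch of the same bulk update written with &lt;code&gt;.forEach()&lt;/code&gt; instead, assuming the same demo data and byStatus index as above. The documents are still updated, but the query itself returns null rather than the updated documents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Performs the same writes as the .map() example,
// but does not return the updated documents.
Order.byStatus("processing", "20220").forEach(doc =&amp;gt; doc.update({status: "shipped"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;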




&lt;p&gt;Fauna is a serverless, globally distributed, strongly consistent database designed for modern application development. It offers the flexibility of NoSQL with the safety and ease of traditional relational databases, providing seamless scalability and multi-model access capabilities, making it an ideal choice for developers seeking to build fast, reliable applications without the operational overhead of traditional database management. Fauna’s support for ACID transactions and its temporal database features further enhance its utility, enabling efficient data retrieval, updates, and historical data analysis.&lt;/p&gt;

</description>
      <category>nosql</category>
      <category>javascript</category>
      <category>database</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
