<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ahmed khaled</title>
    <description>The latest articles on DEV Community by ahmed khaled (@ahmed2929).</description>
    <link>https://dev.to/ahmed2929</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F953804%2Fa7189415-f377-4673-a805-6bb244d9cac5.jpeg</url>
      <title>DEV Community: ahmed khaled</title>
      <link>https://dev.to/ahmed2929</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ahmed2929"/>
    <language>en</language>
    <item>
      <title>ACID principles in database world</title>
      <dc:creator>ahmed khaled</dc:creator>
      <pubDate>Mon, 21 Nov 2022 11:34:51 +0000</pubDate>
      <link>https://dev.to/ahmed2929/acid-principles-in-database-world-3b7</link>
      <guid>https://dev.to/ahmed2929/acid-principles-in-database-world-3b7</guid>
      <description>&lt;p&gt;&lt;em&gt;ACID describes a set of desirable properties for database transactions:atomicity,consistency, isolation,and durability. The exact definitions of these terms can vary. As a general rule, the more strictly a system guarantees ACID properties, the greater the performance compromise.This ACID categorization is a common way for developers to quickly communicate the trade­offs of a particular solution,such as those found in NoSQL systems.&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Atomicity&lt;/strong&gt;: transactions either succeed or fail in entirety
&lt;em&gt;An atomic transaction can’t be partially executed: either the entire operation completes, or the database is left unchanged. For example, if a transaction is to delete
all comments by a particular user, either all comments will be deleted, or none of them will be deleted. There is no way to end up with some comments deleted and some not. Atomicity should apply even in the case of system error or power failure. Atomic is used here with its original meaning of indivisible.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: constraints are always enforced
&lt;em&gt;The completion of a successful transaction must maintain all data-integrity constraints defined in the system. Some example constraints are that primary keys must be unique,
data conforms to a particular schema, or foreign keys must reference entities that exist. Transactions that would lead to an inconsistent state typically result in transaction failures, though minor issues may be resolved automatically; for example, coercing data into the correct shape. This isn’t to be confused with the C of consistency in the CAP theorem, which refers to guaranteeing a single view of the data being presented to all readers of a distributed store.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt;: concurrent transactions don’t interfere
&lt;em&gt;Isolated transactions should produce the same result, whether the same transactions are executed concurrently or sequentially. The level of isolation a system provides
directly affects its ability to perform concurrent operations. A naïve isolation scheme is the use of a global lock, whereby the entire database is locked for the duration of a transaction, thus effectively processing all transactions in series. This gives a strong isolation guarantee, but it’s also pathologically inefficient: transactions operating on entirely disjoint datasets are needlessly blocked (for example, a user adding a comment ideally doesn’t block another user updating their profile). In practice, systems provide various levels of isolation using more fine-grained and selective locking schemes (for example, by table, row, or field). More sophisticated systems may even optimistically attempt all transactions concurrently with minimal locking, only to retry transactions using increasingly coarse-grained locks in cases where conflicts are
detected.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability&lt;/strong&gt;: transactions are permanent
&lt;em&gt;The durability of a transaction is the degree to which its effects are guaranteed to persist, even after restarts, power failures, system errors, or even hardware failures.
For example, an application using SQLite’s in-memory mode has no transaction durability; all data is lost when the process exits. On the other hand, SQLite persisting to disk will have good transaction durability, because data persists even after the machine is restarted. This may seem like a no-brainer: just write the data to disk and, voilà, you have durable transactions. But disk I/O is one of the slowest operations your application can perform and can quickly become a significant bottleneck, even at moderate levels of scale. Some databases offer different durability trade-offs that can be employed to maintain acceptable system performance.&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
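
&lt;p&gt;Atomicity and durability are easy to observe in practice. A minimal sketch using Python’s built-in sqlite3 module (chosen here only for illustration; the article does not prescribe a specific database, and the in-memory mode deliberately demonstrates the durability caveat discussed above):&lt;/p&gt;

```python
import sqlite3

# In-memory database: convenient for a demo, but (as noted above)
# it has no durability -- all data is lost when the process exits.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, author TEXT)")
conn.executemany("INSERT INTO comments (author) VALUES (?)",
                 [("alice",), ("bob",), ("alice",)])
conn.commit()

# Atomicity: delete all of one user's comments in a single transaction.
# If anything fails mid-transaction, rollback leaves the database unchanged.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("DELETE FROM comments WHERE author = ?", ("alice",))
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The failed transaction was rolled back: alice's comments survive.
remaining = conn.execute("SELECT COUNT(*) FROM comments").fetchone()[0]
print(remaining)  # 3
```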

&lt;p&gt;Visit my site: &lt;a href="//www.ahmed-tech.me"&gt;ahmed khaled&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Best Practices in NoSQL Database Design</title>
      <dc:creator>ahmed khaled</dc:creator>
      <pubDate>Mon, 24 Oct 2022 11:46:38 +0000</pubDate>
      <link>https://dev.to/ahmed2929/designing-for-document-databases-pio</link>
      <guid>https://dev.to/ahmed2929/designing-for-document-databases-pio</guid>
      <description>&lt;p&gt;NoSQL database designers employ a distinct approach to database design compared to traditional relational database designers. When opting for a document database, designers and application developers prioritize scalability and flexibility. While ensuring data consistency remains important, they willingly accept additional responsibilities to prevent data anomalies in exchange for these benefits. For instance, if there are redundant copies of customer addresses in the database, an application developer might implement a customer address update function that updates all instances of an address. Consequently, developers are inclined to write more code in order to avoid anomalies in a document database, reducing the need for extensive database tuning and query optimization in the future.&lt;/p&gt;

&lt;p&gt;To enhance performance in document data modeling and application development, minimizing the reliance on joins becomes paramount. This optimization technique is commonly referred to as denormalization. The underlying concept involves storing data that is frequently accessed together within a single data structure, such as a table in a relational database or a document in a document database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Denormalization
&lt;/h2&gt;

&lt;p&gt;To illustrate the benefits of denormalization, let's consider a simple example involving order items and products. In the original design, the Order_Items entity has attributes such as order_item_ID, order_id, quantity, cost_per_unit, and product_id. The Products entity, on the other hand, includes attributes like product_ID, product_description, product_name, product_category, and list_price.&lt;/p&gt;

&lt;p&gt;Here is an example of an order item document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "order_item_ID": 834838,
  "order_ID": 8827,
  "quantity": 3,
  "cost_per_unit": 8.50,
  "product_ID": 3648
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And here is an example of a product document:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "product_ID": 3648,
  "product_description": "1 package laser printer paper. 100% recycled.",
  "product_name": "Eco-friendly Printer Paper",
  "product_category": "office supplies",
  "list_price": 9.00
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you implemented two separate collections and maintained these distinct documents, you would need to query the order items collection to retrieve the desired order item, and then perform another query on the products collection to obtain information about the product with product_ID 3648. This approach would involve two lookup operations to gather the necessary details for a single order item.&lt;/p&gt;

&lt;p&gt;By denormalizing the design, you can create a collection of documents that require only one lookup operation. A denormalized version of the order item collection could be structured as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "order_item_ID": 834838,
  "order_ID": 8827,
  "quantity": 3,
  "cost_per_unit": 8.50,
  "product": {
    "product_description": "1 package laser printer paper. 100% recycled.",
    "product_name": "Eco-friendly Printer Paper",
    "product_category": "office supplies",
    "list_price": 9.00
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;By incorporating the product details directly within the order item document, you eliminate the need for an additional lookup. This denormalized approach streamlines the retrieval process, resulting in improved efficiency and reduced query complexity.&lt;/p&gt;
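
&lt;p&gt;The difference in lookup counts can be sketched with plain Python dictionaries standing in for collections (the structures mirror the documents above; the dictionary-based access is illustrative, not a particular database’s API):&lt;/p&gt;

```python
# Normalized: two collections keyed by ID -- fetching one order item's
# full details requires two lookups.
order_items = {834838: {"order_ID": 8827, "quantity": 3,
                        "cost_per_unit": 8.50, "product_ID": 3648}}
products = {3648: {"product_name": "Eco-friendly Printer Paper",
                   "list_price": 9.00}}

item = order_items[834838]               # lookup 1
product = products[item["product_ID"]]   # lookup 2

# Denormalized: product details embedded in the order item document --
# a single lookup returns everything.
order_items_denorm = {
    834838: {"order_ID": 8827, "quantity": 3, "cost_per_unit": 8.50,
             "product": {"product_name": "Eco-friendly Printer Paper",
                         "list_price": 9.00}}
}
item = order_items_denorm[834838]        # single lookup
print(item["product"]["product_name"])   # Eco-friendly Printer Paper
```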
&lt;h2&gt;
  
  
  Avoid Overusing Denormalization
&lt;/h2&gt;

&lt;p&gt;Indeed, while denormalization can offer performance benefits, it should be used judiciously to avoid excessive redundancy and the inclusion of extraneous information in denormalized collections. The primary objective is to store data that is frequently accessed together within a document, enabling the database to minimize the frequency of reads from persistent storage, which can be relatively slow even with SSDs.&lt;/p&gt;

&lt;p&gt;However, it is essential to strike a balance and avoid including unnecessary or irrelevant data in denormalized collections. Including extraneous information can lead to increased storage requirements, decreased query performance, and potential inconsistencies if the denormalized data is not properly maintained. Therefore, careful consideration should be given to determine which data elements are truly essential for efficient retrieval and meet the specific needs of the application.&lt;/p&gt;

&lt;p&gt;By keeping denormalized collections focused on the relevant data that is frequently accessed together, developers can maximize the benefits of denormalization while avoiding the pitfalls of excessive redundancy and unnecessary data inclusion. This approach ensures optimized performance, reduced storage overhead, and consistent data integrity within the document database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oVJZ9LPq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehkb9deavb6w8bhhrbr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oVJZ9LPq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehkb9deavb6w8bhhrbr5.png" alt="Image description" width="568" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How much denormalization is too much?&lt;/strong&gt;&lt;br&gt;
When designing a document database, considering the specific queries the application will issue is crucial. In this scenario, we have identified two types of queries: generating invoices and packing slips for customers (constituting 95% of queries) and generating management reports (constituting 5% of queries).&lt;/p&gt;

&lt;p&gt;For invoices and packing slips, certain fields are necessary, such as order_ID, quantity, cost_per_unit, and product_name. However, product description, list price, and product category are not needed for these queries. Therefore, it would be more efficient to exclude these fields from the Order_Items collection. The revised version of the Order_Items document would appear as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "order_item_ID": 834838,
  "order_ID": 8827,
  "quantity": 3,
  "cost_per_unit": 8.50,
  "product_name": "Eco-friendly Printer Paper"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To retain the relevant product details, a separate Products collection can be maintained. Here is an example of a document in the Products collection:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "product_description": "1 package laser printer paper. 100% recycled.",
  "product_name": "Eco-friendly Printer Paper",
  "product_category": "office supplies",
  "list_price": 9.00
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Although the product_name field appears redundantly in both the Order_Items collection and the Products collection, this design choice enables application developers to retrieve the required information for the majority of their queries with a single lookup operation. While this approach may slightly increase storage usage, it optimizes query performance and enhances the efficiency of retrieving data for invoicing and packing slip generation.&lt;/p&gt;
&lt;h2&gt;
  
  
  Say No to Joins
&lt;/h2&gt;

&lt;p&gt;While best practices, guidelines, and design patterns provide valuable guidance for building scalable and maintainable NoSQL applications, it is important not to adhere to them dogmatically. It is essential to consider the specific requirements and characteristics of your application. If breaking established best practices can offer improved performance, increased functionality, or better maintainability, it may be worth considering alternative design choices.&lt;/p&gt;

&lt;p&gt;If storing related information in multiple collections is deemed optimal for your application, you can implement joins in your application code. However, it is crucial to be aware of potential performance implications, especially when dealing with large collections. Joining two large collections using nested loops, as shown in the example code snippet, can lead to significant execution times. For instance, if the first collection contains 100,000 documents and the second collection contains 500,000 documents, the loop would execute 50,000,000,000 times.&lt;/p&gt;

&lt;p&gt;To optimize joins and reduce the overall number of operations performed, various techniques can be employed. These include utilizing indexes, filtering, and sorting. By leveraging indexes, you can speed up data retrieval by efficiently narrowing down the relevant documents. Filtering can help further refine the data set, and sorting can improve the join process in specific scenarios.&lt;/p&gt;
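
&lt;p&gt;One way to avoid the quadratic nested-loop cost is to build an in-memory index (a hash map) over one collection first, reducing the join to a single pass over each collection. A sketch in Python, with illustrative collection contents:&lt;/p&gt;

```python
def nested_loop_join(orders, products):
    # O(len(orders) * len(products)) comparisons -- the pathological
    # case described above (100,000 x 500,000 = 50 billion iterations).
    return [(o, p) for o in orders for p in products
            if o["product_ID"] == p["product_ID"]]

def hash_join(orders, products):
    # Index products by the join key once, then probe it per order:
    # O(len(orders) + len(products)) operations instead.
    index = {p["product_ID"]: p for p in products}
    return [(o, index[o["product_ID"]]) for o in orders
            if o["product_ID"] in index]

orders = [{"order_ID": 1, "product_ID": 3648},
          {"order_ID": 2, "product_ID": 9999}]
products = [{"product_ID": 3648, "product_name": "Eco-friendly Printer Paper"}]

# Both strategies produce the same result; only the cost differs.
assert nested_loop_join(orders, products) == hash_join(orders, products)
print(len(hash_join(orders, products)))  # 1
```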

&lt;p&gt;In summary, while breaking established best practices should be done with caution, it is essential to prioritize the specific requirements and characteristics of your application. If alternative approaches can provide superior performance, functionality, or maintainability, it may be appropriate to deviate from traditional design patterns and leverage optimization techniques such as indexes, filtering, and sorting to improve the efficiency of joins and reduce the number of overall operations performed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q1OB39gD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5e927dmwhvd227s1lev2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q1OB39gD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5e927dmwhvd227s1lev2.png" alt="Image description" width="638" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Indeed, normalization is a valuable technique for reducing the risk of data anomalies, while denormalization serves a different purpose, primarily focused on improving query performance. In the context of document databases, denormalization is frequently utilized by data modelers and developers, similar to how relational data modelers employ normalization in their designs.&lt;/p&gt;

&lt;p&gt;Another crucial consideration when designing documents and collections is the potential for document size to change over time. Documents that are prone to size changes are referred to as mutable documents. This aspect is worth noting because changes in document size can impact storage utilization, query performance, and overall system efficiency.&lt;/p&gt;

&lt;p&gt;When mutable documents undergo frequent updates that modify their size, several factors come into play. These include the need to allocate additional storage space for the updated document, the potential fragmentation of data within the collection, and the impact on disk I/O and memory consumption during read and write operations.&lt;/p&gt;

&lt;p&gt;To address the challenges associated with mutable documents, it is important to consider strategies such as document versioning, efficient update operations, and potential data reorganization techniques. Document versioning allows for tracking and managing changes to documents, enabling historical analysis and ensuring data integrity. Efficient update operations involve optimizing the way document updates are performed to minimize the impact on storage and performance. Data reorganization techniques, such as compaction or defragmentation, can be employed periodically to reclaim wasted space and improve overall storage efficiency.&lt;/p&gt;

&lt;p&gt;By considering the potential for document size changes and implementing appropriate strategies, developers can mitigate the challenges associated with mutable documents in document database designs, ensuring optimal performance and efficient resource utilization.&lt;/p&gt;
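
&lt;p&gt;A minimal illustration of the document-versioning idea: keep a version counter on the document and retain prior states before each update (the &lt;code&gt;_version&lt;/code&gt; field name and helper are illustrative, not a particular database’s convention):&lt;/p&gt;

```python
import copy

def update_document(doc, changes, history):
    # Record the current state before applying the update, then bump
    # the version counter -- enabling historical analysis and rollback.
    history.append(copy.deepcopy(doc))
    doc.update(changes)
    doc["_version"] += 1
    return doc

doc = {"_version": 1, "truck_id": "T87V12", "status": "idle"}
history = []
update_document(doc, {"status": "en route"}, history)

print(doc["_version"])       # 2
print(history[0]["status"])  # idle
```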
&lt;h2&gt;
  
  
  Mutable Documents
&lt;/h2&gt;

&lt;p&gt;Things change. Things have been changing since the Big Bang. Things will most likely continue to change. It helps to keep these facts in mind when designing databases.&lt;br&gt;
Some documents will change frequently, and others will change infrequently. A document that keeps a counter of the number of times a web page is viewed could change hundreds of times per minute. A table that stores server event log data may only change when there is an error in the load process that copies event data from a server to the document database. When designing a document database, consider not just how frequently a document will change, but also how the size of the document may change. Incrementing a counter or correcting an error in a field will not significantly change the size of a document. However, consider the following scenarios:&lt;br&gt;
• Trucks in a company fleet transmit location, fuel consumption, and other operating metrics every three minutes to a fleet management database.&lt;br&gt;
• The price of every stock traded on every exchange in the world is checked every minute. If there is a change since the last check, the new price information is written to the database.&lt;br&gt;
• A stream of social networking posts is streamed to an application, which summarizes the number of posts; the overall sentiment of the posts; and the names of any companies, celebrities, public officials, or organizations. The database is continuously updated with this information.&lt;br&gt;
Over time, the number of data sets written to the database increases. How should an application designer structure the documents to handle such input streams? One option is to create a new document for each new set of data. In the case of the trucks transmitting operational data, this would include a truck ID, time, location data, and so on:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
truck_id: 'T87V12',
time: '08:10:00',
date : '27-May-2015',
driver_name: 'Jane Washington',
fuel_consumption_rate: '14.8 mpg',
…
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Each truck would transmit 20 data sets per hour, or assuming a 10-hour operations day, 200 data sets per day. The truck_id, date, and driver_name would be the same for all 200 documents. This looks like an obvious candidate for embedding a document with the operational data in a document about the truck used on a particular day. This could be done with an array holding the operational data documents:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"truck_id": "T87V12",
"date": "27-May-2015",
"driver_name": "Jane Washington",
"operational_data": [
{
"time": "00:01",
"fuel_consumption_rate": "14.8 mpg"
},
{
"time": "00:04",
"fuel_consumption_rate": "12.2 mpg"
},
{
"time": "00:07",
"fuel_consumption_rate": "15.1 mpg"
},
...
]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The document would start with a single operational record in the array, and at the end of the 10-hour shift, it would have 200 entries in the array. From a logical modeling perspective, this is a perfectly fine way to structure the document, assuming this approach fits your query requirements. From a physical model perspective, however, there is a potential performance problem. When a document is created, the database management system allocates a certain amount of space for the document. This is usually enough to fit the document as it exists plus some room for growth. If the document grows larger than the size allocated for it, the document
may be moved to another location. This will require the database management system to read the existing document, copy it to another location, and free the previously used storage space.&lt;/p&gt;
&lt;h2&gt;
  
  
  Avoid Moving Oversized Documents
&lt;/h2&gt;

&lt;p&gt;One way to avoid this problem of moving oversized documents is to allocate sufficient space for the document at the time the document is created. In the case of the truck operations document, you could create the document with an array of 200 embedded documents, with the time and other fields specified with default values. When the actual data is transmitted to the database, the corresponding array entry is updated with the actual values. Consider the life cycle of a document and, when possible, plan for anticipated growth. Creating a document with sufficient space for the full life of the document can help to avoid I/O overhead.&lt;/p&gt;
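
&lt;p&gt;The preallocation strategy can be sketched in Python: a day’s document is created with 200 placeholder entries, and each slot is updated in place as readings arrive (field names follow the truck example above; the fixed slot count assumes one reading every three minutes over a 10-hour shift):&lt;/p&gt;

```python
READINGS_PER_DAY = 200  # one reading every 3 minutes over a 10-hour shift

def new_day_document(truck_id, date, driver_name):
    # Allocate all 200 slots up front with default values, so later
    # updates never grow the document and force a costly relocation.
    return {
        "truck_id": truck_id,
        "date": date,
        "driver_name": driver_name,
        "operational_data": [
            {"time": None, "fuel_consumption_rate": None}
            for _ in range(READINGS_PER_DAY)
        ],
    }

doc = new_day_document("T87V12", "27-May-2015", "Jane Washington")

# When a reading arrives, fill in the corresponding slot in place;
# the document's size stays essentially constant.
doc["operational_data"][0] = {"time": "08:10:00",
                              "fuel_consumption_rate": "14.8 mpg"}

print(len(doc["operational_data"]))  # 200
```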


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.ahmed-tech.me/" rel="noopener noreferrer"&gt;
      ahmed-tech.me
    &lt;/a&gt;
&lt;/div&gt;



</description>
      <category>nosql</category>
      <category>mongodb</category>
      <category>database</category>
      <category>desingdatabase</category>
    </item>
  </channel>
</rss>
