<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yujin</title>
    <description>The latest articles on DEV Community by Yujin (@yujin).</description>
    <link>https://dev.to/yujin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2508745%2F4ca5fdf0-faff-404f-8bc6-8c1fbcf8124f.jpg</url>
      <title>DEV Community: Yujin</title>
      <link>https://dev.to/yujin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yujin"/>
    <language>en</language>
    <item>
      <title>The Rise of Currency "A" and Tech-Enabled Abundance</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Fri, 07 Mar 2025 06:33:23 +0000</pubDate>
      <link>https://dev.to/yujin/the-rise-of-currency-a-and-tech-enabled-abundance-1jdi</link>
      <guid>https://dev.to/yujin/the-rise-of-currency-a-and-tech-enabled-abundance-1jdi</guid>
      <description>&lt;p&gt;&lt;em&gt;The revolutionary economy powered by currency 'A', where money is abundant, prices are remarkably low, and technology transforms how people work, live, and spend. Explore how this shift fosters creativity, sustainability, and a society driven by passion rather than profit.&lt;/em&gt;&lt;/p&gt;






&lt;p&gt;As technology shapes every aspect of daily life, an economic revolution is quietly emerging. Consider a nation where the currency, known simply as “A,” is distributed as abundantly as the digital signals streaming through every device, yet every product, service, and commodity is priced so remarkably low that spending becomes almost an afterthought. In this emerging economy, earning “A” is effortless and spending it nearly trivial, prompting citizens to focus on passions and pursuits rather than the constant chase for wealth.&lt;/p&gt;

&lt;h2&gt;A New Paradigm of Monetary Abundance&lt;/h2&gt;

&lt;p&gt;Across sprawling urban centers and tranquil rural communities alike, a subtle yet profound shift is taking place. Government initiatives, powered by cutting-edge algorithms and real-time data analytics, ensure that every citizen receives a steady infusion of currency “A.” Much as nature disperses seeds in the wind, these initiatives have democratized access to money, dissolving the barriers that once confined wealth to a privileged few.&lt;/p&gt;

&lt;p&gt;Yet while acquiring “A” has become as simple as logging onto a secure digital portal, the everyday cost of living has plummeted. Thanks to unprecedented advancements in automation, supply chain management, and production technologies, goods and services are now available at prices that seem almost symbolic. A loaf of bread, a liter of milk, or even a state-of-the-art gadget can be purchased for mere fractions of “A.”&lt;/p&gt;

&lt;h2&gt;The Technology Behind the Transformation&lt;/h2&gt;

&lt;p&gt;At the heart of this economic metamorphosis lies a sophisticated tapestry of technological innovation. Automated factories, run by artificial intelligence and robotic systems, churn out consumer goods at breakneck speeds with near-zero production costs. Digital platforms powered by blockchain technology record every transaction with unparalleled transparency, ensuring that trust and accountability underpin the entire system.&lt;/p&gt;

&lt;p&gt;These technologies do more than reduce production costs; they also streamline distribution. Smart logistics networks, guided by real-time data, ensure that even the remotest areas receive their share of affordable commodities. The result is an economy where an overabundance of supply drives prices down to almost negligible levels, effectively decoupling the traditional link between income and spending power.&lt;/p&gt;

&lt;p&gt;In this system, the very nature of money is reimagined. Currency “A” is no longer the prized object of accumulation; rather, it has become a tool: a digital resource that facilitates the smooth functioning of an economy designed to liberate human potential. With each unit of “A” barely registering at the checkout, the focus shifts away from the relentless pursuit of wealth and toward the enrichment of personal and communal experiences.&lt;/p&gt;

&lt;h2&gt;A Day in the Life: When Spending Is a Choice&lt;/h2&gt;

&lt;p&gt;Step into the bustling markets of this society, and one is immediately struck by an atmosphere of calm abundance. In a sunlit plaza, vendors offer goods and fresh produce at prices so low they seem almost ceremonial. Shoppers, their digital wallets brimming with “A,” wander leisurely among stalls, savoring the experience of choice rather than the urgency of necessity.&lt;/p&gt;

&lt;p&gt;Consider the story of Miguel, a local artisan whose passion for pottery was once overshadowed by the need to work multiple jobs just to make ends meet. In the old economic order, every purchase was calculated and measured against the backdrop of scarcity. Today, with “A” flowing in abundance and everyday items costing only a sliver of its value, Miguel dedicates his time to perfecting his craft. His creations are not made for profit; they are expressions of art, culture, and personal fulfillment. In this economy, spending is no longer a frantic scramble; it is a deliberate act of investing in one’s passions.&lt;/p&gt;

&lt;p&gt;Similarly, families gather around community centers where technology-enabled workshops and creative hubs offer opportunities for learning and exploration. Children are encouraged to experiment with robotics, art, and science, activities that once required expensive equipment or specialized training. With the economic pressures of traditional systems lifted, education and creativity thrive, transforming neighborhoods into vibrant ecosystems of innovation and self-expression.&lt;/p&gt;

&lt;h2&gt;Cultural Shifts: When Passion Trumps Profit&lt;/h2&gt;

&lt;p&gt;The transformation of the economy has brought about a profound cultural shift. For generations, society was driven by the pursuit of wealth, a chase that often left little room for personal interests and community engagement. In the era of currency “A,” however, the motivational landscape has been redrawn.&lt;/p&gt;

&lt;p&gt;In public forums, lively discussions echo with a newfound appreciation for the arts, sciences, and humanities. Citizens are no longer defined by the balance in their bank accounts; instead, they are recognized for their contributions to culture, creativity, and communal well-being. As the lure of accumulating “A” fades into the background, the true currency becomes the fulfillment derived from pursuing one’s passions.&lt;/p&gt;

&lt;p&gt;Local festivals and art exhibitions flourish, celebrating both traditional heritage and futuristic visions. Community leaders emphasize the value of collaboration over competition, and the shared wealth of experiences takes center stage. In this society, the act of spending “A” is imbued with meaning; it is a way of nurturing dreams, fostering relationships, and investing in the collective good.&lt;/p&gt;

&lt;h2&gt;Balancing Innovation and Economic Stability&lt;/h2&gt;

&lt;p&gt;Yet, as with any revolutionary system, the balance between abundance and practicality remains a delicate one. Economists have raised concerns about the potential pitfalls of a deflationary environment, where the increased purchasing power of “A” might lead to a liquidity trap: a scenario in which citizens hoard their currency rather than spending it. Without a regular flow of expenditure, economic activity could slow, undermining the system’s overall dynamism.&lt;/p&gt;

&lt;p&gt;To address these challenges, policymakers have embraced technology as an indispensable tool. Real-time economic monitoring, powered by big data and machine learning, enables government institutions to adjust the supply of “A” with surgical precision. These adaptive measures ensure that while the currency remains plentiful, its circulation is actively encouraged through targeted incentives and community initiatives.&lt;/p&gt;

&lt;p&gt;Innovative solutions are also emerging from the private sector. Fintech companies are developing creative platforms that gamify spending, transforming transactions into interactive experiences that reward participation and community engagement. In these digital marketplaces, every purchase becomes an opportunity not just for consumption, but for connecting with others and sharing in a collective narrative of progress.&lt;/p&gt;

&lt;h2&gt;The Environmental and Social Impact&lt;/h2&gt;

&lt;p&gt;The economic revolution driven by currency “A” is not merely an exercise in fiscal engineering; it also carries significant environmental and social implications. By decoupling wealth accumulation from consumption, the system alleviates the pressures that often lead to overproduction and waste. With production streamlined by automated, energy-efficient technologies, resources are used more judiciously, contributing to a more sustainable model of growth. Communities are increasingly embracing local production and renewable energy sources, fostering a sense of resilience and self-reliance.&lt;/p&gt;

&lt;p&gt;In urban gardens and community-run cooperatives, citizens collaborate to grow fresh produce, reducing their carbon footprint and strengthening local bonds. The abundance of “A” enables individuals to pursue lifestyles that prioritize quality over quantity, a welcome departure from the disposable culture that has long defined modern economies. Social cohesion, too, is experiencing a renaissance. With the constant anxiety of financial insecurity replaced by a collective sense of stability, crime rates are declining and trust in public institutions is on the rise. The economy of abundance is not just about economic metrics; it is about nurturing a society where every individual feels valued, empowered, and connected.&lt;/p&gt;

&lt;h2&gt;Looking Toward a New Horizon&lt;/h2&gt;

&lt;p&gt;As the sun sets over city skylines and rural landscapes alike, the promise of a tech-enabled economy, where spending is a thoughtful rather than a desperate act, is beginning to crystallize. Currency “A” is reshaping more than financial transactions; it is redefining the relationship between individuals and their communities, between technology and tradition, and between ambition and fulfillment.&lt;/p&gt;

&lt;p&gt;The journey is still unfolding, and many questions remain. How will long-term economic stability be maintained in an environment where the very notion of money is transformed? What new forms of innovation and creativity will emerge when the constraints of traditional wealth accumulation are lifted? And most importantly, how will this brave new economy inspire future generations to view success not as a tally of coins, but as a tapestry woven from passion, creativity, and shared experience?&lt;/p&gt;

&lt;p&gt;In the laboratories of government think tanks, on the bustling streets of digital marketplaces, and within the hearts of every citizen, the answers are beginning to take shape. The deflationary model, underpinned by advanced technology and visionary policies, offers a glimpse into a future where life’s true rewards are measured not in monetary units, but in moments of joy, discovery, and community.&lt;/p&gt;

&lt;p&gt;As we stand at the precipice of this new era, one thing is clear: the era of currency “A” is not just an economic experiment; it is a cultural awakening. It invites us to reimagine our priorities, to rediscover the value of creativity over accumulation, and to forge a future where technology serves as a catalyst for human flourishing. In this unfolding story, every transaction is a testament to the power of innovation, every purchase a step toward a more meaningful life, and every citizen a vital thread in the intricate tapestry of a society that celebrates both abundance and restraint.&lt;/p&gt;

&lt;p&gt;In the quiet hum of data centers and the vibrant chatter of community gatherings, the rhythm of life has changed. The pursuit of “A” has given way to the pursuit of passion, and in that shift lies the promise of a richer, more connected world, one where technology and the human spirit converge to redefine what it truly means to live well.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/news-and-trends/2025-02-17-00-the-rise-of-currency-a-and-tech-enabled-abundance/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>design</category>
      <category>productivity</category>
      <category>automation</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 9: Common Table Expression (CTE)</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 27 Feb 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-9-common-table-expression-cte-974</link>
      <guid>https://dev.to/yujin/sql-101-chapter-9-common-table-expression-cte-974</guid>
      <description>&lt;p&gt;&lt;em&gt;Common Table Expression (CTE) is defined using the WITH keyword, writing the query logic within it, and then referencing it in your main query.&lt;/em&gt;&lt;/p&gt;






&lt;p&gt;A Common Table Expression (CTE) is introduced using the WITH keyword and serves as a temporary result set that is available only for the duration of the main query. Unlike subqueries, which can make SQL statements harder to read, a CTE improves overall clarity, especially in queries that involve multiple steps. As an illustration, you can imagine a CTE expressing the raw table as different temporary result sets, each processed as needed into the final result.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Transforming a raw table into temporary result sets with CTE&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Why Use a Common Table Expression (CTE)?&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Improved readability.&lt;/strong&gt; A CTE helps break down complex SQL logic into smaller, modular blocks of code, making a query easier to understand and maintain.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusable logic.&lt;/strong&gt; Once defined, a CTE can be referenced multiple times in the main query. This avoids rewriting the same subquery logic repeatedly, which saves time and ensures consistency in your queries.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Step-by-step queries.&lt;/strong&gt; A CTE is especially helpful when a query involves intermediate calculations or transformations. By defining each step separately, you can build the query incrementally, which makes it easier to understand and debug.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recursive queries.&lt;/strong&gt; Recursion in a CTE is essential for handling hierarchical or tree-structured data. Whether you are working with organizational charts, file systems, or similar structures, recursion simplifies complex queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Basic Syntax of a CTE&lt;/h2&gt;

&lt;p&gt;A CTE is defined using the WITH keyword, writing the query logic within it, and then referencing it in your main query.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;WITH cte_name AS (
  SELECT column_1
  FROM table_1
  WHERE condition
)

SELECT column_1
FROM cte_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here, cte_name acts as a placeholder for the CTE, which behaves like a temporary table for the main query. The logic within the CTE (the SELECT statement) is executed first, and its result can be used like a normal table in the following query.&lt;/p&gt;

&lt;p&gt;If you use multiple CTEs, separate them with a comma:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;WITH cte_1 AS (
  SELECT
      column_1
    , column_2
  FROM table_1
  WHERE condition
)
, cte_2 AS (
  SELECT
      column_1
    , column_3
  FROM table_2
  WHERE condition
)

SELECT
    c1.column_1
  , column_2
  , column_3
FROM cte_1 AS c1
LEFT JOIN cte_2 AS c2
  ON c1.column_1 = c2.column_1;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Simple CTE in Practice&lt;/h2&gt;

&lt;p&gt;Suppose you want to find books that have a higher page count than the average for their genre. Using a CTE, you can first calculate the average page count for each genre, and then use this result in the main query.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;WITH AveragePages AS (
  SELECT
      book_genre
    , AVG(total_pages) AS avg_pages
  FROM books
  GROUP BY book_genre
)

SELECT
    b.book_title
  , b.book_genre
  , b.total_pages
  , ap.avg_pages
FROM books b
JOIN AveragePages ap
  ON b.book_genre = ap.book_genre
WHERE b.total_pages &amp;gt; ap.avg_pages;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Books with above-average page counts in their genre&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The AveragePages CTE calculates the average page count for each genre. In the main query, JOIN combines each book’s data with the corresponding average page count for its genre. The main query then compares each book’s total_pages to the average for its genre and keeps only books with above-average page counts (&amp;gt; ap.avg_pages).&lt;/p&gt;

&lt;h2&gt;Chaining Multiple CTEs&lt;/h2&gt;

&lt;p&gt;You can chain multiple CTEs together to build more complex queries step by step. In this case, the goal is to identify books belonging to top-selling genres, where a genre qualifies once its total sales exceed $5,000:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;WITH GenreSales AS (
  SELECT
      b.book_genre
    , SUM(s.price) AS total_genre_sales
  FROM books b
  JOIN sales s
    ON b.book_id = s.book_id
  GROUP BY b.book_genre
)

, TopGenres AS (
  SELECT book_genre
  FROM GenreSales
  WHERE total_genre_sales &amp;gt; 5000
)

SELECT
    b.book_title
  , b.book_genre
FROM books b
JOIN TopGenres tg
  ON b.book_genre = tg.book_genre;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Books belonging to top-selling genres&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first CTE (GenreSales) calculates total sales for each genre. The second CTE (TopGenres) filters the genres identified in GenreSales to only those whose total sales exceed $5,000. Finally, the main query retrieves book_title and book_genre for books that belong to the top-selling genres.&lt;/p&gt;

&lt;h2&gt;Recursive CTE Query&lt;/h2&gt;

&lt;p&gt;A recursive CTE is created using a WITH RECURSIVE clause. This allows the CTE to reference itself, enabling you to work with hierarchical or recursive data. Below is the general syntax:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;WITH RECURSIVE cte_name AS (
  -- Anchor Query: Base result set
  SELECT column_1, column_2, ...
  FROM table_name
  WHERE condition_for_base_case

  UNION ALL

  -- Recursive Query: Recursive logic for self-referencing
  SELECT column_1, column_2, ...
  FROM table_name
  JOIN cte_name
    ON condition_for_recursive_step
)

-- Final Query: Use the recursive CTE
SELECT column_1, column_2, ...
FROM cte_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A recursive CTE can be used to navigate parent-child relationships, perform tree traversals, or build hierarchies.&lt;/p&gt;

&lt;p&gt;For this practice, we use the books table with an additional rule: the parent-child relationship is derived dynamically from the book_id. Specifically, the parent_book_id is determined as book_id - 1. If the book_id is 1, it is the root book and has no parent. The query shows the entire hierarchy, indicating the level (depth) of each book.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;WITH RECURSIVE BookHierarchy AS (
  -- Anchor Query: Start
  -- with the root book
  -- (where book_id - 1 = 0)
  SELECT
      book_id
    , book_title
    , CAST(NULL AS INT) AS parent_book_id
    , 1 AS level
  FROM books
  WHERE book_id - 1 = 0

  UNION ALL

  -- Recursive Query: Find
  -- child books by dynamically
  -- calculating parent_book_id
  SELECT
      b.book_id
    , b.book_title
    , bh.book_id AS parent_book_id
    , bh.level + 1 AS level
  FROM books b
  JOIN BookHierarchy bh
    ON b.book_id - 1 = bh.book_id
)

SELECT
    book_id
  , book_title
  , parent_book_id
  , level
FROM BookHierarchy
ORDER BY level, book_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Books Hierarchy&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The anchor query initializes the hierarchy by selecting the root book, where book_id - 1 = 0. Here, Book 1 is the root, so its parent_book_id is NULL. The recursive query identifies child books by joining the books table with the recursive CTE (BookHierarchy) on the condition b.book_id - 1 = bh.book_id. For every match, the parent_book_id is set to the book_id of the parent book, and the level is incremented by 1 to indicate the depth of the child in the hierarchy. The query outputs the entire hierarchy of books, ordered by their level.&lt;/p&gt;

&lt;p&gt;Still confused? Here is a recursive illustration. Watch it until it ends! 😆&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL#Common_table_expression" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL#Common_table_expression&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/9.4/queries-with.html" rel="noopener noreferrer"&gt;https://www.postgresql.org/docs/9.4/queries-with.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.atlassian.com/data/sql/using-common-table-expressions" rel="noopener noreferrer"&gt;https://www.atlassian.com/data/sql/using-common-table-expressions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
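&lt;p&gt;&lt;em&gt;The queries in this chapter can be tried end to end with Python's standard-library sqlite3 module, since SQLite supports both plain and recursive CTEs. This is a minimal sketch; the sample rows below are hypothetical and are not the chapter's dataset:&lt;/em&gt;&lt;/p&gt;

```python
import sqlite3

# Hypothetical sample data standing in for the chapter's books table
# (titles, genres, and page counts are made up for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (
    book_id     INTEGER PRIMARY KEY,
    book_title  TEXT,
    book_genre  TEXT,
    total_pages INTEGER
);
INSERT INTO books VALUES
    (1, 'Book 1', 'Fiction', 300),
    (2, 'Book 2', 'Fiction', 150),
    (3, 'Book 3', 'Science', 500),
    (4, 'Book 4', 'Science', 200);
""")

# Simple CTE: books with an above-average page count in their genre.
above_avg = conn.execute("""
    WITH AveragePages AS (
        SELECT book_genre, AVG(total_pages) AS avg_pages
        FROM books
        GROUP BY book_genre
    )
    SELECT b.book_title
    FROM books b
    JOIN AveragePages ap ON b.book_genre = ap.book_genre
    WHERE b.total_pages > ap.avg_pages
    ORDER BY b.book_id
""").fetchall()
print(above_avg)  # [('Book 1',), ('Book 3',)]

# Recursive CTE: walk the derived hierarchy where parent_book_id = book_id - 1.
hierarchy = conn.execute("""
    WITH RECURSIVE BookHierarchy AS (
        SELECT book_id, 1 AS level
        FROM books
        WHERE book_id - 1 = 0
        UNION ALL
        SELECT b.book_id, bh.level + 1
        FROM books b
        JOIN BookHierarchy bh ON b.book_id - 1 = bh.book_id
    )
    SELECT book_id, level FROM BookHierarchy ORDER BY level
""").fetchall()
print(hierarchy)  # [(1, 1), (2, 2), (3, 3), (4, 4)]
```

&lt;p&gt;&lt;em&gt;Because SQLite ships with Python, this makes a convenient scratchpad for experimenting with CTE syntax before running a query against a production database.&lt;/em&gt;&lt;/p&gt;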




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-9-common-table-expression-cte/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>Quickstart Svelte - Setting Up and Running a Development Server</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 20 Feb 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/quickstart-svelte-setting-up-and-running-a-development-server-1kfb</link>
      <guid>https://dev.to/yujin/quickstart-svelte-setting-up-and-running-a-development-server-1kfb</guid>
      <description>&lt;p&gt;&lt;em&gt;Quickstart setting up a Svelte project and run its development server using Docker. This guide explains configuring docker-compose.yml for initializing a Svelte + Vite project and running the dev server with hot-reloading.&lt;/em&gt;&lt;/p&gt;






&lt;p&gt;In this quickstart guide, Svelte will be installed using Docker. This approach means you won’t need to install Node.js or any dependencies directly on your machine, because everything operates within a clean container environment.&lt;/p&gt;

&lt;p&gt;Make sure Docker is installed. If not, download it from &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;. Verify the installation by running docker -v in your terminal.&lt;/p&gt;

&lt;h2&gt;Installing Svelte&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Installing Svelte + Vite using Docker&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a docker-compose.yml file with the following content:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;version: '3'
services:
  npm:
    image: node:18-alpine # LTS
    volumes:
      - ./app:/app:delegated
      - ./.npm:/root/.npm:delegated
    ports:
      - "5500:5173"
    command:
      - /bin/sh
      - -c
      - |
        npm create vite@5.5.5 . -- --template svelte
        npm install
    working_dir: /app
    restart: "no"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This docker-compose.yml file sets up a containerized environment for creating a Svelte project using Node.js. It uses the lightweight (~50 MB) node:18-alpine image to ensure a consistent development environment, eliminating the need for a Node.js installation on your local machine.&lt;/p&gt;

&lt;p&gt;The project files are generated in a shared folder called app, making them accessible from your host system. The configuration runs npm create vite@5.5.5 . -- --template svelte to initialize a new Svelte project, then installs the necessary dependencies with npm install.&lt;/p&gt;

&lt;p&gt;The dot (.) indicates that the new project should be created in the current working directory. Instead of generating a new subfolder for the project, the files are placed directly where the command is executed. This is helpful when you have already set up a directory for the project, such as in a Docker setup where /app serves as the working directory.&lt;/p&gt;

&lt;p&gt;Additionally, the container maps the default port 5173 to 5500, so you can access the development server at &lt;a href="http://localhost:5500" rel="noopener noreferrer"&gt;http://localhost:5500&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the container finishes setting up, you will find the Svelte project files in the ./app directory. By running docker compose up, you will have a fully functional Svelte development environment that is isolated from your system. This setup is ideal for ensuring compatibility and avoiding dependency conflicts in your local development environment.&lt;/p&gt;

&lt;p&gt;Run the following command to set up Svelte:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Run the Svelte Development Server&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Running the Svelte dev server&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Replace the contents of your docker-compose.yml file with the updated configuration below:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;version: '3'
services:
  npm:
    image: node:18-alpine # LTS
    volumes:
      - ./app:/app:delegated
      - ./.npm:/root/.npm:delegated
    ports:
      - "5500:5173"
    command:
      - /bin/sh
      - -c
      - |
        npm run dev -- --host 0.0.0.0
    working_dir: /app
    restart: "no"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The command now runs npm run dev, which starts the Svelte development server. The -- --host 0.0.0.0 flag makes the server bind to all network interfaces so it is accessible from outside the container. This is critical because Docker containers have their own isolated networking, and binding to 0.0.0.0 allows your host machine to reach the server.&lt;/p&gt;

&lt;p&gt;Now rerun the following command to start the development server:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can access it in a browser at &lt;a href="http://localhost:5500" rel="noopener noreferrer"&gt;http://localhost:5500&lt;/a&gt;, or from your LAN using the host’s local IP address: http://[host_ip_address]:5500. As you modify files in the ./app directory, the browser updates in real time thanks to Vite’s hot-reload capabilities.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vite.dev/guide/" rel="noopener noreferrer"&gt;https://vite.dev/guide/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://svelte.dev/docs/svelte/getting-started" rel="noopener noreferrer"&gt;https://svelte.dev/docs/svelte/getting-started&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodejs.org/en/download/package-manager" rel="noopener noreferrer"&gt;https://nodejs.org/en/download/package-manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hub.docker.com/_/node/tags?name=18-alpine" rel="noopener noreferrer"&gt;https://hub.docker.com/_/node/tags?name=18-alpine&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/web-development/quickstart/quickstart-svelte-setting-up-and-running-a-development-server/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>webdev</category>
      <category>svelte</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 8: Using Subqueries for Improved Modularity and Dynamic Calculations</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 13 Feb 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-8-using-subqueries-for-improved-modularity-and-dynamic-calculations-26mn</link>
      <guid>https://dev.to/yujin/sql-101-chapter-8-using-subqueries-for-improved-modularity-and-dynamic-calculations-26mn</guid>
      <description>&lt;p&gt;&lt;em&gt;Explore the use of subqueries in SQL to enhance your queries. Understand their function in SELECT, FROM, and WHERE clauses for dynamic calculations, modular queries, and dynamic data filtering.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-8-using-subqueries-for-improved-modularity-and-dynamic-calculations/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-8-using-subqueries-for-improved-modularity-and-dynamic-calculations%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-8-using-subqueries-for-improved-modularity-and-dynamic-calculations/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 8: Using Subqueries for Improved Modularity and Dynamic Calculations
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Explore the use of subqueries in SQL to enhance your queries. Understand their function in SELECT, FROM, and WHERE clauses for dynamic calculations, modular queries, and dynamic data filtering.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;Subqueries, often referred to as inner queries or nested queries, are queries embedded inside another SQL query. They allow you to dynamically filter, transform, or manipulate data by providing intermediate results for use in the main (outer) query. Subqueries can appear in the SELECT, FROM, or WHERE clauses of a query, and each placement serves a different purpose. Let’s explore them using the books, authors, and sales tables from Chapter 3!&lt;/p&gt;

&lt;h2&gt;Subqueries in the SELECT Clause&lt;/h2&gt;

&lt;p&gt;Subqueries in the SELECT clause calculate or fetch derived values for each row in the main query’s result set. You can use them to retrieve a value from another table and display it as a computed column. To find the total sales revenue for each book:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , (
    SELECT SUM(s.price)
    FROM sales s
    WHERE s.book_id = b.book_id
    ) AS total_revenue
FROM books b;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: Total sales revenue for each book]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The (SELECT SUM(s.price) FROM sales s WHERE s.book_id = b.book_id) subquery calculates the total sales revenue for each book. This derived value is included as a new column (total_revenue) in the result set, so you can see both the book_title and its total_revenue.&lt;/p&gt;

&lt;h2&gt;Subqueries in the FROM Clause&lt;/h2&gt;

&lt;p&gt;Subqueries in the FROM clause are often referred to as “derived tables” or “inline views.” They let you create a temporary result set that can be treated like a regular table in the main query.&lt;/p&gt;

&lt;p&gt;
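To try the SELECT-clause subquery outside a database client, here is a minimal runnable sketch using Python’s built-in sqlite3 module. The schema mirrors the chapter’s books and sales tables, but the rows are invented sample data for illustration only.

```python
import sqlite3

# Invented sample data; the books/sales schema follows the chapter's tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT);
CREATE TABLE sales (sale_id INTEGER PRIMARY KEY, book_id INTEGER, price INTEGER);
INSERT INTO books VALUES (1, 'Dune'), (2, 'Emma');
INSERT INTO sales VALUES (1, 1, 10), (2, 1, 10), (3, 2, 5);
""")

# The scalar subquery runs once per outer row and lands in its own column.
rows = conn.execute("""
SELECT b.book_title,
       (SELECT SUM(s.price) FROM sales s WHERE s.book_id = b.book_id) AS total_revenue
FROM books b
ORDER BY b.book_id
""").fetchall()
print(rows)  # [('Dune', 20), ('Emma', 5)]
```

Each book’s total_revenue is computed independently by the inner query, exactly as the chapter describes.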
For ranking authors by total sales, use:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    author_name
  , total_sales
FROM
  (
  SELECT
      a.author_name
    , SUM(s.price) AS total_sales
  FROM authors a
  JOIN books b
    ON a.author_id = b.author_id
  JOIN sales s
    ON b.book_id = s.book_id
  GROUP BY a.author_name
  ) AS author_sales
ORDER BY total_sales DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: Rank authors by total sales]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the FROM clause, the subquery calculates the total_sales for each author by summing up the price column from the sales table. This intermediate result set (author_sales) is treated as a derived table, and the main query retrieves data from it, sorting authors by their total_sales.&lt;/p&gt;

&lt;h2&gt;Subqueries in the WHERE Clause&lt;/h2&gt;

&lt;p&gt;Subqueries in the WHERE clause filter rows in the main query based on the results of another query. Let’s find books that have not been sold!&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  book_title
FROM books
WHERE book_id NOT IN
  (
  SELECT book_id FROM sales
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: Books that have not been sold]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The subquery (SELECT book_id FROM sales) retrieves every book_id that has been sold. The main query then filters out those book_id values and returns the books that have not been sold.&lt;/p&gt;

&lt;h2&gt;Correlated Subqueries&lt;/h2&gt;

&lt;p&gt;A correlated subquery is executed once for every row of the outer query. It references columns from the outer query, meaning the subquery depends on each row being processed in the outer query to execute.
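The WHERE-clause filtering pattern above can be sketched with Python’s sqlite3 module and a few invented rows (the table names follow the chapter; the data is hypothetical):

```python
import sqlite3

# Invented sample data mirroring the chapter's books and sales tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT);
CREATE TABLE sales (sale_id INTEGER PRIMARY KEY, book_id INTEGER, price INTEGER);
INSERT INTO books VALUES (1, 'Dune'), (2, 'Emma'), (3, 'Ulysses');
INSERT INTO sales VALUES (1, 1, 10);
""")

# NOT IN keeps only the books whose book_id never appears in sales.
unsold = conn.execute("""
SELECT book_title FROM books
WHERE book_id NOT IN (SELECT book_id FROM sales)
ORDER BY book_title
""").fetchall()
print(unsold)  # [('Emma',), ('Ulysses',)]
```

Only 'Dune' has a sale in this sample, so the other two titles come back as unsold.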
To find the titles of books that have more pages than the average number of pages for books in their genre:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT book_title
FROM books b
WHERE total_pages &amp;gt;
  (
  SELECT AVG(total_pages)
  FROM books
  WHERE book_genre = b.book_genre
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: Books that have more pages than the average in their genre]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here, the subquery calculates the average number of pages for each genre (b.book_genre), and this calculation is repeated for every row in the books table because the subquery depends on the outer query’s book_genre.&lt;/p&gt;

&lt;p&gt;💡 This can lead to performance issues on large datasets, as the subquery is re-executed for each outer-query row. If a subquery is correlated, it executes once for every row in the outer query; if it is non-correlated, it executes only once, making it more efficient in most cases.&lt;/p&gt;

&lt;p&gt;Use subqueries for modularity: they break complex problems into manageable, easier-to-understand parts. Keep in mind that subqueries can be resource-intensive, especially when the underlying columns are not properly indexed or the datasets are large. Consider adding filters inside subqueries, or replacing them entirely with joins or Common Table Expressions (CTEs) for better readability and performance.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikibooks.org/wiki/Structured_Query_Language/SELECT:_Subquery" rel="noopener noreferrer"&gt;https://en.wikibooks.org/wiki/Structured_Query_Language/SELECT:_Subquery&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Correlated_subquery" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Correlated_subquery&lt;/a&gt;&lt;/p&gt;
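The chapter’s advice to replace correlated subqueries with CTEs can be sanity-checked in Python’s sqlite3 with invented data: both forms below compute the same per-genre average, but the CTE computes it once instead of once per row.

```python
import sqlite3

# Invented sample data; schema follows the chapter's books table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT,
                    book_genre TEXT, total_pages INTEGER);
INSERT INTO books VALUES
  (1, 'Dune', 'Sci-Fi', 400),
  (2, 'Neuromancer', 'Sci-Fi', 300),
  (3, 'Emma', 'Romance', 350);
""")

# Correlated form: the inner AVG re-runs for every outer row.
rows_corr = conn.execute("""
SELECT book_title,
       (SELECT AVG(total_pages) FROM books WHERE book_genre = b.book_genre) AS genre_avg
FROM books b ORDER BY book_title
""").fetchall()

# CTE form: the per-genre averages are computed once, then joined.
rows_cte = conn.execute("""
WITH g AS (
  SELECT book_genre, AVG(total_pages) AS genre_avg
  FROM books GROUP BY book_genre
)
SELECT b.book_title, g.genre_avg
FROM books b JOIN g ON g.book_genre = b.book_genre
ORDER BY b.book_title
""").fetchall()
print(rows_corr == rows_cte)  # True
```

Identical results, but the CTE version is usually the one the query planner can execute with a single aggregation pass.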




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-8-using-subqueries-for-improved-modularity-and-dynamic-calculations/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 7: Using Joins to Combine and Unify Data</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 06 Feb 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-7-using-joins-to-combine-and-unify-data-1a0c</link>
      <guid>https://dev.to/yujin/sql-101-chapter-7-using-joins-to-combine-and-unify-data-1a0c</guid>
      <description>&lt;p&gt;&lt;em&gt;Joins allow you to combine data from multiple tables. Learn how these can be the foundation of recommendations and collaborative filtering as these enable us to find patterns, associations, and similarities within datasets.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-7-using-joins-to-combine-and-unify-the-data/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-7-using-joins-to-combine-and-unify-the-data%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-7-using-joins-to-combine-and-unify-the-data/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 7: Using Joins to Combine and Unify Data
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Joins allow you to combine data from multiple tables. Learn how these can be the foundation of recommendations and collaborative filtering as these enable us to find patterns, associations, and similarities within datasets.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;In SQL, joins allow you to combine data from multiple tables. In this chapter, we’ll learn advanced join techniques. To illustrate these concepts, we’ll use three tables: books, authors, and sales (from Chapter 3).&lt;/p&gt;

&lt;p&gt;Joins are one of the most essential features of SQL, allowing you to combine rows from multiple tables based on a related column. While basic joins like INNER JOIN are typically the starting point, advanced SQL workflows often require other join types such as LEFT JOIN, RIGHT JOIN, FULL JOIN, and CROSS JOIN. Each join type serves a specific purpose and determines how data from two tables is combined.&lt;/p&gt;

&lt;h2&gt;INNER JOIN (or JOIN)&lt;/h2&gt;

&lt;p&gt;An INNER JOIN (commonly written simply as JOIN) returns only the rows where there is a match between the columns of both tables. If no match is found, the rows are excluded from the result.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[Image: INNER JOIN diagram]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For example, to retrieve books along with their authors:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
INNER JOIN authors a
  ON b.author_id = a.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query matches the author_id in the books table with the author_id in the authors table. Only books with an associated author are included in the result.&lt;/p&gt;

&lt;p&gt;💡 INNER JOIN can also be written as JOIN:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
JOIN authors a
  ON b.author_id = a.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: List of books along with their authors]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If there are books in the books table without a corresponding author_id in the authors table, those books will not appear in the result.&lt;/p&gt;

&lt;h2&gt;LEFT JOIN&lt;/h2&gt;

&lt;p&gt;A LEFT JOIN retrieves all rows from the left table and the matching rows from the right table. If there is no match, the columns from the right table are filled with NULL.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[Image: LEFT JOIN diagram]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is particularly useful when you want to see all entries from one table regardless of whether they have matching data in the other table.
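The preserve-everything behavior of LEFT JOIN can be seen in a small Python sqlite3 sketch with invented rows (the schema follows the chapter’s books and authors tables); an unmatched book’s author comes back as NULL, which sqlite3 surfaces as None:

```python
import sqlite3

# Invented sample data: one book has an author, one does not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (author_id INTEGER PRIMARY KEY, author_name TEXT);
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT, author_id INTEGER);
INSERT INTO authors VALUES (10, 'Frank Herbert');
INSERT INTO books VALUES (1, 'Dune', 10), (2, 'Untitled Draft', NULL);
""")

# LEFT JOIN keeps every book; missing authors appear as NULL/None.
rows = conn.execute("""
SELECT b.book_title, a.author_name
FROM books b
LEFT JOIN authors a ON b.author_id = a.author_id
ORDER BY b.book_title
""").fetchall()
print(rows)  # [('Dune', 'Frank Herbert'), ('Untitled Draft', None)]
```

With an INNER JOIN instead, 'Untitled Draft' would be dropped entirely.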
To retrieve all books, including those without an assigned author:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
LEFT JOIN authors a
  ON b.author_id = a.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: List of books along with their authors using LEFT JOIN]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this query, all rows from the books table are returned. If a book’s author_id doesn’t have a match in the authors table, the author_name will be NULL.&lt;/p&gt;

&lt;h2&gt;RIGHT JOIN&lt;/h2&gt;

&lt;p&gt;A RIGHT JOIN is the opposite of a LEFT JOIN. It retrieves all rows from the right table and the matching rows from the left table. Unmatched rows from the left table are filled with NULL.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[Image: RIGHT JOIN diagram]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s retrieve all authors and the books they’ve written, even if some authors haven’t written any books:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    a.author_name
  , b.book_title
FROM books b
RIGHT JOIN authors a
  ON b.author_id = a.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: List of books along with their authors using RIGHT JOIN]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here, all rows from the authors table (the right table) are included in the result. If an author doesn’t have a corresponding book, the book_title will display NULL.&lt;/p&gt;

&lt;h2&gt;FULL JOIN (or FULL OUTER JOIN)&lt;/h2&gt;

&lt;p&gt;A FULL JOIN combines the results of both a LEFT JOIN and a RIGHT JOIN. It returns all rows from both tables, matching rows where possible. If no match is found, the result includes NULL for the columns of the non-matching table.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[Image: FULL JOIN diagram]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is helpful for reconciling mismatched datasets or ensuring no data is omitted from either table. To get a complete list of books and authors, whether or not they have a match:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
FULL JOIN authors a
  ON b.author_id = a.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;💡 FULL JOIN can also be written as FULL OUTER JOIN, the more explicit version that emphasizes the “outer” nature of the join:
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
FULL OUTER JOIN authors a
  ON b.author_id = a.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: List of books and authors, whether or not they have a match]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A FULL JOIN combines rows from both tables and includes all unmatched rows, assigning NULL values where no match exists. However, you can add filters in the WHERE clause on the join key (the column you are joining on) to control which of those rows are kept. This approach lets you extract subsets of the FULL JOIN results, giving you more flexibility when analyzing your data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[Image: Inversed diagram of INNER JOIN by filtering the FULL JOIN]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The query below specifically filters for rows where the author_id in the books table has no match in the authors table, or vice versa:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
FULL JOIN authors a
  ON b.author_id = a.author_id
WHERE
  b.author_id IS NULL OR
  a.author_id IS NULL;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: Result of filtering the FULL JOIN]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this query, the FULL JOIN brings in all rows from both tables, matching them on the author_id column. The WHERE clause keeps only the unmatched rows: either rows from the books table with no corresponding author_id in the authors table (b.author_id IS NULL), or rows from the authors table with no match in the books table (a.author_id IS NULL). Surfacing such gaps is useful for data-integrity checks or for reconciling data between related tables.&lt;/p&gt;

&lt;h2&gt;CROSS JOIN&lt;/h2&gt;

&lt;p&gt;A CROSS JOIN produces the Cartesian product of two tables, meaning every row in the first table is combined with every row in the second table.
&lt;em&gt;[Image: CROSS JOIN diagram]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While CROSS JOINs are not used as often as other join types (like INNER JOIN and LEFT JOIN), they are incredibly useful in certain scenarios, especially for generating all possible combinations, comparisons, or for analytical purposes.&lt;/p&gt;

&lt;p&gt;Let’s take an example using the books and authors tables. Imagine we want to see all possible combinations of books and authors. A CROSS JOIN will pair each book with every author:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
CROSS JOIN authors a;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;💡 CROSS JOIN can also be written with a comma:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
    b.book_title
  , a.author_name
FROM books b
, authors a;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Always prefer the explicit CROSS JOIN: it adheres to modern SQL standards, is optimized, and makes your queries more readable and maintainable. Avoid comma-separated joins (FROM table_1, table_2), as they are considered outdated and can lead to misunderstandings in complex queries.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;[Image: Combinations of books and authors]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;CROSS JOINs can produce extremely large result sets, which may not always be useful. However, they are valuable in specific scenarios such as generating combinations, exploring relationships, and dimensional analysis.&lt;/p&gt;

&lt;h2&gt;Self Join&lt;/h2&gt;

&lt;p&gt;A self join is a join where a table is joined with itself. This might seem unusual at first, but self joins are incredibly useful for comparing rows within the same table or establishing relationships between rows in a single dataset. In a self join, the table is aliased to differentiate between its two instances in the query.&lt;/p&gt;

&lt;p&gt;Self joins are foundational to recommendations and collaborative filtering because they let us find patterns, associations, and similarities within datasets. Collaborative filtering works on the principle of leveraging user preferences or behaviors to recommend items.
It analyzes the relationships between users and items to suggest what a user might like, based on the behavior of similar users or the attributes of items they interacted with. There are two types of collaborative filtering:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User-Based Collaborative Filtering.&lt;/strong&gt; Recommends items based on the preferences of users who are similar to the target user.&lt;br&gt;
&lt;strong&gt;Item-Based Collaborative Filtering.&lt;/strong&gt; Recommends items that are similar to the items the target user has interacted with.&lt;/p&gt;

&lt;p&gt;Let’s use the books table to identify relationships based on metadata, such as genre:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
      b1.book_title AS book_1_title
    , b2.book_title AS book_2_title
    , b1.book_genre
FROM books b1
JOIN books b2
  ON b1.book_genre = b2.book_genre
  AND b1.book_id &amp;lt;&amp;gt; b2.book_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;[Image: List of pairs of books that share the same genre]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This query generates pairs of books that share the same genre but are not the same book, which is useful for recommending books within a specific genre. To improve the quality of recommendations, you can use the sales table, which tracks book purchases; a self join on that table can also pair books that are purchased together.&lt;/p&gt;

&lt;p&gt;By applying INNER JOIN (or JOIN), LEFT JOIN, RIGHT JOIN, FULL JOIN (or FULL OUTER JOIN), CROSS JOIN, and self joins in various situations, you can handle complex relational data with ease. Take time to practice these queries with the provided tables. As you become more familiar with these techniques, you’ll be better equipped to tackle more advanced SQL problems.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Join_(SQL)" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Join_(SQL)&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dataschool.com/how-to-teach-people-sql/sql-join-types-explained-visually/" rel="noopener noreferrer"&gt;https://dataschool.com/how-to-teach-people-sql/sql-join-types-explained-visually/&lt;/a&gt;&lt;/p&gt;
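The genre self join from this chapter can be run end to end with Python’s sqlite3 module and a few invented rows. The query uses != for inequality, which SQLite (and most dialects) accept as an alternative spelling of the standard not-equals operator:

```python
import sqlite3

# Invented sample data; two books share a genre, one stands alone.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT, book_genre TEXT);
INSERT INTO books VALUES
  (1, 'Dune', 'Sci-Fi'),
  (2, 'Neuromancer', 'Sci-Fi'),
  (3, 'Emma', 'Romance');
""")

# Each book pairs with every other book in its genre, in both orders.
pairs = conn.execute("""
SELECT b1.book_title, b2.book_title, b1.book_genre
FROM books b1
JOIN books b2
  ON b1.book_genre = b2.book_genre
 AND b1.book_id != b2.book_id
ORDER BY b1.book_id, b2.book_id
""").fetchall()
print(pairs)  # [('Dune', 'Neuromancer', 'Sci-Fi'), ('Neuromancer', 'Dune', 'Sci-Fi')]
```

'Emma' produces no pair because no other book shares its genre, and each Sci-Fi pair appears in both orders, as the chapter’s example implies.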




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-7-using-joins-to-combine-and-unify-the-data/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>Essential Audit Fields to Include in Your Database Tables</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 30 Jan 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/essential-audit-fields-to-include-in-your-database-tables-3i7e</link>
      <guid>https://dev.to/yujin/essential-audit-fields-to-include-in-your-database-tables-3i7e</guid>
      <description>&lt;p&gt;&lt;em&gt;When designing your database tables, it's crucial to include audit fields to track the creation, modification, and deletion of records. These audit fields provide essential insights into data changes, improve accountability, and help meet regulatory or compliance standards. In this article, we'll cover the key audit fields you should consider adding to your tables, including timestamps, user tracking, and versioning, ensuring a transparent and maintainable record lifecycle.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/web-development/2025-01-24-essential-audit-fields-to-include-in-your-database-tables/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fweb-development%2F2025-01-24-essential-audit-fields-to-include-in-your-database-tables%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/web-development/2025-01-24-essential-audit-fields-to-include-in-your-database-tables/" rel="noopener noreferrer" class="c-link"&gt;
          Essential Audit Fields to Include in Your Database Tables
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          When designing your database tables, it's crucial to include audit fields to track the creation, modification, and deletion of records. These audit fields provide essential insights into data changes, improve accountability, and help meet regulatory or compliance standards. In this article, we'll cover the key audit fields you should consider adding to your tables, including timestamps, user tracking, and versioning, ensuring a transparent and maintainable record lifecycle.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;Audit fields typically include columns that track changes or events in a record’s lifecycle. These fields are used to monitor and maintain the integrity of the data, often for accountability, traceability, or compliance purposes. Below is a common list of audit fields.&lt;/p&gt;

&lt;h2&gt;Standard Audit Fields&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;created_at&lt;/strong&gt;: Tracks when the record was created. Typically set to the current timestamp at the time of insertion. Immutable (should not be updated after creation).&lt;br&gt;
&lt;strong&gt;updated_at&lt;/strong&gt;: Tracks the last time the record was modified. Automatically updated whenever the record changes.&lt;/p&gt;

&lt;h2&gt;Optional Audit Fields&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;deleted_at&lt;/strong&gt;: Tracks when the record was “soft deleted” (logical deletion without removing the data from the database). Useful in soft-delete mechanisms where data is hidden instead of permanently deleted. Nullable by default (NULL if the record isn’t deleted).&lt;br&gt;
&lt;strong&gt;created_by&lt;/strong&gt;: Tracks the ID or username of the user/system that created the record. Useful for identifying the origin of the record.&lt;br&gt;
&lt;strong&gt;updated_by&lt;/strong&gt;: Tracks the ID or username of the user/system that last updated the record. Ensures accountability for modifications.&lt;br&gt;
&lt;strong&gt;deleted_by&lt;/strong&gt;: Tracks the ID or username of the user/system that performed a soft delete.&lt;/p&gt;

&lt;h2&gt;Advanced or Compliance-Specific Audit Fields&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;version&lt;/strong&gt;: Tracks the version of the record, often used in optimistic locking. Increments with each update to detect conflicts in concurrent transactions.&lt;br&gt;
&lt;strong&gt;change_reason&lt;/strong&gt;: Logs the reason for a record update or deletion. Often used in systems requiring strict audit trails.&lt;br&gt;
&lt;strong&gt;restored_at&lt;/strong&gt;: Tracks when a soft-deleted record is restored.&lt;br&gt;
&lt;strong&gt;restored_by&lt;/strong&gt;: Tracks the ID of the user/system that restored a soft-deleted record.&lt;/p&gt;

&lt;p&gt;
&lt;/p&gt;

&lt;h2&gt;Example Table with Audit Fields&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;CREATE TABLE audit_example (
    id INT AUTO_INCREMENT PRIMARY KEY,
    data TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    deleted_at DATETIME DEFAULT NULL,
    created_by INT,
    updated_by INT,
    deleted_by INT,
    change_reason VARCHAR(255),
    version INT DEFAULT 1
);
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Best Practices for Audit Fields&lt;/h2&gt;

&lt;p&gt;Use default values and triggers (if needed) to automate timestamping and updates. Store user identifiers in created_by, updated_by, etc., to link changes to specific users. Normalize audit fields across all tables to maintain consistency. Use soft delete (deleted_at) instead of physical deletion to avoid accidental data loss.&lt;/p&gt;

&lt;p&gt;This ensures you can track not only the “what” and “when” of changes but also the “who” and “why,” which is essential for systems requiring detailed audit trails.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.oracle.com/cd/F47663_05/books/OnDemWebSvcs/c-Audit-Fields-aku1252063.html" rel="noopener noreferrer"&gt;https://docs.oracle.com/cd/F47663_05/books/OnDemWebSvcs/c-Audit-Fields-aku1252063.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Optimistic_concurrency_control" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Optimistic_concurrency_control&lt;/a&gt;&lt;br&gt;
&lt;a href="https://stackoverflow.com/questions/2762906/why-do-we-need-audit-columns-in-database-tables" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/2762906/why-do-we-need-audit-columns-in-database-tables&lt;/a&gt;&lt;/p&gt;
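The example table uses MySQL’s ON UPDATE CURRENT_TIMESTAMP clause. SQLite has no such clause, so this sketch (Python sqlite3, table trimmed to a few columns for brevity) shows the trigger-based alternative the best-practices section alludes to, keeping both updated_at and version fresh on every update:

```python
import sqlite3

# Trimmed-down audit table; a trigger substitutes for ON UPDATE CURRENT_TIMESTAMP.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit_example (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    data TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    version INTEGER DEFAULT 1
);
CREATE TRIGGER audit_example_touch AFTER UPDATE ON audit_example
BEGIN
    UPDATE audit_example
    SET updated_at = CURRENT_TIMESTAMP, version = OLD.version + 1
    WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO audit_example (data) VALUES ('first')")
conn.execute("UPDATE audit_example SET data = 'second' WHERE id = 1")
row = conn.execute("SELECT data, version FROM audit_example WHERE id = 1").fetchone()
print(row)  # ('second', 2)
```

The trigger’s inner UPDATE does not re-fire itself because SQLite’s recursive triggers are off by default; the version column increments exactly once per update, which is the behavior optimistic locking relies on.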




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/web-development/2025-01-24-essential-audit-fields-to-include-in-your-database-tables/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>design</category>
      <category>database</category>
      <category>webdev</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 6: Elevating Data Presentation - Advanced Sorting Techniques for Deeper Insights</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 30 Jan 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-6-elevating-data-presentation-advanced-sorting-techniques-for-deeper-insights-3j3m</link>
      <guid>https://dev.to/yujin/sql-101-chapter-6-elevating-data-presentation-advanced-sorting-techniques-for-deeper-insights-3j3m</guid>
      <description>&lt;p&gt;&lt;em&gt;This chapter delves into multi-level sorting, sorting based on computed columns, and handling case-insensitive or locale-specific requirements. You'll explore practical examples and detailed explanations to master these techniques, enabling you to handle complex data sets and present results that provide deeper insights and support decision-making. Start by learning how to perform multi-level sorting for effective data prioritization and organization.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-6-elevating-data-presentation-advanced-sorting-techniques-for-deeper-insights/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-6-elevating-data-presentation-advanced-sorting-techniques-for-deeper-insights%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-6-elevating-data-presentation-advanced-sorting-techniques-for-deeper-insights/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 6: Elevating Data Presentation - Advanced Sorting Techniques for Deeper Insights
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          This chapter delves into multi-level sorting, sorting based on computed columns, and handling case-insensitive or locale-specific requirements. You'll explore practical examples and detailed explanations to master these techniques, enabling you to handle complex data sets and present results that provide deeper insights and support decision-making. Start by learning how to perform multi-level sorting for effective data prioritization and organization.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Sorting data is a fundamental aspect of data analysis, enabling you to organize information in a meaningful order. While basic sorting with ORDER BY is commonly used to arrange data in ascending or descending order, advanced sorting techniques offer more nuanced control over how your data is presented.&lt;/p&gt;

&lt;p&gt;In this chapter, we will explore advanced sorting techniques that go beyond simple sorting. You’ll learn how to implement multi-level sorting and sort based on computed columns. These techniques will help you handle complex data sets and present your results in a way that provides deeper insights and better supports decision-making. To illustrate these concepts, we’ll use the tables from Chapter 3. We will cover a range of sorting scenarios, with practical examples and detailed explanations, to help you master these advanced techniques. Let’s start by examining how to perform multi-level sorting to prioritize and organize your data effectively.&lt;/p&gt;

&lt;h2&gt;Sorting with Multiple Columns&lt;/h2&gt;

&lt;p&gt;Sorting data is not always straightforward when you need to prioritize based on multiple criteria. Advanced sorting techniques allow you to arrange your data using multiple columns, giving you finer control over how results are ordered. This section will guide you through the syntax and practical applications of sorting with multiple columns.&lt;/p&gt;

&lt;p&gt;Syntax format:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2
FROM table_1
ORDER BY column_1 [ASC/DESC], column_2 [ASC/DESC];
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or the more concise syntax format:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2
FROM table_1
ORDER BY 1 [ASC/DESC], 2 [ASC/DESC];
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The ORDER BY clause is used to sort query results by one or more columns. You begin by selecting the columns from a table, for instance, SELECT column_1, column_2 FROM table_1. To order these results, you use the ORDER BY clause and specify the columns to sort by, along with the sort direction.
For example, ORDER BY column_1 ASC, column_2 DESC sorts the results first by column_1 in ascending order and then by column_2 in descending order.  The ASC keyword stands for ascending order, but it’s optional because ascending order is the default sorting direction if none is specified. Thus, writing ORDER BY column_1, column_2 DESC will sort by column_1 in ascending order by default, and then by column_2 in descending order. For a more concise approach, you can also use column positions, such as ORDER BY 1 ASC, 2 DESC, which sorts by the first and second columns in the SELECT list, respectively.  Example 1: Authors sorted by popularity in descending order, and then by name in ascending order  In this example, we want to organize authors based on their popularity, prioritizing those with higher scores. For authors with the same popularity, we then sort them alphabetically by name.  SELECT      author_name   , popularity FROM authors ORDER BY      popularity DESC   , author_name ASC; The ORDER BY clause first sorts authors by popularity in descending order to highlight the most popular authors at the top. For authors with the same popularity score, it then sorts them by author_name in ascending order.  Authors are sorted by popularity in descending order and then by name in ascending order   Example 2: Sorting books by genre then title  This SQL query retrieves a list of books, displaying their titles and genres from the books table. It organizes the results by first sorting them in descending alphabetical order of genre. Within each genre group, the books are then sorted alphabetically by their titles.  SELECT      book_title   , book_genre FROM books ORDER BY      2 DESC   , 1; The sorting order is specified using numbers to reference the columns. The number 2 refers to the second column in the SELECT clause, which is book_genre, and 1 refers to the first column, book_title. 
By using numbers instead of column names, the query becomes more concise and easier to read, especially when dealing with longer column names or more complex queries.  Although the query specifies descending order for genres with DESC, it omits specifying ascending order for titles, as ASC is the default sorting order in SQL and therefore optional. This dual-level sorting allows for a clear and structured presentation of books, making it easy to identify and differentiate between genres while maintaining an orderly list of titles within each genre category.  Books are sorted by genre then title   Custom Sorting with CASE Custom sorting allows you to define specific rules for ordering your data beyond standard sorting options. By using the CASE statement in SQL, you can create complex sorting logic that prioritizes records based on custom criteria. This section will explore how to implement custom sorting with practical examples.  Syntax Format:  SELECT      column_1   , column_2   , column_3 FROM table_1 ORDER BY      CASE        WHEN column_1 = 'value_1' THEN 1       WHEN column_1 = 'value_2' THEN 2       ELSE 3     END [ASC/DESC]   , column_2 [ASC/DESC]; This syntax allows you to define specific sorting rules using CASE, assigning priority values to different conditions and then performing additional sorting based on a column.  Example: Prioritizing ‘Fiction’ in book sorting by genre and title  This fetches a list of books, displaying their titles and genres from the books table.  SELECT      book_title   , book_genre FROM books ORDER BY      CASE        WHEN book_genre = 'Fiction' THEN 1       ELSE 2     END   , 2 DESC   , 1; It prioritizes books in the “Fiction” genre by sorting them at the top of the list. This is achieved through a CASE statement in the ORDER BY clause that assigns a value of 1 to books in the “Fiction” genre and 2 to all other genres. 
After prioritizing “Fiction” books, the query then sorts the remaining genres in descending alphabetical order, ensuring that genres later in the alphabet appear first. Within each genre group, the books are sorted alphabetically by their titles.  The sorting order is specified using numbers to reference the columns. The number 2 refers to the second column in the SELECT clause, which is book_genre, and 1 refers to the first column, book_title. By using numbers instead of column names, the query becomes more concise and easier to read, especially when dealing with longer column names or more complex queries.  Although the query specifies descending order for genres with DESC, it omits specifying ascending order for titles, as ASC is the default sorting order in SQL and therefore optional. This multi-level sorting allows for a clear and structured presentation of books, making it easy to identify and differentiate between genres while prioritizing “Fiction” books and maintaining an orderly list of titles within each genre category.  Selecting book titles and genres, sorting by prioritizing ‘Fiction’ first, then by descending genre and ascending title   Sorting with Dynamic Columns Dynamic column sorting provides the flexibility to order your data based on different criteria depending on the presence or absence of values in certain columns. This technique is useful when you need to adapt your sorting logic to handle incomplete or varying data conditions.  In this section, we will explore how to implement dynamic column sorting using the CASE statement to achieve adaptable and responsive data ordering.  
Syntax Format:  To sort data dynamically based on the presence of values in columns, use the following syntax format:  SELECT column_1, column_2 FROM table_1 ORDER BY    CASE      WHEN column_1 IS NULL THEN column_2     ELSE column_1   END [ASC/DESC]; This format allows you to specify a primary sorting column and a fallback column for cases where the primary column is null, giving you flexibility in how data is organized.  Example: Sales sorted by ‘price’ if ‘cost’ is NULL, otherwise by ‘cost’  In this example, we sort sales records by ‘price’ when ‘cost’ is not provided. For records where ‘cost’ is available, we sort primarily by ‘cost’.  SELECT      sale_id   , price   , cost FROM sales ORDER BY      CASE        WHEN cost IS NULL THEN price       ELSE cost     END DESC; The CASE statement prioritizes ‘cost’ for sorting when it is not null. When ‘cost’ is null, it falls back to sorting by ‘price’. The DESC keyword arranges the results in descending order, displaying higher values first.  Sales sorted by ‘cost’ or ‘price’   Conclusion By applying these advanced sorting techniques, you can effectively manage and analyze your data, gaining deeper insights and making more informed decisions. Whether you’re dealing with large datasets or complex queries, mastering these methods will enhance your SQL skills and productivity.  &lt;/p&gt;
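&lt;p&gt;The three techniques above can be sketched end to end in a small script. The snippet below is a minimal, hypothetical sketch using Python’s built-in sqlite3 module as a lightweight stand-in for the PostgreSQL setup used in this series; the tables and sample rows are invented for illustration, while the ORDER BY clauses mirror the chapter’s examples.&lt;/p&gt;

```python
import sqlite3

# In-memory database with tiny stand-ins for the chapter's books and sales tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (book_title TEXT, book_genre TEXT)")
conn.executemany(
    "INSERT INTO books VALUES (?, ?)",
    [("Book C", "Fiction"), ("Book A", "Non-Fiction"),
     ("Book B", "Fiction"), ("Book D", "Biography")],
)
conn.execute("CREATE TABLE sales (sale_id INTEGER, price REAL, cost REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, 150.0, 40.0), (2, 120.0, None), (3, 110.0, 55.0)],
)

# Multi-level sort by column position: genre descending, then title (ASC is the default).
rows = conn.execute(
    "SELECT book_title, book_genre FROM books ORDER BY 2 DESC, 1"
).fetchall()

# Custom sort with CASE: 'Fiction' rows first, then the remaining genres descending.
prioritized = conn.execute(
    """SELECT book_title, book_genre FROM books
       ORDER BY CASE WHEN book_genre = 'Fiction' THEN 1 ELSE 2 END, 2 DESC, 1"""
).fetchall()

# Dynamic column sort: fall back to price whenever cost is NULL.
dynamic = conn.execute(
    """SELECT sale_id, price, cost FROM sales
       ORDER BY CASE WHEN cost IS NULL THEN price ELSE cost END DESC"""
).fetchall()
```

&lt;p&gt;With this sample data, rows lists Non-Fiction before Fiction before Biography, prioritized moves the two Fiction titles to the top, and dynamic places sale 2 first because its NULL cost falls back to its 120.0 price.&lt;/p&gt;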




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-6-elevating-data-presentation-advanced-sorting-techniques-for-deeper-insights/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 5: Advanced SQL Filtering - How to Refine Your Queries for Better Data Insights</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 23 Jan 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-5-advanced-sql-filtering-how-to-refine-your-queries-for-better-data-insights-5g0k</link>
      <guid>https://dev.to/yujin/sql-101-chapter-5-advanced-sql-filtering-how-to-refine-your-queries-for-better-data-insights-5g0k</guid>
      <description>&lt;p&gt;&lt;em&gt;This chapter covers intricate WHERE conditions, pattern matching, range filtering, and null checking, providing clear examples and explanations. Learn to leverage SQL's powerful capabilities for managing and analyzing data.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-5-advanced-sql-filtering-how-to-refine-your-queries-for-better-data-insight/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-5-advanced-sql-filtering-how-to-refine-your-queries-for-better-data-insight%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-5-advanced-sql-filtering-how-to-refine-your-queries-for-better-data-insight/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 5: Advanced SQL Filtering - How to Refine Your Queries for Better Data Insights
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          This chapter covers intricate WHERE conditions, pattern matching, range filtering, and null checking, providing clear examples and explanations. Learn to leverage SQL's powerful capabilities for managing and analyzing data.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;Introduction Advanced filtering techniques allow you to refine your queries with greater precision, enabling you to pinpoint specific data points based on complex conditions.  This chapter delves into several sophisticated methods for filtering your data using SQL. From handling intricate WHERE conditions to mastering pattern matching, range filtering, and null checking, these techniques provide a robust toolkit for managing and analyzing data. Each method will be explored with clear examples and explanations, illustrating how to leverage SQL’s powerful capabilities to meet diverse data needs. To illustrate these concepts, we’ll use the tables from Chapter 3.  Let’s dive into these advanced filtering techniques, starting with how to handle complex WHERE conditions.  Complex WHERE Conditions Complex WHERE conditions in SQL enable you to refine your query results by combining multiple criteria. These conditions leverage logical operators such as AND, OR, and NOT to filter records based on various attributes. Below, we explore several practical examples to demonstrate these techniques.  Syntax Format:  To apply complex WHERE conditions, use the following syntax format:  SELECT column_1 FROM table_1 WHERE condition_1 AND/OR/NOT condition_2; This format allows you to combine multiple conditions to retrieve precisely the data you need.  Example 1: Books with at least 300 pages and authored by someone with popularity above 3  In this example, we aim to find books that are not only lengthy but also written by highly popular authors. We achieve this by joining the books and authors tables and applying conditions on both the number of pages and the author’s popularity.  SELECT      b.book_title   , b.total_pages   , a.popularity FROM books b JOIN authors a    ON b.author_id = a.author_id WHERE b.total_pages &amp;gt;= 300   AND a.popularity &amp;gt; 3; This query joins the books and authors tables on the author_id column. 
It then filters for books with at least 300 pages and only includes those written by authors with a popularity score greater than 3. This ensures that both criteria are met for the results to be included.  Books with at least 300 pages and popular authors   Example 2: Books that are either in the ‘Biography’ genre or have at most 300 pages  This query targets books that fit into one of two categories: those belonging to the ‘Biography’ genre or those with at most 300 pages.  SELECT     book_title   , book_genre   , total_pages FROM books WHERE book_genre = 'Biography'   OR total_pages &amp;lt;= 300; By using the OR operator, this query retrieves books that either fall under the ‘Biography’ genre or have at most 300 pages. The OR operator broadens the search criteria to include more diverse results.  Books in the ‘Biography’ genre or with at most 300 pages   Example 3: Books not in the ‘Fiction’ and ‘Biography’ genres and having fewer than 800 pages  Here, we exclude books from the ‘Fiction’ and ‘Biography’ genres and focus on those with fewer than 800 pages.  SELECT      book_title   , book_genre   , total_pages FROM books WHERE (NOT book_genre IN ('Fiction', 'Biography'))   AND total_pages &amp;lt; 800; This query uses the NOT operator to exclude books categorized as ‘Fiction’ and ‘Biography’ and applies an additional condition to select books with fewer than 800 pages. Combining NOT with AND allows for precise filtering of the dataset.  Books not in the ‘Fiction’ and ‘Biography’ genres and with fewer than 800 pages   Pattern Matching with LIKE The LIKE operator in SQL is a powerful tool for searching text fields based on specific patterns. It allows you to filter records by matching patterns using wildcard characters. This section demonstrates various ways to use LIKE for pattern matching.  
Syntax Format:  To perform pattern matching, use the following syntax format:  SELECT column_1 FROM table_1 WHERE column_1 LIKE 'pattern'; Here, 'pattern' represents the search pattern with wildcards to match various parts of the text.  SQL wildcards are used in LIKE clauses to search for a specified pattern in a column. Here are the most common SQL wildcards:  Percent (%): Represents zero, one, or multiple characters.  Example: WHERE name LIKE 'A%' finds any values that start with “A”. Example: WHERE name LIKE '%abc%' finds any values that contain “abc”. Underscore (_): Represents a single character.  Example: WHERE name LIKE 'A_' finds any values that have “A” as the first character and any single character as the second. Example: WHERE name LIKE '_n' finds any values where the second character is “n”. Example 1: Finding authors whose names start with “A”  SELECT author_name  FROM authors  WHERE author_name LIKE 'A%'; This retrieves the names of authors from the authors table whose names start with the letter ‘A’. The LIKE 'A%' condition is used to match any author_name that begins with ‘A’, followed by any sequence of characters.  Authors whose names start with “A”   Example 2: Finding authors whose names have “i” as the second character  SELECT author_name  FROM authors  WHERE author_name LIKE '_i%'; The query retrieves the names of authors from the authors table where the name contains the letter “i” as the second character. The underscore (_) in the LIKE '_i%' condition acts as a wildcard for any single character, and % matches any sequence of characters following “i”.  
Authors whose names have “i” as the second character   Example 3: Finding authors whose phone numbers end in “567”  SELECT      author_name   , phone FROM authors WHERE phone LIKE '%567'; This selects the names and phone numbers of authors from the authors table where the phone number ends with “567”. The % symbol is used as a wildcard to match any sequence of characters before “567”.  Authors whose phone numbers end in “567”   Example 4: Finding authors with “pine” in address  SELECT      author_name   , address FROM authors WHERE LOWER(address) LIKE '%pine%'; This query retrieves the names and addresses of authors from the authors table where the address contains the word “pine” (case-insensitive). The LOWER function is used to ensure that the search is not affected by letter case, and %pine% matches any address containing the substring “pine”.  Authors with “pine” in address   Range Filtering with BETWEEN The BETWEEN operator in SQL is a powerful tool for filtering records within a specific range. It allows you to specify a range of values to select records that fall between the given limits. This section explores various applications of range filtering using BETWEEN.  Syntax Format:  To filter records within a range, use the following syntax format:  SELECT column_1 FROM table_1 WHERE column_1 BETWEEN value_1 AND value_2; Here, value_1 and value_2 define the inclusive range for filtering records.  Example 1: Sales records with a sale date between January 1, 2022, and January 31, 2022  This query retrieves sales records that fall within a specified date range. This is useful for analyzing sales data within a particular timeframe.  SELECT * FROM sales WHERE date_sale BETWEEN '2022-01-01' AND '2022-01-31'; The BETWEEN operator is used to select sales records where the date_sale falls between January 1, 2022, and January 31, 2022. 
This allows you to focus on sales data within the month of January 2022.  Sales records with a sale date between January 1, 2022, and January 31, 2022   Example 2: Sales with prices ranging from $120 to $140  Here, we filter sales records to find those with prices within a specified range. This helps in analyzing sales within a certain price bracket.  SELECT * FROM sales WHERE price BETWEEN 120 AND 140; The BETWEEN operator filters sales records where the price is between $120 and $140. This range helps in evaluating sales within a particular price range.  Sales with prices ranging from $120 to $140   Null Checking with IS NULL and IS NOT NULL Handling null values is crucial in SQL to manage and analyze incomplete or missing data. The IS NULL and IS NOT NULL operators are used to filter records based on the presence or absence of data in specific columns.  Syntax Format for IS NULL  To find records where a column value is null, use the following syntax:  SELECT column_1 FROM table_1 WHERE column_1 IS NULL; This query retrieves rows where the specified column contains null values, indicating the absence of data.  Syntax Format for IS NOT NULL  To find records where a column value is not null, use the following syntax:  SELECT column_1 FROM table_1 WHERE column_1 IS NOT NULL; This query retrieves rows where the specified column contains non-null values, indicating the presence of data.  Example 1: Find authors who don’t have books  This query retrieves the names of authors who don’t have any books associated with them.  SELECT      a.author_name   , b.book_id FROM authors a LEFT JOIN books b USING(author_id) WHERE b.book_id IS NULL; It performs a left join between the authors and books tables on author_id and filters to show only those authors where book_id is NULL, indicating they don’t have any books listed.  
Authors who don’t have books   Example 2: List books with sales information  This query fetches the titles of books along with their corresponding sale IDs and sale dates.  SELECT      b.book_title   , s.sale_id   , s.date_sale FROM books b LEFT JOIN sales s USING(book_id) WHERE s.sale_id IS NOT NULL; It performs a left join between the books and sales tables using book_id, and filters the results to include only those entries where sale_id is not NULL, indicating that the book has been sold.  Books with sales information   Conclusion By applying these advanced filtering techniques, you can effectively manage and analyze your data, gaining deeper insights and making more informed decisions. Whether you’re dealing with large datasets or complex queries, mastering these methods will enhance your SQL skills and productivity.&lt;/p&gt;
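&lt;p&gt;The filtering patterns from this chapter can be exercised together in one short script. The snippet below is a minimal sketch using Python’s built-in sqlite3 module as a stand-in for the PostgreSQL setup used in this series; the sample rows are invented, and the fewer-than-800-pages condition is expressed with BETWEEN 1 AND 799 (equivalent for positive integer page counts).&lt;/p&gt;

```python
import sqlite3

# Tiny in-memory stand-ins for the chapter's authors and books tables (sample data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (author_id INTEGER PRIMARY KEY, author_name TEXT, phone TEXT);
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT,
                    book_genre TEXT, total_pages INTEGER, author_id INTEGER);
""")
conn.executemany("INSERT INTO authors VALUES (?, ?, ?)",
                 [(1, "Amy Taylor", "555-4567"), (2, "Bob Johnson", "555-9012"),
                  (3, "Mike Brown", "555-1567"), (4, "Sue Williams", "555-3456")])
conn.executemany("INSERT INTO books VALUES (?, ?, ?, ?, ?)",
                 [(1, "Book A", "Fiction", 250, 1), (2, "Book B", "Biography", 700, 2),
                  (3, "Book C", "Non-Fiction", 300, 3)])

# Combined conditions: NOT IN excludes two genres, and BETWEEN 1 AND 799
# stands in for the chapter's fewer-than-800-pages condition.
non_fiction_short = conn.execute(
    """SELECT book_title FROM books
       WHERE book_genre NOT IN ('Fiction', 'Biography')
         AND total_pages BETWEEN 1 AND 799"""
).fetchall()

# LIKE patterns: leading letter, second character, and suffix matches.
a_names = conn.execute(
    "SELECT author_name FROM authors WHERE author_name LIKE 'A%'").fetchall()
second_i = conn.execute(
    "SELECT author_name FROM authors WHERE author_name LIKE '_i%'").fetchall()
ends_567 = conn.execute(
    "SELECT author_name FROM authors WHERE phone LIKE '%567'").fetchall()

# BETWEEN is inclusive: books of exactly 250 or 700 pages both match.
mid_range = conn.execute(
    "SELECT book_title FROM books WHERE total_pages BETWEEN 250 AND 700").fetchall()

# IS NULL after a LEFT JOIN finds authors with no books at all.
no_books = conn.execute(
    """SELECT a.author_name FROM authors a
       LEFT JOIN books b ON a.author_id = b.author_id
       WHERE b.book_id IS NULL"""
).fetchall()
```

&lt;p&gt;With this sample data, only Book C passes the combined genre-and-pages filter, the underscore pattern matches Mike Brown, both phone numbers ending in 567 are found, all three books fall inside the inclusive 250–700 range, and the LEFT JOIN plus IS NULL surfaces Sue Williams as the author with no books.&lt;/p&gt;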




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-5-advanced-sql-filtering-how-to-refine-your-queries-for-better-data-insight/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>SQL 101 | Chapter 4: SQL Aliases - Learn How to Improve Query Readability and Maintainability</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 16 Jan 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-4-sql-aliases-learn-how-to-improve-query-readability-and-maintainability-3aj4</link>
      <guid>https://dev.to/yujin/sql-101-chapter-4-sql-aliases-learn-how-to-improve-query-readability-and-maintainability-3aj4</guid>
      <description>&lt;p&gt;&lt;em&gt;Learn how SQL aliases can make your queries cleaner and easier to understand. This article breaks down how to use aliases for columns and tables, with simple explanations and practical examples. Discover how these handy tools can help you write more readable and maintainable SQL code.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-4-sql-aliases-learn-how-to-improve-query-readability-and-maintainability/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-4-sql-aliases-learn-how-to-improve-query-readability-and-maintainability%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-4-sql-aliases-learn-how-to-improve-query-readability-and-maintainability/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 4: SQL Aliases - Learn How to Improve Query Readability and Maintainability
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Learn how SQL aliases can make your queries cleaner and easier to understand. This article breaks down how to use aliases for columns and tables, with simple explanations and practical examples. Discover how these handy tools can help you write more readable and maintainable SQL code.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;Making your queries easy to read and maintain is crucial. Aliases in SQL provide a convenient way to give your columns or tables temporary names, making your queries cleaner, more readable, and easier to maintain. In this article, we’ll explore the power of aliases, their syntax, and practical examples to illustrate their utility. By the end, you’ll have a solid grasp of how to use aliases effectively in your SQL queries.  What are Aliases in SQL? Aliases are temporary names assigned to columns or tables in a SQL query. They are particularly useful for:  Simplifying column and table names. Making complex queries more readable. Enhancing the clarity and maintainability of result sets. Aliases are created using the AS keyword, although it can be omitted in some SQL dialects.  Column Aliases Column aliases are used to rename a column in the result set. This is beneficial when you have columns with complex or non-intuitive names.  Syntax:  SELECT column_1 AS alias_name FROM table_1; Or more concise syntax:  SELECT column_1 alias_name FROM table_1; Example: Consider a table authors with a column author_name (for the example tables, see Chapter 3). To rename this column to full_name in the result set, you would write:  SELECT author_name full_name FROM authors; In this example, the author_name column is presented as full_name in the output.  Using full_name as an alias for author_name   Table Aliases Table aliases are used to rename a table in a query. This is especially useful in complex queries involving multiple tables.  Syntax:  SELECT column_1 FROM table_1 AS alias_name; Or more concise syntax:  SELECT column_1 FROM table_1 alias_name; Example: Simplifying table names in joins. Imagine you have two tables, authors and books. 
To list books along with their authors’ names, you can use table aliases:  SELECT    b.book_title,    a.author_name FROM books b JOIN authors a    ON b.author_id = a.author_id; Here, authors is aliased as a and books as b, making the query more concise and readable.  Table aliasing in SQL   Practical example: Using aliases in aggregation. If you want to calculate the total sales for each book and give a meaningful name to the aggregate column:  SELECT    b.book_title,    COUNT(s.sale_id) total_sales FROM books b JOIN sales s    ON b.book_id = s.book_id GROUP BY b.book_title; In this query, COUNT(s.sale_id) is aliased as total_sales, clearly indicating what the value represents.  Getting total sales for each book   Making Queries Maintainable Naming Conventions Using aliases allows you to follow consistent naming conventions across your queries. For instance, if you consistently use a for authors and b for books, anyone reading your queries will quickly understand which table is being referenced. This consistency can make maintaining and updating your queries much easier.  Reducing Complexity Aliases can also help reduce the complexity of your queries. Instead of repeatedly typing long table names, you can use shorter aliases, making the query easier to read and understand. This can be particularly useful when you have long and complicated table names.  Improving Performance While aliases themselves don’t directly improve performance, they can make it easier to identify areas for optimization. By clearly labeling columns and tables, you can more easily spot inefficiencies in your queries and address them.  Key Points to Remember Aliases Improve Readability and Maintainability: They make complex queries easier to understand and manage by providing meaningful names. Temporary Names: Aliases are only valid within the context of the query in which they are defined. 
Optional AS Keyword: While AS is commonly used, some SQL dialects allow you to omit it. Conclusion Learning how to use aliases in SQL is a fundamental skill that can greatly enhance the readability and maintainability of your queries. Whether you’re simplifying column names, making complex joins more readable, or providing clear labels for aggregated data, aliases are an invaluable tool in your SQL toolkit.  By incorporating aliases into your SQL practice, you’ll find that your queries become not only more efficient to write but also easier for others (and your future self) to read and understand. So go ahead, start using aliases in your queries and experience the difference they make!  References &lt;a href="https://www.w3schools.com/sql/sql_alias.asp" rel="noopener noreferrer"&gt;https://www.w3schools.com/sql/sql_alias.asp&lt;/a&gt; &lt;a href="https://www.geeksforgeeks.org/sql-tutorial/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/sql-tutorial/&lt;/a&gt;&lt;/p&gt;
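&lt;p&gt;The three alias styles from this article can be checked in one short script. The snippet below is a minimal sketch using Python’s built-in sqlite3 module as a stand-in for the PostgreSQL setup used in this series; the tables and rows are invented sample data, and the cursor metadata shows that a column alias really does rename the column in the result set.&lt;/p&gt;

```python
import sqlite3

# In-memory stand-ins for the authors, books, and sales tables (sample data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (author_id INTEGER PRIMARY KEY, author_name TEXT);
CREATE TABLE books (book_id INTEGER PRIMARY KEY, book_title TEXT, author_id INTEGER);
CREATE TABLE sales (sale_id INTEGER PRIMARY KEY, book_id INTEGER);
""")
conn.execute("INSERT INTO authors VALUES (1, 'Jane Doe')")
conn.executemany("INSERT INTO books VALUES (?, ?, ?)",
                 [(1, "Book A", 1), (2, "Book B", 1)])
conn.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 1), (2, 1), (3, 2)])

# Column alias: the result set reports the column as full_name, not author_name.
cur = conn.execute("SELECT author_name AS full_name FROM authors")
result_column = cur.description[0][0]

# Table aliases keep a join readable: b for books, a for authors.
rows = conn.execute(
    """SELECT b.book_title, a.author_name
       FROM books b JOIN authors a ON b.author_id = a.author_id"""
).fetchall()

# An alias on an aggregate gives the computed column a meaningful name.
totals = conn.execute(
    """SELECT b.book_title, COUNT(s.sale_id) AS total_sales
       FROM books b JOIN sales s ON b.book_id = s.book_id
       GROUP BY b.book_title"""
).fetchall()
```

&lt;p&gt;Here result_column comes back as full_name, and totals pairs each title with its sale count under the self-describing name total_sales.&lt;/p&gt;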




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-4-sql-aliases-learn-how-to-improve-query-readability-and-maintainability/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 3: Mastering Data Retrieval with SELECT Statements</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 09 Jan 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-3-mastering-data-retrieval-with-select-statements-3nnh</link>
      <guid>https://dev.to/yujin/sql-101-chapter-3-mastering-data-retrieval-with-select-statements-3nnh</guid>
      <description>&lt;p&gt;&lt;em&gt;Learn how to master data retrieval with SQL in this comprehensive guide. Explore SELECT basics, filtering with WHERE, sorting with ORDER BY, limiting results with LIMIT, aggregating data with GROUP BY, and joining tables with JOIN. Ideal for those looking to deepen their understanding of SQL.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-3-mastering-data-retrieval-with-select-statements/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-3-mastering-data-retrieval-with-select-statements%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-3-mastering-data-retrieval-with-select-statements/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 3: Mastering Data Retrieval with SELECT Statements
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Learn how to master data retrieval with SQL in this comprehensive guide. Explore SELECT basics, filtering with WHERE, sorting with ORDER BY, limiting results with LIMIT, aggregating data with GROUP BY, and joining tables with JOIN. Ideal for those looking to deepen their understanding of SQL.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Structured Query Language (SQL) is the backbone of managing and manipulating data within relational databases. One of its most fundamental operations is retrieving data, and the SELECT statement is the key player. In this guide, we’ll delve into the essentials of data retrieval: SELECT basics, filtering with WHERE, sorting with ORDER BY, limiting results with LIMIT, aggregating data with GROUP BY, and finally the power of joining tables with JOIN.&lt;/p&gt;

&lt;h2&gt;Preparation&lt;/h2&gt;

&lt;p&gt;Before we begin, we need to create tables to practice SQL SELECT statements on. We are going to use pgAdmin as our SQL playground; please refer to the previous chapter if you haven’t installed it yet. There are three tables we are going to use: books, authors, and sales. For now, don’t worry about the syntax used to prepare these tables, as it will be explained in the next chapter.&lt;/p&gt;

&lt;p&gt;If you prefer to populate the tables using CSV files, you can download them here: books.csv, authors.csv, sales.csv.&lt;/p&gt;

&lt;p&gt;Create and populate the books table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- create the "books" table
CREATE TABLE books (
  book_id SERIAL PRIMARY KEY,
  book_title VARCHAR(255) NOT NULL,
  book_genre VARCHAR(255),
  book_summary TEXT,
  total_pages INT,
  author_id INT
);

-- insert 100 sample records into the "books" table
INSERT INTO books (book_title, book_genre, book_summary, total_pages, author_id)
SELECT
  CONCAT('Book ', seq) AS book_title,
  CASE seq % 3
    WHEN 0 THEN 'Fiction'
    WHEN 1 THEN 'Non-Fiction'
    ELSE 'Biography'
  END AS book_genre,
  CONCAT('This is a summary of Book ', seq) AS book_summary,
  seq * 100 AS total_pages,
  (seq % 10) + 2 AS author_id
FROM
  generate_series(1, 100) AS seq;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Create and populate the authors table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- create the "authors" table
CREATE TABLE authors (
  author_id SERIAL PRIMARY KEY,
  author_name VARCHAR(255) NOT NULL,
  address VARCHAR(255),
  phone VARCHAR(20),
  popularity INT
);

-- insert 10 sample records into the "authors" table
INSERT INTO authors (author_name, address, phone, popularity)
VALUES
  ('John Smith', '123 Main St, Anytown USA', '555-1234', 3),
  ('Jane Doe', '456 Elm St, Anytown USA', '555-5678', 2),
  ('Bob Johnson', '789 Oak St, Anytown USA', '555-9012', 4),
  ('Sue Williams', '321 Pine St, Anytown USA', '555-3456', 1),
  ('Mike Brown', '654 Maple St, Anytown USA', '555-7890', 3),
  ('Mary Jones', '987 Cedar St, Anytown USA', '555-2345', 2),
  ('David Lee', '246 Birch St, Anytown USA', '555-6789', 4),
  ('Karen Davis', '369 Spruce St, Anytown USA', '555-0123', 1),
  ('Tom Wilson', '802 Walnut St, Anytown USA', '555-4567', 3),
  ('Amy Taylor', '135 Cherry St, Anytown USA', '555-8901', 2);

-- insert 1 sample record into the "authors" table
INSERT INTO authors (author_id, author_name, address, phone, popularity)
VALUES
  (100, 'You Smith', '456 Main St, Anytown USA', '555-1267', 4);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Create and populate the sales table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- create the "sales" table
CREATE TABLE sales (
    sale_id SERIAL PRIMARY KEY,
    book_id INTEGER,
    timestamp_sale TIMESTAMPTZ,
    date_sale DATE,
    price DECIMAL(10, 2),
    cost DECIMAL(10, 2)
);

-- insert 100 sample records into the "sales" table
SELECT SETSEED(0.5);

INSERT INTO sales (book_id, timestamp_sale, date_sale, price, cost)
SELECT
    (FLOOR(RANDOM() * 100) + 1) AS book_id,
    (TIMESTAMP '2022-08-08 13:10:11' - ((RANDOM() * (365 * 24 * 60 * 60))::integer || ' seconds')::interval) AT TIME ZONE 'UTC' AS timestamp_sale,
    DATE '2022-08-08' - (RANDOM() * 365)::integer AS date_sale,
    ROUND((RANDOM() * 100 + 10)::numeric, 2) + 100 AS price,
    ROUND((RANDOM() * 50 + 5)::numeric, 2) AS cost
FROM
    generate_series(1, 100);

-- sync the generated date with the generated timestamp
UPDATE sales
SET date_sale = (
  SELECT date(timestamp_sale)
  FROM sales AS t2
  WHERE t2.sale_id = sales.sale_id
);
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;SELECT Basics&lt;/h2&gt;

&lt;p&gt;At its core, the SELECT statement fetches data from a specified table. The syntax is straightforward:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2 FROM table_1;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This statement selects specific columns from a table, providing a structured view of the data.&lt;/p&gt;

&lt;p&gt;Example: this query retrieves the book_id, book_title, and total_pages columns from the books table.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT book_id, book_title, total_pages FROM books;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: the result of the query, displaying book IDs, titles, and total pages from the books table.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Filtering Data Using the WHERE Clause&lt;/h2&gt;

&lt;p&gt;To refine your results, the WHERE clause comes into play. It filters rows based on specified conditions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2 FROM table_1 WHERE condition;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example: this query retrieves book_id, book_title, and total_pages from the books table, but only for books where total_pages is greater than 500.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT book_id, book_title, total_pages FROM books WHERE total_pages &amp;gt; 500;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: the result showing books with more than 500 pages.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Sorting Results Using the ORDER BY Clause&lt;/h2&gt;

&lt;p&gt;ORDER BY sorts the result set by one or more columns, in ascending (ASC) or descending (DESC) order:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2 FROM table_1 ORDER BY column_1 ASC/DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example: this query retrieves book_id, book_title, and total_pages from the books table and sorts the results by total_pages in descending order, so the books with the most pages appear first.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT book_id, book_title, total_pages FROM books ORDER BY total_pages DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: the result showing books ordered by total pages in descending order.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Limiting Results Using the LIMIT Clause&lt;/h2&gt;

&lt;p&gt;When dealing with large datasets, LIMIT is crucial. It restricts the number of rows returned by a query:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2 FROM table_1 LIMIT number_of_rows;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example: this query retrieves book_id, book_title, and total_pages from the books table, limited to the first 8 records.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT book_id, book_title, total_pages FROM books LIMIT 8;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: the result showing the first 8 books from the table.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Aggregating Data Using the GROUP BY Clause&lt;/h2&gt;

&lt;p&gt;GROUP BY aggregates data based on specified columns. It is often paired with aggregate functions such as COUNT, SUM, and AVG:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, aggregate_function(column_2) FROM table_1 GROUP BY column_1;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example: this query counts the number of books in each genre by grouping the results on book_genre.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  book_genre,
  COUNT(book_id)
FROM books
GROUP BY book_genre;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: the result set showing the number of books per genre.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Joining Tables Using the JOIN or INNER JOIN Clause&lt;/h2&gt;

&lt;p&gt;In a relational database, data is often distributed across multiple tables. INNER JOIN (or simply JOIN) combines rows from two or more tables based on a related column:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT column_1, column_2 FROM table_1 INNER JOIN table_2 ON table_1.column_x = table_2.column_x;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example: this query retrieves book_id, book_title, and author_name by joining the books table with the authors table on matching author_id values.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  books.book_id,
  books.book_title,
  authors.author_name
FROM books
INNER JOIN authors
  ON books.author_id = authors.author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: the result showing book details along with author names.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Types of Joins&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;FULL JOIN&lt;/strong&gt;: returns all rows from both tables, matching and non-matching. A row without a match in the other table is still included in the result.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;INNER JOIN or JOIN&lt;/strong&gt;: returns only the matching rows from both tables. Rows without a match in the other table are excluded.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;LEFT JOIN&lt;/strong&gt;: returns all rows from the left (first) table and the matching rows from the right (second) table. Rows without a match in the right table are still included.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RIGHT JOIN&lt;/strong&gt;: returns all rows from the right (second) table and the matching rows from the left (first) table. Rows without a match in the left table are still included.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CROSS JOIN (or ,)&lt;/strong&gt;: returns the Cartesian product of both tables, generating all possible combinations of rows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UNION ALL&lt;/strong&gt;: combines the results of two or more SELECT statements into a single result set.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UNION&lt;/strong&gt;: combines the results of two or more SELECT statements into a single result set and removes duplicate rows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Image: visualization of join types.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Inspecting the LEFT JOIN on a One-to-Many Relationship&lt;/h2&gt;

&lt;p&gt;In this query, we examine how a LEFT JOIN operates on a one-to-many relationship between two tables, authors and books, where one author can be associated with multiple books.&lt;/p&gt;

&lt;p&gt;The LEFT JOIN ensures that all records from the authors table are included in the result set, regardless of whether there are matching records in the books table. If an author has no books, their information still appears in the output, with NULL in the book_title column.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT
  authors.author_id,
  authors.author_name,
  books.book_title
FROM authors
LEFT JOIN books
  USING (author_id)
ORDER BY author_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Image: NULL values where an author has no books.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image: a diagram illustrating how the LEFT JOIN combines the authors (Table 1) and books (Table 2) tables on the author_id column.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Quiz&lt;/h2&gt;

&lt;p&gt;Now, try the quiz below to deepen your understanding. List the top 3 books that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;are written by the most popular authors;&lt;/li&gt;
&lt;li&gt;have more than 500 pages, with a preference for those closer to 500;&lt;/li&gt;
&lt;li&gt;include the following columns: book_id, book_title, total_pages, author_name, popularity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Mastering data retrieval in SQL opens a world of possibilities for efficiently extracting insights from databases. By understanding SELECT basics, filtering with WHERE, sorting with ORDER BY, limiting with LIMIT, aggregating with GROUP BY, and joining tables with JOIN, you empower yourself to tackle diverse data scenarios.&lt;/p&gt;

&lt;p&gt;In this guide, we’ve only begun to explore the possibilities. SQL is a powerful language, and the more you practice, the more of its potential you’ll discover. As you become more skilled at writing SQL queries, you’ll find it easier to turn raw data into valuable insights. So open your SQL console and dive into the world of data retrieval.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dataschool.com/how-to-teach-people-sql/sql-join-types-explained-visually/" rel="noopener noreferrer"&gt;https://dataschool.com/how-to-teach-people-sql/sql-join-types-explained-visually/&lt;/a&gt;&lt;/p&gt;
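The UNION vs. UNION ALL distinction described in the join-types section is easy to verify hands-on. As an aside (not part of the original tutorial), here is a minimal sketch using Python's built-in sqlite3 module rather than PostgreSQL, purely so it runs without a server; the fiction and bestsellers tables are invented for this illustration and are not part of the tutorial's schema:

```python
import sqlite3

# Hypothetical tables with one overlapping title ("Book 2");
# these are NOT the tutorial's books/authors/sales tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fiction (title TEXT)")
conn.execute("CREATE TABLE bestsellers (title TEXT)")
conn.executemany("INSERT INTO fiction VALUES (?)", [("Book 1",), ("Book 2",)])
conn.executemany("INSERT INTO bestsellers VALUES (?)", [("Book 2",), ("Book 3",)])

# UNION ALL keeps every row, including duplicates across the two SELECTs.
union_all = conn.execute(
    "SELECT title FROM fiction UNION ALL SELECT title FROM bestsellers"
).fetchall()

# UNION removes duplicate rows from the combined result set.
union = conn.execute(
    "SELECT title FROM fiction UNION SELECT title FROM bestsellers"
).fetchall()

print(len(union_all))  # 4 rows: 'Book 2' appears twice
print(len(union))      # 3 rows: the duplicate is removed
```

The same two queries behave identically in PostgreSQL, since UNION and UNION ALL are ANSI SQL.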




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-3-mastering-data-retrieval-with-select-statements/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 2: Setting Up Your Database Environment</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 02 Jan 2025 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-2-setting-up-your-database-environment-5hk7</link>
      <guid>https://dev.to/yujin/sql-101-chapter-2-setting-up-your-database-environment-5hk7</guid>
      <description>&lt;p&gt;&lt;em&gt;Setting up a robust and efficient database environment is a crucial step for businesses and individuals alike. Whether you're managing data for a small project or a large-scale enterprise application, getting your database environment right is essential for smooth operations. In this guide, we'll walk you through the key steps involved in setting up your database environment, including choosing the right Database Management System (DBMS), installing it along with a SQL client, and creating your first database.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-2-setting-up-your-database-environment-a-step-by-step-guide/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-2-setting-up-your-database-environment-a-step-by-step-guide%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-2-setting-up-your-database-environment-a-step-by-step-guide/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 2: Setting Up Your Database Environment
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Setting up a robust and efficient database environment is a crucial step for businesses and individuals alike. Whether you're managing data for a small project or a large-scale enterprise application, getting your database environment right is essential for smooth operations. In this guide, we'll walk you through the key steps involved in setting up your database environment, including choosing the right Database Management System (DBMS), installing it along with a SQL client, and creating your first database.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;h2&gt;Choosing a Database Management System (DBMS)&lt;/h2&gt;

&lt;p&gt;Before diving into the technical aspects of setting up your database environment, you need to make a fundamental decision: choosing the right Database Management System (DBMS). The DBMS is the software that will manage your data, and the choice you make here will significantly impact your project’s success. To make that choice, consider factors like scalability, compatibility, and features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: a system’s ability to handle increasing workloads or growing data volumes without compromising performance or availability. Scalability is a critical consideration because your application may start small but grow significantly over time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compatibility&lt;/strong&gt;: the DBMS’s ability to work seamlessly with your existing software stack, including your operating system, programming languages, and third-party tools. Incompatibilities can lead to integration issues, increased development time, and maintenance challenges.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Features&lt;/strong&gt;: the features offered by a DBMS define its capabilities and suitability for specific use cases, and they vary widely between database systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After taking these factors into account, you’ll find several DBMS options available, each with its own strengths and weaknesses. Here are a few popular choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MySQL&lt;/strong&gt;: an open-source relational DBMS known for its performance, reliability, and ease of use. A great choice for web applications and small to medium-sized projects.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;: another open-source relational DBMS that excels at complex queries and large datasets. A solid choice for applications that require scalability and data integrity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MongoDB&lt;/strong&gt;: a NoSQL database that is ideal for unstructured or semi-structured data, perfect for projects with rapidly evolving data schemas.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Microsoft SQL Server&lt;/strong&gt;: a robust DBMS with strong support for Windows-based applications, suitable for businesses heavily invested in the Microsoft ecosystem.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Oracle Database&lt;/strong&gt;: a powerful DBMS designed for large enterprises with high data volumes and complex requirements, offering advanced features for data management and security.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Installing the DBMS and a SQL Client&lt;/h2&gt;

&lt;p&gt;Once you’ve selected a DBMS, the next step is to install it on your system. The installation process varies by operating system and DBMS, but the general outline is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Download the DBMS&lt;/strong&gt;: visit the official website of your chosen DBMS and download the installation package for your operating system. Most providers offer clear instructions and download links.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Install the DBMS&lt;/strong&gt;: run the installation package and follow the on-screen instructions. You’ll typically configure settings such as the installation directory, port numbers, and administrative passwords.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Install a SQL client&lt;/strong&gt;: in addition to the DBMS, you’ll need a SQL client to interact with the database. Some DBMSs ship with their own graphical clients, but there are also many third-party options; popular ones include SQL Server Management Studio, DBeaver, and MySQL Workbench.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configure the SQL client&lt;/strong&gt;: connect it to your newly installed DBMS by providing details such as the server address, port number, and authentication credentials.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test the connection&lt;/strong&gt;: verify that the client can connect to the DBMS without issues. If everything is set up correctly, you’re ready to move on.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this SQL 101 guide, PostgreSQL will be our DBMS, thanks to its extensive feature set, outstanding scalability, and the invaluable backing of a dedicated community, which make it an excellent choice for those new to SQL. Throughout this guide, I’ll try to adhere to ANSI SQL standards, so the knowledge you gain can be applied seamlessly across DBMS platforms.&lt;/p&gt;

&lt;p&gt;For the installation, let’s streamline the process with Docker. Follow the steps below.&lt;/p&gt;

&lt;h2&gt;Step 1: Install Docker&lt;/h2&gt;

&lt;p&gt;Begin by installing Docker, a containerization platform that will manage our DBMS setup. Download and install it from &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;, then validate the installation by running docker -v in your command line/terminal.&lt;/p&gt;

&lt;h2&gt;Step 2: Create a docker-compose.yml File&lt;/h2&gt;

&lt;p&gt;Next, create a docker-compose.yml file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;version: '3'
volumes:
  db_data:
  pgadmin-data:
  docker-entrypoint:

services:
  postgres:
    image: postgres:13.11-bullseye # feel free to update
    restart: always
    environment:
      POSTGRES_USER: admin # change it
      POSTGRES_PASSWORD: admin # change it
      POSTGRES_DB: sql101_by_ardenov # change it
    volumes:
      - docker-entrypoint:/docker-entrypoint-initdb.d
      - db_data:/var/lib/postgresql/data

  pgadmin:
    image: dpage/pgadmin4:7.7 # feel free to update
    depends_on:
      - postgres
    ports:
      - "8001:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com # change it
      PGADMIN_DEFAULT_PASSWORD: admin # change it
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Step 3: Install the DBMS&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Step 4: Validate the DBMS Installation&lt;/h2&gt;

&lt;p&gt;Visit the SQL client by typing localhost:8001 in your browser or by clicking the link in Docker Desktop. You should see the pgAdmin login page; enter the login credentials from the YAML file.&lt;/p&gt;

&lt;h2&gt;Creating a Database&lt;/h2&gt;

&lt;p&gt;With your DBMS and SQL client in place, it’s time to create your first database. Creating a database involves defining its structure and schema, which determine how data is organized and stored. Here’s a simplified guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Launch your SQL client.&lt;/strong&gt; Open your SQL client and connect to the DBMS using the credentials you provided during installation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a new database.&lt;/strong&gt; In the query tool, create a new database by running a SQL command like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE DATABASE YourDatabaseName;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace YourDatabaseName with the name you want to give your database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Define tables &amp;amp; insert data.&lt;/strong&gt; Once your database is created, you can define tables and store your data; we will cover the details in the next chapter. Each table should have a defined structure, including columns and data types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Query your database.&lt;/strong&gt; You can run SQL SELECT statements to retrieve and manipulate data in your database. For now, we use one to check that your setup is complete:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT 'Hello World from SQL' AS output
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you see that output, congratulations! You’ve successfully set up your database environment: you chose a DBMS, installed it along with a SQL client, and created your first database.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Setting up your database environment is a critical first step in any data-driven project. By carefully selecting the right DBMS, installing it correctly, and creating a well-structured database, you lay the foundation for efficient data management and application development.&lt;/p&gt;

&lt;p&gt;In the next chapter, we will retrieve data with the SELECT statement. Until next time, happy querying!&lt;/p&gt;
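As a side note not found in the original article: the final "Hello World" check is plain ANSI SQL, so you can rehearse it even before the Docker/PostgreSQL stack is running. The sketch below uses Python's built-in sqlite3 module purely as a serverless stand-in for the pgAdmin query tool:

```python
import sqlite3

# Serverless stand-in for the pgAdmin query tool: sqlite3 instead of PostgreSQL.
conn = sqlite3.connect(":memory:")
row = conn.execute("SELECT 'Hello World from SQL' AS output").fetchone()
print(row[0])  # Hello World from SQL
```

In pgAdmin, the identical statement run against your new database should show the same single-row, single-column result.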




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-2-setting-up-your-database-environment-a-step-by-step-guide/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
    <item>
      <title>SQL 101 | Chapter 1: Understanding SQL and Databases</title>
      <dc:creator>Yujin</dc:creator>
      <pubDate>Thu, 26 Dec 2024 00:08:00 +0000</pubDate>
      <link>https://dev.to/yujin/sql-101-chapter-1-understanding-sql-and-databases-2h8j</link>
      <guid>https://dev.to/yujin/sql-101-chapter-1-understanding-sql-and-databases-2h8j</guid>
      <description>&lt;p&gt;&lt;em&gt;How do databases serve as efficient repositories of structured data, and how does SQL (Structured Query Language) enable businesses to store, manage, and retrieve vast amounts of information with ease?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In this article, we will delve into the fundamentals of SQL and databases, exploring key concepts such as relational databases, tables, rows, and columns.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Improve your readability in:
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-1-understanding-sql-and-databases/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Fposts%2Fdata-science-and-machine-learning%2Fsql-101%2Fchapter-1-understanding-sql-and-databases%2Fimages%2Fythumb.jpg" height="376" class="m-0" width="668"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-1-understanding-sql-and-databases/" rel="noopener noreferrer" class="c-link"&gt;
          SQL 101 | Chapter 1: Understanding SQL and Databases
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          In the world of information technology, databases are the backbone of virtually every application and system. They serve as efficient repositories of structured data, enabling businesses to store, manage, and retrieve vast amounts of information with ease. To interact with these databases, we rely on a powerful and versatile language known as SQL (Structured Query Language).
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.ardenov.com%2Ffavicon-32x32.png%3Fv%3D2" width="32" height="32"&gt;
        blog.ardenov.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;In this article, we will delve into the fundamentals of SQL and databases, exploring key concepts such as relational databases, tables, rows, and columns.&lt;/p&gt;

&lt;h2&gt;What is SQL?&lt;/h2&gt;

&lt;p&gt;SQL, short for Structured Query Language, is a domain-specific language used for managing and manipulating data in relational databases. It was first introduced in the early 1970s by IBM researchers, and it has since become the standard language for working with databases across platforms and systems.&lt;/p&gt;

&lt;p&gt;SQL allows users to interact with databases using simple yet powerful commands. These commands can perform a wide range of operations, such as retrieving data, inserting new records, updating existing information, and deleting unwanted entries. Whether you are a database administrator, a software developer, or a data analyst, a solid understanding of SQL is indispensable for handling data efficiently.&lt;/p&gt;

&lt;h2&gt;Relational Databases&lt;/h2&gt;

&lt;p&gt;A relational database is a type of database that organizes and stores data in a structured manner, based on the principles of the relational model. Introduced by Dr. Edgar F. Codd in 1970, the relational model represents data in the form of tables, each comprising rows and columns.&lt;/p&gt;

&lt;p&gt;The beauty of a relational database lies in its ability to establish relationships between tables. These relationships define how data in one table relates to data in another, facilitating efficient data retrieval and minimizing duplication. As a result, relational databases are widely used in applications ranging from small projects to large enterprise systems.&lt;/p&gt;

&lt;h2&gt;Key Concepts: Tables, Rows, and Columns&lt;/h2&gt;

&lt;p&gt;In the context of SQL and databases, it’s essential to grasp the key concepts of tables, rows, and columns.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tables&lt;/strong&gt;: a table, which represents an entity, is the fundamental building block of a relational database. It is a collection of related data organized into rows and columns. Each table in a database has a unique name and a predefined structure that specifies the fields or attributes it contains.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rows&lt;/strong&gt;: also known as records or tuples, rows represent individual entries in a table. Each row contains data about a specific entity or item. For example, in a database for an online store, each row in the “Sales” table could represent a different sale of a product.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Columns&lt;/strong&gt;: columns are the vertical components of a table. They define the attributes or properties of the data being stored. For the “Product” table example, the columns could include attributes such as “ModelNumber,” “ProductName,” “ProductPrice,” and “UnitCost.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these three elements form the core structure of a relational database, providing a systematic way to organize and access data efficiently.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;SQL is the language that empowers developers and database administrators to interact with relational databases seamlessly. Understanding its core concepts, such as tables, rows, and columns, is crucial for anyone working with data in the IT industry. In this article, we explored the fundamentals of SQL and the principles behind relational databases.&lt;/p&gt;

&lt;p&gt;In the next chapter, we will set up our environment to practice hands-on SQL, applying the knowledge gained and taking our database skills to the next level. Happy querying, and let the SQL adventures begin!&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://javarevisited.blogspot.com/2017/02/top-6-sql-query-interview-questions-and-answers.html" rel="noopener noreferrer"&gt;https://javarevisited.blogspot.com/2017/02/top-6-sql-query-interview-questions-and-answers.html&lt;/a&gt;&lt;/p&gt;
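The table/row/column vocabulary above maps directly onto code. As an illustration added for this feed (not from the original article), the sketch below uses Python's built-in sqlite3 module, with a hypothetical product table whose columns mirror the article's example attributes:

```python
import sqlite3

# A table ("product") is the container; its predefined structure is its columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE product ("
    "  model_number TEXT,"    # each column is one attribute of the entity
    "  product_name TEXT,"
    "  product_price REAL,"
    "  unit_cost REAL"
    ")"
)

# A row (record/tuple) is one concrete entry in the table.
conn.execute("INSERT INTO product VALUES ('M-001', 'Widget', 19.99, 7.50)")

# cursor.description lists the column names of the result set.
cursor = conn.execute("SELECT * FROM product")
columns = [d[0] for d in cursor.description]
rows = cursor.fetchall()

print(columns)  # ['model_number', 'product_name', 'product_price', 'unit_cost']
print(rows)     # [('M-001', 'Widget', 19.99, 7.5)]
```

The same structure carries over to any relational DBMS: the CREATE TABLE statement fixes the columns, and each INSERT adds one row.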




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.ardenov.com/posts/data-science-and-machine-learning/sql-101/chapter-1-understanding-sql-and-databases/" rel="noopener noreferrer"&gt;https://blog.ardenov.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>database</category>
      <category>sql</category>
      <category>ardenov</category>
    </item>
  </channel>
</rss>
