<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Angélica Beatriz ROMERO ROQUE</title>
    <description>The latest articles on DEV Community by Angélica Beatriz ROMERO ROQUE (@angelica_romero).</description>
    <link>https://dev.to/angelica_romero</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2583576%2F22e2382c-8155-47b9-8629-09cf9b227664.jpg</url>
      <title>DEV Community: Angélica Beatriz ROMERO ROQUE</title>
      <link>https://dev.to/angelica_romero</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/angelica_romero"/>
    <language>en</language>
    <item>
      <title>Building a Real-Time Data Pipeline App with Change Data Capture Tools: Debezium, Kafka, and NiFi</title>
      <dc:creator>Angélica Beatriz ROMERO ROQUE</dc:creator>
      <pubDate>Tue, 17 Dec 2024 22:50:53 +0000</pubDate>
      <link>https://dev.to/angelica_romero/building-a-real-time-data-pipeline-app-with-change-data-capture-tools-debezium-kafka-and-nifi-6ig</link>
      <guid>https://dev.to/angelica_romero/building-a-real-time-data-pipeline-app-with-change-data-capture-tools-debezium-kafka-and-nifi-6ig</guid>
      <description>&lt;p&gt;Change Data Capture (CDC) has become a critical technique for modern data integration, allowing organizations to track and propagate data changes across different systems in real-time. In this article, we'll explore how to build a comprehensive CDC solution using powerful open-source tools like Debezium, Apache Kafka, and Apache NiFi&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Technologies in Our CDC Stack&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Debezium: An open-source platform for change data capture that supports multiple database sources.&lt;/li&gt;
&lt;li&gt;Apache Kafka: A distributed streaming platform that serves as the central nervous system for our data pipeline.&lt;/li&gt;
&lt;li&gt;Apache NiFi: A data flow management tool that helps us route, transform, and process data streams.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Architecture Overview&lt;/strong&gt;&lt;br&gt;
Our proposed architecture follows these key steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture database changes using Debezium&lt;/li&gt;
&lt;li&gt;Stream changes through Kafka&lt;/li&gt;
&lt;li&gt;Process and route data using NiFi&lt;/li&gt;
&lt;li&gt;Store or further process the transformed data&lt;/li&gt;
&lt;/ul&gt;
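&lt;p&gt;In real deployments, a Debezium connector is typically registered with Kafka Connect through its REST API rather than embedded in application code. A minimal MySQL connector configuration might look like the following (host, credentials, and names are placeholders):&lt;/p&gt;

```json
{
  "name": "mysql-cdc-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "localhost",
    "database.port": "3306",
    "database.user": "cdc_user",
    "database.password": "secure_password",
    "database.server.id": "184054",
    "database.server.name": "my-source-database",
    "database.include.list": "customer_db"
  }
}
```

&lt;p&gt;Posting this JSON to the Kafka Connect REST endpoint (&lt;code&gt;POST /connectors&lt;/code&gt;) starts the capture without any custom glue code.&lt;/p&gt;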

&lt;p&gt;&lt;strong&gt;Sample Implementation Approach&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Note: illustrative pseudocode. There is no official `debezium` Python
# package; Debezium connectors normally run inside Kafka Connect. This
# sketch shows the shape of the pipeline, not a drop-in implementation.
from confluent_kafka import Producer
import json
import debezium  # hypothetical module, for illustration only

class CDCDataPipeline:
    def __init__(self, source_db, kafka_bootstrap_servers):
        """
        Initialize CDC pipeline with database source and Kafka configuration

        :param source_db: Source database connection details
        :param kafka_bootstrap_servers: Kafka broker addresses
        """
        self.source_db = source_db
        self.kafka_servers = kafka_bootstrap_servers

        # Debezium connector configuration
        self.debezium_config = {
            'connector.class': 'io.debezium.connector.mysql.MySqlConnector',
            'tasks.max': '1',
            'database.hostname': source_db['host'],
            'database.port': source_db['port'],
            'database.user': source_db['username'],
            'database.password': source_db['password'],
            'database.server.name': 'my-source-database',
            'database.include.list': source_db['database']
        }

    def start_capture(self):
        """
        Start change data capture process
        """
        # Configure Kafka producer for streaming changes
        producer = Producer({
            'bootstrap.servers': self.kafka_servers,
            'client.id': 'cdc-change-producer'
        })

        # Set up Debezium connector
        def handle_record(record):
            """
            Process each captured change record
            """
            # Transform record and publish to Kafka
            change_event = {
                'source': record.source(),
                'operation': record.operation(),
                'data': record.after()
            }

            producer.produce(
                topic='database-changes', 
                value=json.dumps(change_event)
            )

        # Start Debezium connector
        debezium.start_connector(
            config=self.debezium_config,
            record_handler=handle_record
        )

# Example usage
source_database = {
    'host': 'localhost',
    'port': 3306,
    'username': 'cdc_user',
    'password': 'secure_password',
    'database': 'customer_db'
}

pipeline = CDCDataPipeline(
    source_database, 
    kafka_bootstrap_servers='localhost:9092'
)
pipeline.start_capture()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Detailed Implementation Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Database Source Configuration&lt;br&gt;
The first step involves configuring Debezium to connect to your source database. This requires:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Proper database user permissions&lt;/li&gt;
&lt;li&gt;Network connectivity&lt;/li&gt;
&lt;li&gt;Enabling binary logging (for MySQL)&lt;/li&gt;
&lt;/ul&gt;
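&lt;p&gt;For MySQL, this typically means creating a dedicated user with replication privileges and confirming that the binary log is enabled in row format (the user name and password below are placeholders):&lt;/p&gt;

```sql
-- Dedicated CDC user with the replication privileges Debezium requires
CREATE USER 'cdc_user'@'%' IDENTIFIED BY 'secure_password';
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
    ON *.* TO 'cdc_user'@'%';

-- Binary logging must be on and row-based for change capture
SHOW VARIABLES LIKE 'log_bin';        -- expect ON
SHOW VARIABLES LIKE 'binlog_format';  -- expect ROW
```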

&lt;ol start="2"&gt;
&lt;li&gt;Kafka as a Streaming Platform&lt;br&gt;
Apache Kafka acts as the central message broker, capturing and storing change events. Key considerations include:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Configuring topic partitions&lt;/li&gt;
&lt;li&gt;Setting up appropriate retention policies&lt;/li&gt;
&lt;li&gt;Implementing exactly-once processing semantics&lt;/li&gt;
&lt;/ul&gt;
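&lt;p&gt;Partitioning and retention can be set when the topic is created; with the stock Kafka CLI that might look like this (topic name, partition count, replication factor, and retention are illustrative):&lt;/p&gt;

```shell
# Create the changes topic with explicit partitioning and retention
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic database-changes \
  --partitions 6 \
  --replication-factor 3 \
  --config retention.ms=604800000
```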

&lt;ol start="3"&gt;
&lt;li&gt;Data Transformation with NiFi&lt;br&gt;
Apache NiFi provides powerful data routing and transformation capabilities:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Filter and route change events&lt;/li&gt;
&lt;li&gt;Apply data enrichment&lt;/li&gt;
&lt;li&gt;Handle complex transformation logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenges and Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Handling Schema Changes: Implement robust schema evolution strategies&lt;/li&gt;
&lt;li&gt;Performance Optimization: Use appropriate partitioning and compression&lt;/li&gt;
&lt;li&gt;Error Handling: Implement comprehensive error tracking and retry mechanisms&lt;/li&gt;
&lt;/ol&gt;
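&lt;p&gt;As a concrete illustration of the error-handling point, a retry-with-exponential-backoff helper might look like the following. This is a generic sketch, independent of any Kafka or Debezium client library; the backoff delays are collected rather than slept so the example runs instantly, whereas a real pipeline would wait between attempts:&lt;/p&gt;

```javascript
// Generic retry-with-backoff sketch (not tied to any specific client library).
function withRetries(operation, maxAttempts, baseDelayMs) {
    const delays = [];
    let attempt = 0;
    for (;;) {
        attempt += 1;
        try {
            return { result: operation(), attempts: attempt, delays: delays };
        } catch (err) {
            if (attempt === maxAttempts) throw err; // give up after the final attempt
            // Exponential backoff: base, 2x base, 4x base, ...
            delays.push(baseDelayMs * Math.pow(2, attempt - 1));
        }
    }
}

// Example: an operation that fails twice, then succeeds
let calls = 0;
function flakyPublish() {
    calls += 1;
    if (calls !== 3) throw new Error('transient broker error');
    return 'delivered';
}

const outcome = withRetries(flakyPublish, 5, 100);
console.log(outcome.result, outcome.attempts); // delivered 3
```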

&lt;p&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I've created a sample implementation that you can explore and use as a reference. The complete code and additional documentation can be found at:&lt;br&gt;
GitHub Repository: &lt;a href="https://github.com/Angelica-R/cdc-data-pipeline" rel="noopener noreferrer"&gt;https://github.com/Angelica-R/cdc-data-pipeline&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Building a Change Data Capture solution requires careful architectural design and selection of appropriate tools. By leveraging Debezium, Kafka, and NiFi, you can create a robust, scalable data integration platform that provides real-time insights into your data changes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building an App with a Cloud NoSQL Database (Avoiding DynamoDB or Cosmos DB)</title>
      <dc:creator>Angélica Beatriz ROMERO ROQUE</dc:creator>
      <pubDate>Tue, 17 Dec 2024 22:37:37 +0000</pubDate>
      <link>https://dev.to/angelica_romero/building-an-app-with-a-cloud-nosql-database-avoiding-dynamodb-or-cosmos-db-jkf</link>
      <guid>https://dev.to/angelica_romero/building-an-app-with-a-cloud-nosql-database-avoiding-dynamodb-or-cosmos-db-jkf</guid>
      <description>&lt;p&gt;Cloud-based NoSQL databases have become increasingly popular due to their scalability, flexibility, and ability to handle unstructured data. However, some developers may need to avoid commonly used services like AWS DynamoDB or Azure Cosmos DB, either for cost, compatibility, or preference reasons. In this article, we'll explore how to build an app using an alternative cloud NoSQL database, Firebase Firestore, and demonstrate its implementation with sample code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use a Cloud NoSQL Database?&lt;/strong&gt;&lt;br&gt;
NoSQL databases are ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scalability: Easily handle large volumes of data across distributed systems.&lt;/li&gt;
&lt;li&gt;Flexibility: Store data in various formats (JSON, key-value, document, etc.).&lt;/li&gt;
&lt;li&gt;Real-Time Sync: Many NoSQL databases, like Firestore, provide real-time capabilities.&lt;/li&gt;
&lt;li&gt;Ease of Use: Simplified data modeling compared to relational databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we’ll use Firebase Firestore, a document-based database that offers real-time synchronization, scalability, and serverless management, making it an excellent alternative to DynamoDB or Cosmos DB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technologies We'll Use&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database: Firebase Firestore&lt;/li&gt;
&lt;li&gt;Frontend: React.js&lt;/li&gt;
&lt;li&gt;Backend: Node.js (optional)&lt;/li&gt;
&lt;li&gt;Cloud Environment: Firebase Hosting (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setting Up Firebase Firestore&lt;/strong&gt;&lt;br&gt;
Step 1: Create a Firebase Project&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Visit Firebase Console.&lt;/li&gt;
&lt;li&gt;Create a new project and enable Firestore under the "Build" section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 2: Install Firebase SDK&lt;br&gt;
Add Firebase to your project by installing the Firebase SDK in your React app:&lt;br&gt;
&lt;code&gt;npm install firebase&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Configure Firebase in Your App&lt;br&gt;
Create a firebaseConfig.js file to initialize Firebase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// firebaseConfig.js
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';

const firebaseConfig = {
    apiKey: "your-api-key",
    authDomain: "your-app.firebaseapp.com",
    projectId: "your-project-id",
    storageBucket: "your-project-id.appspot.com",
    messagingSenderId: "your-sender-id",
    appId: "your-app-id"
};

// Initialize Firebase
const app = initializeApp(firebaseConfig);
const db = getFirestore(app);

export default db;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Building the App&lt;/strong&gt;&lt;br&gt;
Project Structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cloud-no-sql-app/
│-- src/
│   ├── components/
│   │   ├── AddItem.js
│   │   ├── ItemList.js
│   └── firebaseConfig.js
│-- App.js
└── package.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Backend: Firestore CRUD Operations&lt;/strong&gt;&lt;br&gt;
Adding Data&lt;br&gt;
To add a document to a Firestore collection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { collection, addDoc } from 'firebase/firestore';
import db from './firebaseConfig';

const addItem = async (item) =&amp;gt; {
    try {
        const docRef = await addDoc(collection(db, 'items'), item);
        console.log('Document written with ID:', docRef.id);
    } catch (e) {
        console.error('Error adding document:', e);
    }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reading Data&lt;br&gt;
To fetch all documents from a Firestore collection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { collection, getDocs } from 'firebase/firestore';
import db from './firebaseConfig';

const fetchItems = async () =&amp;gt; {
    const querySnapshot = await getDocs(collection(db, 'items'));
    // Return plain objects (with document IDs) so callers such as ItemList can render them
    return querySnapshot.docs.map((doc) =&amp;gt; ({ id: doc.id, ...doc.data() }));
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deleting Data&lt;br&gt;
To delete a document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { doc, deleteDoc } from 'firebase/firestore';

const deleteItem = async (id) =&amp;gt; {
    await deleteDoc(doc(db, 'items', id));
    console.log('Document deleted with ID:', id);
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
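&lt;p&gt;Updating Data&lt;br&gt;
The sections above cover create, read, and delete; for complete CRUD, a document can also be updated with updateDoc from the same modular API. This helper is an addition to the article's code, following the same pattern as the others:&lt;br&gt;
&lt;/p&gt;

```javascript
import { doc, updateDoc } from 'firebase/firestore';
import db from './firebaseConfig';

// Update selected fields of an existing document
// (sketch; requires a live Firestore project)
async function updateItem(id, changes) {
    await updateDoc(doc(db, 'items', id), changes);
    console.log('Document updated with ID:', id);
}
```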



&lt;p&gt;&lt;strong&gt;Frontend: React Components&lt;/strong&gt; &lt;br&gt;
AddItem Component&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState } from 'react';
// addItem is the helper from the CRUD section above; it is assumed to live in a
// shared module (e.g. src/firestore.js), since firebaseConfig.js only exports db
import { addItem } from './firestore';

const AddItem = () =&amp;gt; {
    const [item, setItem] = useState('');

    const handleSubmit = async (e) =&amp;gt; {
        e.preventDefault();
        await addItem({ name: item });
        setItem('');
    };

    return (
        &amp;lt;form onSubmit={handleSubmit}&amp;gt;
            &amp;lt;input
                type="text"
                value={item}
                onChange={(e) =&amp;gt; setItem(e.target.value)}
                placeholder="Enter item name"
            /&amp;gt;
            &amp;lt;button type="submit"&amp;gt;Add Item&amp;lt;/button&amp;gt;
        &amp;lt;/form&amp;gt;
    );
};

export default AddItem;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ItemList Component&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState, useEffect } from 'react';
// fetchItems is the helper from the CRUD section above, from the same shared module
import { fetchItems } from './firestore';

const ItemList = () =&amp;gt; {
    const [items, setItems] = useState([]);

    useEffect(() =&amp;gt; {
        const getItems = async () =&amp;gt; {
            const data = await fetchItems();
            setItems(data);
        };
        getItems();
    }, []);

    return (
        &amp;lt;ul&amp;gt;
            {items.map((item) =&amp;gt; (
                &amp;lt;li key={item.id}&amp;gt;{item.name}&amp;lt;/li&amp;gt;
            ))}
        &amp;lt;/ul&amp;gt;
    );
};

export default ItemList;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Running the App&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start your development server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm start

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Add items and view them in real time via the Firestore Console or your app.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;&lt;br&gt;
You can find the complete source code for this project on GitHub:&lt;br&gt;
GitHub Repository: &lt;a href="https://github.com/Angelica-R/cloud-no-sql-app.git" rel="noopener noreferrer"&gt;https://github.com/Angelica-R/cloud-no-sql-app.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Using Firebase Firestore, you can easily build scalable applications without relying on DynamoDB or Cosmos DB. Firestore's real-time synchronization and simplicity make it a strong choice for projects that require a cloud-hosted NoSQL database. By following this tutorial, you’ve learned how to set up Firestore, perform CRUD operations, and integrate it into a React app.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Build an Application Without SQL Server Database (Avoiding Redis, MongoDB, and Prometheus)</title>
      <dc:creator>Angélica Beatriz ROMERO ROQUE</dc:creator>
      <pubDate>Tue, 17 Dec 2024 22:05:25 +0000</pubDate>
      <link>https://dev.to/angelica_romero/build-an-application-without-sql-server-database-avoiding-rprometheusedis-mongodb-and--3ko7</link>
      <guid>https://dev.to/angelica_romero/build-an-application-without-sql-server-database-avoiding-rprometheusedis-mongodb-and--3ko7</guid>
      <description>&lt;p&gt;In the world of software development, databases are one of the fundamental pillars for storing and managing information. However, there are cases where applications do not require a conventional database such as SQL Server or NoSQL solutions like Redis, MongoDB, or Prometheus. This can happen when the application is small, does not need persistent data, or when a simplified and lightweight solution is sought.&lt;/p&gt;

&lt;p&gt;In this article, we will explore how to build a functional application without using databases. We will use flat files (JSON) as temporary storage, which is useful for small projects or applications with no large scalability requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Reasons to Avoid Traditional Databases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although SQL Server, Redis, and MongoDB are popular choices, they can be unnecessary in the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small applications or prototypes.&lt;/li&gt;
&lt;li&gt;Temporary storage requirements.&lt;/li&gt;
&lt;li&gt;Avoiding costs associated with cloud services.&lt;/li&gt;
&lt;li&gt;Need for a simple and lightweight implementation.&lt;/li&gt;
&lt;li&gt;Lack of infrastructure or access to servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A common alternative is to use flat files, like JSON, to store data locally on the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Technologies to Use&lt;/strong&gt;&lt;br&gt;
For this example, we will build a simple application using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Language: Node.js (JavaScript)&lt;/li&gt;
&lt;li&gt;Storage: Local JSON files&lt;/li&gt;
&lt;li&gt;Framework: Express.js to create a simple HTTP server&lt;/li&gt;
&lt;li&gt;Development Tools: Visual Studio Code and npm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will not use any database management system or NoSQL solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Project Structure&lt;/strong&gt;&lt;br&gt;
The file structure will be straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-app/
│-- server.js
│-- data/
│   └── data.json
└── package.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;server.js: Main file where the Express server is implemented.&lt;/li&gt;
&lt;li&gt;data.json: Local storage file.&lt;/li&gt;
&lt;li&gt;package.json: Dependency configuration file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Code Implementation&lt;/strong&gt;&lt;br&gt;
4.1 Install Dependencies&lt;br&gt;
Initialize a Node.js project and install Express.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir my-app
cd my-app
npm init -y
npm install express
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.2 Create the JSON Data File&lt;/p&gt;

&lt;p&gt;In the data folder, create the data.json file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
        "id": 1,
        "name": "Juan",
        "age": 25
    },
    {
        "id": 2,
        "name": "Pedro",
        "age": 30
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4.3 Implement the Express Server&lt;/p&gt;

&lt;p&gt;In server.js, implement a server that reads and writes data to the JSON file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const fs = require('fs');
const app = express();
const PORT = 3000;

app.use(express.json());

const filePath = './data/data.json';

// Get all data
app.get('/users', (req, res) =&amp;gt; {
    fs.readFile(filePath, 'utf8', (err, data) =&amp;gt; {
        if (err) return res.status(500).send('Error reading data');
        res.json(JSON.parse(data));
    });
});

// Add a new user
app.post('/users', (req, res) =&amp;gt; {
    const newUser = req.body;

    fs.readFile(filePath, 'utf8', (err, data) =&amp;gt; {
        if (err) return res.status(500).send('Error reading data');

        const users = JSON.parse(data);
        users.push({ id: users.length + 1, ...newUser });

        fs.writeFile(filePath, JSON.stringify(users, null, 2), (err) =&amp;gt; {
            if (err) return res.status(500).send('Error saving data');
            res.status(201).send('User added successfully');
        });
    });
});

app.listen(PORT, () =&amp;gt; {
    console.log(`Server running at http://localhost:${PORT}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;5. How It Works&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Reading: The GET /users endpoint reads the JSON file and returns its content as a response.&lt;/li&gt;
&lt;li&gt;Data Writing: The POST /users endpoint adds a new user to the JSON file and saves the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;6. Testing the Application&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start the server:&lt;br&gt;
&lt;code&gt;node server.js&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Use Postman or your browser to test the endpoints:
&lt;ul&gt;
&lt;li&gt;GET &lt;a href="http://localhost:3000/users" rel="noopener noreferrer"&gt;http://localhost:3000/users&lt;/a&gt; returns the list of users.&lt;/li&gt;
&lt;li&gt;POST &lt;a href="http://localhost:3000/users" rel="noopener noreferrer"&gt;http://localhost:3000/users&lt;/a&gt; with a JSON body like &lt;code&gt;{ "name": "Luis", "age": 28 }&lt;/code&gt; adds a new user.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;7. Source Code on GitHub&lt;/strong&gt;&lt;br&gt;
You can find the complete code in the following GitHub repository:&lt;br&gt;
GitHub Repository: &lt;a href="https://github.com/Angelica-R/App-without-SQL-Server-Database.git" rel="noopener noreferrer"&gt;https://github.com/Angelica-R/App-without-SQL-Server-Database.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Conclusion&lt;/strong&gt;&lt;br&gt;
Building an application without databases like SQL Server, Redis, or MongoDB is a valid option for small and simple applications. Using JSON files provides a lightweight and easy-to-implement solution, allowing data management in a temporary or local form. This technique is ideal for prototypes or environments where a complex database is not needed.&lt;/p&gt;

&lt;p&gt;Furthermore, this approach allows for future scalability, as it is always possible to migrate to a traditional database when the project requires it.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>json</category>
      <category>node</category>
    </item>
  </channel>
</rss>
