<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: alexia cismaru</title>
    <description>The latest articles on DEV Community by alexia cismaru (@alexiacismaru).</description>
    <link>https://dev.to/alexiacismaru</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2351849%2Fe40745da-84a9-41d8-ad2f-1b7fd53f31e4.png</url>
      <title>DEV Community: alexia cismaru</title>
      <link>https://dev.to/alexiacismaru</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexiacismaru"/>
    <language>en</language>
    <item>
      <title>Using bounded contexts to build a Java application</title>
      <dc:creator>alexia cismaru</dc:creator>
      <pubDate>Thu, 07 Nov 2024 09:43:30 +0000</pubDate>
      <link>https://dev.to/alexiacismaru/using-bounded-contexts-to-build-a-java-application-1c5g</link>
      <guid>https://dev.to/alexiacismaru/using-bounded-contexts-to-build-a-java-application-1c5g</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6j5me90ou2szeha8lzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6j5me90ou2szeha8lzv.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are bounded contexts?
&lt;/h2&gt;

&lt;p&gt;A bounded context is one of the core patterns in Domain-Driven Design (DDD). It represents how to divide a large project into domains. This separation allows for flexibility and easier maintenance.&lt;/p&gt;
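
&lt;p&gt;For example, the amusement-park project built later in this article separates its bounded contexts into top-level directories (a sketch; the exact directory names are up to you):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;techtopia/
├── boundedContextA/        (Tickets)
│   ├── domain/
│   ├── ports/
│   ├── core/
│   ├── adapters/
│   └── events/
├── boundedContextB/        (Attractions)
└── boundedContextC/        (Entrance Gates)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;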

&lt;h2&gt;
  
  
  What is a Hexagonal Architecture?
&lt;/h2&gt;

&lt;p&gt;Hexagonal architecture separates the application's core from its external dependencies. It uses ports and adapters to decouple business logic from outside services. Making the business logic independent of frameworks, databases, or user interfaces allows the application to adapt easily to future requirements.&lt;/p&gt;

&lt;p&gt;The architecture is made up of three main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Business Model&lt;/strong&gt;: the business rules and core logic. It is completely isolated from external dependencies and only communicates through ports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ports&lt;/strong&gt;: the entry and exit points of the business model. They separate the core from the external layers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapters&lt;/strong&gt;: translate external interactions (HTTP requests, database operations) into something the core understands. There are in adapters, used for incoming communication, and out adapters, used for outgoing communication.&lt;/li&gt;
&lt;/ol&gt;
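
&lt;p&gt;As a minimal sketch of how the three pieces fit together (hypothetical names, not code from the Techtopia project below): the core defines a port as a plain interface, an adapter implements it, and the business logic depends only on the port.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Port: a plain interface owned by the business core.
interface NotificationPort {
    void send(String message);
}

// Out adapter: implements the port with a concrete technology
// (standard output here, standing in for e-mail, a queue, etc.).
class ConsoleNotificationAdapter implements NotificationPort {
    @Override
    public void send(String message) {
        System.out.println("[notification] " + message);
    }
}

// The core depends only on the port, never on a concrete adapter.
class GreetingService {
    private final NotificationPort notifications;

    GreetingService(NotificationPort notifications) {
        this.notifications = notifications;
    }

    void greet(String name) {
        notifications.send("Hello, " + name);
    }
}

public class PortsAndAdaptersSketch {
    public static void main(String[] args) {
        new GreetingService(new ConsoleNotificationAdapter()).greet("visitor");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Swapping the console adapter for, say, a database-backed one requires no change to GreetingService, which is exactly the decoupling hexagonal architecture aims for.&lt;/p&gt;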

&lt;h3&gt;
  
  
  Why use Hexagonal Architecture?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testability&lt;/strong&gt;: you can write unit tests for the business logic without mocking databases, external APIs, or frameworks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: you can easily swap out dependencies without affecting the core business logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: independent scaling of the layers, enhancing overall performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: different external systems can interact with the same core logic.&lt;/li&gt;
&lt;/ul&gt;
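
&lt;p&gt;To illustrate the testability point: a pure domain rule can be verified with nothing but plain assertions, since no database or framework is involved (a self-contained, hypothetical pricing rule, not code from the Techtopia project).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A pure domain rule: no I/O, no framework, trivially unit-testable.
class TicketPricing {
    static double priceFor(int age) {
        if (age &amp;lt; 0) throw new IllegalArgumentException("age must be non-negative");
        if (age &amp;lt; 4) return 0.0;   // toddlers enter free
        if (age &amp;lt; 18) return 15.0; // reduced rate for minors
        return 25.0;                  // standard adult rate
    }
}

public class TicketPricingTest {
    // Run with `java -ea TicketPricingTest` so assertions are enabled.
    public static void main(String[] args) {
        assert TicketPricing.priceFor(2) == 0.0;
        assert TicketPricing.priceFor(10) == 15.0;
        assert TicketPricing.priceFor(30) == 25.0;
        System.out.println("all pricing rules hold");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;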

&lt;h2&gt;
  
  
  Building an application using Hexagonal Architecture in Java
&lt;/h2&gt;

&lt;p&gt;This walkthrough builds a project using bounded contexts and hexagonal architecture in Java.&lt;/p&gt;

&lt;p&gt;The goal is to create a ticketing system for an amusement park called Techtopia. The project has 3 main bounded contexts: Tickets, Attractions, and Entrance Gates. Each bounded context has its own directory and includes components like in and out ports, adapters, use cases, etc.&lt;/p&gt;

&lt;p&gt;We will walk through the code process of buying a ticket for the park.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Define the Domain&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create a " domain " directory and include the business logic, free from any framework or external dependency.&lt;/p&gt;

&lt;p&gt;Create the "Ticket" entity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.domain;

import lombok.Getter;
import lombok.Setter;
import lombok.ToString;

import java.time.Duration;
import java.time.LocalDateTime;
import java.util.UUID;

@Getter
@Setter
@ToString
public class Ticket {
    private TicketUUID ticketUUID;
    private LocalDateTime start;
    private LocalDateTime end;
    private double price;
    private TicketAction ticketAction;
    private final Guest.GuestUUID owner;
    private ActivityWindow activityWindow;

    public record TicketUUID(UUID uuid) {
    }

    public Ticket(TicketUUID ticketUUID, Guest.GuestUUID owner) {
        this.ticketUUID = ticketUUID;
        this.owner = owner;
    }

    public Ticket(TicketUUID ticketUUID, LocalDateTime start, LocalDateTime end, double price, TicketAction ticketAction, Guest.GuestUUID owner) {
        this.ticketUUID = ticketUUID;
        this.start = start;
        this.end = end;
        this.price = price;
        this.ticketAction = ticketAction;
        this.owner = owner;
    }

    public Ticket(TicketUUID ticketUUID, LocalDateTime start, LocalDateTime end, double price, Guest.GuestUUID owner, ActivityWindow activityWindow) {
        this.ticketUUID = ticketUUID;
        this.start = start;
        this.end = end;
        this.price = price;
        this.owner = owner;
        this.activityWindow = activityWindow;
    }

    public void addTicketActivity(TicketActivity ticketActivity) {
        this.activityWindow.add(ticketActivity);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Moreover, create another domain class named &lt;strong&gt;"BuyTicket"&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.domain;

import org.springframework.stereotype.Component;

import java.time.LocalDateTime;
import java.util.UUID;

@Component
public class BuyTicket {
    public Ticket buyTicket(TicketAction ticketAction, LocalDateTime start, LocalDateTime end, double price, Guest.GuestUUID owner) {
        return new Ticket(new Ticket.TicketUUID(UUID.randomUUID()), start, end, price, ticketAction, owner);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;BuyTicket&lt;/em&gt; represents the logic for buying a ticket. By making it a separate Spring component, you isolate the ticket-buying logic in its own class, which can evolve independently of other components. This separation improves maintainability and makes the codebase more modular.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create ports&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the "ports/in" directory you create use cases. Here, we will make the use case where a ticket is bought.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.ports.in;

public interface BuyingATicketUseCase {
    void buyTicket(BuyTicketsAmountCommand buyTicketsAmountCommand);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a command record that carries the data needed to buy a ticket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.ports.in;

import java.boundedContextA.domain.Guest;
import java.boundedContextA.domain.TicketAction;

import java.time.LocalDateTime;

public record BuyTicketsAmountCommand(double price, TicketAction action, LocalDateTime start, LocalDateTime end, Guest.GuestUUID owner) {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, in the &lt;strong&gt;"ports/out"&lt;/strong&gt; directory you create ports that represent each step of buying said ticket. Create interfaces like &lt;strong&gt;"CreateTicketPort"&lt;/strong&gt;, &lt;strong&gt;"TicketLoadPort"&lt;/strong&gt;, &lt;strong&gt;"TicketUpdatePort"&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.ports.out;

import java.boundedContextA.domain.Ticket;

public interface TicketCreatePort {
    void createTicket(Ticket ticket);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Implement the use cases&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In a separate directory, named "core", implement the interface of the buying ticket use case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.core;

import java.boundedContextA.domain.BuyTicket;
import java.boundedContextA.ports.in.BuyTicketsAmountCommand;
import java.boundedContextA.ports.in.BuyingATicketUseCase;
import java.boundedContextA.ports.out.TicketCreatePort;
import lombok.AllArgsConstructor;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
@AllArgsConstructor
public class DefaultBuyingATicketUseCase implements BuyingATicketUseCase {
    final BuyTicket buyTicket;
    private final List&amp;lt;TicketCreatePort&amp;gt; ticketCreatePorts;

    @Override
    public void buyTicket(BuyTicketsAmountCommand buyTicketsAmountCommand) {
        var ticket = buyTicket.buyTicket(buyTicketsAmountCommand.action(), buyTicketsAmountCommand.start(), buyTicketsAmountCommand.end(), buyTicketsAmountCommand.price(), buyTicketsAmountCommand.owner());
        ticketCreatePorts.forEach(ticketCreatePort -&amp;gt; ticketCreatePort.createTicket(ticket));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Create adapters&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the &lt;strong&gt;"adapters/out"&lt;/strong&gt; directory, create JPA entities of the Ticket to mirror the domain. This is how the application communicates with the database and creates a table of the tickets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.adapters.out.db;

import java.boundedContextA.domain.TicketAction;
import jakarta.persistence.*;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import org.hibernate.annotations.JdbcTypeCode;

import java.sql.Types;
import java.time.LocalDateTime;
import java.util.UUID;

@Entity
@Table(schema = "boundedContextA", name = "tickets")
@Getter
@Setter
@NoArgsConstructor
public class TicketBoughtJpaEntity {
    @Id
    @JdbcTypeCode(Types.VARCHAR)
    private UUID uuid;

    public TicketBoughtJpaEntity(UUID uuid) {
        this.uuid = uuid;
    }

    @JdbcTypeCode(Types.VARCHAR)
    private UUID owner;

    @Column
    private LocalDateTime start;
    @Column
    private LocalDateTime end;

    @Column
    private double price;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget to create a repository for the entity. This repository communicates with the service layer, just as it would in any other architecture.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.adapters.out.db;

import org.springframework.data.jpa.repository.JpaRepository;

import java.util.Optional;
import java.util.UUID;

public interface TicketRepository extends JpaRepository&amp;lt;TicketBoughtJpaEntity, UUID&amp;gt; {
    Optional&amp;lt;TicketBoughtJpaEntity&amp;gt; findByOwner(UUID uuid);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;strong&gt;"adapters/in"&lt;/strong&gt; directory, create a controller of the Ticket. This application will communicate with external sources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.adapters.in;

import java.boundedContextA.ports.in.BuyTicketsAmountCommand;
import java.boundedContextA.ports.in.BuyingATicketUseCase;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class TicketsController {
    private final BuyingATicketUseCase buyingATicketUseCase;

    public TicketsController(BuyingATicketUseCase buyingATicketUseCase) {
        this.buyingATicketUseCase = buyingATicketUseCase;
    }

    @PostMapping("/ticket")
    public void buyTicket(@RequestBody BuyTicketsAmountCommand command) {
        try {
            buyingATicketUseCase.buyTicket(command);
        } catch (IllegalArgumentException e) {
            System.out.println("An IllegalArgumentException occurred: " + e.getMessage());
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Finalize the ticket-buying process&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To signify that the ticket was bought, create a record of the event in an &lt;strong&gt;"events"&lt;/strong&gt; directory.&lt;/p&gt;

&lt;p&gt;Events represent significant occurrences in the application that are important for the system to communicate to other systems or components. They serve as another way of communicating with the outside about a process that finished, a state that changed, or the need for further action.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA.events;

import java.time.LocalDateTime;
import java.util.UUID;

public record TicketIsBoughtEvent(UUID uuid, LocalDateTime start, LocalDateTime end) {
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't forget to include a main class to bootstrap the application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package java.boundedContextA;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BoundedContextAApplication {
    public static void main(String[] args) {
        SpringApplication.run(BoundedContextAApplication.class, args);
    }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a very brief explanation; for more in-depth code, and to see how to connect it to a React interface, check out this GitHub repository: &lt;a href="https://github.com/alexiacismaru/techtopia" rel="noopener noreferrer"&gt;https://github.com/alexiacismaru/techtopia&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Implementing this architecture in Java involves defining a clean core domain with business logic and interfaces, creating adapters to interact with external systems, and wiring everything together while keeping the core isolated.&lt;/p&gt;

&lt;p&gt;By following this architecture, your Java applications will be better structured, easier to maintain, and flexible enough to adapt to future changes.&lt;/p&gt;

</description>
      <category>java</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>How to build a website using React and Rest APIs (React basics explained)</title>
      <dc:creator>alexia cismaru</dc:creator>
      <pubDate>Wed, 06 Nov 2024 14:56:35 +0000</pubDate>
      <link>https://dev.to/alexiacismaru/how-to-build-a-website-using-react-and-rest-apis-react-basics-explained-5bf9</link>
      <guid>https://dev.to/alexiacismaru/how-to-build-a-website-using-react-and-rest-apis-react-basics-explained-5bf9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8ww1i3jvemedoft61ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8ww1i3jvemedoft61ul.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;React and TypeScript are powerful tools for building scalable, maintainable, and safe websites. React provides a flexible, component-based architecture, while TypeScript adds static typing to JavaScript, making code cleaner and easier to read. This article will guide you through setting up a simple website with React and TypeScript, covering the core concepts needed to get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Choose React with TypeScript?
&lt;/h2&gt;

&lt;p&gt;TypeScript is popular among JavaScript developers because it catches errors during development and makes code easier to understand and refactor. Together, the two are ideal for building modern, fast websites and applications with maintainable code that scales well.&lt;/p&gt;

&lt;p&gt;Check out the whole code on GitHub: &lt;a href="https://github.com/alexiacismaru/techtopia/tree/main/frontend" rel="noopener noreferrer"&gt;https://github.com/alexiacismaru/techtopia/tree/main/frontend&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic React concepts and how to use them to build a website
&lt;/h2&gt;

&lt;p&gt;Let’s build a website for a fictional amusement park called Techtopia. We will display elements like the attractions and their locations on a map, along with a landing page and a loading page. In addition, we will make it possible to add and delete elements on the page and to search for them based on a variable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;Create an empty React project by running this in the terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm create vite@latest reactproject --template react-ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the project and open the local URL that appears in the terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd reactproject
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Final project structure overview
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;reactproject/
├── node_modules/
├── public/
├── src/
│   ├── assets/
│   ├── components/
│   ├── context/
│   ├── hooks/
│   ├── model/
│   ├── services/
│   ├── App.css
│   ├── App.tsx
│   ├── index.css
│   ├── vite-env.d.ts
├── .gitignore
├── package.json
└── tsconfig.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Components
&lt;/h3&gt;

&lt;p&gt;Components are reusable building blocks of a webpage. A component can be part of a page, like the header or footer, or an entire page, like a list of users. A component is just like a JavaScript function, but it returns a rendered element.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export function Header() {
    return (
        &amp;lt;header style={{ display: 'block', width: '100%', top: 0, left: 0, zIndex: 'var(--header-and-footer)' }}&amp;gt;
            &amp;lt;div style={{
                borderBottom: '1px solid white',
                boxShadow: '',
                backgroundColor: 'transparent',
                paddingLeft: '1rem',
                paddingRight: '1rem',
                marginLeft: 'auto',
                marginRight: 'auto',
            }}&amp;gt;
                &amp;lt;div
                    style={{
                        display: 'flex',
                        justifyContent: 'space-between',
                        padding: '5px',
                        alignItems: 'baseline',
                    }}
                &amp;gt;
                    &amp;lt;a href='/techtopia' style={{
                        fontSize: '40px', fontFamily: 'MAROLLA__', color: 'black',
                        fontWeight: 'bold',
                    }}&amp;gt;Techtopia&amp;lt;/a&amp;gt;
                    &amp;lt;div style={{display: 'flex',
                        justifyContent: 'space-around',
                        padding: '5px',
                        alignItems: 'baseline',}}&amp;gt;
                        &amp;lt;a href='/refreshment-stands' style={{
                            marginRight: '10px', color: 'black'
                        }}&amp;gt;Refreshment stands&amp;lt;/a&amp;gt;
                        &amp;lt;a href='/attractions' style={{ marginRight: '10px', color: 'white'
                        }}&amp;gt;Attractions&amp;lt;/a&amp;gt;
                        &amp;lt;a href='/map' style={{ marginRight: '60px', color: 'white'
                        }}&amp;gt;Map&amp;lt;/a&amp;gt;
                    &amp;lt;/div&amp;gt;
                &amp;lt;/div&amp;gt;
            &amp;lt;/div&amp;gt;
        &amp;lt;/header&amp;gt;
    )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  JSX
&lt;/h3&gt;

&lt;p&gt;JSX stands for JavaScript XML; it lets you write HTML-like code inside .jsx files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Button sx={{padding: "10px", color: 'black'}} onClick={onClose}&amp;gt;X&amp;lt;/Button&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  TSX
&lt;/h3&gt;

&lt;p&gt;TSX is the file extension for TypeScript files that contain JSX syntax. With TSX you can write type-checked code using the familiar JSX syntax.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface RefreshmentStand {
    id: string;
    name: string;
    isOpen: boolean;
}

const Refreshment = (props: RefreshmentStand) =&amp;gt; {
  return (
    &amp;lt;div&amp;gt;
        &amp;lt;h1&amp;gt;{props.name}&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;{props.isOpen ? 'Open' : 'Closed'}&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;
  );
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Fragments
&lt;/h3&gt;

&lt;p&gt;Fragments let a component return multiple elements grouped together without creating extra DOM nodes.&lt;/p&gt;

&lt;p&gt;Next, let's fetch data from a Java backend (check out how to build the Java application in this article: &lt;a href="https://medium.com/@alexia.csmr/using-bounded-contexts-to-build-a-java-application-1c7995038d30" rel="noopener noreferrer"&gt;https://medium.com/@alexia.csmr/using-bounded-contexts-to-build-a-java-application-1c7995038d30&lt;/a&gt;). Start by installing Axios and setting the base backend URL of your application. Then, create a service function that uses GET to fetch all the attractions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import axios from 'axios'
import { POI } from '../model/POI'

const BACKEND_URL = 'http://localhost:8093/api'

export const getAttractions = async () =&amp;gt; {
    const url = BACKEND_URL + '/attractions'
    const response = await axios.get&amp;lt;POI[]&amp;gt;(url)
    return response.data
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This can be extended to fetch data based on parameters, or to POST, DELETE, and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const addAttraction = async (attractionData: Omit&amp;lt;POI, 'id'&amp;gt;) =&amp;gt; {
    const url = BACKEND_URL + '/addAttraction'
    const response = await axios.post(url, attractionData)
    return response.data
}

export const getAttraction = async (attractionId: string) =&amp;gt; {
    const url = BACKEND_URL + '/attractions'
    const response = await axios.get&amp;lt;POI&amp;gt;(`${url}/${attractionId}`)
    return response.data
}

export const getAttractionByTags = async (tags: string) =&amp;gt; {
    const url = BACKEND_URL + '/attractions'
    const response = await axios.get&amp;lt;POI[]&amp;gt;(`${url}/tags/${tags}`)
    return response.data
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  State
&lt;/h3&gt;

&lt;p&gt;State is a React object that holds data or information about a component. A component’s state can change over time, and whenever it does, the component re-renders.&lt;/p&gt;

&lt;p&gt;To get a single element from a list based on a URL parameter, you can use the useParams() hook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { id } = useParams()
const { isLoading, isError, attraction } = useAttraction(id!)
const { tag } = useParams()
const { isLoadingTag, isErrorTag, attractions } = useTagsAttractions(tag!)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Hooks
&lt;/h3&gt;

&lt;p&gt;As seen above, I’ve used &lt;em&gt;useAttraction()&lt;/em&gt; and &lt;em&gt;useTagsAttractions()&lt;/em&gt;. They are custom hooks and can be tailored to fetch any data you want; in this example, they fetch attractions based on their &lt;em&gt;ID&lt;/em&gt; or &lt;em&gt;tags&lt;/em&gt;. Hooks can only be called inside React function components, must be called at the top level of a component, and can’t be called conditionally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {useMutation, useQuery, useQueryClient} from '@tanstack/react-query'
import {POI} from "../model/./POI.ts";
import { addAttraction, getAttractions } from '../services/API.ts'
import { useContext } from 'react'

export function useAttractions() {
    const queryClient = useQueryClient()
    const {
        isLoading: isDoingGet,
        isError: isErrorGet,
        data: attractions,
    } = useQuery({
        queryKey: ['attractions'],
        queryFn: () =&amp;gt; getAttractions(),
    })

    const {
        mutate,
        isLoading: isDoingPost,
        isError: isErrorPost,
    } = useMutation((item: Omit&amp;lt;POI, 'id'&amp;gt;) =&amp;gt; addAttraction(item), {
        onSuccess: () =&amp;gt; {
            queryClient.invalidateQueries(['attractions'])
        },
    });

    return {
        isLoading: isDoingGet || isDoingPost,
        isError: isErrorGet || isErrorPost,
        attractions: attractions || [],
        addAttraction: mutate
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  isLoading and isError
&lt;/h3&gt;

&lt;p&gt;For a better user experience, it’s good to let the user know what’s happening, e.g. that the elements are loading or that an error occurred while doing so. These flags are first declared in the hook and then used in the component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const navigate = useNavigate()
const { isLoading, isError, attractions, addAttraction } = useAttractions()

if (isLoading) {
    return &amp;lt;Loader /&amp;gt;
}

if (isError) {
    return &amp;lt;Alert severity='error'&amp;gt;Error&amp;lt;/Alert&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also create a separate Loader or Alert component for a more customized website.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default function Loader() {
    return (
        &amp;lt;div&amp;gt;
            &amp;lt;img alt="loading..."
                 src="https://media0.giphy.com/media/RlqidJHbeL1sPMDlhZ/giphy.gif?cid=6c09b9522vr2magrjgn620u5mfz1ymnqhpvg558dv13sd0g8&amp;amp;ep=v1_stickers_related&amp;amp;rid=giphy.gif&amp;amp;ct=s"/&amp;gt;
            &amp;lt;h3&amp;gt;Loading...&amp;lt;/h3&amp;gt;
        &amp;lt;/div&amp;gt;
    )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when the page is loading, the user will see a loading animation on the screen.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mapping items (Lists and Keys)
&lt;/h3&gt;

&lt;p&gt;If you want to display all the elements in a list, you need to map over them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { useState } from 'react'
import { useNavigate } from 'react-router-dom'
import { useAttractions } from '../hooks/usePOI.ts'
import { POI } from '../model/./POI.ts'

export default function Attractions() {
    const navigate = useNavigate()
    const { isLoading, isError, attractions, addAttraction } = useAttractions()

    return (
      &amp;lt;div style={{ marginTop: '70px' }}&amp;gt;
          {attractions
              .map(({ id, name, image }: POI) =&amp;gt; (
                  &amp;lt;div key={id} onClick={() =&amp;gt; navigate(`/attractions/${id}`)}&amp;gt;
                      &amp;lt;div&amp;gt;
                          &amp;lt;img src={image} alt={name}/&amp;gt;
                          &amp;lt;h3&amp;gt;{name}&amp;lt;/h3&amp;gt;
                      &amp;lt;/div&amp;gt;
                  &amp;lt;/div&amp;gt;
              ))}
      &amp;lt;/div&amp;gt;
    )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a separate file where you declare the POI (point of interest) model and its fields.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// ../model/POI.ts

export interface POI {
    id: string;
    name: string;
    description: string;
    tags: string;
    ageGroup: string;
    image: string;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, you can create a type that will later be used to add new attractions through a form:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export type CreatePOI = Omit&amp;lt;POI, 'id'&amp;gt;; # id is automatically generated so we don't need to manually add it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding items
&lt;/h3&gt;

&lt;p&gt;We already created the service functions and hooks needed for this, so now we can build a form where the user fills in the attributes to add a new attraction to the webpage. This form was created using the &lt;a href="https://mui.com/" rel="noopener noreferrer"&gt;MUI&lt;/a&gt; framework. First I’ll show the whole code, then explain it in sections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {CreatePOI} from "../model/./POI.ts";
import {z} from 'zod';
import {zodResolver} from "@hookform/resolvers/zod";
import {Controller, useForm} from "react-hook-form";
import {
    Box,
    Button,
    Dialog,
    DialogActions,
    DialogContent,
    DialogTitle,
    TextField,
} from '@mui/material'

interface AttractionDialogProps {
    isOpen: boolean;
    onSubmit: (attraction: CreatePOI) =&amp;gt; void;
    onClose: () =&amp;gt; void;
}

const itemSchema: z.ZodType&amp;lt;CreatePOI&amp;gt; = z.object({
    name: z.string().min(2, 'Name must be at least 2 characters'),
    description: z.string(),
    tags: z.string(),
    ageGroup: z.string(),
    image: z.string().url(),
})

export function AddAttractionDialog({isOpen, onSubmit, onClose}: AttractionDialogProps) {
    const {
        handleSubmit,
        control,
        formState: {errors},
    } = useForm&amp;lt;CreatePOI&amp;gt;({
        resolver: zodResolver(itemSchema),
        defaultValues: {
            name: '',
            description: '',
            tags: '',
            ageGroup: '',
            image: '',
        },
    });

    return (
        &amp;lt;Dialog open={isOpen} onClose={onClose}&amp;gt;
            &amp;lt;form
                onSubmit={handleSubmit((data) =&amp;gt; {
                    onSubmit(data)
                    onClose()
                })} 
            &amp;gt;
                &amp;lt;div&amp;gt;
                    &amp;lt;DialogTitle&amp;gt;Add attraction&amp;lt;/DialogTitle&amp;gt;
                    &amp;lt;Button onClick={onClose}&amp;gt;
                        X
                    &amp;lt;/Button&amp;gt;
                &amp;lt;/div&amp;gt;
                &amp;lt;DialogContent&amp;gt;
                    &amp;lt;Box&amp;gt;
                        &amp;lt;Controller
                            name="name"
                            control={control}
                            render={({field}) =&amp;gt; (
                                &amp;lt;TextField
                                    {...field}
                                    label="Name"
                                    error={!!errors.name}
                                    helperText={errors.name?.message}
                                    required
                                /&amp;gt;
                            )}
                        /&amp;gt;
                        &amp;lt;Controller
                            name="description"
                            control={control}
                            render={({field}) =&amp;gt; (
                                &amp;lt;TextField
                                    {...field}
                                    label="Description"
                                    error={!!errors.description}
                                    helperText={errors.description?.message}
                                /&amp;gt;
                            )}
                        /&amp;gt;
                        &amp;lt;Controller
                            name="tags"
                            control={control}
                            render={({field}) =&amp;gt; (
                                &amp;lt;TextField
                                    {...field}
                                    label="Tags"
                                    error={!!errors.tags}
                                    helperText={errors.tags?.message}
                                    required
                                /&amp;gt;
                            )}
                        /&amp;gt;
                        &amp;lt;Controller
                            name="ageGroup"
                            control={control}
                            render={({field}) =&amp;gt; (
                                &amp;lt;TextField
                                    {...field}
                                    label="Age group"
                                    error={!!errors.ageGroup}
                                    helperText={errors.ageGroup?.message}
                                    required
                                /&amp;gt;
                            )}
                        /&amp;gt;
                        &amp;lt;Controller
                            name="image"
                            control={control}
                            render={({field}) =&amp;gt; (
                                &amp;lt;TextField
                                    {...field}
                                    label="Image"
                                    error={!!errors.image}
                                    helperText={errors.image?.message}
                                    required
                                /&amp;gt;
                            )}
                        /&amp;gt;
                    &amp;lt;/Box&amp;gt;
                &amp;lt;/DialogContent&amp;gt;
                &amp;lt;DialogActions&amp;gt;
                    &amp;lt;Button type="submit" variant="contained"&amp;gt;
                        Add
                    &amp;lt;/Button&amp;gt;
                &amp;lt;/DialogActions&amp;gt;
            &amp;lt;/form&amp;gt;
        &amp;lt;/Dialog&amp;gt;
    )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to make the form a pop-up instead of a separate page, add the &lt;em&gt;isOpen&lt;/em&gt; and &lt;em&gt;onClose&lt;/em&gt; props. &lt;em&gt;onSubmit&lt;/em&gt; is mandatory, since it triggers the &lt;em&gt;createPOI()&lt;/em&gt; function and adds the new object to the list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface AttractionDialogProps {
    isOpen: boolean;
    onSubmit: (attraction: CreatePOI) =&amp;gt; void;
    onClose: () =&amp;gt; void;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For form validation we will install and import &lt;a href="https://zod.dev/" rel="noopener noreferrer"&gt;Zod&lt;/a&gt;. Here we declare what format each input must have, along with any requirements such as a minimum or maximum length.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const itemSchema: z.ZodType&amp;lt;CreatePOI&amp;gt; = z.object({
    name: z.string().min(2, 'Name must be at least 2 characters'),
    description: z.string(),
    tags: z.string(),
    ageGroup: z.string(),
    image: z.string().url(),
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the component, we wire up the form state, the submit handler, and the validation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
    handleSubmit,
    control,
    formState: {errors},
    } = useForm&amp;lt;CreatePOI&amp;gt;({
    resolver: zodResolver(itemSchema),
    defaultValues: {
        name: '',
        description: '',
        tags: '',
        ageGroup: '',
        image: '',
    },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The validation errors are surfaced in each &lt;em&gt;TextField&lt;/em&gt; of the form, alongside its other attributes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;TextField
    {...field}
    label="Name"
    error={!!errors.name}
    helperText={errors.name?.message}
    required
/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the top of the component, make sure the dialog can be closed and the form submitted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Dialog open={isOpen} onClose={onClose}&amp;gt;
    &amp;lt;form
        onSubmit={handleSubmit((data) =&amp;gt; {
            onSubmit(data)
            onClose()
        })} 
    &amp;gt;     
    &amp;lt;/form&amp;gt;
&amp;lt;/Dialog&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then trigger this pop-up from another component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Fab } from '@mui/material'
import AddIcon from '@mui/icons-material/Add'

&amp;lt;Fab
    size='large'
    aria-label='add' 
    onClick={() =&amp;gt; setIsDialogOpen(true)}
&amp;gt;
    &amp;lt;AddIcon /&amp;gt;
&amp;lt;/Fab&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deleting items
&lt;/h3&gt;

&lt;p&gt;Create a hook that performs the DELETE request and implement it in a component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import { deleteRefreshmentStand, getRefreshmentStand } from '../services/API.ts'
import { useContext } from 'react' 

export function useRefreshmentStandItem(refreshmentStandId: string) { 
    const queryClient = useQueryClient()

    const {
        isLoading: isDoingGet,
        isError: isErrorGet,
        data: refreshmentStand,
    } = useQuery({
        queryKey: ['refreshmentStand', refreshmentStandId], // include the id so each stand is cached separately
        queryFn: () =&amp;gt; getRefreshmentStand(refreshmentStandId), 
    })

    const deleteRefreshmentStandMutation = useMutation(() =&amp;gt; deleteRefreshmentStand(refreshmentStandId), {
        onSuccess: () =&amp;gt; {
            queryClient.invalidateQueries(['refreshmentStands']);
        },
    });

    const handleDeleteRefreshmentStand = () =&amp;gt; {
        deleteRefreshmentStandMutation.mutate(); // Trigger the delete mutation
    };

    return {
        isLoading: isDoingGet || deleteRefreshmentStandMutation.isLoading,
        isError: isErrorGet || deleteRefreshmentStandMutation.isError,
        refreshmentStand,
        deleteRefreshmentStand: handleDeleteRefreshmentStand,
    };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default function RefreshmentStand() {
    const { id } = useParams()
    const { isLoading, isError, refreshmentStand, deleteRefreshmentStand } = useRefreshmentStandItem(id!)


    if (isLoading) {
        return &amp;lt;Loader /&amp;gt;
    }

    if (isError || !refreshmentStand) {
        return &amp;lt;Alert severity='error'&amp;gt;Error&amp;lt;/Alert&amp;gt;
    }

    return (
        &amp;lt;&amp;gt;
            &amp;lt;CardMedia component='img' image={background} alt='background' /&amp;gt;
            &amp;lt;AuthHeader /&amp;gt;
            &amp;lt;div style={{ display: 'flex', alignItems: 'center' }}&amp;gt;
                &amp;lt;div&amp;gt;
                    &amp;lt;h1&amp;gt;{refreshmentStand.name}&amp;lt;/h1&amp;gt;
                    &amp;lt;p&amp;gt;Status: {refreshmentStand.isOpen ? 'Open' : 'Closed'}&amp;lt;/p&amp;gt;
                    {/* implement the delete button */}
                    &amp;lt;Fab&amp;gt;
                        &amp;lt;DeleteIcon onClick={deleteRefreshmentStand}/&amp;gt;
                    &amp;lt;/Fab&amp;gt;
                &amp;lt;/div&amp;gt;
                &amp;lt;img src={refreshmentStand.image} alt='refreshmentStand image' /&amp;gt;
            &amp;lt;/div&amp;gt;
            &amp;lt;Footer /&amp;gt;
        &amp;lt;/&amp;gt;
    )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Filtering items
&lt;/h3&gt;

&lt;p&gt;Inside the component, create a toggle for the filter text input and a constant that filters the attractions based on the age group or tags. Optional chaining (&lt;em&gt;?.&lt;/em&gt;) ensures null or undefined values are handled without errors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const toggleFilter = () =&amp;gt; {
    setIsFilterOpen(!isFilterOpen)
}

const filteredAttractions = attractions
    .filter((attraction: POI) =&amp;gt;
        attraction.ageGroup?.toLowerCase().includes(ageGroupFilter.toLowerCase()),
    )
    .filter((attraction: POI) =&amp;gt;
        attraction.tags?.toLowerCase().includes(tagsFilter.toLowerCase()),
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Include it when iterating through the list of items.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{filteredAttractions
.filter((attraction: POI) =&amp;gt;
    searchText.trim() === '' ||
    attraction.name.toLowerCase().includes(searchText.toLowerCase()),
)
.map(({ id, name, image }: POI) =&amp;gt; (
    &amp;lt;div&amp;gt;
        &amp;lt;div&amp;gt;
            &amp;lt;img
                src={image}
                alt={name}
            /&amp;gt;
            &amp;lt;h3&amp;gt;{name}&amp;lt;/h3&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
))}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using React with TypeScript enables you to build dynamic, type-safe websites that are easy to maintain and scale. TypeScript’s type-checking catches many errors before they reach runtime, while React’s component-based structure keeps the project organized.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>react</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
    <item>
      <title>What are Recommender Systems and how to use them</title>
      <dc:creator>alexia cismaru</dc:creator>
      <pubDate>Tue, 05 Nov 2024 19:42:37 +0000</pubDate>
      <link>https://dev.to/alexiacismaru/what-are-recommender-systems-and-how-to-use-them-16n5</link>
      <guid>https://dev.to/alexiacismaru/what-are-recommender-systems-and-how-to-use-them-16n5</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde62f8z3p5es2v8cg6tb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde62f8z3p5es2v8cg6tb.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A recommender system, or a recommendation system, is an algorithm used to filter information. By gathering information about a user, it provides the suggestions most relevant to that user. This is helpful when someone needs to make a decision on a platform with an overwhelming amount of information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Collaborative filtering
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8os0n1nluni7pxcsrmbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8os0n1nluni7pxcsrmbp.png" alt="Image description" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Collaborative filtering builds a model using the user’s past decisions, such as past watches or ratings on a certain product, and similar behaviors from other users to predict what the user might enjoy. It uses a user-rating matrix and &lt;em&gt;does not require other user information such as demographics or other information besides the ratings&lt;/em&gt;. This is a major strength of collaborative filtering because it relies on minimal information and can recommend items without understanding their contents.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Last.fm recommends songs by observing what bands and individual tracks the user has listened to regularly and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user’s library but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique. Last.fm requires a lot of information about a user to make accurate recommendations. This is an example of the cold start problem, common in collaborative filtering systems. (source: Wikipedia, &lt;a href="https://en.wikipedia.org/wiki/Recommender_system" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Recommender_system&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Collaborative filtering assumes that people who agreed in the past will agree in the future. It identifies other users with a history similar to the current user’s and generates predictions from this neighborhood. Such methods are grouped into memory-based and model-based approaches.&lt;/p&gt;
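
&lt;p&gt;As a toy illustration of the neighborhood idea (this is a sketch with made-up ratings, not part of the article’s dataset), a missing rating can be predicted as a similarity-weighted average of what similar users gave the item:&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 means unrated.
ratings = np.array([
    [5.0, 3.0, 0.0],   # target user has not rated item 2
    [4.0, 2.0, 4.0],
    [1.0, 5.0, 2.0],
])

def predict(ratings, user, item):
    """Predict a rating as a similarity-weighted average over other users."""
    target = ratings[user]
    sims, scores = [], []
    for other in range(len(ratings)):
        if other == user or ratings[other, item] == 0:
            continue
        # cosine similarity computed over the items both users rated
        mask = np.logical_and(target > 0, ratings[other] > 0)
        a, b = target[mask], ratings[other][mask]
        sims.append(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        scores.append(ratings[other, item])
    return float(np.dot(sims, scores) / np.sum(sims))

print(round(predict(ratings, user=0, item=2), 2))  # 3.19
```

The more similar a neighbor’s rating history is to the target user’s, the more weight that neighbor’s rating gets in the prediction.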

&lt;p&gt;Some problems that may interfere with collaborative filtering algorithms are cold start, scalability, and sparsity. &lt;strong&gt;Cold start&lt;/strong&gt; refers to the lack of data needed to make accurate recommendations. Calculating recommendations also demands a large amount of computing power, which makes the algorithm less &lt;strong&gt;scalable&lt;/strong&gt;. Finally, given the huge number of products and items on the internet, ratings tend to be rather &lt;strong&gt;sparse&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implement Collaborative Filtering in Python
&lt;/h3&gt;

&lt;p&gt;We will be using a dataset containing the top brands and cosmetics reviews. This dataset can be found on the Kaggle website: &lt;a href="https://www.kaggle.com/datasets/jithinanievarghese/cosmetics-and-beauty-products-reviews-top-brands" rel="noopener noreferrer"&gt;https://www.kaggle.com/datasets/jithinanievarghese/cosmetics-and-beauty-products-reviews-top-brands&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rating_list = pd.read_csv('top_brands_cosmetics_product_reviews.csv', sep=',', usecols=['author', 'product_title', 'product_rating', 'review_date'])

items = pd.read_csv('top_brands_cosmetics_product_reviews.csv', 
                    usecols=['product_title', 'product_url', 'brand_name'], encoding='latin-1')

print(f'Number of ratings: {rating_list.author.nunique()} | Number of items: {items.product_title.nunique()}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Split the data, then pivot the training set into a user-item matrix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X_train, X_test, y_train, y_test = train_test_split(rating_list, rating_list.product_rating, test_size=0.1, random_state=42)

ratings = X_train.pivot_table(index=['author'], columns=['product_title'], values='product_rating').fillna(0)
mean_ratings = ratings.mean(axis=1)
print(f'Number of users: {ratings.shape[0]} | Number of items: {ratings.shape[1]}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Calculate similarity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_all_recommendations(user_id, model, use_means=True):
    distance, knn = model.kneighbors(ratings.fillna(0)) # nearest neighbors
    knn = pd.DataFrame(knn + 1, index=ratings.index)
    sim = pd.DataFrame(1 - distance, index=ratings.index) # invert the distance
    neighbors = knn.loc[user_id, 1:]
    similarities = sim.loc[user_id, 1:]
    similarities.index = ratings.loc[neighbors].index

    if use_means:
        return pd.Series(mean_ratings.loc[user_id] + ratings.loc[neighbors].subtract(mean_ratings.loc[neighbors], axis='index').mul(similarities, axis='index').sum(axis='index') / similarities.sum(), name='recommendation')
    else:
        return pd.Series(ratings.loc[neighbors].mul(similarities, axis='index').sum(axis='index') / similarities.sum(), name='recommendation')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compute a single recommendation for a given user, product, and model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_recommendations (user_id, product_id, model, use_means=True):
    if product_id not in ratings.columns:
        return 2.5
    recommendations = get_all_recommendations(user_id, model, use_means=use_means)
    return recommendations.loc[product_id]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To evaluate the model, predict ratings for all products for every user in the dataset, then line up the predicted ratings with the actual ratings in the test set and compute the Root Mean Squared Error (RMSE).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = NearestNeighbors(n_neighbors=40, metric='cosine')
model.fit(ratings.fillna(0))

def get_RMSE(X_test, model, use_means=True):
    group = X_test[['product_title', 'product_rating']].groupby(X_test.author)
    mse = []
    i = 0
    for key in group.groups:
        if key not in rating_list['author'].values:
            continue  # Skip users not in the dataset
        predictions = get_all_recommendations(key, model=model, use_means=use_means)
        rated_products = group.get_group(key).set_index('product_title')
        df = rated_products.join(predictions).dropna().reset_index()
        mse.append(df)
        if i % 100 == 0:
            score = np.sqrt(mean_squared_error(df.product_rating, df.recommendation))
            print(f'{i}/{X_test.author.nunique()} - RMSE: {score:.4f}')
        i += 1
    mse = pd.concat(mse).reset_index(drop=True)
    score = np.sqrt(mean_squared_error(mse.product_rating, mse.recommendation))
    print(f'{X_test.author.nunique()}/{X_test.author.nunique()} - RMSE: {score:.4f}')

get_RMSE(X_test, model)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ratings_dict = {
    "item": [1, 2, 1, 2, 1, 2, 1, 2, 1],
    "user": ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D', 'E'],
    "rating": [1, 2, 2, 4, 2.5, 4, 4.5, 5, 3],
}

df = pd.DataFrame(ratings_dict)
reader = Reader(rating_scale=(1, 5))

data = Dataset.load_from_df(df[["user", "item", "rating"]], reader)

movielens = Dataset.load_builtin('ml-100k') 
trainingSet = movielens.build_full_trainset()
algo.fit(trainingSet)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_recommendation(id_user, id_movie, ratings):
    #cosine similarity of the ratings
    similarity_matrix = cosine_similarity(ratings.fillna(0), ratings.fillna(0))
    similarity_matrix_df = pd.DataFrame(similarity_matrix, index=ratings.index, columns=ratings.index)

    cosine_scores = similarity_matrix_df[id_user]
    ratings_scores = ratings[id_movie]
    ratings_scores.dropna().dot(cosine_scores[~ratings_scores.isna()]) / cosine_scores[~ratings_scores.isna()].sum()
    return np.dot(ratings_scores.dropna(), cosine_scores[~ratings_scores.isna()]) / cosine_scores[
        ~ratings_scores.isna()].sum()

get_recommendation(196, 8, ratings) # get recommandations for user 196 for movie 8 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Content-based filtering
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F185lp9hobx3bmd20co1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F185lp9hobx3bmd20co1c.png" alt="Image description" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item to recommend additional items with similar properties. It uses item features to select and return items relevant to what the user is looking for. Some content-based recommendation algorithms match items according to their description rather than the actual content.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pandora uses the properties of a song or artist to seed a queue that plays music with similar properties. User feedback is used to refine the station’s results, deemphasizing certain attributes when a user “dislikes” a particular song and emphasizing other attributes when a user “likes” a song. This is an example of a content-based approach. Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed). (source: Wikipedia, &lt;a href="https://en.wikipedia.org/wiki/Recommender_system" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Recommender_system&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The method uses the description of the product and the profile of the user, making it suited for situations where there is known data on an item but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user’s likes and dislikes based on an item’s features. Keywords are used to describe items, and a user profile is built to list the user’s likes and dislikes. Various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. A widely used algorithm is the TF-IDF representation. There are machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks to estimate the probability that the user will like the item.&lt;/p&gt;

&lt;p&gt;However, content-based filtering can often suggest items very similar to what a user already likes, limiting variety and making it harder to discover new things. This can create a “bubble,” where users only see certain types of content. It also depends on how well items are labeled or described, which can be a problem if there’s not enough information or if the user is new and hasn’t interacted with much content yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing Content-Based Filtering in Python (using BOW)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata = pd.read_csv('top_brands_cosmetics_product_reviews.csv', low_memory=False)
# keeping the products with more than 90% of the total rating count
quantile = 0.9
metadata = metadata[metadata.product_rating_count &amp;gt; metadata.product_rating_count.quantile(quantile)].reset_index()
pd.DataFrame(metadata.columns, columns=['columns']).T  # printing the columns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Transform the review texts into vector representations so that numeric machine learning algorithms can be applied.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vectorizer = TfidfVectorizer(stop_words='english', max_df=0.8, min_df=2)
metadata.review_text = metadata.review_text.fillna('')
tfidf_model = vectorizer.fit_transform(metadata.review_text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each review text is transformed into a vector in a high-dimensional semantic space (the TF-IDF model is used here). This counts the number of times a word appears in the document to decide its importance in that document.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;print(f'Matrix contains {tfidf_model.shape[0]} rows and {tfidf_model.shape[1]} columns')&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;BOW (Bag of Words)&lt;/strong&gt; model counts the number of times a word appears in a document (sparse, most of the entries in the vector are 0)&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;TF-IDF (Term Frequency-Inverse Document Frequency)&lt;/strong&gt; model also counts the number of times a word appears in a document, but additionally considers how often the word appears across all documents. It down-weights words that appear frequently across documents, making them less informative than those that appear rarely. Every review is encoded as a single vector whose length equals the size of the vocabulary of all the reviews, so TF-IDF transforms the reviews into a matrix. Words that appear in more than 80% of the reviews, or in fewer than 2, are ignored, which reduces the noise.&lt;/p&gt;
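
&lt;p&gt;The down-weighting can be seen with a hand-computed sketch (toy documents, not the review dataset): a word that appears in many documents gets a lower TF-IDF weight than a rare one.&lt;br&gt;
&lt;/p&gt;

```python
import math

docs = [
    "matte red lipstick shade",
    "red blush shade",
    "hydrating face cream",
]

def tf_idf(term, doc, docs):
    tf = doc.split().count(term)               # term frequency in this document
    df = sum(term in d.split() for d in docs)  # number of documents containing the term
    return tf * math.log(len(docs) / df)       # rare terms get a larger idf factor

# "red" appears in 2 of 3 documents, "lipstick" in only 1,
# so "lipstick" carries more weight in the first document
print(round(tf_idf("red", docs[0], docs), 3), round(tf_idf("lipstick", docs[0], docs), 3))
```

scikit-learn’s TfidfVectorizer, used above, does the same computation (with smoothing and normalization) over the whole vocabulary at once.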

&lt;p&gt;Inspect the TF-IDF model using popular makeup vocabulary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;popular_terms = ['cream', 'skin', 'awesome', 'shade', 'mascara', 'red', 'powder', 'blush']

columns = vectorizer.get_feature_names_out()
tdidf_df = pd.DataFrame.sparse.from_spmatrix(tfidf_model, columns=columns)
tdidf_df[popular_terms].head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply cosine similarity between different products based on the term-frequency signatures of their reviews.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_content_based_recommendation(product_title, top_n=10, metric='cosine'):
    # get the index of the product that matches the title
    # the index is used to find the row in the tf-idf matrix that corresponds to the product
    idx = metadata[metadata.product_title.str.lower() == product_title.lower()].index[0]
    model = NearestNeighbors(n_neighbors=top_n, metric=metric)
    model.fit(tfidf_model)
    # use a k-nearest neighbors model to find the most similar products
    similar_products = model.kneighbors(tfidf_model[idx], return_distance=False)[0]

    # top 10 most similar products
    return metadata.iloc[similar_products]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the function to any product in the dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;get_content_based_recommendation('Olay Regenerist Whip Mini and Ultimate Eye Cream Combo')[
    ['product_title', 'review_rating', 'product_rating', 'product_rating_count', 'review_text']]

get_content_based_recommendation('Olay Total Effects 7 In One Anti-Ageing Day Cream Normal SPF 15')[
    ['product_title', 'review_rating', 'product_rating', 'product_rating_count', 'review_text']]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Hybrid systems
&lt;/h2&gt;

&lt;p&gt;You can also combine the two algorithms to offer more refined recommendations. Hybrid systems help users discover items they might not have found otherwise, and they are often implemented with search engines that index non-traditional data.&lt;/p&gt;
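
&lt;p&gt;One simple way to combine them (a sketch, assuming both scores are already on the same rating scale) is a weighted blend of the two predictions:&lt;br&gt;
&lt;/p&gt;

```python
def hybrid_score(cf_score, content_score, alpha=0.7):
    """Blend a collaborative-filtering score with a content-based score.

    alpha controls how much weight the collaborative score gets;
    alpha=1.0 falls back to pure collaborative filtering.
    """
    return alpha * cf_score + (1 - alpha) * content_score

print(round(hybrid_score(4.0, 3.0), 2))  # 3.7
```

In practice, alpha can be tuned on a validation set, or varied per user (e.g. leaning on content-based scores for new users with few ratings).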

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, recommender systems play a key role in helping users discover relevant content and products by providing personalized suggestions. They enhance the experience by reducing the time and effort needed to find what interests them.&lt;/p&gt;

&lt;p&gt;** Check out the full code on GitHub: &lt;a href="https://github.com/alexiacismaru/recommender-systems" rel="noopener noreferrer"&gt;https://github.com/alexiacismaru/recommender-systems&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>beginners</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Using VGG16 for face and gender recognition</title>
      <dc:creator>alexia cismaru</dc:creator>
      <pubDate>Tue, 05 Nov 2024 19:07:28 +0000</pubDate>
      <link>https://dev.to/alexiacismaru/using-vgg16-for-face-and-gender-recognition-38a2</link>
      <guid>https://dev.to/alexiacismaru/using-vgg16-for-face-and-gender-recognition-38a2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkz0v5tsfqnl3s8sgt4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkz0v5tsfqnl3s8sgt4h.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to build a face and gender recognition Python project using deep learning and VGG16.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is deep learning?
&lt;/h2&gt;

&lt;p&gt;Deep learning is a subcategory of machine learning that uses neural networks with three or more layers. These networks try to simulate the behavior of the human brain by learning from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers help optimize and refine the output for accuracy.&lt;/p&gt;

&lt;p&gt;Deep learning improves automation by performing tasks without human intervention. Deep learning can be found in digital assistants, voice-enabled TV remotes, credit card fraud detection, and self-driving cars.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Python project
&lt;/h2&gt;

&lt;p&gt;Check out the full code on GitHub: &lt;a href="https://github.com/alexiacismaru/face-recognision" rel="noopener noreferrer"&gt;https://github.com/alexiacismaru/face-recognision&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the VGG Face Dataset and the Haar Cascade XML file used for face detection; both are needed for the preprocessing in the face recognition task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;faceCascade = cv2.CascadeClassifier(os.path.join(base_path, "haarcascade_frontal_face_default.xml")) # haar cascade detects faces in images

vgg_face_dataset_url = "http://www.robots.ox.ac.uk/~vgg/data/vgg_face/vgg_face_dataset.tar.gz"

with request.urlopen(vgg_face_dataset_url) as r, open(os.path.join(base_path, "vgg_face_dataset.tar.gz"), 'wb') as f:
  f.write(r.read())

# extract VGG dataset
with tarfile.open(os.path.join(base_path, "vgg_face_dataset.tar.gz")) as f:
  f.extractall(os.path.join(base_path))

# download Haar Cascade for face detection
trained_haarcascade_url = "https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml"
with request.urlopen(trained_haarcascade_url) as r, open(os.path.join(base_path, "haarcascade_frontalface_default.xml"), 'wb') as f:
    f.write(r.read())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Selectively load and process a specific number of images for a set of predefined subjects from the VGG Face Dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# populate the list with the files of the celebrities that will be used for face recognition
all_subjects = [subject for subject in sorted(os.listdir(os.path.join(base_path, "vgg_face_dataset", "files")))
                if (subject.startswith("Jesse_Eisenberg") or subject.startswith("Sarah_Hyland")
                    or subject.startswith("Michael_Cera") or subject.startswith("Mila_Kunis"))
                and subject.endswith(".txt")]  # parentheses needed: 'and' binds tighter than 'or'

# define number of subjects and how many pictures to extract
nb_subjects = 4
nb_images_per_subject = 40
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Iterate through each subject’s file by opening a text file associated with the subject and reading the contents. Each line in these files contains a URL to an image. For each URL (which points to an image), the code tries to load the image using urllib and convert it into a NumPy array.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;images = []

for subject in all_subjects[:nb_subjects]:
  with open(os.path.join(base_path, "vgg_face_dataset", "files", subject), 'r') as f:
    lines = f.readlines()

  images_ = []
  for line in lines:
    url = line[line.find("http://"): line.find(".jpg") + 4]

    try:
      res = request.urlopen(url)
      img = np.asarray(bytearray(res.read()), dtype="uint8")
      # convert the image data into a format suitable for OpenCV
      # images are colored 
      img = cv2.imdecode(img, cv2.IMREAD_COLOR)
      h, w = img.shape[:2]
      images_.append(img)
      cv2_imshow(cv2.resize(img, (w // 5, h // 5)))

    except Exception:
      # skip URLs that fail to download or decode
      pass

    # check if the required number of images has been reached
    if len(images_) == nb_images_per_subject:
      # add the list of images to the main images list and move to the next subject
      images.append(images_)
      break
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Face detection setup
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhiy5memq75wsechvdksv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhiy5memq75wsechvdksv.png" alt="Image description" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Locate one or more faces in the image and mark each with a bounding box.&lt;/li&gt;
&lt;li&gt;Normalize the face so it is consistent with the database, for example in geometry and photometry.&lt;/li&gt;
&lt;li&gt;Extract features from the face that can be used for the recognition task.&lt;/li&gt;
&lt;li&gt;Match the face against one or more known faces in a prepared database.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create arrays for all 4 celebrities
jesse_images = []
michael_images = []
mila_images = []
sarah_images = []

faceCascade = cv2.CascadeClassifier(os.path.join(base_path, "haarcascade_frontalface_default.xml"))

# iterate over the subjects
for subject, images_ in zip(all_subjects, images):

  # create a grayscale copy to simplify the image and reduce computation
  for img in images_:
    img_ = img.copy()
    img_gray = cv2.cvtColor(img_, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        img_gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    print("Found {} face(s)!".format(len(faces)))

    for (x, y, w, h) in faces:
        cv2.rectangle(img_, (x, y), (x+w, y+h), (0, 255, 0), 10)

    h, w = img_.shape[:2]
    resized_img = cv2.resize(img_, (224, 224))
    cv2_imshow(resized_img)

    if "Jesse_Eisenberg" in subject:
        jesse_images.append(resized_img)
    elif "Michael_Cera" in subject:
        michael_images.append(resized_img)
    elif "Mila_Kunis" in subject:
        mila_images.append(resized_img)
    elif "Sarah_Hyland" in subject:
        sarah_images.append(resized_img)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;detectMultiScale&lt;/em&gt; method recognizes faces in the image. It then returns the coordinates of rectangles where it believes faces are located. For each face, a rectangle is drawn around it in the image, indicating the face’s location. Each image is resized to 224x224 pixels.&lt;/p&gt;

&lt;p&gt;Split the dataset into a training and validation set:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;training set&lt;/strong&gt; is used to train the machine learning model: it learns the patterns, features, and relationships within the data, adjusting its parameters to minimize errors in the predictions or classifications made on the training data.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;validation set&lt;/strong&gt; evaluates the model’s performance on a new set of data. This helps in checking how well the model generalizes to unseen data. The validation set should be an independent set that is not used during the training of the model(s). Mixing/using information from the validation set during training can lead to skewed results.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create directories for saving faces
for person in ['train/male', 'train/female', 'valid/male', 'valid/female']:
  os.makedirs(os.path.join(base_path, "faces", person), exist_ok=True)
# 'exist_ok=True' parameter allows the function to run without error even if some directories already exist

def split_images(images, train_size):
    training_images = images[:train_size]
    validation_images = images[train_size:train_size + 10]
    return training_images, validation_images

michael_training, michael_testing = split_images(michael_images, 20)
mila_training, mila_testing = split_images(mila_images, 20)

jesse_testing = jesse_images[:10]
sarah_testing = sarah_images[:10]

# Save the pictures to an individual filename
def save_faces(images, directory, firstname, lastname):
    for i, img in enumerate(images):
        filename = os.path.join(base_path, "faces", directory, f"{firstname}_{lastname}_{i}.jpg")
        cv2.imwrite(filename, img)

# Save the split images
save_faces(michael_training, 'train/male', 'Michael', 'Cera')
save_faces(michael_testing, 'valid/male', 'Michael', 'Cera')
save_faces(mila_training, 'train/female', 'Mila', 'Kunis')
save_faces(mila_testing, 'valid/female', 'Mila', 'Kunis')

# Since Jesse and Sarah are only for testing, save them directly to the test directory
save_faces(jesse_testing, 'valid/male', 'Jesse', 'Eisenberg')
save_faces(sarah_testing, 'valid/female', 'Sarah', 'Hyland')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Data Augmentation
&lt;/h3&gt;

&lt;p&gt;The accuracy of deep learning models depends on the quality, quantity, and contextual meaning of training data. This is one of the most common challenges in building deep learning models and it can be costly and time-consuming. Companies use data augmentation to reduce dependency on training examples to build high-precision models quickly.&lt;/p&gt;

&lt;p&gt;Data augmentation means artificially increasing the amount of data by generating new data points from existing data. This includes adding minor alterations to data or using machine learning models to generate new data points in the latent space of original data to amplify the dataset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synthetic&lt;/strong&gt; data is generated artificially, without using real-world images, typically by generative adversarial networks (GANs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Augmented&lt;/strong&gt; data derives from the original images through minor geometric transformations (such as flipping, translation, rotation, or the addition of noise) that increase the diversity of the training set.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline_male = Augmentor.Pipeline(source_directory='/content/sample_data/deep_learning_assignment/faces/train/male', output_directory='/content/sample_data/deep_learning_assignment/faces/train_augmented/male')
pipeline_male.flip_left_right(probability=0.7)
pipeline_male.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
pipeline_male.greyscale(probability=0.1)
pipeline_male.sample(50)

pipeline_female = Augmentor.Pipeline(source_directory='/content/sample_data/deep_learning_assignment/faces/train/female', output_directory='/content/sample_data/deep_learning_assignment/faces/train_augmented/female')
pipeline_female.flip_left_right(probability=0.7)
pipeline_female.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
pipeline_female.greyscale(probability=0.1)
pipeline_female.sample(50)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Data augmentation improves the performance of ML models through more diverse datasets and reduces operation costs related to data collection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flip Left-Right: Images are randomly flipped horizontally with a probability of 0.7. This simulates the variation due to different orientations of subjects in images.&lt;/li&gt;
&lt;li&gt;Rotation: The images are rotated slightly (up to 10 degrees in both directions) with a probability of 0.7. This adds variability to the dataset by simulating different head poses.&lt;/li&gt;
&lt;li&gt;Greyscale Conversion: With a probability of 0.1, the images are converted to greyscale. This ensures the model can process and learn from images irrespective of their color information.&lt;/li&gt;
&lt;li&gt;Sampling: The sample(50) method generates 50 augmented images from the original set. This expands the dataset, providing more data for the model to learn from.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementing the VGG16 model
&lt;/h2&gt;

&lt;p&gt;VGG16 is a convolutional neural network widely used for image recognition and one of the best-known computer vision architectures. It consists of 16 layers with learnable weights that process the image incrementally to improve accuracy. In VGG16, “VGG” refers to the &lt;em&gt;Visual Geometry Group of the University of Oxford&lt;/em&gt;, while “16” refers to the network’s 16 weighted layers.&lt;/p&gt;

&lt;p&gt;VGG16 is used for image recognition and classification of new images. The pre-trained version of the VGG16 network is trained on over one million images from the ImageNet visual database. VGG16 can be applied to determine whether an image contains certain items, animals, plants, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  VGG16 architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwua05ckmskrxv79vjmwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwua05ckmskrxv79vjmwn.png" alt="Image description" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are 13 convolutional layers, five max-pooling layers, and three dense layers, i.e. 21 layers in total, 16 of which have learnable weights. VGG16 takes an input tensor of size 224x224 with 3 RGB channels. The model uses 3x3 convolution filters with stride 1 and “same” padding throughout, together with 2x2 max-pooling layers with stride 2.&lt;/p&gt;

&lt;p&gt;Conv-1 has 64 filters, Conv-2 has 128, Conv-3 has 256, and Conv-4 and Conv-5 have 512 each. These are followed by three fully connected layers: the first two have 4096 channels each, while the third performs the 1000-way ILSVRC classification and therefore contains 1000 channels, one per class. The final layer is a softmax layer.&lt;/p&gt;
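&lt;p&gt;The filter counts above can be sanity-checked with a little arithmetic: a 3x3 convolution and a fully connected layer each have a closed-form parameter count. The sketch below is purely illustrative; the 7x7x512 flatten size follows from five 2x2 poolings of a 224x224 input.&lt;br&gt;&lt;/p&gt;

```python
# Sanity-check the VGG16 layer sizes described above with plain arithmetic.
# A 3x3 conv layer with c_in input channels and c_out filters has
# (3*3*c_in + 1) * c_out parameters (the +1 per filter is the bias).

def conv3x3_params(c_in, c_out):
    return (3 * 3 * c_in + 1) * c_out

def dense_params(n_in, n_out):
    return (n_in + 1) * n_out

# first conv layer: RGB input (3 channels) -> 64 filters
first_conv = conv3x3_params(3, 64)      # 1,792 parameters

# first fully connected layer: 7*7*512 flattened features -> 4096 units
fc1 = dense_params(7 * 7 * 512, 4096)   # 102,764,544 parameters
```

&lt;p&gt;The fully connected layers dominate the parameter count, which is one reason transfer-learning setups often replace them while keeping the convolutional base.&lt;/p&gt;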

&lt;p&gt;Start preparing the base model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Set the layers of the base model to be non-trainable
for layer in base_model.layers:
    layer.trainable = False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make sure that the model will classify the images correctly, we need to extend the model with additional layers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x = base_model.output
x = GlobalAveragePooling2D()(x)

# dense layers
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
x = Dense(256, activation='relu')(x)
# add a logistic layer for binary classification
x = Dense(1, activation='sigmoid')(x)

model = Model(inputs=base_model.input, outputs=x)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;Global Average Pooling 2D layer&lt;/em&gt; condenses the feature maps obtained from VGG16 into a single 1D vector per map. It simplifies the output and reduces the total number of parameters, aiding in the prevention of overfitting.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Dense layers&lt;/em&gt; are a sequence of fully connected (Dense) layers that are added. Each layer contains a specified number of units (1024, 512, and 256), chosen based on common practices and experimentation. These layers further process the features extracted by VGG16.&lt;/p&gt;

&lt;p&gt;The final Dense layer (the &lt;em&gt;Output layer&lt;/em&gt;) uses sigmoid activation suitable for binary classification (our two classes being ‘female’ and ‘male’).&lt;/p&gt;

&lt;h3&gt;
  
  
  Adam Optimization
&lt;/h3&gt;

&lt;p&gt;The Adam optimization algorithm is an extension of the &lt;a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" rel="noopener noreferrer"&gt;stochastic gradient descent&lt;/a&gt; procedure that updates network weights iteratively based on training data. The method is efficient when working with large problems involving a lot of data or parameters, and it requires relatively little memory.&lt;/p&gt;

&lt;p&gt;This algorithm combines two gradient descent methodologies: momentum and Root Mean Square Propagation (RMSP).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Momentum&lt;/strong&gt; helps accelerate gradient descent by using an &lt;em&gt;exponentially weighted average&lt;/em&gt; of past gradients.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksudnqui5n1jae8ap9ga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksudnqui5n1jae8ap9ga.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root mean square propagation (RMSP)&lt;/strong&gt; is an adaptive learning-rate algorithm that improves on AdaGrad by taking an exponential moving average of the squared gradients rather than their cumulative sum.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kzzo1kkkvf1n2y2r6k9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kzzo1kkkvf1n2y2r6k9.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since mt and vt are both initialized to 0 (as in the methods above), they tend to be ‘biased towards 0’, because both β1 and β2 ≈ 1. The optimizer fixes this problem by computing ‘bias-corrected’ mt and vt. This also helps control the weights when approaching the global minimum, preventing large oscillations near it. The formulas used are:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fseuczqtf88t4g9uv1jk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fseuczqtf88t4g9uv1jk4.png" alt="Image description" width="556" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Intuitively, we are adapting to the gradient descent after every iteration so that it remains controlled and unbiased throughout the process, hence the name Adam.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, instead of our normal weight parameters mt and vt, we take the bias-corrected weight parameters (m_hat)t and (v_hat)t. Putting them into our general equation, we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey5oh6jbtd9t64ih20s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey5oh6jbtd9t64ih20s7.png" alt="Image description" width="566" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: Geeksforgeeks, &lt;a href="https://www.geeksforgeeks.org/adam-optimizer/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/adam-optimizer/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
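&lt;p&gt;The bias-corrected update described above can be sketched in a few lines of NumPy. This is a toy illustration, not the Keras implementation: it minimizes a simple quadratic, and the hyperparameters (lr, β1=0.9, β2=0.999, ε=1e-8) are the common defaults rather than anything specific to this project.&lt;br&gt;&lt;/p&gt;

```python
import numpy as np

# Minimal NumPy sketch of the Adam update with bias correction,
# minimizing f(w) = (w - 3)^2. The defaults beta1=0.9, beta2=0.999,
# eps=1e-8 follow the original Adam paper.

def adam_minimize(grad_fn, w, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    m = np.zeros_like(w)   # first moment (momentum term m_t)
    v = np.zeros_like(w)   # second moment (RMSP term v_t)
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)   # bias-corrected first moment
        v_hat = v / (1 - beta2**t)   # bias-corrected second moment
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

grad = lambda w: 2 * (w - 3.0)   # gradient of (w - 3)^2
w_opt = adam_minimize(grad, np.array([0.0]))
# w_opt approaches the minimum at w = 3
```

&lt;p&gt;Note how the bias correction divides by (1 − β^t): for small t this rescales the near-zero moment estimates, exactly the fix described in the quote above.&lt;/p&gt;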



&lt;p&gt;Set up image data preprocessing, augmentation, and model training in a deep learning context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;train_datagen = ImageDataGenerator()

training_set = train_datagen.flow_from_directory(
    '/content/sample_data/deep_learning_assignment/faces/train_augmented',
    batch_size=30,
    class_mode='binary'
)

validation_datagen = ImageDataGenerator() # used for real-time data augmentation and preprocessing
# generates batches of tensor image data with real-time data augmentation 

validation_set = validation_datagen.flow_from_directory(
    '/content/sample_data/deep_learning_assignment/faces/valid',
    batch_size=30,
    class_mode='binary'
)

model.fit(training_set, epochs=10, validation_data=validation_set)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;epochs&lt;/strong&gt;: the number of epochs specifies how many times the entire training dataset is passed forward and backward through the neural network. Here the model goes through the training data 10 times; an epoch is one complete presentation of the dataset to the learning machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;batch_size&lt;/strong&gt;: this parameter defines the number of samples that are propagated through the network at one time. Here, we are using a batch size of 30, meaning the model will take 30 images at a time, process them, update the weights, and then proceed to the next batch of 30 images.&lt;/li&gt;
&lt;/ul&gt;
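&lt;p&gt;A quick way to see how the two parameters interact: the number of weight updates per epoch is the dataset size divided by the batch size, rounded up. The sample counts below are hypothetical, purely to illustrate the arithmetic.&lt;br&gt;&lt;/p&gt;

```python
import math

# Illustrative arithmetic: weight updates per epoch = ceil(n_samples / batch_size).
# The sample counts here are hypothetical, not this project's actual dataset sizes.

def steps_per_epoch(n_samples, batch_size):
    return math.ceil(n_samples / batch_size)

# e.g. 100 training images with batch_size=30 -> 4 weight updates per epoch
updates = steps_per_epoch(100, 30)
total_updates = updates * 10   # over 10 epochs
```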

&lt;p&gt;The model’s performance is evaluated by making predictions on the validation set. This gives an idea of how well the model performs on unseen data. A threshold is applied to these predictions to classify each image into one of the two classes (“male” or “female”).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Evaluate the model on the validation set
validation_loss, validation_accuracy = model.evaluate(validation_set)

print(f"Validation Accuracy: {validation_accuracy * 100:.2f}%")
print(f"Validation Loss: {validation_loss}")

# Make predictions on the validation set
validation_predictions = model.predict(validation_set)

# Apply threshold to determine class
threshold = 0.5
predicted_classes = (validation_predictions &amp;gt; threshold).astype(int)

# Display the predicted classes along with image names
for i in range(len(validation_set.filenames)):
    filename = validation_set.filenames[i]
    prediction = predicted_classes[i][0]  # binary predictions, extract single value

    # flow_from_directory assigns class indices alphabetically: 'female' = 0, 'male' = 1
    class_name = 'female' if prediction == 0 else 'male'
    print(f"Image: {filename}, Predicted Class: {class_name}\n")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a confusion matrix to visualize the accuracy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;actual_labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
predictions =   [1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

cm = confusion_matrix(actual_labels, predictions)

sns.heatmap(cm, annot=True, fmt='d')
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
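&lt;p&gt;As a sanity check, the four cells of the matrix can also be tallied by hand from the two label lists above, treating 1 as the positive class. This mirrors the layout sklearn uses (rows = actual, columns = predicted, ordered [0, 1]).&lt;br&gt;&lt;/p&gt;

```python
# Tally the confusion-matrix cells directly from the two label lists above
# (1 is treated as the positive class).

actual_labels = [1]*20 + [0]*20
predictions   = [1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
                 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1]

tp = sum(1 for a, p in zip(actual_labels, predictions) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual_labels, predictions) if a == 1 and p == 0)
fp = sum(1 for a, p in zip(actual_labels, predictions) if a == 0 and p == 1)
tn = sum(1 for a, p in zip(actual_labels, predictions) if a == 0 and p == 0)

accuracy = (tp + tn) / len(actual_labels)
# tp=16, fn=4, fp=8, tn=12 -> accuracy 0.7 on this validation run
```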



&lt;p&gt;For binary classification, the Receiver Operating Characteristic (ROC) curve and Area Under Curve (AUC) are useful to understand the trade-offs between true positive rate and false positive rate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fpr, tpr, thresholds = roc_curve(actual_labels, predictions)
roc_auc = auc(fpr, tpr)

plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, by using deep learning and image processing algorithms you can build a Python project that recognizes human faces and can categorize them as either male or female.&lt;/p&gt;

</description>
      <category>python</category>
      <category>vgg16</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
