<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maik Paixao</title>
    <description>The latest articles on DEV Community by Maik Paixao (@maikpaixao).</description>
    <link>https://dev.to/maikpaixao</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F148543%2Ff7e1c276-a3f3-4111-b530-83f5402457bd.jpg</url>
      <title>DEV Community: Maik Paixao</title>
      <link>https://dev.to/maikpaixao</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maikpaixao"/>
    <language>en</language>
    <item>
      <title>Customer segmentation using RFM and K-Means</title>
      <dc:creator>Maik Paixao</dc:creator>
      <pubDate>Sat, 02 Sep 2023 12:04:55 +0000</pubDate>
      <link>https://dev.to/maikpaixao/segmentacao-de-clientes-usando-rfm-e-k-means-15ph</link>
      <guid>https://dev.to/maikpaixao/segmentacao-de-clientes-usando-rfm-e-k-means-15ph</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qHzgCi0R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i.imgur.com/V3qpZBr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qHzgCi0R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i.imgur.com/V3qpZBr.jpg" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome! In this tutorial, we will guide you in clustering bank customers based on their transaction behaviors using the RFM (Recency, Frequency, Monetary) model and K-Means clustering in Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python installed.&lt;/li&gt;
&lt;li&gt;Familiarity with Python programming.&lt;/li&gt;
&lt;li&gt;Basic understanding of clustering.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Install required libraries
&lt;/h2&gt;

&lt;p&gt;Install the necessary packages for data processing and clustering.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pandas scikit-learn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Import Libraries
&lt;/h2&gt;

&lt;p&gt;Import the necessary modules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Load the bank data
&lt;/h2&gt;

&lt;p&gt;The dataset has around 10,000 records. Each record represents a financial transaction carried out by a customer.&lt;/p&gt;

&lt;p&gt;The main columns that stand out include client_id, transaction_date, and transaction_amount. The client_id is a unique identifier assigned to each client, ensuring data consistency and facilitating reference. Meanwhile, transaction_date records the timestamp of each transaction performed, serving as an essential marker for evaluating transaction patterns and behaviors over time.&lt;/p&gt;

&lt;p&gt;The transaction_amount, on the other hand, is a numeric field that quantifies the monetary value associated with each transaction. This field has immense significance as it provides a direct window into understanding an individual's consumption habits, financial capabilities and, to some extent, economic stratum.&lt;/p&gt;
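For readers who want to follow along without the original file, a small synthetic stand-in with the same three columns can be generated; the column names come from the description above, while the value ranges below are purely illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500  # number of synthetic transactions

data = pd.DataFrame({
    "client_id": rng.integers(1, 51, size=n),                       # 50 clients
    "transaction_date": pd.Timestamp("2023-01-01")
        + pd.to_timedelta(rng.integers(0, 240, size=n), unit="D"),  # dates over ~8 months
    "transaction_amount": rng.gamma(2.0, 50.0, size=n).round(2),    # right-skewed amounts
})
print(data.head())
```

Saving this frame with `data.to_csv('banking_data.csv', index=False)` produces a file the rest of the tutorial can run against.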

&lt;p&gt;We use the pandas &lt;strong&gt;read_csv()&lt;/strong&gt; function to read the CSV file containing the transactions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;data &lt;span class="o"&gt;=&lt;/span&gt; pd.read_csv&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'banking_data.csv'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
print&lt;span class="o"&gt;(&lt;/span&gt;data.head&lt;span class="o"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Calculate RFM metrics
&lt;/h2&gt;

&lt;p&gt;The acronym RFM stands for Recency, Frequency and Monetary Value, each representing a unique facet of a customer's transactional pattern.&lt;/p&gt;

&lt;p&gt;Recency: This metric addresses the question of how long ago a customer engaged in a transaction. A shorter time since last transaction typically indicates a more active customer. In our dataset, this is calculated by subtracting the date of each transaction from the date of the last transaction in the dataset, resulting in the number of days elapsed.&lt;/p&gt;

&lt;p&gt;Frequency: Representing the total number of transactions a customer has performed during a specific period, frequency offers information about how often customers interact with banking services. It is a direct measure of customer engagement and loyalty.&lt;/p&gt;

&lt;p&gt;Monetary Value: This metric encapsulates the total amount a customer spent during a period. It is a reflection of customer value, with higher values indicating customers who bring more financial value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Make sure the dates are real datetimes before doing date arithmetic
data['transaction_date'] = pd.to_datetime(data['transaction_date'])

# Recency: days since each transaction, relative to the newest one
max_date = data['transaction_date'].max()
data['recency'] = (max_date - data['transaction_date']).dt.days

# Frequency and Monetary Value, aggregated to one row per customer
rfm = data.groupby('client_id').agg(
    recency=('recency', 'min'),
    frequency=('transaction_date', 'count'),
    monetary=('transaction_amount', 'sum')
)

print(rfm.head())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Data preprocessing
&lt;/h2&gt;

&lt;p&gt;The “Data Preprocessing” section plays a key role in the analytical pipeline. While raw data often contains a plethora of valuable information, it is also riddled with inconsistencies, discrepancies, and varying scales that, if not addressed, can skew the results of subsequent analysis.&lt;/p&gt;

&lt;p&gt;In the context of our tutorial focusing on grouping bank customers using RFM metrics, preprocessing becomes especially crucial. Clustering algorithms like K-Means are sensitive to the scale of the data. Different magnitudes between variables can disproportionately influence the algorithm, leading to misleading clusters.&lt;/p&gt;

&lt;p&gt;To address this, the preprocessing phase employs scikit-learn's StandardScaler, a renowned Python library for data science. StandardScaler normalizes each variable to have a mean of zero and a standard deviation of one. This ensures that all variables, whether Recency, Frequency or Monetary, contribute equally to the grouping process.&lt;/p&gt;

&lt;p&gt;The transformed data, called rfm_scaled in our tutorial, represents standardized RFM values ready for clustering. In essence, the preprocessing phase acts as a bridge, converting raw, irregular data into a refined, standardized format, ensuring that the subsequent clustering algorithm works efficiently and provides accurate, interpretable clusters. This section highlights the adage, “Garbage in, garbage out,” emphasizing the importance of clean, standardized input data for quality results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scaler &lt;span class="o"&gt;=&lt;/span&gt; StandardScaler&lt;span class="o"&gt;()&lt;/span&gt;
rfm_scaled &lt;span class="o"&gt;=&lt;/span&gt; scaler.fit_transform&lt;span class="o"&gt;(&lt;/span&gt;rfm&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
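To see concretely what StandardScaler does, here is a minimal check on a hand-made stand-in for the RFM table (the four rows are invented for illustration): after scaling, every column has mean 0 and standard deviation 1.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Stand-in RFM table: rows are customers, columns are recency/frequency/monetary
rfm_values = np.array([
    [5.0, 12.0, 900.0],
    [40.0, 3.0, 150.0],
    [12.0, 8.0, 480.0],
    [75.0, 1.0, 60.0],
])

scaler = StandardScaler()
rfm_scaled = scaler.fit_transform(rfm_values)

# Each column is now centered at 0 with unit spread
print(rfm_scaled.mean(axis=0).round(6))
print(rfm_scaled.std(axis=0).round(6))
```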



&lt;h2&gt;
  
  
  6. Determine the number of clusters
&lt;/h2&gt;

&lt;p&gt;This part of the tutorial addresses one of the most critical decisions in the clustering process: identifying the ideal number of clusters. Although clustering aims to segment data into distinct groups based on similarities, the number of these groups is not always evident in advance.&lt;/p&gt;

&lt;p&gt;In our tutorial, the Elbow Method is presented as the technique of choice for discerning this ideal number. This method involves plotting the sum of squared distances (often referred to as "inertia") for various cluster counts. As the number of clusters increases, inertia typically decreases, since each data point lies closer to its centroid. Beyond a certain point, however, adding more clusters no longer yields a substantial decrease in inertia. This inflection point, resembling an “elbow” in the plotted curve, suggests an ideal number of clusters.&lt;/p&gt;

&lt;p&gt;By employing the Elbow Method, our tutorial iteratively runs the K-Means algorithm over a range of cluster counts. By inspecting the resulting inertia values, analysts can discern the “elbow” and, from it, the suggested cluster count.&lt;/p&gt;

&lt;p&gt;In summary, this section highlights the importance of selecting an appropriate cluster count, offering a systematic approach to making this critical decision, ensuring that the resulting clusters are meaningful, distinct and actionable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;distortions = []
for i in range(1, 11):
    km = KMeans(n_clusters=i, n_init=10, random_state=0)  # fixed seed for reproducibility
    km.fit(rfm_scaled)
    distortions.append(km.inertia_)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
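The distortions list is normally inspected visually. Assuming matplotlib is available (it is not among the packages installed in step 1), a minimal sketch of the elbow plot, run here on synthetic stand-in data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Three well-separated blobs as a stand-in for the scaled RFM matrix
rng = np.random.default_rng(0)
rfm_scaled = np.vstack([rng.normal(c, 0.3, size=(50, 3)) for c in (-2.0, 0.0, 2.0)])

distortions = []
for i in range(1, 11):
    km = KMeans(n_clusters=i, n_init=10, random_state=0)
    km.fit(rfm_scaled)
    distortions.append(km.inertia_)

plt.plot(range(1, 11), distortions, marker="o")
plt.xlabel("Number of clusters")
plt.ylabel("Inertia")
plt.savefig("elbow.png")
```

With three true blobs, the curve drops sharply up to k = 3 and flattens afterwards.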



&lt;h2&gt;
  
  
  7. Apply K-Means clustering
&lt;/h2&gt;

&lt;p&gt;The K-Means clustering algorithm works by partitioning data into distinct clusters. This is done by assigning each data point to the cluster whose centroid (or center) is closest. These centroids are iteratively recalculated until they stabilize, meaning the algorithm has converged to a locally optimal clustering arrangement.&lt;/p&gt;

&lt;p&gt;For our tutorial, the standardized RFM values, derived in the Data Preprocessing phase, serve as input. Leveraging Python's scikit-learn library, a KMeans object is instantiated with the optimal number of clusters deduced from the previous section. This object is then trained using the fit_predict method on our scaled RFM values, and the resulting cluster labels are stored back into the original RFM dataframe according to the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kmeans &lt;span class="o"&gt;=&lt;/span&gt; KMeans&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;n_clusters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3, &lt;span class="nv"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0&lt;span class="o"&gt;)&lt;/span&gt;
rfm[&lt;span class="s1"&gt;'cluster'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; kmeans.fit_predict&lt;span class="o"&gt;(&lt;/span&gt;rfm_scaled&lt;span class="o"&gt;)&lt;/span&gt;
print&lt;span class="o"&gt;(&lt;/span&gt;rfm.head&lt;span class="o"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
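The assign-then-recompute loop described above can be sketched in a few lines of plain NumPy. This is an illustrative toy implementation, not what scikit-learn does internally (which adds smarter initialization and many optimizations):

```python
import numpy as np

def kmeans_sketch(X, k, n_iter=20, seed=0):
    """Toy K-Means: assign points to the nearest centroid, then move centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: index of the closest centroid for every point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Two tight, well-separated blobs as stand-in data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.2, size=(40, 2)) for c in (0.0, 3.0)])
labels, centroids = kmeans_sketch(X, k=2)
print(np.sort(centroids[:, 0]).round(2))
```

The recovered centroids land near the true blob centers at 0 and 3.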



&lt;h2&gt;
  
  
  8. Interpretation of Segments
&lt;/h2&gt;

&lt;p&gt;In the end, we need to examine the centroids and characteristics of each generated cluster to understand the segments.&lt;/p&gt;

&lt;p&gt;You can do this with the code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cluster_summary &lt;span class="o"&gt;=&lt;/span&gt; rfm.groupby&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'cluster'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;.agg&lt;span class="o"&gt;({&lt;/span&gt;
     &lt;span class="s1"&gt;'recency'&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'mean'&lt;/span&gt;, &lt;span class="s1"&gt;'std'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
     &lt;span class="s1"&gt;'frequency'&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'mean'&lt;/span&gt;, &lt;span class="s1"&gt;'std'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
     &lt;span class="s1"&gt;'monetary'&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'mean'&lt;/span&gt;, &lt;span class="s1"&gt;'std'&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;})&lt;/span&gt;

print&lt;span class="o"&gt;(&lt;/span&gt;cluster_summary&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
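To go from raw centroid statistics to business-friendly segment names, one common rule of thumb is to label the highest-spend cluster and the longest-inactive cluster, leaving the rest as a middle tier. The toy rfm table and the label names below are assumptions for illustration, not part of the original dataset:

```python
import pandas as pd

# Stand-in for the rfm table from step 7, already carrying a 'cluster' column
rfm = pd.DataFrame({
    "recency":   [3, 5, 60, 70, 20, 25],
    "frequency": [20, 18, 2, 1, 8, 9],
    "monetary":  [2000, 1800, 90, 60, 500, 550],
    "cluster":   [0, 0, 1, 1, 2, 2],
})

centroids = rfm.groupby("cluster")[["recency", "frequency", "monetary"]].mean()

# Illustrative naming rule: highest average spend -> "high-value",
# highest average recency (longest inactivity) -> "at-risk", rest -> "regular"
labels = {c: "regular" for c in centroids.index}
labels[centroids["monetary"].idxmax()] = "high-value"
labels[centroids["recency"].idxmax()] = "at-risk"

rfm["segment"] = rfm["cluster"].map(labels)
print(rfm)
```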



&lt;p&gt;In this tutorial we segmented customers based on their transactional behavior using the RFM model and K-Means clustering. This segmentation enables targeted marketing strategies, personalized services, and better customer management.&lt;/p&gt;

&lt;p&gt;Download dataset and Jupyter Notebook&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi, I'm Maik. I hope you liked the article. If you have any questions or want to connect with me and access more content, follow my channels:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/maikpaixao/"&gt;https://www.linkedin.com/in/maikpaixao/&lt;/a&gt;&lt;br&gt;
Twitter: &lt;a href="https://twitter.com/maikpaixao"&gt;https://twitter.com/maikpaixao&lt;/a&gt;&lt;br&gt;
Youtube: &lt;a href="https://www.youtube.com/@maikpaixao"&gt;https://www.youtube.com/@maikpaixao&lt;/a&gt;&lt;br&gt;
Instagram: &lt;a href="https://www.instagram.com/prof.maikpaixao/"&gt;https://www.instagram.com/prof.maikpaixao/&lt;/a&gt;&lt;br&gt;
Github: &lt;a href="https://github.com/maikpaixao"&gt;https://github.com/maikpaixao&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>sklearn</category>
      <category>clustering</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating an Image Classification Model with PyTorch</title>
      <dc:creator>Maik Paixao</dc:creator>
      <pubDate>Fri, 18 Aug 2023 15:14:31 +0000</pubDate>
      <link>https://dev.to/maikpaixao/criacao-de-um-modelo-de-classificacao-de-imagem-com-pytorch-3032</link>
      <guid>https://dev.to/maikpaixao/criacao-de-um-modelo-de-classificacao-de-imagem-com-pytorch-3032</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78eatqfbxr4v9kz7ezju.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78eatqfbxr4v9kz7ezju.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Welcome!
&lt;/h1&gt;

&lt;p&gt;In this tutorial, I will walk you through creating a simple image classification model using PyTorch, a popular deep learning library in Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python installed.&lt;/li&gt;
&lt;li&gt;Familiarity with Python programming.&lt;/li&gt;
&lt;li&gt;Basic understanding of neural networks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Installation:
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have installed the necessary packages:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's about setting up your environment and installing PyTorch, which is a widely used library for deep learning tasks, while torchvision provides utilities for computer vision.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PyTorch&lt;/strong&gt;: The core library for implementing deep learning architectures such as neural networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Torchvision&lt;/strong&gt;: A helper library for PyTorch that provides access to popular datasets, model architectures, and image transformations for computer vision.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

pip &lt;span class="nb"&gt;install &lt;/span&gt;torch torchvision


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  2. Import Libraries:
&lt;/h2&gt;

&lt;p&gt;Before you can use the features of a package, you need to import it.&lt;/p&gt;

&lt;p&gt;torch: the main PyTorch module.&lt;/p&gt;

&lt;p&gt;torchvision: As mentioned, this helps with datasets and models specifically for computer vision tasks.&lt;/p&gt;

&lt;p&gt;transforms: This provides common image transformations. In deep learning, input data often requires preprocessing to improve training efficiency and performance.&lt;/p&gt;

&lt;p&gt;nn: This module provides all the building blocks for neural networks.&lt;br&gt;
optim: Contains common optimization algorithms for tuning model parameters.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu in the forward pass below
import torch.optim as optim


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  3. Read and Preprocess the Data:
&lt;/h2&gt;

&lt;p&gt;We will use the CIFAR-10 dataset, a well-known benchmark in computer vision consisting of 60,000 32 x 32 color images spanning 10 classes.&lt;/p&gt;

&lt;p&gt;transforms.Compose(): chains multiple image transformations. In this example, images are first converted to tensors and then normalized to have values between -1 and 1.&lt;/p&gt;

&lt;p&gt;DataLoader: Helps in feeding data in batches, shuffling it and loading it in parallel, making the training process more efficient.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

transform &lt;span class="o"&gt;=&lt;/span&gt; transforms.Compose&lt;span class="o"&gt;([&lt;/span&gt;
     transforms.ToTensor&lt;span class="o"&gt;()&lt;/span&gt;,
     transforms.Normalize&lt;span class="o"&gt;((&lt;/span&gt;0.5, 0.5, 0.5&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="o"&gt;(&lt;/span&gt;0.5, 0.5, 0.5&lt;span class="o"&gt;))&lt;/span&gt; &lt;span class="c"&gt;# Normalize the images&lt;/span&gt;
&lt;span class="o"&gt;])&lt;/span&gt;

trainset &lt;span class="o"&gt;=&lt;/span&gt; torchvision.datasets.CIFAR10&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;root&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'./data'&lt;/span&gt;, &lt;span class="nv"&gt;train&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True, &lt;span class="nv"&gt;download&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True, &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;transform&lt;span class="o"&gt;)&lt;/span&gt;
trainloader &lt;span class="o"&gt;=&lt;/span&gt; torch.utils.data.DataLoader&lt;span class="o"&gt;(&lt;/span&gt;trainset, &lt;span class="nv"&gt;batch_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4, &lt;span class="nv"&gt;shuffle&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True, &lt;span class="nv"&gt;num_workers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="o"&gt;)&lt;/span&gt;

testset &lt;span class="o"&gt;=&lt;/span&gt; torchvision.datasets.CIFAR10&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;root&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'./data'&lt;/span&gt;, &lt;span class="nv"&gt;train&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;False, &lt;span class="nv"&gt;download&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True, &lt;span class="nv"&gt;transform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;transform&lt;span class="o"&gt;)&lt;/span&gt;
testloader &lt;span class="o"&gt;=&lt;/span&gt; torch.utils.data.DataLoader&lt;span class="o"&gt;(&lt;/span&gt;testset, &lt;span class="nv"&gt;batch_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4, &lt;span class="nv"&gt;shuffle&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;False, &lt;span class="nv"&gt;num_workers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2&lt;span class="o"&gt;)&lt;/span&gt;

classes &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'plane'&lt;/span&gt;, &lt;span class="s1"&gt;'car'&lt;/span&gt;, &lt;span class="s1"&gt;'bird'&lt;/span&gt;, &lt;span class="s1"&gt;'cat'&lt;/span&gt;, &lt;span class="s1"&gt;'deer'&lt;/span&gt;, &lt;span class="s1"&gt;'dog'&lt;/span&gt;, &lt;span class="s1"&gt;'frog'&lt;/span&gt;, &lt;span class="s1"&gt;'horse'&lt;/span&gt;, &lt;span class="s1"&gt;'ship'&lt;/span&gt;, &lt;span class="s1"&gt;'truck'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  4. Define the Neural Network:
&lt;/h2&gt;

&lt;p&gt;We define a simple Convolutional Neural Network (CNN) framework.&lt;/p&gt;

&lt;p&gt;Here, we are defining our Convolutional Neural Network (CNN). CNNs are the standard neural network type for image processing tasks.&lt;/p&gt;

&lt;p&gt;nn.Conv2d(): Represents a convolutional layer. It expects (input_channels, output_channels, kernel_size) among other parameters.&lt;/p&gt;

&lt;p&gt;nn.MaxPool2d(): Represents maximum pooling, which reduces the spatial size of the representation, making calculations faster and extracting dominant features.&lt;/p&gt;

&lt;p&gt;nn.Linear(): Represents a fully connected layer that connects each neuron from the previous layer to the next.&lt;/p&gt;

&lt;p&gt;The forward function specifies how data flows across the network. This flow is essential for forward and backward propagation.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

class Net&lt;span class="o"&gt;(&lt;/span&gt;nn.Module&lt;span class="o"&gt;)&lt;/span&gt;:
     def __init__&lt;span class="o"&gt;(&lt;/span&gt;self&lt;span class="o"&gt;)&lt;/span&gt;:
         super&lt;span class="o"&gt;(&lt;/span&gt;Net, self&lt;span class="o"&gt;)&lt;/span&gt;.__init__&lt;span class="o"&gt;()&lt;/span&gt;
         self.conv1 &lt;span class="o"&gt;=&lt;/span&gt; nn.Conv2d&lt;span class="o"&gt;(&lt;/span&gt;3, 6, 5&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# 3 input channels, 6 output channels, 5x5 kernel&lt;/span&gt;
         self.pool &lt;span class="o"&gt;=&lt;/span&gt; nn.MaxPool2d&lt;span class="o"&gt;(&lt;/span&gt;2, 2&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# 2x2 max pooling&lt;/span&gt;
         self.conv2 &lt;span class="o"&gt;=&lt;/span&gt; nn.Conv2d&lt;span class="o"&gt;(&lt;/span&gt;6, 16, 5&lt;span class="o"&gt;)&lt;/span&gt;
         self.fc1 &lt;span class="o"&gt;=&lt;/span&gt; nn.Linear&lt;span class="o"&gt;(&lt;/span&gt;16 &lt;span class="k"&gt;*&lt;/span&gt; 5 &lt;span class="k"&gt;*&lt;/span&gt; 5, 120&lt;span class="o"&gt;)&lt;/span&gt;
         self.fc2 &lt;span class="o"&gt;=&lt;/span&gt; nn.Linear&lt;span class="o"&gt;(&lt;/span&gt;120, 84&lt;span class="o"&gt;)&lt;/span&gt;
         self.fc3 &lt;span class="o"&gt;=&lt;/span&gt; nn.Linear&lt;span class="o"&gt;(&lt;/span&gt;84, 10&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# 10 output classes&lt;/span&gt;

     def forward&lt;span class="o"&gt;(&lt;/span&gt;self, x&lt;span class="o"&gt;)&lt;/span&gt;:
         x &lt;span class="o"&gt;=&lt;/span&gt; self.pool&lt;span class="o"&gt;(&lt;/span&gt;F.relu&lt;span class="o"&gt;(&lt;/span&gt;self.conv1&lt;span class="o"&gt;(&lt;/span&gt;x&lt;span class="o"&gt;)))&lt;/span&gt;
         x &lt;span class="o"&gt;=&lt;/span&gt; self.pool&lt;span class="o"&gt;(&lt;/span&gt;F.relu&lt;span class="o"&gt;(&lt;/span&gt;self.conv2&lt;span class="o"&gt;(&lt;/span&gt;x&lt;span class="o"&gt;)))&lt;/span&gt;
         x &lt;span class="o"&gt;=&lt;/span&gt; x.view&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;-1&lt;/span&gt;, 16 &lt;span class="k"&gt;*&lt;/span&gt; 5 &lt;span class="k"&gt;*&lt;/span&gt; 5&lt;span class="o"&gt;)&lt;/span&gt;
         x &lt;span class="o"&gt;=&lt;/span&gt; F.relu&lt;span class="o"&gt;(&lt;/span&gt;self.fc1&lt;span class="o"&gt;(&lt;/span&gt;x&lt;span class="o"&gt;))&lt;/span&gt;
         x &lt;span class="o"&gt;=&lt;/span&gt; F.relu&lt;span class="o"&gt;(&lt;/span&gt;self.fc2&lt;span class="o"&gt;(&lt;/span&gt;x&lt;span class="o"&gt;))&lt;/span&gt;
         x &lt;span class="o"&gt;=&lt;/span&gt; self.fc3&lt;span class="o"&gt;(&lt;/span&gt;x&lt;span class="o"&gt;)&lt;/span&gt;
         &lt;span class="k"&gt;return &lt;/span&gt;x

net &lt;span class="o"&gt;=&lt;/span&gt; Net&lt;span class="o"&gt;()&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  5. Define the Loss Function and Optimizer:
&lt;/h2&gt;

&lt;p&gt;Let's use Cross-Entropy loss and SGD optimizer.&lt;/p&gt;

&lt;p&gt;Cross-Entropy loss: Commonly used in classification tasks. It measures the difference between predicted probabilities and true class labels.&lt;/p&gt;

&lt;p&gt;SGD (Stochastic Gradient Descent): An optimization method used to minimize loss by adjusting model weights.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

criterion &lt;span class="o"&gt;=&lt;/span&gt; nn.CrossEntropyLoss&lt;span class="o"&gt;()&lt;/span&gt;
optimizer &lt;span class="o"&gt;=&lt;/span&gt; optim.SGD&lt;span class="o"&gt;(&lt;/span&gt;net.parameters&lt;span class="o"&gt;()&lt;/span&gt;, &lt;span class="nv"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.001, &lt;span class="nv"&gt;momentum&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.9&lt;span class="o"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  6. Training the Network:
&lt;/h2&gt;

&lt;p&gt;Here, we will train the model for a few epochs.&lt;/p&gt;

&lt;p&gt;The essence of deep learning is this iterative process of adjusting model weights to minimize loss:&lt;/p&gt;

&lt;p&gt;Clear gradients: Since PyTorch accumulates gradients, you need to clear them before each step.&lt;/p&gt;

&lt;p&gt;Forward Propagation: Pass input through the model to obtain predictions.&lt;/p&gt;

&lt;p&gt;Calculate loss: compare predictions with actual labels.&lt;br&gt;
Backward Propagation: Backpropagate the loss throughout the network to calculate the gradient of the loss with respect to each weight.&lt;/p&gt;

&lt;p&gt;Optimize: Adjust weights in the direction that minimizes loss.&lt;br&gt;
The loop ensures that the model sees the data multiple times (epochs) and adjusts its weights.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="k"&gt;for &lt;/span&gt;epoch &lt;span class="k"&gt;in &lt;/span&gt;range&lt;span class="o"&gt;(&lt;/span&gt;5&lt;span class="o"&gt;)&lt;/span&gt;: &lt;span class="c"&gt;# Loop over the dataset multiple times&lt;/span&gt;

     running_loss &lt;span class="o"&gt;=&lt;/span&gt; 0.0
     &lt;span class="k"&gt;for &lt;/span&gt;i, data &lt;span class="k"&gt;in &lt;/span&gt;enumerate&lt;span class="o"&gt;(&lt;/span&gt;trainloader, 0&lt;span class="o"&gt;)&lt;/span&gt;:
         inputs, labels &lt;span class="o"&gt;=&lt;/span&gt; data

         optimizer.zero_grad&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="c"&gt;# Zero the parameter gradients&lt;/span&gt;

         outputs &lt;span class="o"&gt;=&lt;/span&gt; net&lt;span class="o"&gt;(&lt;/span&gt;inputs&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# Forward&lt;/span&gt;
         loss &lt;span class="o"&gt;=&lt;/span&gt; criterion&lt;span class="o"&gt;(&lt;/span&gt;outputs, labels&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# Calculate loss&lt;/span&gt;
         loss.backward&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="c"&gt;# Backward&lt;/span&gt;
         optimizer.step&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="c"&gt;# Optimize&lt;/span&gt;

         running_loss +&lt;span class="o"&gt;=&lt;/span&gt; loss.item&lt;span class="o"&gt;()&lt;/span&gt;
         &lt;span class="k"&gt;if &lt;/span&gt;i % 2000 &lt;span class="o"&gt;==&lt;/span&gt; 1999: &lt;span class="c"&gt;# Print every 2000 mini-batches&lt;/span&gt;
             print&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'[%d, %5d] loss: %.3f'&lt;/span&gt; % &lt;span class="o"&gt;(&lt;/span&gt;epoch + 1, i + 1, running_loss / 2000&lt;span class="o"&gt;))&lt;/span&gt;
             running_loss &lt;span class="o"&gt;=&lt;/span&gt; 0.0

print&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Finished Training'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
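The same clear-gradients / forward / loss / backward / step cycle can be exercised end to end on a tiny synthetic problem, with no dataset download. The toy model and data below are assumptions for illustration; the point is simply that the loss goes down as the loop runs:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
X = torch.randn(64, 8)          # 64 samples, 8 features
y = (X.sum(dim=1) > 0).long()   # binary target derived from the data

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

losses = []
for epoch in range(50):
    optimizer.zero_grad()         # clear accumulated gradients
    outputs = model(X)            # forward pass
    loss = criterion(outputs, y)  # compare predictions with labels
    loss.backward()               # backpropagate
    optimizer.step()              # adjust the weights
    losses.append(loss.item())

print(losses[0], losses[-1])
```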
&lt;h2&gt;
  
  
  7. Testing the Network:
&lt;/h2&gt;

&lt;p&gt;After training, it is crucial to evaluate the model's performance on unseen data:&lt;/p&gt;

&lt;p&gt;torch.no_grad(): Disables gradient computation, which is not needed during evaluation, saving memory and computation.&lt;/p&gt;

&lt;p&gt;Outputs: These are the raw class scores (logits) produced by the network for each image.&lt;br&gt;
Prediction: By choosing the class with the highest score, we obtain the predicted class label.&lt;/p&gt;

&lt;p&gt;Calculate Accuracy: Count how many predictions match the actual labels and calculate the percentage.&lt;/p&gt;

&lt;p&gt;At the end of this process, you will have a trained neural network model capable of classifying images from the CIFAR-10 dataset. Remember, this is a basic tutorial. For better accuracy and efficiency in real-world applications, more advanced techniques and fine-tuning are required.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

correct &lt;span class="o"&gt;=&lt;/span&gt; 0
total &lt;span class="o"&gt;=&lt;/span&gt; 0
with torch.no_grad&lt;span class="o"&gt;()&lt;/span&gt;:
     &lt;span class="k"&gt;for &lt;/span&gt;data &lt;span class="k"&gt;in &lt;/span&gt;testloader:
         images, labels &lt;span class="o"&gt;=&lt;/span&gt; data
         outputs &lt;span class="o"&gt;=&lt;/span&gt; net&lt;span class="o"&gt;(&lt;/span&gt;images&lt;span class="o"&gt;)&lt;/span&gt;
         _, predicted &lt;span class="o"&gt;=&lt;/span&gt; outputs.max&lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
         total +&lt;span class="o"&gt;=&lt;/span&gt; labels.size&lt;span class="o"&gt;(&lt;/span&gt;0&lt;span class="o"&gt;)&lt;/span&gt;
         correct +&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;predicted &lt;span class="o"&gt;==&lt;/span&gt; labels&lt;span class="o"&gt;)&lt;/span&gt;.sum&lt;span class="o"&gt;()&lt;/span&gt;.item&lt;span class="o"&gt;()&lt;/span&gt;

print&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Accuracy of the network on the 10000 test images: %d %%'&lt;/span&gt; % &lt;span class="o"&gt;(&lt;/span&gt;100 &lt;span class="k"&gt;*&lt;/span&gt; correct / total&lt;span class="o"&gt;))&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And that's it! In just 10 minutes, you learned how to create and train a simple image classification model using PyTorch. With more time and tweaking, you can improve this model further or dive deeper into advanced architectures and techniques!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi, I'm Maik. I hope you liked the article. If you have any questions or want to connect with me and access more content, follow my channels:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/maikpaixao/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/maikpaixao/&lt;/a&gt;&lt;br&gt;
Twitter: &lt;a href="https://twitter.com/maikpaixao" rel="noopener noreferrer"&gt;https://twitter.com/maikpaixao&lt;/a&gt;&lt;br&gt;
Youtube: &lt;a href="https://www.youtube.com/@maikpaixao" rel="noopener noreferrer"&gt;https://www.youtube.com/@maikpaixao&lt;/a&gt;&lt;br&gt;
Instagram: &lt;a href="https://www.instagram.com/prof.maikpaixao/" rel="noopener noreferrer"&gt;https://www.instagram.com/prof.maikpaixao/&lt;/a&gt;&lt;br&gt;
Github: &lt;a href="https://github.com/maikpaixao" rel="noopener noreferrer"&gt;https://github.com/maikpaixao&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytorch</category>
      <category>deeplearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building Great Credit Scoring Models</title>
      <dc:creator>Maik Paixao</dc:creator>
      <pubDate>Thu, 27 Jul 2023 21:19:32 +0000</pubDate>
      <link>https://dev.to/maikpaixao/construindo-otimos-modelos-de-escoragem-de-credito-4pc5</link>
      <guid>https://dev.to/maikpaixao/construindo-otimos-modelos-de-escoragem-de-credito-4pc5</guid>
<description>&lt;p&gt;Granting credit is a decision taken under conditions of uncertainty. In loans, installment sales, provision of services, and so on, there is always the possibility of losing the amount lent. If a creditor can estimate the probability of a given loss occurring, decision-making becomes better informed, reducing potential losses. Because of this, many companies and financial institutions invest in Credit Scoring models. These models aim to predict, at the moment a credit decision is made, whether the operation, if granted, will result in losses for the creditor. The probability of this happening is what the industry calls credit risk.&lt;/p&gt;

&lt;p&gt;A credit score is therefore a measure of credit risk, and "credit scoring models" is the market's generic term for the formulas that calculate it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quantitative Credit Score
&lt;/h2&gt;

&lt;p&gt;The risk of a credit application can be assessed subjectively or measured objectively through quantitative analysis methodologies. The subjective assessment, despite incorporating the experience of a financial analyst, does not quantify credit risk. It is necessary to accurately estimate possible losses and expected gains for each operation and consequently make a decision. This measurement using quantitative methods has some advantages:&lt;/p&gt;

&lt;h2&gt;
  
  
  Consistency in Decisions
&lt;/h2&gt;

&lt;p&gt;If we submit the same credit operation to different analysts, different subjective assessments can be obtained, as the experience and engagement with the customer differ between them. Furthermore, the same financial analyst may give different assessments to the same proposal if it is presented at different times. We humans are like that. However, this does not occur when using a quantitative Credit Scoring model. Keeping the initial characteristics of the proposal unchanged, the calculated score will always be the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Decisions
&lt;/h2&gt;

&lt;p&gt;The computational resources available today allow the score calculation to be computed almost instantly, right after data registration for a given request has been carried out. Hundreds or thousands of decisions can be made in just one day, with security and consistency. This short customer response time allows investment banks to have a competitive advantage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remote Decisions
&lt;/h2&gt;

&lt;p&gt;Currently, with the data transmission resources available, the lender does not need to allocate a financial credit analyst in each office or in each branch. The credit seller can record the data at the point of sale, and after sending that data, the model can almost instantly calculate the customer's credit score.&lt;/p&gt;

&lt;h2&gt;
  
  
  Portfolio Monitoring
&lt;/h2&gt;

&lt;p&gt;The quantification of individual risks allows continuous monitoring of the analysis of credit portfolios. This, in addition to ensuring the security of the portfolio's cash flow, also allows analyzing trends within the institution itself, an important step in building forecasting models.&lt;/p&gt;
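&lt;p&gt;To make the quantification of individual risks concrete: once each operation has an estimated probability of default, portfolio exposure can be monitored by aggregating expected losses. The sketch below uses the standard decomposition EL = PD * LGD * EAD; all figures are invented purely for illustration:&lt;/p&gt;

```python
# Standard expected-loss decomposition: EL = PD * LGD * EAD, where
# PD is the probability of default, LGD the fraction of the exposure
# lost if default occurs, and EAD the exposure at default.
def expected_loss(pd_, lgd, ead):
    return pd_ * lgd * ead

# Hypothetical portfolio: (PD, LGD, exposure) per operation.
portfolio = [
    (0.02, 0.60, 10_000),  # low-risk loan
    (0.10, 0.45, 5_000),   # riskier, partially collateralized
    (0.05, 0.90, 2_000),   # small unsecured loan
]

total_el = sum(expected_loss(pd_, lgd, ead) for pd_, lgd, ead in portfolio)
print(total_el)  # 435.0
```

&lt;p&gt;Tracking this aggregate over time is one simple way to check whether the portfolio's risk profile is drifting.&lt;/p&gt;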

&lt;h2&gt;
  
  
  Credit Scoring Model
&lt;/h2&gt;

&lt;p&gt;The idea behind credit scoring techniques is simple. Suppose that, in car financing, the lender analyzes only three characteristics of the applicant: the type of residence at the date of the request, whether the applicant has debts, and whether the vehicle being financed is new or used. As the number of variables grows, so does the complexity of the analysis, to the point where it is only viable with the help of advanced quantitative techniques.&lt;/p&gt;
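&lt;p&gt;The three-characteristic example above can be sketched as a simple points-based scorecard. All point values below are invented purely for illustration; a real scorecard derives them statistically from historical data:&lt;/p&gt;

```python
# Hypothetical points-based scorecard for the three characteristics
# mentioned above. Every point value is illustrative, not calibrated.
SCORECARD = {
    "residence": {"owned": 30, "rented": 15, "other": 5},
    "has_debts": {False: 25, True: 0},
    "vehicle": {"new": 20, "used": 10},
}

def score(applicant):
    """Sum the points each characteristic contributes."""
    return sum(SCORECARD[key][value] for key, value in applicant.items())

applicant = {"residence": "owned", "has_debts": False, "vehicle": "used"}
print(score(applicant))  # 65
```

&lt;p&gt;Because the mapping from characteristics to points is fixed, the same proposal always receives the same score, which is exactly the consistency property discussed earlier.&lt;/p&gt;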

&lt;p&gt;The effectiveness of a credit scoring model directly depends on the information used to assess customer and transaction risks. Choosing this information correctly is extremely important for obtaining a good model.&lt;/p&gt;

&lt;p&gt;We initially identified a set of predictor variables that we believe have the potential to discriminate whether a customer is eligible to receive a loan. These variables are used in the score calculation formula.&lt;/p&gt;
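&lt;p&gt;One common choice for such a score calculation formula is logistic regression over the predictor variables. A minimal sketch, with invented variable names and coefficients:&lt;/p&gt;

```python
import math

# Minimal logistic-regression-style score formula. The coefficients are
# invented for illustration only; in practice they are estimated from
# historical data on past operations.
WEIGHTS = {"income_thousands": -0.08, "has_debts": 1.2, "years_as_customer": -0.15}
INTERCEPT = -1.0

def default_probability(features):
    """Map predictor variables to a probability of default via the logistic function."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # squashes z into the interval (0, 1)

p = default_probability({"income_thousands": 5.0, "has_debts": 1, "years_as_customer": 2})
print(round(p, 3))  # 0.378
```

&lt;p&gt;Here a positive coefficient (e.g. having debts) raises the estimated probability of default and a negative one lowers it; the lender then approves, denies, or reprices the operation by comparing this probability against a policy threshold.&lt;/p&gt;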

&lt;p&gt;One way to begin identifying potential variables is by analyzing the available databases. While exploring these variables, ideas may arise for combining two or more of them into a new variable that adds information to the model. In large banks, customer and transaction data are commonly stored in separate databases.&lt;/p&gt;

&lt;p&gt;When working with your team of experts, keep in mind the objective you hope to achieve with the Credit Scoring model: the data analyzed must be able to differentiate clients who can honor their debts from those who cannot. In general, this information can be classified into the following categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sociodemographic data of the applicant&lt;/li&gt;
&lt;li&gt;Sociodemographic information of the spouse or partner&lt;/li&gt;
&lt;li&gt;Applicant's financial information&lt;/li&gt;
&lt;li&gt;Lender relationship information&lt;/li&gt;
&lt;li&gt;Behavioral information&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Legal concerns
&lt;/h2&gt;

&lt;p&gt;The first point concerns legal certainty. It is important to inform the customer that when requesting credit, they are authorizing consultation with credit protection bodies. Furthermore, the information obtained is confidential and should not be disclosed or shared under any circumstances, restricting its use exclusively to support the granting of credit.&lt;/p&gt;

&lt;p&gt;This conduct should be adopted as a good practice in your company's routines. After all, this is customer data, which can be exposed to criminal action. To achieve this, you need to train your team well, implement security tools, and use secure applications.&lt;/p&gt;

&lt;p&gt;Financial management software includes technologies that strengthen information security, such as cloud computing and blockchain. It is worth remembering that, in isolation, the credit score is not a sufficient criterion for granting credit: it needs to be cross-referenced with internal data about your customers, evaluating their payment capacity and their relationship with the company, in order to reduce default rates.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi, I'm Maik. I hope you liked the article. If you have any questions or want to connect with me and access more content, follow my channels:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/maikpaixao/"&gt;https://www.linkedin.com/in/maikpaixao/&lt;/a&gt;&lt;br&gt;
Twitter: &lt;a href="https://twitter.com/maikpaixao"&gt;https://twitter.com/maikpaixao&lt;/a&gt;&lt;br&gt;
Youtube: &lt;a href="https://www.youtube.com/@maikpaixao"&gt;https://www.youtube.com/@maikpaixao&lt;/a&gt;&lt;br&gt;
Instagram: &lt;a href="https://www.instagram.com/prof.maikpaixao/"&gt;https://www.instagram.com/prof.maikpaixao/&lt;/a&gt;&lt;br&gt;
Github: &lt;a href="https://github.com/maikpaixao"&gt;https://github.com/maikpaixao&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>creditscoring</category>
      <category>businessinteligence</category>
      <category>analytics</category>
    </item>
  </channel>
</rss>
