<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yel Martínez Digital ESG Audit</title>
    <description>The latest articles on DEV Community by Yel Martínez Digital ESG Audit (@yel-martinez-green-tech).</description>
    <link>https://dev.to/yel-martinez-green-tech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F315286%2F301a9f56-8a6a-46b2-88b6-9022ced778e4.jpg</url>
      <title>DEV Community: Yel Martínez Digital ESG Audit</title>
      <link>https://dev.to/yel-martinez-green-tech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yel-martinez-green-tech"/>
    <language>en</language>
    <item>
      <title>Analyzing the severity of car accidents</title>
      <dc:creator>Yel Martínez Digital ESG Audit</dc:creator>
      <pubDate>Fri, 28 Aug 2020 08:54:38 +0000</pubDate>
      <link>https://dev.to/yel-martinez-green-tech/analyzing-the-severity-of-car-accidents-cp9</link>
      <guid>https://dev.to/yel-martinez-green-tech/analyzing-the-severity-of-car-accidents-cp9</guid>
      <description>&lt;h1&gt;
  
  
  Business problem - Introduction
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. A description of the problem and a discussion of the background
&lt;/h2&gt;

&lt;p&gt;Traffic accidents represent one of the leading causes of death worldwide and of economic expenditure. Despite the numerous &lt;strong&gt;&lt;a href="https://www.linkedin.com/in/yel-martinez-informatica-seo-desarrollo-web-posicionamiento-buscadores/details/recommendations/" rel="noopener noreferrer"&gt;measures and campaigns&lt;/a&gt;&lt;/strong&gt; that are deployed every year to raise awareness of the seriousness of the problem, it still occurs quite frequently. The impact of road accidents on society and the economy is high, and human losses are compounded by large expenditures on health care, awareness campaigns, mobilization of specialized personnel, etc. The WHO sets the economic impact of road accidents in a developed country at 2 to 3% of GDP, a significant figure for any country. Collaboration to reduce these losses has become an important issue of general interest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Defining the problem&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What are the factors that have a high impact on road accidents?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is there a pattern to them?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Correlation?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will have to analyze the data to get a clearer picture and draw conclusions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Note that this work is the final project of the &lt;strong&gt;&lt;a href="https://www.credly.com/users/yelenis-martinez/badges" rel="noopener noreferrer"&gt;IBM certification course&lt;/a&gt;&lt;/strong&gt;, which provides the data with which we will develop the project.&lt;/p&gt;

&lt;p&gt;These data have been collected and shared by the Seattle Police Department (Traffic Records) and are provided by Coursera for downloading through a link.&lt;/p&gt;

&lt;p&gt;It covers the period from 2004 to the present, recording information on the severity of the traffic accident, location, type of collision, weather and road conditions, visibility, number of people involved, etc.&lt;/p&gt;

&lt;p&gt;The objective is to define the problem and to find the factors that carry relevant weight in the number and seriousness of accidents, so that any organization, company or enterprise interested in reducing these figures can focus its resources on points where these conditions converge.&lt;/p&gt;

&lt;p&gt;To provide greater clarity, I will analyze the data and look for relationships or patterns, especially in high-impact accidents, so that preventive measures can focus on these points as a first prevention strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data to be used
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2. A description of the data and how it will be used to solve the problem
&lt;/h3&gt;

&lt;p&gt;Accurately predicting the magnitude of the damage caused by accidents requires a large number of traffic accident reports with accurate data on which to train prediction models. The data set provided for this work records about 200,000 accidents in the city of Seattle, from 2004 to the date it was issued. Each record holds 37 attributes or variables, and the type of accident is coded into one of 84 codes. From it we can extract:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;speed information&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;information on road conditions and visibility&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;type of collision&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;affected persons, etc.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data will be used so that we can determine which attributes are most common in traffic accidents in order to target prevention at these high-incidence points.&lt;/p&gt;
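&lt;p&gt;As a minimal sketch of this step (using a small inline sample instead of the real Data-Collisions file, with the SPEEDING and ROADCOND column names taken from the data set), tallying attribute frequencies with pandas could look like this:&lt;/p&gt;

```python
import pandas as pd

# Illustrative sample; the real records come from Data-Collisions.csv
sample = pd.DataFrame({
    'SPEEDING': ['Y', 'N', 'N', 'Y', 'N'],
    'ROADCOND': ['Wet', 'Dry', 'Ice', 'Wet', 'Dry'],
})

# Frequency of each value per attribute: the most common conditions
# point at the high-incidence factors we want to target
for feature in ['SPEEDING', 'ROADCOND']:
    print(sample[feature].value_counts())
```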
&lt;h3&gt;
  
  
  Data Source
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Data Source: These data have been collected and shared by the Seattle Police Department (Traffic Records) and are provided by Coursera for downloading through a link.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Data Location: Coursera_Capstone/Data assets&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Data set name: Data-Collisions (1)_shaped.csv&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Methodology
&lt;/h2&gt;

&lt;p&gt;Objective: The objective of this project is to predict the severity of a traffic accident based on the other characteristics contained in the report.&lt;/p&gt;

&lt;p&gt;Packages and libraries: We will use libraries and packages for both data manipulation and data visualization: pandas, NumPy, SciPy, Matplotlib and Seaborn, together with scikit-learn for the models.&lt;/p&gt;

&lt;p&gt;A first data analysis will be performed to determine which methodology and machine learning approach are most appropriate, and to get an initial look at the data we consider most relevant to this project.&lt;/p&gt;
&lt;h3&gt;
  
  
  Obtaining and cleaning data
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Importing libraries and packages
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.tree import DecisionTreeClassifier
from sklearn import svm
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
print('imported')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Uploading the data
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df_data_1 = pd.read_csv(Data-Collisions.csv)
df_data_1.head()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# choosing the data we will work with
test = ['SEVERITYCODE', 'SPEEDING','ROADCOND']
df_data_1 = df_data_1[test]

# inspecting the unique values of each feature
for feature in ["SPEEDING", "ROADCOND"]:
    print(df_data_1[feature].unique())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;['N' 'Y']&lt;br&gt;
['Wet' 'Dry' 'Unknown' 'Snow/Slush' 'Ice' 'Other' 'Sand/Mud/Dirt'&lt;br&gt;
 'Standing Water' 'Oil']&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# in speed we replace Nan with a negative value N
df_data_1['SPEEDING'] = df_data_1['SPEEDING'].fillna('N')


# in ROADCOND we also replace NaN, declaring it as 'Unknown'

df_data_1['ROADCOND'] = df_data_1['ROADCOND'].fillna('Unknown')

# checking value once again...
for feature in ["SPEEDING", "ROADCOND"]:
    print(df_data_1[feature].unique())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;['N' 'Y']&lt;br&gt;
['Wet' 'Dry' 'Unknown' 'Snow/Slush' 'Ice' 'Other' 'Sand/Mud/Dirt'&lt;br&gt;
 'Standing Water' 'Oil']&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# We assign new values to roadcond
df_data_1['ROADCOND'].replace(to_replace=['Wet','Dry','Unknown','Snow/Slush','Ice','Other','Sand/Mud/Dirt','Standing Water','Oil'], value = ['Dangerous','Normal','Normal','Dangerous','Dangerous','Normal','Dangerous','Dangerous','Dangerous'], inplace=True)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df_data_1["SPEEDING"].replace(to_replace=['N', 'Y'], value=[0,1], inplace=True)
df_data_1['ROADCOND'].replace(to_replace=['Dangerous','Normal'],value=[0,1],inplace=True)
test_condition = df_data_1[['SPEEDING','ROADCOND']]
test_condition.head()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;|   | SPEEDING | ROADCOND |&lt;br&gt;
| - | :------: | :------: |&lt;br&gt;
| 0 | 0 | 0 |&lt;br&gt;
| 1 | 0 | 0 |&lt;br&gt;
| 2 | 0 | 1 |&lt;br&gt;
| 3 | 0 | 1 |&lt;br&gt;
| 4 | 0 | 0 |&lt;/p&gt;
&lt;h3&gt;
  
  
  Training the model
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x = test_condition
y = df_data_1['SEVERITYCODE'].values.astype(str)
x = preprocessing.StandardScaler().fit(x).transform(x)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1234)

# obtaining data dimensions
print("Training set: ", x_train.shape, y_train.shape)
print("Testing set: ", x_test.shape, y_test.shape)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Training set:  (155738, 2) (155738,)&lt;br&gt;
Testing set:  (38935, 2) (38935,)&lt;/p&gt;
&lt;h3&gt;
  
  
  Selecting the methods: Tree model, Logistic Regression and KNN methodology
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Tree model
Tree_model = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
Tree_model.fit(x_train, y_train)
predicted = Tree_model.predict(x_test)
Tree_f1 = f1_score(y_test, predicted, average='weighted')
Tree_acc = accuracy_score(y_test, predicted)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Logistic Regression
LR_model = LogisticRegression(C=0.01, solver='liblinear').fit(x_train, y_train)
predicted = LR_model.predict(x_test)
LR_f1 = f1_score(y_test, predicted, average='weighted')
LR_acc = accuracy_score(y_test, predicted)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#KNN methodology
KNN_model = KNeighborsClassifier(n_neighbors = 4).fit(x_train, y_train)
predicted = KNN_model.predict(x_test)
KNN_f1 = f1_score(y_test, predicted, average='weighted')
KNN_acc = accuracy_score(y_test, predicted)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Comparing the results obtained
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;results = {
    "Method of Analysis": ["KNN", "Decision Tree", "LogisticRegression"],
    "F1-score": [KNN_f1, Tree_f1, LR_f1],
    "Accuracy": [KNN_acc, Tree_acc, LR_acc]
}

results = pd.DataFrame(results)
results

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;|   | Method of Analysis | F1-score | Accuracy |&lt;br&gt;
| - | :----------------: | :------: | :------: |&lt;br&gt;
| 0 | KNN | 0.591378 | 0.69675 |&lt;br&gt;
| 1 | Decision Tree | 0.576051 | 0.699679 |&lt;br&gt;
| 2 | LogisticRegression | 0.576051 | 0.699679 |&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Comparing results using LR
results = {
    "Intercept": LR_model.intercept_,
    "SPEEDING ": LR_model.coef_[:,0],
    "ROADCOND ": LR_model.coef_[:,1],
}

results = pd.DataFrame(results)
results

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;|   | Intercept | SPEEDING | ROADCOND |&lt;br&gt;
| - | :-------: | :------: | :------: |&lt;br&gt;
| 0 | -0.853729 | 0.067702 | -0.068295 |&lt;/p&gt;

&lt;p&gt;Looking at the results of this comparison, we can see that both speeding and road conditions influence the severity of traffic accidents: the positive coefficient on SPEEDING suggests speeding raises the odds of a more severe accident, while the negative coefficient on ROADCOND suggests normal road conditions lower them.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>python</category>
      <category>career</category>
    </item>
    <item>
      <title>SQL for dummies</title>
      <dc:creator>Yel Martínez Digital ESG Audit</dc:creator>
      <pubDate>Sun, 10 May 2020 19:56:24 +0000</pubDate>
      <link>https://dev.to/yel-martinez-green-tech/sql-for-dummies-2b0l</link>
      <guid>https://dev.to/yel-martinez-green-tech/sql-for-dummies-2b0l</guid>
      <description>&lt;p&gt;SQL is a structured query language through which it is
possible to access, manage and retrieve data contained in a relational
database, including the creation of databases, the deletion and recovery of
rows or the modification of these, etc. SQL is an ANSI (American National
Standards Institute) standard language, although there are multiple versions of
this language.&lt;/p&gt;

&lt;p&gt;SQL is the standard computer language used in relational
database management systems (RDBMS) such as MySQL, MS Access, Oracle, Sybase,
Informix, Postgres and SQL Server. &lt;/p&gt;

&lt;h2&gt;SQL Functionalities&lt;/h2&gt;

&lt;p&gt;SQL is one of the most widely used query languages in
relational database management because of the utilities it offers users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; access to data contained in relational databases,  &lt;/li&gt;

&lt;li&gt; the description of the data, &lt;/li&gt;

&lt;li&gt; defining the data within a database, &lt;/li&gt;

&lt;li&gt; the manipulation of these, &lt;/li&gt;

&lt;li&gt; embedding in other languages through the use of SQL
modules, libraries and pre-compilers &lt;/li&gt;

&lt;li&gt; the creation of databases and tables, &lt;/li&gt;

&lt;li&gt; the &lt;strong&gt;&lt;a href="https://www.coursera.org/account/accomplishments/verify/SF9MPC4NSUQL?utm_campaign=sharing_cta&amp;amp;utm_content=cert_image&amp;amp;utm_medium=certificate&amp;amp;utm_product=course&amp;amp;utm_source=link" rel="noopener noreferrer"&gt;views, procedures and functions&lt;/a&gt;&lt;/strong&gt;, &lt;/li&gt;

&lt;li&gt; establish permissions on tables, procedures and views.&lt;/li&gt;

&lt;li&gt; Among other functionalities.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Components involved in the SQL process&lt;/h2&gt;

&lt;p&gt;When a SQL command is executed against an RDBMS, the system
determines the best way to process that request, while the SQL engine
determines how to interpret that command.&lt;/p&gt;

&lt;p&gt;Among the components involved in this process are the query
dispatcher, optimization engines, the classic query engine, the SQL query
engine, etc.&lt;/p&gt;

&lt;p&gt;Note that the classic query engine handles non-SQL queries,
while the SQL query engine does not handle logical files.&lt;/p&gt;

&lt;h2&gt;Main SQL commands&lt;/h2&gt;

&lt;p&gt;The main standard SQL commands with which to interact with
relational databases are CREATE, SELECT, INSERT, UPDATE, DELETE and DROP. These
commands are classified in groups according to their typology.&lt;/p&gt;
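&lt;p&gt;As a minimal, self-contained sketch of these commands (run here against an in-memory SQLite database through Python's sqlite3 module; the users table and its columns are invented for illustration):&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(':memory:')  # throwaway in-memory database
cur = con.cursor()

# CREATE: define a table (a hypothetical 'users' table)
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
# INSERT: add rows
cur.execute("INSERT INTO users (name) VALUES ('Ada')")
cur.execute("INSERT INTO users (name) VALUES ('Grace')")
# UPDATE: modify existing rows
cur.execute("UPDATE users SET name = 'Ada Lovelace' WHERE id = 1")
# SELECT: retrieve rows
rows = cur.execute("SELECT id, name FROM users ORDER BY id").fetchall()
print(rows)
# DELETE: remove rows; DROP: remove the whole table
cur.execute("DELETE FROM users WHERE id = 2")
cur.execute("DROP TABLE users")
```

&lt;p&gt;This also shows the grouping by typology: SELECT, INSERT, UPDATE and DELETE work on the data, while CREATE and DROP define and remove the objects that hold it.&lt;/p&gt;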

&lt;h2&gt;RDBMS - Relational Database Management System&lt;/h2&gt;

&lt;p&gt;The Relational Database Management System, known as RDBMS,
is a database management system (DBMS) based on Edgar Frank Codd's relational
model. The ANSI and ISO standards for SQL, the language for defining, managing
and manipulating relational databases, are built on this relational model. It
is the basis of SQL and of database systems such as MS SQL Server, Oracle,
MySQL and Microsoft Access.&lt;/p&gt;

&lt;h2&gt;Parts of a table in SQL&lt;/h2&gt;

&lt;h3&gt;Explaining what a table is in SQL&lt;/h3&gt;

&lt;p&gt;In a relational database management system, data is stored
in database objects called tables, which collect multiple data entries and
consist of numerous rows and columns.&lt;/p&gt;

&lt;p&gt;The table is the simplest and most widely used form of data
storage within a relational database. &lt;/p&gt;

&lt;h3&gt;Explaining what a field is in SQL&lt;/h3&gt;

&lt;p&gt;Each SQL table is divided into fields or columns,
designed to contain information specific to each record in the
corresponding table.&lt;/p&gt;

&lt;h3&gt;Explaining what a record or row is in SQL&lt;/h3&gt;

&lt;p&gt;A record is a horizontal entity in a table. Each individual
entry in a table is a data row. &lt;/p&gt;

&lt;h3&gt;Explaining what a column is in SQL&lt;/h3&gt;

&lt;p&gt;A column is a vertical entity in a table, which contains
information related to a specific field within the table.&lt;/p&gt;

&lt;h3&gt;Explaining what a NULL value is in SQL&lt;/h3&gt;

&lt;p&gt;A field with a NULL value is a field that is left blank
during the creation of a record. In a table, a NULL value represents a field
with no value, other than a zero value or a field with spaces. &lt;/p&gt;
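&lt;p&gt;A small sketch of this distinction, using an in-memory SQLite database (the table and column names are invented for illustration); note that NULL never matches an equality test, so it needs IS NULL:&lt;/p&gt;

```python
import sqlite3

cur = sqlite3.connect(':memory:').cursor()
cur.execute("CREATE TABLE t (label TEXT, amount INTEGER)")
# A NULL value is a missing value: not zero and not an empty string
cur.execute("INSERT INTO t VALUES ('missing', NULL)")
cur.execute("INSERT INTO t VALUES ('zero', 0)")

# NULL is never equal to anything, so use IS NULL rather than = NULL
print(cur.execute("SELECT label FROM t WHERE amount IS NULL").fetchall())
print(cur.execute("SELECT label FROM t WHERE amount = 0").fetchall())
```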

&lt;h2&gt;Constraints in SQL&lt;/h2&gt;

&lt;p&gt;When we talk about constraints in SQL, we refer to the rules
that have been defined for the data columns in a table. These constraints are
used to limit the type of data that a table can contain, guaranteeing the
reliability of the data.&lt;/p&gt;

&lt;p&gt;These restrictions can be defined at column or table level,
applying only to one column in the first case and to the whole table if it is
the second case. &lt;/p&gt;

&lt;p&gt;Some of the most common constraints you can find in SQL are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NOT NULL constraint (prevents a column from containing a NULL
value)&lt;/li&gt;

&lt;li&gt;DEFAULT constraint (assigns a default value to a column
when none is specified)&lt;/li&gt;

&lt;li&gt;UNIQUE constraint (ensures that all values in a column
are distinct)&lt;/li&gt;

&lt;li&gt;PRIMARY KEY constraint (uniquely identifies each row or record
within a database table)&lt;/li&gt;

&lt;li&gt;FOREIGN KEY constraint (links a column to the primary key of
another table, so each value must identify a row in that table)&lt;/li&gt;

&lt;li&gt;CHECK constraint (ensures that the values in a column
meet a specific condition)&lt;/li&gt;
&lt;/ul&gt;
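&lt;p&gt;These constraints can be sketched in a single pair of table definitions (an in-memory SQLite example with hypothetical departments and employees tables; note that SQLite only enforces FOREIGN KEY when the pragma is enabled):&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("PRAGMA foreign_keys = ON")  # SQLite needs this for FOREIGN KEY
con.execute("""
    CREATE TABLE departments (
        id   INTEGER PRIMARY KEY,   -- PRIMARY KEY: unique row identity
        name TEXT NOT NULL UNIQUE   -- NOT NULL and UNIQUE constraints
    )""")
con.execute("""
    CREATE TABLE employees (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        salary  INTEGER DEFAULT 0 CHECK (salary >= 0),  -- DEFAULT and CHECK
        dept_id INTEGER REFERENCES departments(id)      -- FOREIGN KEY
    )""")
con.execute("INSERT INTO departments (id, name) VALUES (1, 'ESG Audit')")
con.execute("INSERT INTO employees (name, dept_id) VALUES ('Yel', 1)")

# A CHECK violation is rejected by the database, not by application code
try:
    con.execute("INSERT INTO employees (name, salary) VALUES ('Bad', -5)")
except sqlite3.IntegrityError as err:
    print('rejected:', err)
```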

&lt;h2&gt;Categories of Data Integrity in SQL&lt;/h2&gt;

&lt;p&gt;The data integrity categories of each RDBMS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Entity integrity (no duplicate rows in a table)&lt;/li&gt;

&lt;li&gt;Domain integrity (restricts the type, format, and value
range that apply to valid entries for a column within a table)&lt;/li&gt;

&lt;li&gt;&lt;strong&gt;&lt;a href="https://www.linkedin.com/in/yel-martinez-informatica-seo-desarrollo-web-posicionamiento-buscadores/details/recommendations/" rel="noopener noreferrer"&gt;Referential integrity&lt;/a&gt;&lt;/strong&gt; (rows that are
referenced by other records cannot be deleted)&lt;/li&gt;

&lt;li&gt;User-defined integrity (other specific rules not covered
by the categories above)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Normalization of a SQL database&lt;/h2&gt;

&lt;p&gt;Normalization of a SQL database follows a series of
guidelines established as a guide for the optimal creation of a database
structure.&lt;/p&gt;

&lt;p&gt;SQL database normalization is the process of organizing data
in a database efficiently, eliminating duplicate data and ensuring that data
dependencies make sense. Normalization seeks to reduce the amount of space used
in a database and ensure that data is stored in a logical manner. &lt;/p&gt;
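&lt;p&gt;A tiny sketch of this idea (a hypothetical employees/departments example in an in-memory SQLite database): instead of repeating the department name on every employee row, it is stored once and referenced by id:&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(':memory:')
# Denormalized input: the department name is repeated on every row
staff = [('Ada', 'Sales'), ('Grace', 'Sales'), ('Yel', 'Audit')]

# Normalized schema: each department is stored once, referenced by id
con.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
con.execute("CREATE TABLE employees (name TEXT, dept_id INTEGER REFERENCES departments(id))")
for emp, dept in staff:
    con.execute("INSERT OR IGNORE INTO departments (name) VALUES (?)", (dept,))
    dept_id = con.execute("SELECT id FROM departments WHERE name = ?", (dept,)).fetchone()[0]
    con.execute("INSERT INTO employees VALUES (?, ?)", (emp, dept_id))

# 'Sales' is now stored once instead of twice
print(con.execute("SELECT COUNT(*) FROM departments").fetchone())
```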

&lt;h2&gt;Basic SQL Syntax&lt;/h2&gt;

&lt;p&gt;A syntax is a unique set of rules and guidelines. In &lt;a href="https://www.credly.com/badges/b0809aba-1bde-43fb-aa29-7815fdef90a2" rel="noopener noreferrer"&gt;SQL, the syntax&lt;/a&gt; states that all statements start with one of the keywords SELECT,
INSERT, UPDATE, DELETE, ALTER, DROP, CREATE, USE or SHOW, and end with a
semicolon.&lt;/p&gt;

&lt;p&gt;It is important to note that SQL keywords are not case
sensitive in SQL statements, while MySQL can be case sensitive with table
names, so you will need to write table names exactly as they are defined in
the database.&lt;/p&gt;

&lt;h2&gt;SQL data types&lt;/h2&gt;

&lt;p&gt;In SQL, the attributes that specify the type of data an
object will contain are known as the SQL Data Type.&lt;/p&gt;

&lt;p&gt;Each column, variable, and expression has a related data
type. These can be used when creating tables by choosing the data type to be
used for a table column.&lt;/p&gt;

&lt;p&gt;These data types are divided into seven categories in SQL
Server:&lt;/p&gt;

&lt;h3&gt;Data type categories&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Exact numeric data types&lt;/li&gt;

&lt;li&gt;Approximate numeric data types&lt;/li&gt;

&lt;li&gt;Date and time data types&lt;/li&gt;

&lt;li&gt;Character string data types&lt;/li&gt;

&lt;li&gt;Unicode string data types&lt;/li&gt;

&lt;li&gt;Binary data types&lt;/li&gt;

&lt;li&gt;Other data types&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The SQL operators&lt;/h2&gt;

&lt;p&gt;In SQL, when we speak of operators, we refer to reserved
words or characters, used mainly in the WHERE clause of a SQL statement, in
order to carry out arithmetic operations, comparisons, etc. &lt;/p&gt;

&lt;p&gt;These operators are used to specify conditions within a SQL
statement. &lt;/p&gt;

&lt;p&gt;Types of SQL operators include arithmetic operators,
comparison operators, and logical operators.&lt;/p&gt;
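&lt;p&gt;A short sketch combining the three operator types in a WHERE clause (in-memory SQLite, with a hypothetical products table):&lt;/p&gt;

```python
import sqlite3

cur = sqlite3.connect(':memory:').cursor()
cur.execute("CREATE TABLE products (name TEXT, price INTEGER, stock INTEGER)")
cur.executemany("INSERT INTO products VALUES (?, ?, ?)",
                [('pen', 2, 100), ('book', 15, 3), ('lamp', 40, 0)])

# Arithmetic (*), comparison (greater-than) and logical (AND) operators
# together specify the condition each row must meet
rows = cur.execute(
    "SELECT name FROM products WHERE price * stock > 40 AND stock > 0 "
    "ORDER BY name"
).fetchall()
print(rows)
```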

</description>
      <category>sql</category>
      <category>beginners</category>
      <category>database</category>
      <category>mysql</category>
    </item>
  </channel>
</rss>
