<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ruthvik Raja M.V</title>
    <description>The latest articles on DEV Community by Ruthvik Raja M.V (@ruthvikraja_mv).</description>
    <link>https://dev.to/ruthvikraja_mv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F570068%2Fd98c0544-2591-4db7-9ef2-8ad8a72c872c.png</url>
      <title>DEV Community: Ruthvik Raja M.V</title>
      <link>https://dev.to/ruthvikraja_mv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ruthvikraja_mv"/>
    <language>en</language>
    <item>
      <title>How to create a DataFrame that consists of a list of DataFrames and corresponding Name using Python</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Mon, 04 Sep 2023 01:25:12 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-create-a-dataframe-that-consists-of-a-list-of-dataframes-and-corresponding-name-using-python-4d0a</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-create-a-dataframe-that-consists-of-a-list-of-dataframes-and-corresponding-name-using-python-4d0a</guid>
<description>&lt;p&gt;Hello polymaths, &lt;br&gt;
This task is worth knowing for most Python developers working with data. Imagine you have several Excel or CSV files, or a single Excel file with multiple sheets, and you want to run a computation over all of that data at once. A convenient approach is to append each file's data and its corresponding name to a single DataFrame that holds a list of DataFrames.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Create an empty list to append the names of each DataFrame.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
Create an empty list to append the data related to each DataFrame.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
Use a loop to iterate through each file or sheet. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;&lt;br&gt;
Perform Data Cleaning, Transformations etc. if necessary and finally append the data to the previously created empty lists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;&lt;br&gt;
Create a new DataFrame, passing the above two lists to the data parameter.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Sample Code:-&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# Import necessary Libraries
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;ef&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ExcelFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;path/input.xlsx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# Load the Excel File
&lt;/span&gt;
&lt;span class="n"&gt;dataframes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt; &lt;span class="c1"&gt;# Empty List to append the data of each File
&lt;/span&gt;&lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt; &lt;span class="c1"&gt;# Empty List to append the name of each File
&lt;/span&gt;
&lt;span class="c1"&gt;# Iterate through all the sheets within the Excel object
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sheet_names&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;# Store the data as a DataFrame from each sheet
&lt;/span&gt;    &lt;span class="n"&gt;df_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;# Store the name of the DataFrame from each sheet
&lt;/span&gt;
    &lt;span class="c1"&gt;# Perform Data Cleaning and Tranformations, if necessary #
&lt;/span&gt;
    &lt;span class="n"&gt;dataframes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df_name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;df_final&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DataFrame&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;dataframes&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt; &lt;span class="c1"&gt;# Create the Final DataFrame
&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
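&lt;p&gt;As a quick sanity check, here is a minimal sketch (with made-up sheet names and data) of how an individual DataFrame can then be retrieved from the final DataFrame by its name:&lt;/p&gt;

```python
import pandas as pd

# Hypothetical sheet data standing in for the parsed Excel sheets
df_a = pd.DataFrame({"x": [1, 2]})
df_b = pd.DataFrame({"x": [3, 4]})

names = ["sheet_a", "sheet_b"]
dataframes = [df_a, df_b]
df_final = pd.DataFrame(data={"Name": names, "DataFrame": dataframes})

# Retrieve the DataFrame stored under a given name
selected = df_final.loc[df_final["Name"] == "sheet_b", "DataFrame"].iloc[0]
print(selected["x"].tolist())  # [3, 4]
```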



&lt;p&gt;&lt;strong&gt;Done&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How to read all the sheets from an Excel file and push into Data Lake using Azure Data Factory</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Sun, 03 Sep 2023 00:14:15 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-read-all-the-sheets-from-an-excel-file-and-push-into-data-lake-using-azure-data-factory-3l41</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-read-all-the-sheets-from-an-excel-file-and-push-into-data-lake-using-azure-data-factory-3l41</guid>
<description>&lt;p&gt;Azure Data Factory has no built-in function to read all the sheets from an Excel file, but it supports reading Excel data through various methods, including Mapping Data Flows. Below are the general steps to read all the sheets from an Excel file in Azure Data Factory:-&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create an Azure Data Factory Service in Microsoft Azure Portal&lt;/strong&gt;&lt;br&gt;
If you haven't already, create an Azure Data Factory instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Linked Services for Source and Destination Files&lt;/strong&gt;&lt;br&gt;
In Azure Data Factory, create two Linked Services for your source (Excel File) and destination (Azure Data Lake) files. This is the connection information needed to access and store the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Datasets for Source and Destination Files&lt;/strong&gt;&lt;br&gt;
In Azure Data Factory, create two Datasets for your source (Excel File) and destination (Azure Data Lake) files. Set the Linked Services to the one you created in the previous step. &lt;/p&gt;

&lt;p&gt;Let us name these two Datasets as follows:-&lt;br&gt;
Source -&amp;gt; &lt;em&gt;source_ds&lt;/em&gt;&lt;br&gt;
Destination -&amp;gt; &lt;em&gt;destination_ds&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In &lt;em&gt;source_ds&lt;/em&gt;, create a new parameter under Parameters and name it &lt;em&gt;source_sheet_names&lt;/em&gt;. Under Connection, pass the newly created parameter to the Sheet name field by adding dynamic content with the following expression -&amp;gt; &lt;em&gt;@dataset().source_sheet_names&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Source Dataset -&amp;gt; source_ds
Parameters -&amp;gt; New -&amp;gt; source_sheet_names (Type: String)
Connection -&amp;gt; Sheet name -&amp;gt; Dynamic Content -&amp;gt; @dataset().source_sheet_names

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thereby, the sheet names will be sent as a Parameter from the Pipeline.&lt;/p&gt;

&lt;p&gt;In &lt;em&gt;destination_ds&lt;/em&gt;, create a new parameter under Parameters and name it &lt;em&gt;destination_file_name&lt;/em&gt;. Under Connection, pass the newly created parameter to the File path field by adding dynamic content with the following expression -&amp;gt; &lt;em&gt;destination/final_file/@{dataset().destination_file_name}/@dataset().destination_file_name&lt;/em&gt;. This creates a new folder for each sheet, named after the respective sheet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Destination Dataset -&amp;gt; destination_ds
Parameters -&amp;gt; New -&amp;gt; destination_file_name (Type: String)
Connection -&amp;gt; File path -&amp;gt; Dynamic Content -&amp;gt; destination/final_file/@{dataset().destination_file_name}/@dataset().destination_file_name 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thereby, the file names will be sent as a Parameter from the Pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Pipeline&lt;/strong&gt;&lt;br&gt;
Create a new Pipeline in Azure Data Factory. &lt;/p&gt;

&lt;p&gt;Let us name the above created Pipeline as follows:-&lt;br&gt;
Pipeline -&amp;gt; &lt;em&gt;adl_pipeline&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Create a new variable under Variables and name it &lt;em&gt;sheet_names&lt;/em&gt;. Set its type to Array and, under Default value, provide the list of sheet names -&amp;gt; &lt;em&gt;["sheet 1", "sheet 2", "sheet 3", "sheet 4"]...&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Pipeline -&amp;gt; adl_pipeline
Variables -&amp;gt; New -&amp;gt; sheet_names -&amp;gt; ["sheet 1", "sheet 2", "sheet 3", "sheet 4"]...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thereby, the sheet names will be sent during the runtime.&lt;/p&gt;
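&lt;p&gt;If you are unsure of the exact sheet names, a small pandas snippet run locally against a copy of the workbook can print the JSON array to paste into the Default value (the file name here is made up; the snippet first writes a tiny sample workbook so it is self-contained):&lt;/p&gt;

```python
import json
import pandas as pd

# Build a small throwaway workbook so the example runs end to end;
# in practice, point pd.ExcelFile at your real source file instead
with pd.ExcelWriter("input.xlsx") as writer:
    pd.DataFrame({"x": [1]}).to_excel(writer, sheet_name="sheet 1", index=False)
    pd.DataFrame({"x": [2]}).to_excel(writer, sheet_name="sheet 2", index=False)

# The printed JSON array is what goes into the pipeline's
# sheet_names variable as its Default value
sheet_names = json.dumps(pd.ExcelFile("input.xlsx").sheet_names)
print(sheet_names)
```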

&lt;p&gt;&lt;strong&gt;Add an Activity&lt;/strong&gt;&lt;br&gt;
In your Pipeline, add a ForEach activity and pass &lt;em&gt;sheet_names&lt;/em&gt; as its input. This can be achieved by navigating to Settings -&amp;gt; Items -&amp;gt; Dynamic Content -&amp;gt; &lt;em&gt;@variables('sheet_names')&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ForEach Activity -&amp;gt; Settings -&amp;gt; Items -&amp;gt; Dynamic Content -&amp;gt; @variables('sheet_names')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add an Activity within the ForEach Activity to copy the data from Source to Destination&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In your ForEach Activity, add a Copy data Activity and map the Dataset properties with the &lt;em&gt;sheet_names&lt;/em&gt;. This can be achieved by navigating to Source -&amp;gt; Name -&amp;gt; source_sheet_names -&amp;gt; &lt;em&gt;@item()&lt;/em&gt; and Sink -&amp;gt; Name -&amp;gt; destination_file_name -&amp;gt; &lt;em&gt;@item()&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ForEach Activity -&amp;gt; Copy data Activity -&amp;gt; 
Source -&amp;gt; Name -&amp;gt; source_sheet_names -&amp;gt; @item() 
Sink -&amp;gt; Name -&amp;gt; destination_file_name -&amp;gt; @item()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thereby, the ForEach Activity iterates through each sheet and copies its data to the destination folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save and Execute the Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The entire workflow in a Nutshell:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Source Dataset -&amp;gt; source_ds
Parameters -&amp;gt; New -&amp;gt; source_sheet_names (Type: String)
Connection -&amp;gt; Sheet name -&amp;gt; Dynamic Content -&amp;gt; @dataset().source_sheet_names

Destination Dataset -&amp;gt; destination_ds
Parameters -&amp;gt; New -&amp;gt; destination_file_name (Type: String)
Connection -&amp;gt; File path -&amp;gt; Dynamic Content -&amp;gt; destination/final_file/@{dataset().destination_file_name}/@dataset().destination_file_name

Pipeline -&amp;gt; adl_pipeline
Variables -&amp;gt; New -&amp;gt; sheet_names -&amp;gt; ["sheet 1", "sheet 2", "sheet 3", "sheet 4"]...

ForEach Activity -&amp;gt; Settings -&amp;gt; Items -&amp;gt; Dynamic Content -&amp;gt; @variables('sheet_names')

ForEach Activity -&amp;gt; Copy data Activity -&amp;gt; 
Source -&amp;gt; Name -&amp;gt; source_sheet_names -&amp;gt; @item() 
Sink -&amp;gt; Name -&amp;gt; destination_file_name -&amp;gt; @item()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Done&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>datascience</category>
      <category>softwaredevelopment</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to read an Excel file from Azure Databricks using PySpark</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Sat, 02 Sep 2023 20:42:35 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-read-an-excel-file-from-azure-databricks-using-pyspark-4n0n</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-read-an-excel-file-from-azure-databricks-using-pyspark-4n0n</guid>
<description>&lt;p&gt;An Excel file cannot be read directly using PySpark in Databricks, so the necessary library (com.crealytics.spark.excel) has to be installed on the Cluster before the Python code will run successfully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
Navigate to the Cluster that will be used to run the Python script under Compute in Databricks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
 Click on the tab Libraries -&amp;gt; Install new.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
Select Maven as a Library source and click on Search Packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;&lt;br&gt;
Type com.crealytics in the search bar and select Maven Central.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;&lt;br&gt;
Select the com.crealytics.spark.excel package version that matches the Scala version of your Cluster (Cluster -&amp;gt; Configuration -&amp;gt; Databricks Runtime Version).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;&lt;br&gt;
Click Install.&lt;/p&gt;

&lt;p&gt;Use the following code to load the Excel file:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Specify&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;Excel&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;
&lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="n"&gt;excelFilePath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/mnt/&amp;lt;your-mount-name&amp;gt;/path_to_your_excel_file.xlsx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Replace&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;your&lt;/span&gt; &lt;span class="n"&gt;actual&lt;/span&gt; &lt;span class="n"&gt;spark&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;

&lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Read&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;Excel&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="n"&gt;into&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;Spark&lt;/span&gt; &lt;span class="n"&gt;DataFrame&lt;/span&gt;
&lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;spark&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;com.crealytics.spark.excel&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;option&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;location&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;excelFilePath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;option&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;useHeader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Use&lt;/span&gt; &lt;span class="n"&gt;this&lt;/span&gt; &lt;span class="n"&gt;option&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;your&lt;/span&gt; &lt;span class="n"&gt;Excel&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, using the pandas library, the following Python code could be used:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; 
&lt;span class="n"&gt;excelFilePath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/mnt/&amp;lt;your-mount-name&amp;gt;/path_to_your_excel_file.xlsx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;# Replace with your actual file path
&lt;/span&gt;
&lt;span class="n"&gt;ef&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pandas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ExcelFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;excelFilePath&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;# Load the Excel file as an object
&lt;/span&gt;
&lt;span class="c1"&gt;# Mention the Sheet_Name or use ef.sheet_names to iterate through each sheet data
&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Sheet_Name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;# Load the required Excel sheet data
&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
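&lt;p&gt;As a side note, pandas can also read every sheet in one call with &lt;em&gt;sheet_name=None&lt;/em&gt;. The following self-contained sketch (it first writes a small sample workbook, with made-up names and data) shows the idea:&lt;/p&gt;

```python
import pandas as pd

# Create a sample workbook so the example runs end to end
with pd.ExcelWriter("sample.xlsx") as writer:
    pd.DataFrame({"a": [1, 2]}).to_excel(writer, sheet_name="Sheet1", index=False)
    pd.DataFrame({"a": [3]}).to_excel(writer, sheet_name="Sheet2", index=False)

# sheet_name=None returns a dict mapping each sheet name to its DataFrame
all_sheets = pd.read_excel("sample.xlsx", sheet_name=None)
print(sorted(all_sheets))  # ['Sheet1', 'Sheet2']
```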



&lt;h2&gt;
  
  
  Done
&lt;/h2&gt;

</description>
      <category>azure</category>
      <category>python</category>
      <category>cloudcomputing</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>How to impute Null values using Python</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Sun, 19 Jun 2022 23:05:17 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-impute-null-values-using-python-4bf9</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-impute-null-values-using-python-4bf9</guid>
      <description>&lt;p&gt;Hello all, this blog will provide you with an insight into handling Null values using Python programming language.&lt;/p&gt;

&lt;p&gt;Download the pre-processed and final Datasets, python code (.ipynb file) from the following links:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kaggle.com/datasets/ruthvikrajamv/home-insurance-dataset" rel="noopener noreferrer"&gt;https://www.kaggle.com/datasets/ruthvikrajamv/home-insurance-dataset&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/ruthvikraja/Python-Code-on-Home-Insurance-Dataset.git" rel="noopener noreferrer"&gt;https://github.com/ruthvikraja/Python-Code-on-Home-Insurance-Dataset.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This project uses the home insurance dataset from the Stable Home Insurance company, with data collected from 2007 to 2012 on residential house sales. The dataset contains 66 features recording various information for each of 256,136 insurance clients. To properly extract information from the dataset, some transformations are needed to clean the data for analysis. An important step in this process is checking the data for Null values. These values occur when a particular measurement does not apply to a client, and they can cause issues when building models to analyse the data. For example, a client who does not provide their personal information would have Null values in those columns. To ensure the models run smoothly, I need to either remove data points with Null values or replace them with a valid proxy.&lt;/p&gt;

&lt;p&gt;It is not sufficient to simply drop all the rows that contain Null values, so I implemented different techniques to remove the Null values without losing information. In the Stable home insurance dataset, there are a few features which contain little information, as they are missing more than 98% of the data. These columns (CAMPAIGN DESC, P1 PT EMP STATUS and CLERICAL) were all removed from the dataset. After these columns were removed, I performed some further cleaning by removing all the rows that contain information for fewer than half of the features. These data points represent clients the company has very little information about and thus will not be useful for analysis.&lt;/p&gt;

&lt;p&gt;Additionally, further processing was performed on columns like QUOTE DATE, RISK RATED AREA B, RISK RATED AREA C, PAYMENT FREQUENCY, MTA FAP, MTA APRP and MTA DATE, as these columns contain Null values in different proportions. Features like MTA FAP, MTA APRP and MTA DATE were removed for having above 70% Null values; features with such a high proportion of Null values can be removed without losing important information. The PAYMENT FREQUENCY feature consisted of 57% Null values and contained only one unique value for the entire column, so I dropped it since it does not contribute to the response variables of the analysis. QUOTE DATE was also dropped for having 58% Null values. The remaining columns with Null values were RISK RATED AREA C and RISK RATED AREA B. For these columns, there was enough information present to impute the Null values. Imputation is the process whereby Null values are replaced with a value based on the information present in the dataset. Mean imputation replaces Null values with the mean of the remaining data points. This technique is appropriate when there are few missing data points and was therefore used for RISK RATED AREA C.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyja0tghjfxrex7ldd27i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyja0tghjfxrex7ldd27i.png" alt=" " width="375" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above Figure illustrates the box plot for the feature RISK RATED AREA C, which clearly contains many outliers. I explored several treatments for these outliers and ultimately imputed the Null values with the mean value. For the feature RISK RATED AREA B, more of the data was missing, so I implemented the K Nearest Neighbours (KNN) Imputer, a more appropriate technique for this amount of missing data. This process imputes each missing value from the values of the k most similar data points (nearest neighbours). The input values were first scaled using the MinMax Scaler and then passed to the algorithm to make unbiased predictions. The final dataset consists of 189,021 rows and 57 features.&lt;/p&gt;
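&lt;p&gt;The two imputation techniques can be sketched as follows (column names and toy values are made up for illustration; the real pipeline ran on the full dataset):&lt;/p&gt;

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler

# Toy stand-ins for the insurance features (column names assumed)
df = pd.DataFrame({
    "RISK_RATED_AREA_C": [10.0, 12.0, np.nan, 11.0, 9.0],
    "RISK_RATED_AREA_B": [5.0, np.nan, 7.0, 6.0, np.nan],
})

# Mean imputation: suitable when only a few values are missing
mean_c = df["RISK_RATED_AREA_C"].mean()
df["RISK_RATED_AREA_C"] = df["RISK_RATED_AREA_C"].fillna(mean_c)

# KNN imputation: scale first so all features contribute equally
scaled = MinMaxScaler().fit_transform(df)
imputed = KNNImputer(n_neighbors=2).fit_transform(scaled)
print(np.isnan(imputed).any())  # False
```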

</description>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>Mathematical Formulae behind Optimization Algorithms for Neural Networks</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Thu, 24 Mar 2022 01:04:01 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/mathematical-formulae-behind-optimization-algorithms-for-neural-networks-121p</link>
      <guid>https://dev.to/ruthvikraja_mv/mathematical-formulae-behind-optimization-algorithms-for-neural-networks-121p</guid>
      <description>&lt;p&gt;Hello, &lt;br&gt;
The following topics are covered in this blog:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Introduction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimization Algorithms:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Gradient Descent (GD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stochastic Gradient Descent (SGD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mini-batch SGD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SGD with Momentum&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AdaGrad&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AdaDelta &amp;amp; RMSProp&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adam&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conclusion                   &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspw17rq43xssner41udc.png" alt=" " width="366" height="978"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Download the whole blog from the following link:-&lt;/em&gt;&lt;br&gt;
&lt;a href="https://github.com/ruthvikraja/Optimization-Algorithms.git" rel="noopener noreferrer"&gt;https://github.com/ruthvikraja/Optimization-Algorithms.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Introduction&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Neural networks are a subset of Machine Learning; they adapt and learn from vast amounts of data. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The neuron is the building block of a Neural network: it takes some input, multiplies the input values by their corresponding (initially random) weights, and finally produces an output. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpaq6ca9vkyzy7pryzpl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpaq6ca9vkyzy7pryzpl.gif" alt=" " width="1700" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each node (in the hidden and output layers) of a Neural network is composed of two functions, namely a linear function and an activation function. During forward propagation, the linear function is computed by summing the products of each previously connected node's output and its corresponding weight, plus a bias, as shown in the Figure. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After applying the linear function, an activation function such as Sigmoid, ReLU, Leaky ReLU, Parametric ReLU, Swish or Softplus is applied, based on the problem type and requirement. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qt9gif6iv0jnepzfv3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qt9gif6iv0jnepzfv3b.png" alt=" " width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role of an Optimizer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After computing the output at the output layer, the predicted value is compared with the actual value by computing Loss.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Loss function is used to determine the error between the actual and predicted values. The optimization algorithm then determines the new weight values, using the change in Loss w.r.t. the change in weights, to bring the output of the next iteration closer to the actual output.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsoffod6o8go04waom7y.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsoffod6o8go04waom7y.gif" alt=" " width="400" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Gradient Descent&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute new weights using Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ybylr5xgabx7ztci8vy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ybylr5xgabx7ztci8vy.png" alt=" " width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute Loss using Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqff8zptfbausp1vkjt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqff8zptfbausp1vkjt0.png" alt=" " width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ics4iqduxkvq4ypiyfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ics4iqduxkvq4ypiyfk.png" alt=" " width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Stochastic Gradient Descent&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute new weights using Stochastic Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi92jsgvzgw7ifos32fwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi92jsgvzgw7ifos32fwr.png" alt=" " width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute Loss using Stochastic Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00qbuo8d8k28qtwqtdxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00qbuo8d8k28qtwqtdxz.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyydh087q9pn97kalnnh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyydh087q9pn97kalnnh.png" alt=" " width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;
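&lt;p&gt;A sketch of the same toy regression problem trained with Stochastic Gradient Descent, where each weight update uses a single randomly chosen data point (illustrative code with made-up data):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
lr = 0.05

for epoch in range(20):
    for i in rng.permutation(len(X)):  # visit samples in random order
        y_pred = w * X[i] + b
        # Gradient estimated from a SINGLE data point (hence "stochastic")
        dw = 2 * (y_pred - y[i]) * X[i]
        db = 2 * (y_pred - y[i])
        w -= lr * dw
        b -= lr * db
```

&lt;p&gt;Each epoch now performs 100 noisy updates instead of one exact update, which is why the Loss curve zig-zags compared to plain Gradient Descent.&lt;/p&gt;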

&lt;h2&gt;
  
  
  &lt;em&gt;Mini-Batch Stochastic Gradient Descent&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute new weights using Mini-Batch Stochastic Gradient Descent is as follows:-&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64yixshwdn4e3jxxwb2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64yixshwdn4e3jxxwb2a.png" alt=" " width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute Loss using Mini-Batch Stochastic Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp381rmuf6978ozj9ooaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp381rmuf6978ozj9ooaw.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ficq4ase8w9aiqi7w3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ficq4ase8w9aiqi7w3v.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;
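&lt;p&gt;Mini-Batch SGD sits between the two extremes: each update averages the gradient over a small batch. A sketch on the same toy data (the batch size here is chosen arbitrarily for illustration):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
lr, batch_size = 0.05, 10

for epoch in range(100):
    idx = rng.permutation(len(X))            # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        y_pred = w * X[batch] + b
        # Gradient averaged over the mini-batch only
        dw = np.mean(2 * (y_pred - y[batch]) * X[batch])
        db = np.mean(2 * (y_pred - y[batch]))
        w -= lr * dw
        b -= lr * db
```

&lt;p&gt;With 100 records and a batch size of 10, each epoch performs 10 updates, trading SGD's noise against full-batch cost.&lt;/p&gt;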

&lt;p&gt;&lt;strong&gt;Overall Comparison (GD (vs) SGD (vs) Mini-Batch SGD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyqpo6l6c37xggkmpyt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyqpo6l6c37xggkmpyt2.png" alt=" " width="800" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Stochastic Gradient Descent with Momentum&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute new weights using Stochastic Gradient Descent with Momentum is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuvws80p8pzn1fc1tnsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuvws80p8pzn1fc1tnsv.png" alt=" " width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute Loss using Stochastic Gradient Descent with Momentum is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzgd9xbce39nm01e2tej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzgd9xbce39nm01e2tej.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol2znfvnz2cnjypmu04b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fol2znfvnz2cnjypmu04b.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For better illustration, consider the following scenario to calculate Exponential Weighted Average:- &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxs8y3gox4ctj70mqwt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxs8y3gox4ctj70mqwt8.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhev601ic9vf8f3zpkhax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhev601ic9vf8f3zpkhax.png" alt=" " width="800" height="328"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxt7katopzbhkopjzx8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxt7katopzbhkopjzx8g.png" alt=" " width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;
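&lt;p&gt;The Exponential Weighted Average calculation illustrated above can be reproduced numerically (the observations are hypothetical, with beta = 0.9):&lt;/p&gt;

```python
# v_t = beta * v_{t-1} + (1 - beta) * theta_t
beta = 0.9
observations = [10, 12, 14, 11, 13]  # made-up values for illustration

v = 0.0
history = []
for theta in observations:
    v = beta * v + (1 - beta) * theta
    history.append(round(v, 4))

print(history)  # [1.0, 2.1, 3.29, 4.061, 4.9549]
```

&lt;p&gt;Notice how the first few averages sit far below the raw observations because v starts at zero; this early bias is what the Bias correction step discussed under Adam compensates for.&lt;/p&gt;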

&lt;ul&gt;
&lt;li&gt;Therefore, the final updated formulae to calculate new weights &amp;amp; bias are as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvctcknsdtt0yhgoqkdb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvctcknsdtt0yhgoqkdb.png" alt=" " width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fculdfngqkzelpsw0wooi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fculdfngqkzelpsw0wooi.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ih7lhz2zn12eqckkap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ih7lhz2zn12eqckkap.png" alt=" " width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx57a5pwe6fdoui69h82n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx57a5pwe6fdoui69h82n.png" alt=" " width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;
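&lt;p&gt;A sketch of the Momentum update on the same toy regression problem, where the velocity terms are exponential weighted averages of past gradients (illustrative code, not the article's own):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
vw, vb = 0.0, 0.0        # velocity (smoothed gradient) terms
lr, beta = 0.05, 0.9     # learning rate and momentum coefficient

for epoch in range(200):
    y_pred = w * X + b
    dw = np.mean(2 * (y_pred - y) * X)
    db = np.mean(2 * (y_pred - y))
    # v_t = beta * v_{t-1} + (1 - beta) * gradient
    vw = beta * vw + (1 - beta) * dw
    vb = beta * vb + (1 - beta) * db
    # Weights move along the smoothed direction, not the raw gradient
    w -= lr * vw
    b -= lr * vb
```

&lt;p&gt;The smoothing damps oscillations across noisy updates while letting consistent gradient directions build up speed.&lt;/p&gt;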

&lt;h2&gt;
  
  
  &lt;em&gt;Adaptive Gradient Descent&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute new weights using Adaptive Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdedip7vzs46fp2yfexjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdedip7vzs46fp2yfexjl.png" alt=" " width="800" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute Loss using Adaptive Gradient Descent is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5q974br8mr1d2c0h91e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5q974br8mr1d2c0h91e.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvk9sc4ybk3ne09xztzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvk9sc4ybk3ne09xztzk.png" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcg1ophjv76jbqbtch66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcg1ophjv76jbqbtch66.png" alt=" " width="800" height="183"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp81mwmmibg4vqzphu9xf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp81mwmmibg4vqzphu9xf.png" alt=" " width="800" height="68"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffefbw5cphx6gpyr9wfiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffefbw5cphx6gpyr9wfiv.png" alt=" " width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;
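&lt;p&gt;A sketch of Adaptive Gradient Descent on the toy problem: each parameter accumulates its own squared gradients, and that sum scales down the learning rate (illustrative code; the base learning rate is chosen arbitrarily):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
Gw, Gb = 0.0, 0.0        # running sums of squared gradients
lr, eps = 0.5, 1e-8      # base learning rate and small constant to avoid /0

for epoch in range(200):
    y_pred = w * X + b
    dw = np.mean(2 * (y_pred - y) * X)
    db = np.mean(2 * (y_pred - y))
    Gw += dw ** 2        # accumulate ALL past squared gradients
    Gb += db ** 2
    # The effective learning rate shrinks as the accumulated sum grows
    w -= lr / np.sqrt(Gw + eps) * dw
    b -= lr / np.sqrt(Gb + eps) * db
```

&lt;p&gt;Because the sums only ever grow, the effective learning rate decays monotonically; on long runs it can become vanishingly small, which motivates AdaDelta and RMSProp.&lt;/p&gt;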

&lt;h2&gt;
  
  
  &lt;em&gt;Adaptive Learning Rate Method &amp;amp; Root Mean Squared Propagation&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute new weights using AdaDelta &amp;amp; RMSProp is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmakpzm9i9zsb72xmj5in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmakpzm9i9zsb72xmj5in.png" alt=" " width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formula to compute Loss using AdaDelta &amp;amp; RMSProp is as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbm116a6zayv5aki4fgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbm116a6zayv5aki4fgw.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where, &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn47056475lk4fmfea5t5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn47056475lk4fmfea5t5.png" alt=" " width="800" height="175"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xd2zz9cum5lm0ghaevj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xd2zz9cum5lm0ghaevj.png" alt=" " width="800" height="83"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx6bcbdzkjkoy4xy8abo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx6bcbdzkjkoy4xy8abo.png" alt=" " width="800" height="82"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vsafbuascvcz0vmh8ys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vsafbuascvcz0vmh8ys.png" alt=" " width="800" height="73"&gt;&lt;/a&gt;&lt;/p&gt;
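&lt;p&gt;RMSProp replaces AdaGrad's ever-growing sum with an exponentially decaying average of squared gradients. A sketch of the RMSProp form on the toy problem (AdaDelta derives the step size differently; only the RMSProp variant is sketched here, with arbitrary hyperparameters):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
Sw, Sb = 0.0, 0.0              # moving averages of squared gradients
lr, beta, eps = 0.05, 0.9, 1e-8

for epoch in range(200):
    y_pred = w * X + b
    dw = np.mean(2 * (y_pred - y) * X)
    db = np.mean(2 * (y_pred - y))
    # A DECAYING average instead of AdaGrad's ever-growing sum,
    # so the effective learning rate does not shrink to zero
    Sw = beta * Sw + (1 - beta) * dw ** 2
    Sb = beta * Sb + (1 - beta) * db ** 2
    w -= lr / (np.sqrt(Sw) + eps) * dw
    b -= lr / (np.sqrt(Sb) + eps) * db
```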

&lt;h2&gt;
  
  
  &lt;em&gt;Adaptive Moment Estimation&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The formulae to compute new weights &amp;amp; bias using Adam are as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F423q01v8y9uapfu0nhch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F423q01v8y9uapfu0nhch.png" alt=" " width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The formulae to compute Loss for Regression &amp;amp; Classification problems using Adam are as follows:- &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v5q7g1pmzppjuizkrlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v5q7g1pmzppjuizkrlk.png" alt=" " width="800" height="111"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7gye5x9x5brvd99bvx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7gye5x9x5brvd99bvx6.png" alt=" " width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2ea5rdj7w0s57qqezl0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2ea5rdj7w0s57qqezl0.png" alt=" " width="800" height="267"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbz8zsnzc9zbv17vg82n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbz8zsnzc9zbv17vg82n.png" alt=" " width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When utilising Exponential Weighted Averages, the estimates are biased towards zero at the initial time stamps because they are initialised at zero. Bias correction was introduced to counteract this and obtain better estimates at the initial time stamps. The formulae for Bias correction are as follows:-&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nesqiksshb4tmqc1laz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nesqiksshb4tmqc1laz.png" alt=" " width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The updated Weight &amp;amp; Bias formulae are as follows:-
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8asi9dcl2dz0udsw8xhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8asi9dcl2dz0udsw8xhx.png" alt=" " width="800" height="109"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uo1usr9bie8iihwxfb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uo1usr9bie8iihwxfb8.png" alt=" " width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;
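&lt;p&gt;Putting the pieces together, Adam with Bias correction can be sketched on the toy regression problem as follows (illustrative code; the hyperparameter values are common defaults, not prescribed by the article):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
mw = vw = mb = vb = 0.0                  # first and second moment estimates
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8

for t in range(1, 201):                  # t starts at 1 for bias correction
    y_pred = w * X + b
    dw = np.mean(2 * (y_pred - y) * X)
    db = np.mean(2 * (y_pred - y))
    # Momentum-style average of gradients (first moment)
    mw = beta1 * mw + (1 - beta1) * dw
    mb = beta1 * mb + (1 - beta1) * db
    # RMSProp-style average of squared gradients (second moment)
    vw = beta2 * vw + (1 - beta2) * dw ** 2
    vb = beta2 * vb + (1 - beta2) * db ** 2
    # Bias correction compensates for the zero initialisation of m and v
    mw_hat, mb_hat = mw / (1 - beta1 ** t), mb / (1 - beta1 ** t)
    vw_hat, vb_hat = vw / (1 - beta2 ** t), vb / (1 - beta2 ** t)
    w -= lr * mw_hat / (np.sqrt(vw_hat) + eps)
    b -= lr * mb_hat / (np.sqrt(vb_hat) + eps)
```

&lt;p&gt;Adam thus combines the Momentum and RMSProp ideas from the earlier sections in a single update rule.&lt;/p&gt;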

&lt;h2&gt;
  
  
  &lt;em&gt;Conclusion&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In this Presentation, different Optimization algorithms available in the field of Artificial Intelligence for reducing the Loss function of a Neural Network were discussed in detail.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Overall, the Adam Optimizer is comparatively better than the other algorithms because it combines the ideas behind Momentum and adaptive learning rates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;However, there is no guarantee that the Adam Optimizer will outperform the others on every dataset because performance depends on several other factors like the type of problem, the size of the input data, the number of features, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gradient Descent and the SGD algorithm work well for small datasets, whereas Mini-batch SGD, SGD with Momentum &amp;amp; RMSProp can be tried on large datasets. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  THANK YOU
&lt;/h2&gt;

</description>
      <category>python</category>
      <category>deeplearning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Difference between Iteration and Epoch in Neural Networks</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Tue, 22 Mar 2022 23:46:43 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/difference-between-iteration-and-epoch-in-neural-networks-5ddn</link>
      <guid>https://dev.to/ruthvikraja_mv/difference-between-iteration-and-epoch-in-neural-networks-5ddn</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Epoch&lt;/em&gt;&lt;/strong&gt; describes the number of times the algorithm sees the entire dataset whereas an &lt;strong&gt;&lt;em&gt;Iteration&lt;/em&gt;&lt;/strong&gt; tells the number of times a batch of data passed through the algorithm.&lt;/p&gt;

&lt;p&gt;For example, suppose the input dataset consists of 100,000 records and we train the model for 10 epochs with a batch size of 1, i.e. loading one data point at a time and performing forward and backward propagation for each data point [the Optimiser would be Stochastic Gradient Descent]. The number of iterations per epoch for this example would then be 100,000, and the total number of epochs would be 10.&lt;/p&gt;
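&lt;p&gt;The counting above reduces to simple arithmetic (a sketch; the ceiling handles datasets whose size is not a multiple of the batch size):&lt;/p&gt;

```python
import math

n_records = 100_000
batch_size = 1
epochs = 10

# Iterations per epoch = number of batches needed to see the whole dataset once
iterations_per_epoch = math.ceil(n_records / batch_size)
total_iterations = iterations_per_epoch * epochs

print(iterations_per_epoch)  # 100000
print(total_iterations)      # 1000000
```

&lt;p&gt;With a batch size of 100 instead, the same dataset would need only 1,000 iterations per epoch.&lt;/p&gt;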

</description>
      <category>python</category>
      <category>ai</category>
      <category>beginners</category>
      <category>datascience</category>
    </item>
    <item>
      <title>How to install Apache PySpark on Mac using Python?</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Tue, 08 Feb 2022 16:34:55 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-install-apache-pyspark-on-mac-using-python-4mhk</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-install-apache-pyspark-on-mac-using-python-4mhk</guid>
      <description>&lt;p&gt;Hello,&lt;/p&gt;

&lt;p&gt;Apache PySpark works with Java 8 and not with the latest Java version, so make sure that you install the correct version to run Apache PySpark on your Machine.&lt;/p&gt;

&lt;p&gt;Download Java 8 from the following link and install the software:&lt;br&gt;
&lt;a href="https://www.java.com/en/download/manual.jsp" rel="noopener noreferrer"&gt;https://www.java.com/en/download/manual.jsp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you already have the latest Java version on your Machine and want to remove the latest Java software from your Machine, then please visit the following blog:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/ruthvikraja_mv/how-to-uninstall-java-on-mac-104a"&gt;https://dev.to/ruthvikraja_mv/how-to-uninstall-java-on-mac-104a&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, it's time to check the installed Java version on your Mac. Enter the following command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;java&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launch Anaconda Navigator or any other IDE that runs Python code to install Apache PySpark on your Machine. Type the following command and hit enter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;pyspark&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will take some time to install the software, and once it is installed you can check the version by entering the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pyspark&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once everything is done, type the following command in the terminal to check whether a new session can be created using PySpark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;spark&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;shell&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the software has been successfully installed on your machine without any dependency errors then it should show as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonbxz4ohmqag55hsa2rt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonbxz4ohmqag55hsa2rt.png" alt=" " width="563" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Done...&lt;/p&gt;

</description>
      <category>python</category>
      <category>beginners</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to Uninstall Java on Mac?</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Tue, 01 Feb 2022 04:31:28 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-uninstall-java-on-mac-104a</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-uninstall-java-on-mac-104a</guid>
      <description>&lt;p&gt;Hello everyone,&lt;/p&gt;

&lt;p&gt;There are two methods to uninstall Java on your Mac and they are as follows:-&lt;/p&gt;

&lt;h2&gt;
  
  
  Method 1:-
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Click on the Finder app [bottom left of your Mac].&lt;/li&gt;
&lt;li&gt;Navigate to Applications Folder and type JavaAppletPlugin.plugin in the search bar.&lt;/li&gt;
&lt;li&gt;Finally, move the plug-in to the Bin and empty the Bin.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, check whether the Java software was successfully uninstalled by typing the following command in the Terminal:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;java&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should prompt you to install Java if the software was successfully uninstalled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Method 2:-
&lt;/h2&gt;

&lt;p&gt;If you are unable to find the plugin by entering JavaAppletPlugin.plugin in the search bar [Applications Folder], then follow the steps below to remove the specific version of the Java installation [Ex: a JDK folder]:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check what Java versions are available by entering the following command in the Terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ls&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Library&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Java&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;JavaVirtualMachines&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Remove the corresponding folder with that version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;sudo&lt;/span&gt; &lt;span class="n"&gt;rm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;fr&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Library&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Java&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;JavaVirtualMachines&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;jdk&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;9.0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;1.j&lt;/span&gt;&lt;span class="n"&gt;dk&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;[Ex: the above command uninstalls JDK 9.0.1]&lt;/p&gt;

&lt;p&gt;Then, check whether the Java software was successfully uninstalled by typing the following command in the Terminal:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;java&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the software was successfully uninstalled, this command should prompt you to install Java again.&lt;/p&gt;

&lt;p&gt;Done...&lt;/p&gt;

</description>
      <category>java</category>
      <category>osx</category>
    </item>
    <item>
      <title>Prediction of Customer Churn in the Telecom Industry using Neural Networks</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Sun, 26 Dec 2021 01:43:58 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/prediction-of-customer-churn-in-the-telecom-industry-using-neural-networks-3ec8</link>
      <guid>https://dev.to/ruthvikraja_mv/prediction-of-customer-churn-in-the-telecom-industry-using-neural-networks-3ec8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:-
&lt;/h2&gt;

&lt;p&gt;Customer churn is one of the most important problems in the field of telecommunications, as it directly impacts a company's revenue. Companies are therefore trying to develop methods for predicting client attrition, and it is equally important to identify the factors that drive customers to churn. This article gives a brief description of customer churn in the Telecom Industry, presents a churn prediction model that helps telecom companies identify customers who are likely to churn, and provides some data analysis to draw insights from the data. Neural Networks, Machine Learning algorithms and other techniques can be used to build a churn prediction model with a high Accuracy Score, and performance metrics like Accuracy Score, Area Under Curve (AUC), Sensitivity and Specificity can be computed to measure the goodness of the model on test data. The datasets, extracted from different telecom companies, consist of various parameters used to predict the likelihood that a customer will churn, so that the company can take measures to retain them. Big Data technologies could also be implemented for faster computation and easier access if the datasets are too large.&lt;/p&gt;

&lt;h3&gt;
  
  
  Download the Datasets and Python file(.ipynb) from the following link:-
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/ruthvikraja/Customer-Churn-Prediction.git" rel="noopener noreferrer"&gt;https://github.com/ruthvikraja/Customer-Churn-Prediction.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dataset:-
&lt;/h2&gt;

&lt;p&gt;The datasets are collected from different Telecom companies and open-source databases. Each dataset consists of a different set of features because the datasets are taken from different service providers, and each provider tracks its own set of parameters to build a better retention strategy. The features (or) parameters from the different datasets are as follows:-&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Churn - Customers who left the service provider last month.&lt;/li&gt;
&lt;li&gt;Account Length (or) Tenure - This feature describes the number of months the customer has stayed with the company.&lt;/li&gt;
&lt;li&gt;PhoneService, MultipleLines, InternetService, OnlineSecurity, OnlineBackup, DeviceProtection, TechSupport, StreamingTV, StreamingMovies etc - These are categorical features that describe the different services opted for by each customer.&lt;/li&gt;
&lt;li&gt;PaperlessBilling, PaymentMethod, MonthlyCharges and TotalCharges - These are the features that describe customer billing and payment methods.&lt;/li&gt;
&lt;li&gt;Gender, Age, Dependents etc - These are the features that describe the demographic info of a customer.&lt;/li&gt;
&lt;li&gt;Number of Calls, Minutes, Messages etc - These features describe the number of minutes a customer has spoken in a day, the number of calls a customer has made in a day, the number of messages a customer has sent in a day etc.&lt;/li&gt;
&lt;li&gt;InternationalPlan - This feature gives information about whether a customer has opted for an international plan (or) not. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, the dataset consists of some additional features like the total number of minutes a customer has spoken in the Morning, Afternoon, Evening and during Nights. &lt;br&gt;
The above-mentioned features are some of the important features from the different datasets that impact the output parameter Churn. &lt;/p&gt;

&lt;h2&gt;
  
  
  Data Analysis:-
&lt;/h2&gt;

&lt;p&gt;In this study, Data Analysis played an even more vital role than the predictive models, because the analysis provides useful insights into the particular parameters on which companies are losing customers, and into how they can acquire new ones.&lt;/p&gt;

&lt;p&gt;To analyse the datasets, Uni-variate and Bi-variate analysis was performed on both categorical and numerical features. Uni-variate analysis was performed on each individual feature, whereas Bi-variate analysis was performed for every input feature with respect to the output feature Churn.&lt;/p&gt;

&lt;h4&gt;
  
  
  Uni-Variate Analysis:-
&lt;/h4&gt;

&lt;p&gt;From Figure 1, it is clear that almost 49.6% of the customers live in area code 415, so it is better to provide more services and comparatively better plans for the customers residing in area code 415 to increase their retention period with the company. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8aqzx4g4gpuszet8xuvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8aqzx4g4gpuszet8xuvw.png" alt=" " width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;
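&lt;p&gt;A frequency breakdown like the one behind Figure 1 can be sketched in plain Python (the values below are toy data, not the actual dataset):&lt;/p&gt;

```python
from collections import Counter

# Hypothetical area-code column; in the study this comes from the dataset.
area_codes = [415, 415, 408, 510, 415, 408, 415, 510]
counts = Counter(area_codes)
total = len(area_codes)

# Share of customers per area code, as a percentage of the sample.
for code, n in counts.most_common():
    share = round(100.0 * n / total, 1)
    print(code, share)  # 415 accounts for 50.0% of this toy sample
```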

&lt;p&gt;Also, from Figure 2 it is clear that the number of customers who leave the company is much lower than the number of customers who stay with the telecom company. This class imbalance creates bias, so it is crucial to apply an upsampling technique to the minority class, i.e. the data points with churn value “yes”. The Synthetic Minority Oversampling Technique (SMOTE) could be applied to the minority class to overcome this problem. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bxfe9dckm7zg6cte8ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bxfe9dckm7zg6cte8ut.png" alt=" " width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;
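&lt;p&gt;SMOTE itself (available in the imbalanced-learn package) synthesises new minority points by interpolating between neighbours; as a stdlib-only sketch of the simpler idea, naive random oversampling balances the classes like this (toy labels, not the real data):&lt;/p&gt;

```python
import random

random.seed(0)

# Toy imbalanced labels: 9 "no" rows vs 3 "yes" rows.
majority = ["no"] * 9
minority = ["yes"] * 3

# Naive upsampling: resample the minority class with replacement until
# the classes are balanced. SMOTE goes further and synthesises brand-new
# points between minority neighbours instead of duplicating rows.
needed = len(majority) - len(minority)
upsampled = minority + [random.choice(minority) for _ in range(needed)]
balanced = majority + upsampled
print(len(upsampled), len(balanced))  # 9 18
```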

&lt;h4&gt;
  
  
  Bi-Variate Analysis:-
&lt;/h4&gt;

&lt;p&gt;In Figure 3, it is clear that customers with a tenure of more than 50 months are less likely to churn, whereas customers with a tenure of fewer than 10 months are more likely to churn, so the telecom company has to provide better deals and extra data for new customers to increase their retention period.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyolb7oxouq8daygjrqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyolb7oxouq8daygjrqi.png" alt=" " width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16w53fq58l840t7xoq7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16w53fq58l840t7xoq7f.png" alt=" " width="800" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the above Figure, it is clear that customers paying either very low or very high monthly charges are not leaving the company, whereas customers paying around $70 - $100 per month have roughly a 50% chance of leaving. The company should therefore concentrate more on the customers who are paying around $70 in monthly charges.&lt;/p&gt;

&lt;p&gt;From the Figures 5 &amp;amp; 6, it is clear that if "The total day calls" is between 85 to 115 times then the churn rate is high and if the "Number of voice mail messages" is equal to 0, then the churn rate is very high.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oi3vu8cytr21tr89i72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oi3vu8cytr21tr89i72.png" alt=" " width="706" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcxn8xef9m5cpvwxxuii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcxn8xef9m5cpvwxxuii.png" alt=" " width="660" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Methods:-
&lt;/h2&gt;

&lt;p&gt;In this Article, two different approaches are proposed to predict customer churn: a Neural Network based approach and a Machine Learning based approach. Many authors have proposed different Artificial Intelligence algorithms to predict the churn of a customer and found that AI techniques perform well on most datasets. Also, for large datasets, Big Data technology can be implemented to store, process and retrieve the data. So, in this Article, I have chosen to build a Random Forest classifier and Dense Neural Networks to predict the churn of a customer.&lt;/p&gt;

&lt;p&gt;Different weight initialisation techniques, optimisers and hidden layers are implemented to build a better predictive model.&lt;/p&gt;

&lt;p&gt;The Neural network chosen for this study consists of 3 to 5 hidden layers, depending on the dataset, and different weight initialisation techniques and activation functions are applied. The He Normal weight initialisation technique was used for neurons with the “Relu” activation function, and the Glorot Normal (or Xavier Normal) technique for neurons with the “Sigmoid” activation function, because He Normal works well with Relu whereas Glorot Normal works well with Sigmoid.&lt;/p&gt;
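&lt;p&gt;As a rough sketch of the two initialisation rules (standard library only; a real model would use framework initialisers such as Keras' he_normal and glorot_normal):&lt;/p&gt;

```python
import math
import random

random.seed(0)

def he_normal(fan_in, size):
    # He normal: draws from N(0, sqrt(2 / fan_in)); pairs well with ReLU.
    std = math.sqrt(2.0 / fan_in)
    return [random.gauss(0.0, std) for _ in range(size)]

def glorot_normal(fan_in, fan_out, size):
    # Glorot/Xavier normal: N(0, sqrt(2 / (fan_in + fan_out))); pairs well with Sigmoid.
    std = math.sqrt(2.0 / (fan_in + fan_out))
    return [random.gauss(0.0, std) for _ in range(size)]

layer_weights = he_normal(64, 5)
print(len(layer_weights))  # 5
```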

&lt;p&gt;Initially, I tried implementing a Neural network with random weights, but the accuracy score was not good, so I tried different methods to achieve a better score. At the output layer, the Sigmoid activation function was used because it maps its input to a value between 0 and 1. In the hidden layers, the Relu activation function was used instead of Sigmoid, because the Sigmoid activation function causes the Vanishing Gradient problem [its derivative lies between 0 and 0.25 only]. &lt;/p&gt;
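&lt;p&gt;The vanishing-gradient point is easy to verify: the derivative of the Sigmoid is s(z)(1 - s(z)), which peaks at exactly 0.25:&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    # Derivative of the sigmoid: s(z) * (1 - s(z)).
    s = sigmoid(z)
    return s * (1.0 - s)

# The derivative is largest at z = 0, where it equals exactly 0.25;
# multiplying many such factors across layers drives gradients toward zero.
print(sigmoid_grad(0.0))  # 0.25
```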

&lt;h2&gt;
  
  
  Results:-
&lt;/h2&gt;

&lt;p&gt;Machine Learning classification algorithms like the Random Forest classifier and KNN classifier didn’t perform as expected on the first dataset, i.e. Telco Customer Churn, even after Hyperparameter tuning and Principal Component Analysis to capture the variability of all the features. The reason is the lack of correlation between the input features and the output feature: I computed the Pearson correlation coefficient on the Telco customer churn dataset to find the features with the most impact on the output, but not even a single feature showed a high correlation with the output parameter “Churn”. So, I tried implementing a dense Neural Network to capture the input data, and the results are as follows:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawtfo96t2dyxi7cke1gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawtfo96t2dyxi7cke1gg.png" alt=" " width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;
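&lt;p&gt;The Pearson correlation coefficient used above can be computed directly (toy vectors for illustration):&lt;/p&gt;

```python
import math

def pearson(xs, ys):
    # Pearson r: covariance of x and y divided by the product of their
    # standard deviations; ranges from -1 to 1.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related features give r = 1.0.
print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```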

&lt;p&gt;The algorithms performed well on the second dataset, i.e. Customer Churn Prediction 2020, with the Machine Learning algorithms achieving an accuracy score of more than 90%. I also tried implementing Artificial Neural Networks on this dataset, with different numbers of epochs and weight initialisation techniques. The Accuracy scores are as follows:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjt5nf9r24vnqvw93m4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjt5nf9r24vnqvw93m4p.png" alt=" " width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thereby, from the above table, it is clear that implementing a Neural network with proper weight initialisation techniques is better than initialising the weights randomly. The final results, i.e. the ROC curve and Accuracy scores, are shown in Table 3 and Figure 7. Also, I have used the dill library to save the current Python session so that I can continue my work from where I left off. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8rxw51sk5dvq6tzhuqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8rxw51sk5dvq6tzhuqz.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaoqnzu3da1j14oih975.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaoqnzu3da1j14oih975.png" alt=" " width="800" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  In this Article, Artificial Intelligence was implemented to predict customer churn from a telecom company, and two different datasets were analysed to extract important features using Feature Engineering and Data Analysis techniques. Overall, Deep Learning performed well at predicting customer churn, while Machine Learning performed well only to some extent; however, many factors come into the picture when comparing the goodness of different algorithms. In future, different AI algorithms can be implemented to further decrease the error value.
&lt;/h5&gt;

&lt;h6&gt;
  
  
  THE END
&lt;/h6&gt;

</description>
    </item>
    <item>
      <title>Back Propagation in Neural Networks</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Sun, 28 Nov 2021 05:41:05 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/back-propagation-in-neural-networks-3ald</link>
      <guid>https://dev.to/ruthvikraja_mv/back-propagation-in-neural-networks-3ald</guid>
      <description>&lt;p&gt;Hello all,&lt;/p&gt;

&lt;p&gt;It is very important to know how Back Propagation works in Neural Networks in order to find the optimal weights. To learn more about this concept, let us quickly dive into the following slides:-&lt;/p&gt;

&lt;p&gt;Download the whole document as a PDF from the following link:-&lt;br&gt;
&lt;a href="https://www.kaggle.com/ruthvikrajamv/back-propagation" rel="noopener noreferrer"&gt;https://www.kaggle.com/ruthvikrajamv/back-propagation&lt;/a&gt; &lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to save the entire user session using Python?</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Sun, 28 Nov 2021 00:14:39 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/how-to-save-the-entire-user-session-using-python-2h2c</link>
      <guid>https://dev.to/ruthvikraja_mv/how-to-save-the-entire-user-session-using-python-2h2c</guid>
      <description>&lt;p&gt;It is very important to know how to save the entire current session like local variables, objects etc when we are working with AI projects using Python because it is very difficult to run the entire python code every time to initialise the objects, Models, variables etc. &lt;/p&gt;

&lt;p&gt;The pickle module can take care of this, but it sometimes fails to deserialise pickled objects, so the dill library can be used instead to quickly store and restore the current session.&lt;/p&gt;
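&lt;p&gt;For example, the standard pickle module fails on objects such as lambdas, which dill can handle (a minimal sketch):&lt;/p&gt;

```python
import pickle

# Standard pickle serialises functions by reference (module + qualified name),
# so a lambda, whose name cannot be looked up, typically fails to pickle.
square = lambda x: x * x
try:
    pickle.dumps(square)
    outcome = "pickled"
except Exception as exc:
    outcome = type(exc).__name__

print(outcome)  # e.g. PicklingError
```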
&lt;h2&gt;
  
  
  Here is a quick introduction to dill:-
&lt;/h2&gt;

&lt;p&gt;dill extends python’s pickle module for serializing and de-serializing python objects to the majority of the built-in python types. Serialization is the process of converting an object to a byte stream, and the inverse of which is converting a byte stream back to a python object hierarchy.&lt;/p&gt;

&lt;p&gt;dill provides the user the same interface as the pickle module, and also includes some additional features. In addition to pickling python objects, dill provides the ability to save the state of an interpreter session in a single command. Hence, it would be feasible to save an interpreter session, close the interpreter, ship the pickled file to another computer, open a new interpreter, unpickle the session and thus continue from the ‘saved’ state of the original interpreter session.&lt;/p&gt;

&lt;p&gt;Therefore, the following code can be implemented to save the current session:-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;##### User's python code #####
# pip install dill (or) conda install dill
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dill&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;# Save the entire session by creating a new pickle file 
&lt;/span&gt;&lt;span class="n"&gt;dill&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dump_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./your_bk_dill.pkl&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;# Restore the entire session
&lt;/span&gt;&lt;span class="n"&gt;dill&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./your_bk_dill.pkl&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the above, it is clear that implementing dill is very easy; it also provides utilities like dill.detect to investigate which attributes inside an object fail to pickle.&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Stock Price Prediction using Supervised Learning</title>
      <dc:creator>Ruthvik Raja M.V</dc:creator>
      <pubDate>Mon, 16 Aug 2021 06:18:53 +0000</pubDate>
      <link>https://dev.to/ruthvikraja_mv/stock-price-prediction-using-supervised-learning-392n</link>
      <guid>https://dev.to/ruthvikraja_mv/stock-price-prediction-using-supervised-learning-392n</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:-
&lt;/h2&gt;

&lt;p&gt;The impact of numerous factors on stock prices makes stock prediction a complex and time-consuming endeavour. Predicting the price of a stock is computationally hard because of its non-stationary nature, and it also depends on many factors like News Headlines, Tweets, Historical Trends, Social Media News etc. In this article, Machine Learning algorithms and Neural Networks are implemented on stocks of various companies like Apple, Amazon, Pfizer, Walmart etc to overcome these difficulties and to achieve better accuracy in predicting the price of a stock. Algorithms like Random Forest, XGBoost (Extreme Gradient Boosting), LSTM (Long Short Term Memory), GRU (Gated Recurrent Units) etc are developed and their RMSE (Root Mean Square Error) values are compared. The Dataset is an open-source Time Series dataset and consists of stock prices for 88 different companies that fall under 9 different sectors, covering around 5 years.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the Dataset:-
&lt;/h2&gt;

&lt;p&gt;The dataset is Time Series data and consists of stock prices of 88 different companies like Apple, Amazon, Chevron Corporation, Sanofi, Duke Energy Corporation, Visa, Alphabet etc. These companies fall under 9 different categories, namely Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology. In total there are 88 files in the dataset, and each file consists of features like the Date, the Open price of a stock on a particular day, the High and Low prices of a stock within a period, the Volume of the stocks and the Adjusted closing price of an individual company. The output (or) predicted variable is the Closing price of a stock for a particular day.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflow:-
&lt;/h2&gt;

&lt;p&gt;To train a Machine Learning algorithm like the Random Forest Regressor, all the CSV files are first loaded and converted into DataFrames, then scaling is applied to all the DataFrames so that each feature is translated to a given range. A MinMax Scaler can be used for this normalisation because each feature in the dataset has a different scale, and it is very important to scale each feature before it is sent to the model for training. Also, the dataset contains a date feature that the algorithm cannot use directly, so the datasets have to be pre-processed by splitting the date column into three separate columns (Year, Month and Day). At last, the dataset is split into training and testing data.&lt;/p&gt;
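&lt;p&gt;The two pre-processing steps above can be sketched without any libraries (in practice, sklearn's MinMaxScaler and pandas would handle this):&lt;/p&gt;

```python
def minmax_scale(values):
    # Translate a feature into the [0, 1] range, as MinMaxScaler does.
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    return [(v - vmin) / span for v in values]

def split_date(date_str):
    # Turn a "YYYY-MM-DD" date column entry into numeric Year, Month, Day.
    year, month, day = date_str.split("-")
    return int(year), int(month), int(day)

print(minmax_scale([120.0, 150.0, 135.0, 180.0]))  # [0.0, 0.5, 0.25, 1.0]
print(split_date("2017-08-16"))                    # (2017, 8, 16)
```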

&lt;h2&gt;
  
  
  Results:-
&lt;/h2&gt;

&lt;p&gt;The Random Forest Regressor performed well for 79 of the 88 companies, with RMSE values ranging from 0 to 1 for those 79 companies. The RMSE values on the test data for the 79 best-performing companies are as follows:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1radr748ki0scjnsfr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1radr748ki0scjnsfr8.png" alt="Best RMSE" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;
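&lt;p&gt;The RMSE metric used for these comparisons is straightforward to compute (toy numbers, not the study's results):&lt;/p&gt;

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error between true and predicted closing prices.
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

print(round(rmse([3.0, 5.0, 7.0], [2.0, 5.0, 9.0]), 3))  # 1.291
```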

&lt;p&gt;For 8 companies the Random Forest Regressor didn’t perform as expected, with RMSE values ranging from 0 to 10, whereas for 1 company the algorithm performed poorly, with an RMSE value between 750 and 830. The better and worst-performing companies are shown as follows:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf7q5rsuyba1yooochl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf7q5rsuyba1yooochl6.png" alt="Better RMSE" width="609" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffco5g067vn85aix7iikc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffco5g067vn85aix7iikc.png" alt="Worst RMSE" width="336" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overall the RMSE values for all the companies are shown as follows:-
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4rt8dihk9b3alayx62j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4rt8dihk9b3alayx62j.png" alt="image" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the above Figure it is clear that Machine Learning has not performed well on one company, BRK-A, so Deep Learning was implemented on the worst-performing companies to achieve a better RMSE value, and the results are as follows:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3wi4jqcxj9v5e1pnnrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3wi4jqcxj9v5e1pnnrd.png" alt="image" width="800" height="237"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;The&lt;/span&gt; &lt;span class="n"&gt;Python&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Datasets&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nb"&gt;all&lt;/span&gt; &lt;span class="n"&gt;other&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt; &lt;span class="n"&gt;can&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;following&lt;/span&gt; &lt;span class="n"&gt;Link&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/ruthvikraja/Stock-Market-Price-Prediction-using-Supervised-Learning.git" rel="noopener noreferrer"&gt;https://github.com/ruthvikraja/Stock-Market-Price-Prediction-using-Supervised-Learning.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:-
&lt;/h2&gt;

&lt;p&gt;In this article, Artificial Intelligence is used to make predictions about stock market prices. A stock market is a place where the buying and selling of shares of companies happens. The data used in this work is a Time Series dataset and consists of stock prices of 88 different companies, as described in the section About the Dataset. While the time component adds additional information, it also makes time series problems more difficult to handle than many other prediction tasks. In this study, two methods were proposed: Machine Learning based and Deep Learning based. Different AI-based algorithms, together with ensemble learning, are used to make the predictions and to compare the results of the different methodologies. GRU performed better than the other Deep Learning based methods in terms of both accuracy and processing time. Also, the Machine Learning based methods perform quite well for most of the companies but fail when it comes to large stock price values, so Deep Learning methods were implemented on the stocks with high price values and the results were far better. In future, different AI algorithms can be implemented to further decrease the error value.&lt;/p&gt;

</description>
      <category>python</category>
      <category>stockmarket</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
    </item>
  </channel>
</rss>
