<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Packt</title>
    <description>The latest articles on DEV Community by Packt (@packtpub).</description>
    <link>https://dev.to/packtpub</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F522954%2F1060b411-2cb5-456b-8702-056c21c55541.jpg</url>
      <title>DEV Community: Packt</title>
      <link>https://dev.to/packtpub</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/packtpub"/>
    <language>en</language>
    <item>
      <title>Managing Endpoint Devices in Microsoft Teams </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Wed, 12 May 2021 14:29:42 +0000</pubDate>
      <link>https://dev.to/packt/managing-endpoint-devices-in-microsoft-teams-4o2f</link>
      <guid>https://dev.to/packt/managing-endpoint-devices-in-microsoft-teams-4o2f</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwymxmaa691btu538v77e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwymxmaa691btu538v77e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will show you how to manage the devices that your organization uses with Microsoft Teams. This includes deploying the Teams client across platforms such as Windows, virtual desktops, macOS, and mobile devices. You will also learn how to manage the settings that are deployed to these devices by setting up configuration profiles. Finally, we will examine Teams Rooms and explain the principles of configuring Teams Rooms devices and collaboration bars. &lt;/p&gt;

&lt;p&gt;By the end, you will understand the different endpoint devices that can be configured with Microsoft Teams and the settings and controls that can be applied to them. &lt;/p&gt;

&lt;h1&gt;
  
  
  Technical requirements
&lt;/h1&gt;

&lt;p&gt;You will need to have access to the Microsoft Teams admin center, which you can reach at &lt;a href="https://admin.teams.microsoft.com" rel="noopener noreferrer"&gt;https://admin.teams.microsoft.com&lt;/a&gt;. You will need to be a &lt;em&gt;Global Administrator, a Teams Service Administrator, or a Teams Communications Administrator&lt;/em&gt; to carry out the steps covered. &lt;/p&gt;

&lt;h1&gt;
  
  
  Deploying Teams clients to devices
&lt;/h1&gt;

&lt;p&gt;In this section, we will examine how Microsoft Teams may be installed or deployed across many devices and platforms, including Windows, virtual desktops, macOS, and mobile devices. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The details of the technologies that we will be discussing in this section, such as Microsoft Endpoint Configuration Manager, Group Policy, and Virtualized Desktop Infrastructure, are beyond the scope of this article. These technologies will be described only at a very high level, to provide you with a basic awareness of how these technologies relate to Microsoft Teams. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, we will look at the methods available to install or deploy Microsoft Teams to Windows devices. &lt;/p&gt;

&lt;h2&gt;
  
  
  Windows
&lt;/h2&gt;

&lt;p&gt;Microsoft Teams may be deployed to your Windows devices using several methods. The simplest method is &lt;strong&gt;self-service&lt;/strong&gt;. Microsoft Teams has been included in the Microsoft 365 Apps for some time now, so when a user installs Microsoft 365 Apps on their Windows computer, Microsoft Teams will be part of that installation. &lt;/p&gt;

&lt;p&gt;To install Microsoft Teams for Windows, please note the following minimum requirements: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;.NET Framework 4.5 or above &lt;/li&gt;
&lt;li&gt;Windows 8.1 or later, or Windows Server 2012 R2 or later &lt;/li&gt;
&lt;li&gt;Recommended minimum of 3 GB of disk space available for each user profile &lt;/li&gt;
&lt;/ul&gt;
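
&lt;p&gt;If you are unsure whether a device meets the .NET Framework requirement, one quick way to check (a sketch, run from an elevated command prompt) is to query the registry; a &lt;strong&gt;Release&lt;/strong&gt; value of 378389 or higher indicates .NET Framework 4.5 or later: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" /v Release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;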

&lt;p&gt;To install Microsoft Teams as part of the Microsoft 365 apps installation, your users will need to have a valid license for Microsoft 365, which includes Microsoft Teams. The user may carry out the installation by completing the following steps: &lt;/p&gt;

&lt;p&gt;1) Log in to the Microsoft 365 portal at &lt;a href="https://portal.office.com/account" rel="noopener noreferrer"&gt;https://portal.office.com/account&lt;/a&gt; and choose &lt;strong&gt;My account&lt;/strong&gt;. Then select &lt;strong&gt;Office apps &amp;amp; devices | Install Office&lt;/strong&gt; as shown in &lt;em&gt;Figure 1&lt;/em&gt;: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F012jzc47b97ocz88gqdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F012jzc47b97ocz88gqdw.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1 – A user installs Microsoft 365 apps including Microsoft Teams&lt;/p&gt;

&lt;p&gt;2) You will be prompted to open the setup file and complete the installation of &lt;strong&gt;Office&lt;/strong&gt; as shown in &lt;em&gt;Figure 2&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n3gxuqqsergrigew1ri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n3gxuqqsergrigew1ri.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 2 – Installing Office&lt;/p&gt;

&lt;p&gt;However, for many organizations, it is often more effective to deploy Microsoft Teams by using a method such as &lt;strong&gt;Microsoft Endpoint Configuration Manager&lt;/strong&gt; or &lt;strong&gt;Group Policy&lt;/strong&gt;. To facilitate this, Microsoft has provided some MSI files, which can be used for bulk deployment scenarios. In this situation, users will not be required to install Microsoft Teams themselves.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;MSI files&lt;/em&gt; are available in 32-bit, 64-bit, and ARM64 versions, and there are builds for Commercial, Federal Government – GCC, Federal Government – GCC High, and Federal Government – DoD environments.&lt;/p&gt;

&lt;p&gt;When you identify which MSI file you need, the steps required to deploy Microsoft Teams are as follows:&lt;/p&gt;

&lt;p&gt;1) Download your preferred MSI package.&lt;br&gt;
2) Use the prepopulated MSI defaults if possible.&lt;br&gt;
3) Deploy using your chosen method, such as Microsoft Endpoint Configuration Manager or Group Policy.&lt;/p&gt;

&lt;p&gt;Once executed, the MSI package will copy an installation package to the Program Files folder of the Windows device. When the user logs in to their computer, the installer is triggered and the Microsoft Teams application is installed to that user's AppData folder. If Teams is already present in this folder, the installation is skipped.&lt;/p&gt;
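
&lt;p&gt;As an illustration, a deployment script pushed by Microsoft Endpoint Configuration Manager or a Group Policy startup script would typically invoke the package silently with standard msiexec switches (&lt;strong&gt;/qn&lt;/strong&gt; suppresses the UI and &lt;strong&gt;/l*v&lt;/strong&gt; writes a verbose log; the MSI filename shown here is illustrative and depends on the package you downloaded): &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;msiexec /i Teams_windows_x64.msi /qn /l*v teams_install.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;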

&lt;p&gt;You can also deploy Microsoft Teams to Windows devices in your environment by using the &lt;strong&gt;Office Deployment Tool&lt;/strong&gt;. This tool allows you to include or exclude Microsoft Teams from an Office installation package that you download and configure. The following example shows Microsoft Teams being excluded from an Office XML installation file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Configuration&amp;gt; 

     &amp;lt;Add OfficeClientEdition="64" Channel="Current"&amp;gt; 

         &amp;lt;Product ID="O365ProPlusRetail"&amp;gt; 

          &amp;lt;Language ID="en-us" /&amp;gt; 

          &amp;lt;ExcludeApp ID="Teams" /&amp;gt; 

         &amp;lt;/Product&amp;gt; 

         &amp;lt;Product ID="VisioProRetail"&amp;gt; 

          &amp;lt;Language ID="en-us" /&amp;gt; 

         &amp;lt;/Product&amp;gt; 

         &amp;lt;Product ID="ProjectProRetail"&amp;gt; 

          &amp;lt;Language ID="en-us" /&amp;gt; 

         &amp;lt;/Product&amp;gt; 

         &amp;lt;Product ID="LanguagePack"&amp;gt; 

          &amp;lt;Language ID="de-de" /&amp;gt; 

         &amp;lt;/Product&amp;gt; 

     &amp;lt;/Add&amp;gt; 

 &amp;lt;/Configuration&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Further reading on the Office Deployment Tool can be found at the end of this article. &lt;/p&gt;
&lt;/blockquote&gt;
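
&lt;p&gt;Once you have saved a configuration file such as the example above, the Office Deployment Tool applies it in two steps: first downloading the installation files, then running the installation. Assuming the file is saved as &lt;strong&gt;configuration.xml&lt;/strong&gt; in the same folder as the tool's &lt;strong&gt;setup.exe&lt;/strong&gt;: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;setup.exe /download configuration.xml
setup.exe /configure configuration.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;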

&lt;p&gt;It is also possible for users to download and install the Microsoft Teams app from the Microsoft Teams web client by clicking on their username at the top right of the screen and selecting &lt;strong&gt;Download the desktop app&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Next, we will look at virtual desktops. &lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual desktops
&lt;/h2&gt;

&lt;p&gt;Microsoft Teams is also supported within &lt;strong&gt;Virtual Desktop Infrastructure (VDI)&lt;/strong&gt; environments, where desktop operating systems are hosted at a central source. The following VDI platforms support Microsoft Teams: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle5de2h30u86v1klpxj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle5de2h30u86v1klpxj5.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
In the preceding environments, Microsoft Teams supports chat, collaboration, calling, and meetings. &lt;/p&gt;

&lt;p&gt;It is possible to configure virtualized environments for multiple configuration modes, which include the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dedicated &lt;/li&gt;
&lt;li&gt;Shared &lt;/li&gt;
&lt;li&gt;Persistent (users' changes are saved when the user logs off) &lt;/li&gt;
&lt;li&gt;Non-persistent (users' changes are not saved when the user logs off) &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should be noted that some advanced features of Microsoft Teams may not work in VDI environments. &lt;/p&gt;

&lt;p&gt;Microsoft Teams used in VDI environments will require the following components: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A virtualization broker (such as Azure) &lt;/li&gt;
&lt;li&gt;A virtual desktop machine &lt;/li&gt;
&lt;li&gt;A thin client endpoint &lt;/li&gt;
&lt;li&gt;The Microsoft Teams desktop app &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Microsoft recommends the following as the minimum configuration: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkt5xnvxuyvag6v36u26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkt5xnvxuyvag6v36u26.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
To deploy the Microsoft Teams application to a VM, you must download the MSI package and install it on the VM by running one of the following commands: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For a per-user installation, run the following command to install the Microsoft Teams app to the user's AppData folder. Note that a per-user installation will not work on a non-persistent setup:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;msiexec /i &amp;lt;MSI path&amp;gt; /l*v &amp;lt;logfilename&amp;gt; ALLUSERS=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For a per-machine installation, run the following command to install the Microsoft Teams app to the Program Files (x86) folder. However, it should be noted that client updates will not occur using this method. In this instance, the client must be uninstalled and reinstalled with the newer version:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;msiexec /i &amp;lt;MSI path&amp;gt; /l*v &amp;lt;logfilename&amp;gt; ALLUSER=1 
ALLUSERS=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
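
&lt;p&gt;Since a per-machine installation does not update itself, moving to a newer version means removing the existing client and installing the newer MSI. A sketch of that sequence, using the same placeholders as above, might look as follows: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;msiexec /x &amp;lt;MSI path&amp;gt; /qn
msiexec /i &amp;lt;new MSI path&amp;gt; /l*v &amp;lt;logfilename&amp;gt; ALLUSER=1 ALLUSERS=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;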



&lt;p&gt;Next, we will look at macOS. &lt;/p&gt;

&lt;h2&gt;
  
  
  macOS
&lt;/h2&gt;

&lt;p&gt;Microsoft Teams for macOS is installed by using a PKG installation file, which can be downloaded from Microsoft. Administrative access will be required to complete the installation, which will be placed in the /Applications folder. &lt;/p&gt;

&lt;p&gt;The installation of Teams for macOS devices is machine-wide as opposed to user-wide. &lt;/p&gt;

&lt;p&gt;The application can be installed manually on individual machines, or deployed more broadly using a management tool such as &lt;strong&gt;Jamf Pro&lt;/strong&gt;. &lt;/p&gt;
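
&lt;p&gt;If you are scripting the deployment, for example through Jamf Pro, the PKG can be installed from the terminal with the standard macOS &lt;strong&gt;installer&lt;/strong&gt; command (the PKG filename shown here is illustrative): &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo installer -pkg Teams_osx.pkg -target /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;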

&lt;p&gt;Now, let's look at mobile devices. &lt;/p&gt;

&lt;h2&gt;
  
  
  Mobile devices
&lt;/h2&gt;

&lt;p&gt;Users may download the Microsoft Teams app from the Apple App Store (for iOS) or Google Play (for Android) and install it on their devices. &lt;/p&gt;

&lt;p&gt;The currently supported platforms are as follows: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;iOS – Limited to the two most recent versions of the iOS app &lt;/li&gt;
&lt;li&gt;Android – Limited to the last four major versions of the Android app &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The mobile client for Microsoft Teams may also be deployed using Intune app protection policies. Further reading on this option may be found at the end of the article. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this section, we have explained how the Microsoft Teams app can be downloaded and installed, or deployed to devices and platforms in your organization such as Windows, VDI, macOS, and mobile devices. &lt;/p&gt;

&lt;p&gt;Next, we will show you how to manage your device settings from the Microsoft Teams admin center. &lt;/p&gt;

&lt;h1&gt;
  
  
  Managing device settings
&lt;/h1&gt;

&lt;p&gt;Any devices that are used with Microsoft Teams may be managed from the Microsoft Teams admin center. It is possible to carry out device-related tasks such as updating, restarting, and monitoring devices. &lt;/p&gt;

&lt;p&gt;To carry out these tasks, you must be a Global Administrator, a Teams Service Administrator, or a Teams Device Administrator. &lt;/p&gt;

&lt;p&gt;When assigned one of these roles, you may manage the following devices: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IP phones&lt;/strong&gt; – Including desk phones and conference phones. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Teams Rooms&lt;/strong&gt; – Including consoles, microphones, and cameras. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaboration bars&lt;/strong&gt; – These are Android base units with the Microsoft Teams app, which can be connected to TVs and screens. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Teams displays&lt;/strong&gt; – All-in-one business communication devices designed for Microsoft Teams.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To manage devices in the &lt;strong&gt;Microsoft Teams admin center&lt;/strong&gt;, we need to complete the following steps: &lt;/p&gt;

&lt;p&gt;1) Log in to the &lt;strong&gt;Microsoft Teams admin center&lt;/strong&gt; at &lt;a href="https://admin.teams.microsoft.com" rel="noopener noreferrer"&gt;https://admin.teams.microsoft.com&lt;/a&gt; and navigate to &lt;strong&gt;Devices&lt;/strong&gt; on the left-hand-side menu as shown in &lt;em&gt;Figure 3&lt;/em&gt;: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq179jorcqsj86e0kde1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq179jorcqsj86e0kde1j.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 3 – Devices in the Microsoft Teams admin center &lt;/p&gt;

&lt;p&gt;2) When selecting &lt;strong&gt;IP phones&lt;/strong&gt;, you can filter phones by their categories, which include individual &lt;strong&gt;User phones&lt;/strong&gt;, &lt;strong&gt;Common area phones&lt;/strong&gt;, and &lt;strong&gt;Conference phones&lt;/strong&gt;. You may edit device settings, assign configurations, manage device tags, and add or edit &lt;strong&gt;Configuration profiles&lt;/strong&gt; as shown in &lt;em&gt;Figure 4&lt;/em&gt;: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbxdnbwn4i3p8jpk0f1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbxdnbwn4i3p8jpk0f1e.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 4 – IP phones in the Microsoft Teams admin center&lt;/p&gt;

&lt;p&gt;3) Similarly, the &lt;strong&gt;Teams Rooms&lt;/strong&gt; section will show you any Rooms devices:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46gmugg7thhloet4oyqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46gmugg7thhloet4oyqn.png" alt="image"&gt;&lt;/a&gt; &lt;br&gt;
Figure 5 – Teams Rooms in the Microsoft Teams admin center &lt;/p&gt;

&lt;p&gt;4) Next, we can see the &lt;strong&gt;Collaboration bars&lt;/strong&gt; section, with similar configuration options: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wb68g60tcvudf7so2tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wb68g60tcvudf7so2tm.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 6 – Collaboration bars in the Microsoft Teams admin center&lt;/p&gt;

&lt;p&gt;5) And finally, we can view the &lt;strong&gt;Teams displays&lt;/strong&gt; section:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91ycydeyd9fay31m4t5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91ycydeyd9fay31m4t5c.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 7 – Teams displays in the Microsoft Teams admin center&lt;/p&gt;

&lt;p&gt;Let's take a closer look at the functions you may carry out in relation to these devices, starting with Teams Rooms devices. &lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Teams Rooms devices
&lt;/h2&gt;

&lt;p&gt;The following table lists the functions that are available to administrators in the Microsoft Teams admin center in relation to Teams Rooms devices: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvrhnm8h75fq3qfvuwpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvrhnm8h75fq3qfvuwpi.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's examine more closely the settings available for managing phones, collaboration bars, and Teams displays.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Teams phones, collaboration bars, and Teams displays
&lt;/h2&gt;

&lt;p&gt;The following table lists the functions that are available to administrators in the Microsoft Teams admin center in relation to Teams phones, collaboration bars, and Teams displays:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rpzu520xbqvfzoif04r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rpzu520xbqvfzoif04r.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
In this section, we have shown you how you can manage Teams devices from the Microsoft Teams admin center, including IP phones, Teams Rooms, Collaboration Bars, and Teams displays.&lt;/p&gt;

&lt;p&gt;Next, we will show you how to manage configuration profiles.&lt;/p&gt;

&lt;h1&gt;
  
  
  Managing configuration profiles
&lt;/h1&gt;

&lt;p&gt;Configuration profiles are used to manage settings for Teams phones, collaboration bars, and Teams displays. These profiles may be configured to include settings you wish to add or remove from a device or several devices. &lt;/p&gt;

&lt;p&gt;To create a configuration profile, we need to complete the following steps: &lt;/p&gt;

&lt;p&gt;1) Log in to the &lt;em&gt;Microsoft Teams admin center&lt;/em&gt; at &lt;a href="https://admin.teams.microsoft.com" rel="noopener noreferrer"&gt;https://admin.teams.microsoft.com&lt;/a&gt; and navigate to &lt;strong&gt;Devices&lt;/strong&gt; on the left-hand-side menu as shown in &lt;em&gt;Figure 8&lt;/em&gt;: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb4ama0ps03tycn86o6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb4ama0ps03tycn86o6y.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 8 – Devices in the Microsoft Teams admin center&lt;/p&gt;

&lt;p&gt;2) Choose either &lt;strong&gt;IP phones&lt;/strong&gt;, &lt;strong&gt;Collaboration bars&lt;/strong&gt;, or &lt;strong&gt;Teams displays&lt;/strong&gt; (configuration profiles may not be applied to &lt;strong&gt;Teams Rooms&lt;/strong&gt;). In this example, we will choose &lt;strong&gt;IP phones&lt;/strong&gt; and select the &lt;strong&gt;Configuration profiles&lt;/strong&gt; tab as shown in &lt;em&gt;Figure 9&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5knbbst1ws3cm4hmn9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5knbbst1ws3cm4hmn9m.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 9 – Configuration profiles tab in the IP phones section of the Microsoft Teams admin center&lt;/p&gt;

&lt;p&gt;3) Click &lt;strong&gt;+ Add&lt;/strong&gt;, and then enter a name and description for your configuration profile, as shown in &lt;em&gt;Figure 10&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztoosld6w336grb2mped.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztoosld6w336grb2mped.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 10 – Set the name and description for the configuration profile&lt;/p&gt;

&lt;p&gt;4) Scroll down to the &lt;strong&gt;General&lt;/strong&gt; section of the new configuration profile settings and complete the required settings shown in &lt;em&gt;Figure 11&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1psb0xojwfk1yacp9se2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1psb0xojwfk1yacp9se2.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 11 – Complete the general settings for the new configuration profile&lt;/p&gt;

&lt;p&gt;5) Scroll to the &lt;strong&gt;Device settings&lt;/strong&gt; section and complete the settings as required, as shown in &lt;em&gt;Figure 12&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85b2ends4smnd0vigiiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85b2ends4smnd0vigiiy.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 12 – Complete the device settings for the new configuration profile&lt;/p&gt;

&lt;p&gt;6) Scroll down to the &lt;strong&gt;Network settings&lt;/strong&gt; section and complete the required settings as shown in &lt;em&gt;Figure 13&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpp0sgpdlso98os93sg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpp0sgpdlso98os93sg9.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 13 – Complete the network settings for the new configuration profile&lt;/p&gt;

&lt;p&gt;7) When you have completed all the required settings, click on &lt;strong&gt;Save&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4e4jnto3u4e3e2i9stq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4e4jnto3u4e3e2i9stq.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 14 – Save the new configuration profile&lt;/p&gt;

&lt;p&gt;8) The new configuration profile is now visible. Click to select the profile and choose &lt;strong&gt;Assign to device&lt;/strong&gt; as shown in &lt;em&gt;Figure 15&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8rsrpymajpjepay6nqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8rsrpymajpjepay6nqq.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 15 – Assign the new configuration profile to a device&lt;/p&gt;

&lt;p&gt;9) Search for the devices that you wish to apply the new configuration profile to, and then click on &lt;strong&gt;Apply&lt;/strong&gt; as shown in &lt;em&gt;Figure 16&lt;/em&gt;: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnacymkblx0xsubmogwgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnacymkblx0xsubmogwgc.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 16 – Apply the new configuration profile to your chosen devices&lt;/p&gt;

&lt;p&gt;In this section, we explained how configuration profiles are groups of device settings that may be quickly and easily applied to your Teams devices from the Microsoft Teams admin center.&lt;/p&gt;

&lt;p&gt;In the final section, we will show you how to set up Microsoft Teams Rooms.&lt;/p&gt;

&lt;h1&gt;
  
  
  Setting up Microsoft Teams Rooms
&lt;/h1&gt;

&lt;p&gt;With Microsoft Teams Rooms, you can use HD video and audio, and share content in meetings on supported devices in your organization's meeting and conference rooms. The &lt;strong&gt;Microsoft Teams Rooms&lt;/strong&gt; software runs on Windows-based hardware available from a number of different vendors. It is also possible to mix devices for separate functions, such as one device for the camera and another for Microsoft Teams Rooms itself.&lt;/p&gt;

&lt;p&gt;There are many Microsoft Teams Rooms devices available for purchase, but before we talk about these devices, we need to configure accounts for Microsoft Teams Rooms to use.&lt;/p&gt;

&lt;p&gt;The accounts that you create for Microsoft Teams Rooms will need to be set up in line with the following available deployment options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Online deployment&lt;/strong&gt;: Where your organization is entirely based in Microsoft 365 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;On-premises deployment&lt;/strong&gt;: Where you have on-premises Active Directory, Exchange servers, and Skype for Business servers &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid deployment&lt;/strong&gt;: Where you have a mixture of on-premises and cloud services &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A device account must be set up for every Microsoft Teams Rooms device that you configure. This account hosts the calendar for the device and manages its connectivity, which allows your users to schedule meetings with the device. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;It is Microsoft's recommendation to create device accounts for your Microsoft Teams Rooms at least 2 to 3 weeks before you plan to install the actual device. &lt;/p&gt;

&lt;p&gt;You should also be aware of the license requirements for using Microsoft Teams Rooms and common area phones. Further reading is included at the end of the article. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The following table shows the minimum requirements for the configuration of a device account: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flschbk2n1xt6950er584.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flschbk2n1xt6950er584.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The simplest method to create these accounts is by using Windows PowerShell, and Microsoft provides a script for this. &lt;/p&gt;

&lt;p&gt;In the example that follows, we will explain the process of deploying a Microsoft Teams room with Microsoft 365. To do this, you will need to be aware of the following requirements: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skype for Business Online (Plan 2) or a higher plan that allows dial-in conferencing. &lt;/li&gt;
&lt;li&gt;An audio conferencing and phone system license if dial-in features are required. &lt;/li&gt;
&lt;li&gt;An audio conferencing license if dial-out features are required. &lt;/li&gt;
&lt;li&gt;Tenant users will require Exchange mailboxes. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To add a device account, we need to complete the following steps: &lt;/p&gt;

&lt;p&gt;1) Connect to Exchange Online PowerShell as an administrator by running the following commands: &lt;/p&gt;

&lt;pre&gt;
&lt;b&gt;$Cred = Get-Credential

$s = New-PSSession -ConfigurationName Microsoft.Exchange `
  -ConnectionUri https://ps.outlook.com/powershell `
  -Credential $Cred -Authentication Basic -AllowRedirection

$importresults = Import-PSSession $s&lt;/b&gt;
&lt;/pre&gt;

&lt;p&gt;2) Now we need to create a new room mailbox and add an account to it, which will allow it to authenticate. To create a room mailbox, we need to run the following command, which will set up a new room mailbox called Conference-01: &lt;/p&gt;

&lt;pre&gt;
&lt;b&gt;New-Mailbox -Name "Conference-01" -Alias 
Conference1 -Room -EnableRoomMailboxAccount $true 
-MicrosoftOnlineServicesID conference01@domainname.com 
-RoomMailboxPassword (ConvertTo-SecureString -String 
'Password' -AsPlainText -Force)&lt;/b&gt;
&lt;/pre&gt;

&lt;p&gt;The mailbox account is now created. Next, we will show how you can configure your Microsoft Teams Rooms console device. &lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring a Microsoft Teams Rooms console device
&lt;/h2&gt;

&lt;p&gt;Now that we have a device account ready, we can configure a Microsoft Teams Rooms console device. You will need to meet the hardware and software requirements that are laid out in this Microsoft guidance: &lt;a href="https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/requirements" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/requirements&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Once you have fulfilled these requirements, you can start to configure your Microsoft Teams Rooms device by completing the following steps, which include the optional creation of USB installation media. Most Microsoft Teams Rooms devices now come with the required software pre-installed: &lt;/p&gt;

&lt;p&gt;1) Prepare the installation media. You will need to use a USB storage device with nothing else on it and with a minimum of 32 GB capacity. &lt;br&gt;
2) Download the media script from &lt;a href="https://go.microsoft.com/fwlink/?linkid=867842" rel="noopener noreferrer"&gt;https://go.microsoft.com/fwlink/?linkid=867842&lt;/a&gt;. &lt;br&gt;
3) The script is named CreateSrsMedia.ps1 and must be run from Windows PowerShell as an administrator. &lt;br&gt;
4) Running the script will download an MSI installer package for Microsoft Teams Rooms. &lt;br&gt;
5) Next, you need to establish the build of Windows that will be required, download it and any required supporting elements, and place these on the installation media. &lt;br&gt;
6) Plug the USB media into the Microsoft Teams Rooms device. Boot to the setup and follow the manufacturer's instructions. &lt;br&gt;
7) Select and apply your desired language. &lt;br&gt;
8) Windows will now install and the console will trigger its native setup upon restart. &lt;/p&gt;

&lt;p&gt;Once the preceding steps are complete, further configuration may be needed depending on the type of device and your environment. For example, you may need to use the &lt;strong&gt;Configure Domain&lt;/strong&gt; option to connect to your Skype for Business Server, if applicable. You may also configure microphone and speaker settings at this point. &lt;/p&gt;

&lt;p&gt;In this section, we have explained the principles of Microsoft Teams Room devices. These are certified devices that can be used to book meeting and conference rooms when an appropriate device account is set up and associated with a compatible device. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article, we have explained how to install or deploy Microsoft Teams client apps to your devices, including Windows, VDI, macOS, and mobile devices. We’ve also shown you how to configure settings for Teams devices such as IP phones, Teams Rooms, collaboration bars, and Teams displays from the Microsoft Teams admin center. You learned how configuration profiles may be used to apply a collection of settings to the devices listed in the Microsoft Teams admin center. Finally, we introduced you to Microsoft Teams Rooms devices and explained the prerequisites and steps to configure them within your environment. &lt;/p&gt;

&lt;h1&gt;
  
  
  Further reading
&lt;/h1&gt;

&lt;p&gt;Deploy Microsoft Teams with Microsoft 365 Apps: &lt;a href="https://docs.microsoft.com/en-us/deployoffice/teams-install" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/deployoffice/teams-install&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Microsoft Teams operations guide: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/upgrade-operate-my-service" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/upgrade-operate-my-service&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Microsoft Teams for Virtualized Desktop Infrastructure: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/teams-for-vdi" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/teams-for-vdi&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Install Microsoft Teams using Microsoft Endpoint Configuration Manager: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/msi-deployment" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/msi-deployment&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Manage your devices in Microsoft Teams: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/devices/device-management" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/devices/device-management&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Microsoft Teams Rooms deployment overview: &lt;a href="https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/rooms-deploy" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/rooms-deploy&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Configure a Microsoft Teams Rooms console: &lt;a href="https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/console" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/console&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Microsoft Teams Rooms requirements: &lt;a href="https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/requirements" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/requirements&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Get clients for Microsoft Teams: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/get-clients" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/get-clients&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Configure accounts for Microsoft Teams Rooms: &lt;a href="https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/rooms-configure-accounts" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/MicrosoftTeams/rooms/rooms-configure-accounts&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Configure Microsoft Teams Rooms for Microsoft 365: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/rooms/with-office-365" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/rooms/with-office-365&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Microsoft Endpoint Configuration Manager: &lt;a href="https://docs.microsoft.com/en-us/mem/configmgr/" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/mem/configmgr/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Windows Virtual Desktop: &lt;a href="https://azure.microsoft.com/en-gb/services/virtual-desktop/" rel="noopener noreferrer"&gt;https://azure.microsoft.com/en-gb/services/virtual-desktop/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Group Policy: &lt;a href="https://docs.microsoft.com/en-us/troubleshoot/windows-server/group-policy/use-group-policy-to-install-software" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/troubleshoot/windows-server/group-policy/use-group-policy-to-install-software&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Microsoft Office Deployment tool: &lt;a href="https://docs.microsoft.com/en-us/deployoffice/overview-office-deployment-tool" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/deployoffice/overview-office-deployment-tool&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Deploy Teams to mobile devices using Intune app protection policies: &lt;a href="https://docs.microsoft.com/en-us/mem/intune/apps/manage-microsoft-teams" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/mem/intune/apps/manage-microsoft-teams&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Licensing for Microsoft Teams Rooms: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/rooms/rooms-licensing" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/rooms/rooms-licensing&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Licensing for common area phones: &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/set-up-common-area-phones" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/microsoftteams/set-up-common-area-phones&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This article is part of the &lt;a href="https://packt.live/3noysVv" rel="noopener noreferrer"&gt;Managing Microsoft Teams: MS-700 Exam Guide&lt;/a&gt; by Peter Rising and Nate Chamberlain. Check it out now to understand how to plan a Microsoft Teams deployment within a business environment and manage Teams administrative functions with ease. &lt;/p&gt;

</description>
      <category>office365administration</category>
      <category>microsoft365dministration</category>
      <category>microsoftteamsadministration</category>
      <category>microsoftteams</category>
    </item>
    <item>
      <title>Variables and Data Types in Roblox Lua </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Wed, 05 May 2021 14:54:26 +0000</pubDate>
      <link>https://dev.to/packt/variables-and-data-types-in-roblox-lua-4p3c</link>
      <guid>https://dev.to/packt/variables-and-data-types-in-roblox-lua-4p3c</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G26wOshq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9ku6btnylnfa6yblpqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G26wOshq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9ku6btnylnfa6yblpqj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Roblox Lua&lt;/strong&gt; language is a fast, procedural programming language adapted from &lt;strong&gt;Lua&lt;/strong&gt;. Lua was originally created in 1993 due to software trade barriers in Brazil, which prevented many from buying specialized software from outside the country. Because of this, the language was designed to be highly customizable and C-based so that programmers could make their own implementations using Lua's C API.&lt;/p&gt;

&lt;p&gt;In this article, we’ll look at the different data types provided by Roblox Lua, and see how to create and manipulate variables.  &lt;/p&gt;

&lt;h1&gt;
  
  
  Technical requirements
&lt;/h1&gt;

&lt;p&gt;The code included in this article should be intelligible when read alongside the text; a computer is not required. However, if you want to interact with the code, and perhaps go on to experiment with Roblox game programming, then you’ll need access to a computer capable of running Roblox Studio. To utilize most of the features of Roblox Studio, you will also need a stable internet connection. More information on the system requirements of Roblox can be found here: &lt;a href="https://en.help.roblox.com/hc/en-us/articles/203312800"&gt;https://en.help.roblox.com/hc/en-us/articles/203312800&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The code snippets featured in this article are available at &lt;a href="https://github.com/PacktPublishing/Coding-Roblox-Games-Made-Easy/tree/main/Chapter03"&gt;https://github.com/PacktPublishing/Coding-Roblox-Games-Made-Easy/tree/main/Chapter03&lt;/a&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  Creating variables
&lt;/h1&gt;

&lt;p&gt;In programming, a &lt;strong&gt;variable&lt;/strong&gt; is a way for your code to hold various types of data, or &lt;strong&gt;data types&lt;/strong&gt;. In most programming languages, these variables are &lt;strong&gt;typed&lt;/strong&gt;, meaning that the type of variable must be declared when it is created. If you were to create a variable that holds a number, for example, you could only assign data that was a number to that variable. In Roblox Lua, variables are not typed; if you were to initialize a variable as a number, you could later set it to anything else without issue. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;It should be noted that there is a new option for a typed version of Lua, though it is currently in beta, and not available to the general public. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is important that you know what the most common data types are before you begin programming. &lt;/p&gt;

&lt;h2&gt;
  
  
  Data types
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Integers&lt;/strong&gt; are any whole, real numbers within a &lt;strong&gt;64-bit&lt;/strong&gt; range. This means that the value of a signed 64-bit integer can be either positive or negative, but can only hold values between -2^63 and 2^63 - 1. The following diagram shows how the 64 bits are formatted to represent a signed integer in binary: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EemumdSF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n7gfnyisa1u1ubw5f27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EemumdSF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n7gfnyisa1u1ubw5f27.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1 – In a signed integer, the first bit is the sign and the remaining bits represent the magnitude &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Numbers&lt;/strong&gt; can hold both whole and fractional portions of numeric values, meaning that they can hold virtually any number between -1.8 × 10^308 and 1.8 × 10^308. This difference in the range of possible values stems from how the data type itself is formatted in computer systems; the following diagram shows how binary formatting for numbers differs from integers. The number data type is formally named a &lt;strong&gt;double-precision floating-point number&lt;/strong&gt;, though it is more commonly called a &lt;strong&gt;double&lt;/strong&gt;, which is a type of &lt;strong&gt;float&lt;/strong&gt;. These numbers use what is known as IEEE 754, a technical standard that is used to represent floats at the binary level. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6SmqIZi9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjo5xbr15nf067kgxshe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6SmqIZi9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjo5xbr15nf067kgxshe.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 2 – Float bits are formatted differently from integers but how many are used remains the same&lt;/p&gt;
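Because numbers are IEEE 754 doubles, decimal fractions that look exact are actually stored as binary approximations. A minimal sketch in plain Lua (this behavior is standard Lua, not Roblox-specific) shows the consequence:

```lua
-- Doubles are binary approximations of decimal fractions
print(0.1 + 0.2 == 0.3)  -- false: the sum is not exactly 0.3
print(string.format("%.17f", 0.1 + 0.2))  -- 0.30000000000000004
-- The usual workaround is to compare within a small tolerance
print(math.abs((0.1 + 0.2) - 0.3) < 1e-9)  -- true
```

This is why equality comparisons on computed fractional numbers should generally be avoided.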

&lt;p&gt;&lt;strong&gt;Booleans&lt;/strong&gt; are a simple data type with a binary true or false value; when stored in a variable, they are more frequently called &lt;strong&gt;bools&lt;/strong&gt;. It is important to note that in Lua, 1 and 0 only have numeric values and do not have any uses in Boolean logic, as sometimes seen in other languages. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strings&lt;/strong&gt;, as a data type, are best explained in most languages as an array of individual characters. In Lua, however, characters are not a data type. Strings are used to represent letters, numbers, and other characters to the client, but they can have different internal applications. &lt;/p&gt;

&lt;p&gt;In Lua, &lt;strong&gt;tables&lt;/strong&gt;, which are closely associated with arrays in other languages, are data types that can hold a potentially unlimited number of elements. Unlike arrays in other programming languages, tables act more like lists: they are not limited to an initialized size, nor do they require one, so additional table positions do not need to be preemptively reserved. Elements contained in these tables are indexed from 1 to &lt;em&gt;n&lt;/em&gt;, with &lt;em&gt;n&lt;/em&gt; being the number of elements in your table. It should be noted that in most other languages, the starting index value of a table is 0; this is called zero-based indexing, as distinct from Lua’s one-based indexing. Additionally, like any other variable in Lua, tables are untyped, which has advantages: in other languages, you are often restricted to adding values of a single type to an array. However, you should still keep your tables organized and not loosely throw any type of data into them. &lt;/p&gt;
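The one-based indexing and list-like growth described above can be sketched in a few lines (the fruit names are purely illustrative):

```lua
-- Tables grow like lists; no size is reserved up front
local fruits = {"apple", "banana", "cherry"}
print(fruits[1])   -- apple (Lua tables index from 1, not 0)
print(#fruits)     -- 3 (the # operator returns the number of elements)
fruits[#fruits + 1] = "mango"   -- append a new element at position n + 1
print(fruits[4])   -- mango
-- Untyped: a table can mix data types, though this is best used sparingly
local mixed = {1, "two", true}
```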

&lt;p&gt;&lt;strong&gt;Dictionaries&lt;/strong&gt; are tables that utilize key-based indexing as opposed to numbered indexes. As we mentioned previously, their size is not finite and elements within them are still untyped. The greatest benefit dictionaries offer is the ability to index anything with a convenient key. For example, if you wanted all the apples in the &lt;strong&gt;Workspace&lt;/strong&gt; to have one functionality and all bananas to have another, you could use the name of the fruit as a dictionary key that has an associated function; you could even use the fruit instance itself as the key since the keys are not limited just to strings. As we mentioned previously, Roblox does not make it very practical to utilize &lt;strong&gt;object-oriented programming (OOP)&lt;/strong&gt;, so creating a dictionary containing different data types with unique keys allows you to make what is essentially a class within Lua, which can be quite convenient for both organizational and functional purposes.&lt;/p&gt;
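The fruit example above might be sketched like this; the specific keys and functions are hypothetical, but the pattern of key-based indexing is standard Lua:

```lua
-- A dictionary mapping fruit names to behaviour via key-based indexing
local fruitActions = {
    Apple = function() return "crunch" end,
    Banana = function() return "peel first" end,
}
print(fruitActions["Apple"]())  -- crunch
-- Keys are not limited to strings; any value (even an instance) can be a key
local anyKey = {}
local lookup = { [anyKey] = "found me" }
print(lookup[anyKey])           -- found me
```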

&lt;p&gt;In mathematics, a &lt;strong&gt;vector&lt;/strong&gt; is a quantity that contains both direction and magnitude. Vectors can be used to represent data about both position and rotation. In Roblox, vectors are a userdatum, meaning that they are a custom data type made by Roblox, not a Lua &lt;strong&gt;primitive&lt;/strong&gt; (native data type) like those previously mentioned. There are two specific vector types you will commonly work with: the Vector3 and Vector2 data types. A Vector3 contains &lt;em&gt;X&lt;/em&gt;, &lt;em&gt;Y&lt;/em&gt;, and &lt;em&gt;Z&lt;/em&gt; components and is used to define the position and orientation of instances such as parts. A Vector2 is typically only used when working with a UI or in any other two-dimensional scenario; it contains only &lt;em&gt;X&lt;/em&gt; and &lt;em&gt;Y&lt;/em&gt; components. Vectors are useful for performing various calculations, from checking the distance between two positions to calculus-level computations such as cross and dot products:&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rIvzqrmx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxxgnfk7537i3vg6ddqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rIvzqrmx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxxgnfk7537i3vg6ddqn.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 3 – A vector is capable of conveying information about both direction and magnitude&lt;/p&gt;
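In Roblox itself you would construct these with Vector3.new(x, y, z); the distance check mentioned above can be sketched in plain Lua (in Studio you would instead subtract two Vector3 values and read the result's Magnitude property):

```lua
-- Distance between two 3D points using the Pythagorean theorem
local function distance(ax, ay, az, bx, by, bz)
    local dx, dy, dz = bx - ax, by - ay, bz - az
    return math.sqrt(dx * dx + dy * dy + dz * dz)
end
print(distance(0, 0, 0, 3, 4, 0))  -- 5 (a 3-4-5 right triangle)
```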

&lt;p&gt;A CFrame, or coordinate frame, is another userdatum similar to a vector but having the ability to hold additional information. Like vectors, &lt;strong&gt;CFrames&lt;/strong&gt; contain positional data, but they also include nine elements that make up a 3x3 matrix describing the rotation of the CFrame. Because of this, most CFrame manipulations allow for positional changes to be relative to the orientation of the coordinate frame itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instances&lt;/strong&gt; are a userdatum and represent everything you can interact with within the &lt;strong&gt;Explorer&lt;/strong&gt; window of Studio. Different types of instances are called classes, and each class possesses different associated properties. To see the extent of instance classes, view the full list on the developer website: &lt;a href="https://developer.roblox.com/en-us/api-reference/index"&gt;https://developer.roblox.com/en-us/api-reference/index&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;While you have learned about all the primitive data types Lua contains, there are many more Roblox userdata types. The full list can be found here: &lt;a href="https://developer.roblox.com/en-us/api-reference/data-types"&gt;https://developer.roblox.com/en-us/api-reference/data-types&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, you will learn how to handle these different data types by assigning them to variables. Variables allow you to hold and manipulate data so that it can be used within any program you write.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting and manipulating variables
&lt;/h2&gt;

&lt;p&gt;Initializing and changing variables in Lua is a process that utilizes a variety of different operators to achieve a desired result. A convenient way to check the value of your variable is by using the print() function. For many programmers, print("Hello World!") is the first line of code they ever write. The print() function is a valuable tool, not only when looking for bugs, but also for observing what your code produces when it would otherwise not be visible.&lt;/p&gt;
&lt;h2&gt;
  
  
  Numbers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Numbers&lt;/strong&gt; are among the most intuitive variables to set and change in Lua. If you want to initialize a variable, you should utilize the &lt;strong&gt;local&lt;/strong&gt; keyword. Initializing a variable without this keyword will work, but it defines the variable for the entire script once it has been run, which is unnecessary in almost all cases and is considered poor style. After putting local, you put the name of your variable. A variable name cannot start with a digit and may contain only letters, digits, and underscores. For example, if you wanted to have a variable that simply held the value of 99, your code would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myNumber = 99
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many different operators we can use to change the variable and even &lt;strong&gt;libraries&lt;/strong&gt; of special functions, but for this example, we simply want the value to increment by 1 to reach a total value of 100. To accomplish this, we can set the variable to itself plus 1 by using the addition operator (+):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myNumber = myNumber + 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may have noticed that local does not precede the variable name here. This is because local is only put before your variable name when you're initializing it; to reference or alter a variable, you simply put the variable's name. Depending on the scenario, it may be more practical to simply set the variable to 100 directly. In this case, you would simply set the variable to the value, similar to what you did when initializing it (without the local statement, of course).&lt;/p&gt;

&lt;p&gt;The Lua programming language supports the arithmetic operators that are standard across most languages, those being for addition (+), subtraction (-), multiplication (*), division (/), and modulo (%). For more advanced operations, Lua provides a library with the same functionality as the standard math library in the C language. This library provides trigonometric functions, value conversions, and specific values with convenience and accuracy. To utilize this library, we can use the math keyword. Here is an example of getting a more accurate value for pi by using the math library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myNumber = math.pi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
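To make the operators and library functions above concrete, here are a few more standard Lua calls with their results (all plain Lua, not Roblox-specific; the exponent operator ^ is also part of the standard set):

```lua
print(7 % 3)            -- 1 (modulo: the remainder of 7 / 3)
print(2 ^ 10)           -- 1024 (exponentiation)
print(math.floor(3.7))  -- 3 (round down to the nearest whole number)
print(math.max(4, 9))   -- 9 (largest of the arguments)
print(math.sin(0))      -- 0 (trigonometric functions work in radians)
```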



&lt;h2&gt;
  
  
  Booleans
&lt;/h2&gt;

&lt;p&gt;Setting a Boolean is simple as there are only two initialization options: using the true or false keyword. An initialization statement for a Boolean value typically looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myBool = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While changing the value of this variable is simply a matter of setting the bool to true or false, there is a trick for setting the bool to its opposite value in one line, rather than using a conditional. This is to use the not operator, which serves to simply return the opposite of the input following it. For example, if we wanted to change the preceding myBool variable from true to false, we could simply say:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myBool = not myBool
print(myBool) -&amp;gt; false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Strings
&lt;/h2&gt;

&lt;p&gt;To declare a string, you should use the same variable initialization style and encapsulate your text inside double-quotes. While single quotes can be used, double quotes are more typical, unless double quotes are contained within the string itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myString = "Hello"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your string contains double quotes, Lua uses the backslash (\) as an escape character. This means that a character that would normally be special is treated as plain text within the string. For example, if a character in some game dialog is being quoted, you could include double quote marks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myString = "He said \"I don't like apples!\"" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This backslash operator also makes some normal text special. The two characters that are granted functionality by the backslash character are the letters n and t. When a \t is present within your string, a tab will be added in that place; as a side note, a tab is considered a single character by computer systems. When a \n is in your string, a new line is inserted at that point, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myString = "Separated\tby\ttabs" 
print(myString) -&amp;gt; "Separated by tabs" 
myString = "Separated\nby\nlines" 
print(myString) -&amp;gt; 
"Separated 
by 
lines"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple lines in your string, you do not necessarily need to use the \n operator. Lua, unlike some other languages, supports the use of multi-line strings. Aside from being able to simply press your &lt;em&gt;Enter&lt;/em&gt; key to create new lines, you can more conveniently format paragraph-sized strings in your programs. To initialize a paragraph string, you must capture your text within double brackets, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myString = [[This string 
can span 
multiple lines.]] 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the most common ways string variables can be altered is by &lt;strong&gt;concatenating&lt;/strong&gt; them. If you follow any string with the concatenation operator (..) and provide another string, the latter string is attached at that position:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myString = "Hello" 
myString = myString.. " World!" 
print(myString) -&amp;gt; "Hello World!" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ability to append is particularly useful when you're presenting context-dependent information to a player via a UI element. For example, if you wanted to announce who the winner of the round of a game was, you could append the name of that player to a string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local winnerName = "WinnerWinner" 
myString = "Game over! ".. winnerName.. " has won the round!" 
print(myString) -&amp;gt; "Game over! WinnerWinner has won the round!"  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to how numeric data has a library for mathematical operations, there exists a library of string functions for more complex manipulations, as well as data management. This library can be accessed by using the string keyword. Some functions include the ability to change the case of all letters within a string, the ability to split strings at certain points, and even to find all strings within a string that match a certain pattern, which is useful for systems such as in-game search bars. For example, all the letters in the following string will be converted into uppercase using one of the string library's functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myString = "iT iS wARm tOdaY." 
print(string.upper(myString)) -&amp;gt; "IT IS WARM TODAY." 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
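&lt;p&gt;As a brief sketch of the other capabilities mentioned above, here are a few more standard string library functions (the sample phrase is our own):&lt;/p&gt;

```lua
local phrase = "It is warm today."

-- Convert every letter to lowercase
print(string.lower(phrase)) --> "it is warm today."

-- Extract a substring by character positions (1-based, inclusive)
print(string.sub(phrase, 7, 10)) --> "warm"

-- Find where a pattern first occurs; returns the start and end positions
print(string.find(phrase, "warm")) --> 7 10
```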



&lt;p&gt;Using strings in numeric arithmetic should be avoided when possible, but there may be situations where it would be appropriate. Whenever a string is used where a number is required, Lua will attempt to automatically convert that string into a number. For example, if you try to add the string "50" to the number 100, it will function correctly, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("50" + 100) -&amp;gt; 150 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, if the string you are attempting to perform an operation on contains non-numeric characters, the string-to-number conversion will fail. To prevent this, you can check if a string is fully numeric by using the tonumber() function. If the string that's been passed to the function cannot be converted into a number, the value that's returned will be nil; &lt;strong&gt;nil&lt;/strong&gt; is a value that represents something non-existent. If we attempt to add the string "Hello" to the number 100, an error will occur: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myString = "Hello" 
print(tonumber(myString)) -&amp;gt; nil
local myNumber = 100 + myString 
-&amp;gt; &lt;b&gt;"attempt to perform arithmetic (add) on number and string"&lt;/b&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
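&lt;p&gt;A defensive pattern (sketched here with our own variable names) is to call tonumber() first and only perform arithmetic when the result is not nil:&lt;/p&gt;

```lua
local input = "Hello"
local number = tonumber(input)

if number ~= nil then
    -- Safe: input was fully numeric and has been converted
    print(100 + number)
else
    -- tonumber() returned nil, so skip the arithmetic entirely
    print("Not a number: " .. input) --> "Not a number: Hello"
end
```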

&lt;h2&gt;
  
  
  Tables
&lt;/h2&gt;

&lt;p&gt;Tables are straightforward to create but less intuitive to set and manipulate than the other data types we have covered so far, as you must make use of another library to accomplish most alterations. To create a new, empty table, set your variable to a pair of braces, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myTable = {} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When initializing a new table, you do not need to have it start out empty; you can include elements within your table when it is first created. Elements in tables require a separating character, which can be either a comma (,) or semicolon (;). For example, if a player were tasked with retrieving items from a grocery list, you could initialize a table of predetermined foodstuffs in this fashion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myTable = {"Tofu", "Milk", "Bacon"} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you've created your table, you will need a way to index items within your list. Without loops (not covered here), you can only index items individually. Remember that tables use one-based numeric indexing, so an item is indexed simply by its position number. All the items from the grocery list could be either assigned to variables or accessed directly, as seen in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myTable = {"Tofu", "Milk", "Bacon"} 
local firstItem = myTable[1] 
print(firstItem, myTable[2], myTable[3]) -&amp;gt; "Tofu Milk Bacon" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To add or remove elements from a table, you can use the table library, which can be accessed using the table keyword. This library allows you to alter table structures by changing how they are sorted, what their contents are, and where existing table entries are located. To add new elements to a table, use the table.insert() function. The function requires a minimum of two arguments: the table being targeted and the value to be added to the table. If three arguments are provided, the first argument is the targeted table, the second is the desired position, and the third is the value to be added. When using the three-argument form, remember that every element at or after the desired position is shifted one place to the right. There are no restrictions on the provided index: it can be negative, or it can lie beyond the table's current length (though you should avoid doing this). Here is an example of adding one element at the beginning of a table and another without a position specified, which by default goes to the end of the table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local items = {"Elephant", "Towel", "Turtle"} 
table.insert(items, 1, "Rock") 
table.insert(items, "Cat") 
-&amp;gt; items = {"Rock", "Elephant", "Towel", "Turtle", "Cat"}  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without loops, you cannot reliably remove every element matching a given value or criterion. Instead, you need to know the index of the value you want removed from the table. For example, if the list is only supposed to contain living things, we would want to remove the Rock and Towel items. We can do this by using the table.remove() function. It is important to note that removing an element from a table shifts all the elements that follow it one place to the left. So, once the rock is removed from the table, the indexes of all the other items are one less than they were before. This can be seen in the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;items = {"Rock", "Elephant", "Towel", "Turtle", "Cat"} 
table.remove(items, 1) 
-&amp;gt; items = {"Elephant", "Towel", "Turtle", "Cat"} 
table.remove(items, 2) 
-&amp;gt; items = {"Elephant", "Turtle", "Cat"} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm that the correct number of elements is in your table at any given time, you can prefix a table or table variable with the # length operator to return the number of elements within it. The table.getn() function returns the same result, though it is slightly longer to write. You can confirm that the two techniques agree by making the following comparison:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(#items == table.getn(items)) -&amp;gt; true 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dictionaries
&lt;/h2&gt;

&lt;p&gt;As mentioned previously, dictionaries are tables that use custom, key-based indexing as opposed to sorted numeric indexes. Conceptually, you can think of entering values into a dictionary as declaring a variable, except that the local keyword is not applicable here. While elements in a dictionary can be laid out like a table, it is more common for the elements to be situated like a &lt;strong&gt;struct&lt;/strong&gt;, with each entry having its own line; the separating character for elements can be either a semicolon or a comma. If you had a restaurant's menu within your game, you could arrange the items within a dictionary, with the key being the name of the meal's course and the value being the name of the dish:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local menu = { 
    appetizer = "House salad"; 
    entree = "Ham sandwich"; 
    dessert = "Ice cream"; 
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Indexing these declared elements is quite intuitive as you simply follow the path to the desired value. In this case, let's say you wanted to capture what dish was being served as the entrée on the menu with a new variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local meal = menu.entree 
print(meal) -&amp;gt; "Ham sandwich" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setting elements is equally straightforward: by following the path, you can set or alter the element based on its data type like any other variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;menu.entree = "Turkey sandwich" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the advantages of using these keys in Lua is that they are not restricted just to string indexes. By using brackets ([ ]), you can use any data type as an index of your value. This is particularly useful if you want one data type to have a direct association with another at a given value. For example, if you wanted to set a list of threshold prices that correlated with a descriptive string, you could use a number as an index. Bear in mind that in order to index non-string keys, you must also use brackets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local prices = { 
    [0] = "Free"; 
    [5] = "Cheap"; 
    [20] = "Average"; 
    [50] = "Expensive"; 
} 
print(prices[0]) -&amp;gt; "Free" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What may be less obvious is the ability to use userdata values as keys. We could associate the origin of our &lt;strong&gt;Workspace&lt;/strong&gt; with a string, number, or another position; you are not restricted in any regard. &lt;/p&gt;
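&lt;p&gt;Plain Lua has no Roblox userdata available, but the same idea can be sketched with any non-string, non-number value used as a key; here, a table stands in for a position value:&lt;/p&gt;

```lua
-- A table acting as a stand-in for a userdata value such as a position
local origin = {x = 0, y = 0, z = 0}

local labels = {}
labels[origin] = "Spawn point" -- any value except nil can be a key
print(labels[origin]) --> "Spawn point"
```

Note that the key is the specific value itself: a different table with identical contents would be a different key.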

&lt;p&gt;Something to note is that a table can be the value of another table; whenever something exists within another entity of the same type, we call this &lt;strong&gt;nesting&lt;/strong&gt;. You can create tree-like structures by nesting tables within each other, fetching them with the same key-based style. Nesting tables is somewhat of a common practice for organizational and functional purposes. For instance, if you wanted to list some basic stats of &lt;strong&gt;non-playable characters (NPCs)&lt;/strong&gt; in your game, it may make sense to include those stats in a table under one collective NPC table so that the information can be indexed by the name of the NPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local units = { 
    ["Heavy Soldier"] = { 
        WalkSpeed = 16; 
        Damage = 25; 
    }; 

    Scout = { 
        WalkSpeed = 25; 
        Damage = 15; 
    }; 
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
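&lt;p&gt;Reading from this nested structure uses the same key-based indexing; note that the key containing a space requires bracket notation:&lt;/p&gt;

```lua
local units = {
    ["Heavy Soldier"] = {
        WalkSpeed = 16;
        Damage = 25;
    };

    Scout = {
        WalkSpeed = 25;
        Damage = 15;
    };
}

-- Keys with spaces must be indexed with brackets
print(units["Heavy Soldier"].Damage) --> 25
-- Simple identifier keys can use dot notation
print(units.Scout.WalkSpeed) --> 25
```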



&lt;p&gt;Now let’s look at &lt;strong&gt;vectors&lt;/strong&gt;, knowledge of which will allow you to better understand 3D environments. &lt;/p&gt;

&lt;h2&gt;
  
  
  Vectors
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Vectors&lt;/strong&gt; are values that represent both direction and magnitude. In Roblox programming, vectors are used to represent positions in three- and two-dimensional environments, define the orientations of different instances, show the direction of &lt;strong&gt;CFrames&lt;/strong&gt;, and calculate additional information about objects in relation to each other. &lt;/p&gt;

&lt;p&gt;Declaring a vector is much like creating many other Roblox userdata types. After stating the name of the userdata, you choose the constructing function. In most cases, when you're working with vectors, you will use the new option. For this example, we will be using a Vector3, though a Vector2 follows the same format but with only two components:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myVector = Vector3.new(0,0,0) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changing vector values is a little different than changing the other data types we have covered. This is because arithmetic is done across all components with changing behaviors, depending on what is being used in the operation. To demonstrate this, arithmetic between two vectors is done by component, meaning that adding two vectors will combine the values of each component. You can conceptualize this as the vectors being physically overlaid and adding each column together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myVector = Vector3.new(1,3,5) + Vector3.new(2,4,6) 
-&amp;gt; Vector3.new(3,7,11) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, the behavior of vector arithmetic changes when &lt;strong&gt;scalar&lt;/strong&gt; values are present. A scalar is any value that conveys magnitude but not direction. For example, vectors can be both multiplied and divided by scalars, but you cannot perform addition or subtraction with these mismatched data types. The only exception to this is when a scalar is divided by a vector, in which case division is done by component, with the scalar acting as the numerator of each element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myVector = Vector3.new(2,4,6) * 2 -&amp;gt; Vector3.new(4,8,12) 
myVector = Vector3.new(2,4,6) / 2 -&amp;gt; Vector3.new(1,2,3) 
myVector = 2 / Vector3.new(2,4,6) -&amp;gt; Vector3.new(1,0.5,0.333) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Aside from changing vectors as a whole, you can capture individual values from a vector. Here, we are setting three local variables at once. Typically, this format is reserved for what is called a &lt;strong&gt;tuple&lt;/strong&gt;: a function returning multiple values that are not grouped together in a structure such as a table, so more than one variable must be assigned in a single statement. By indexing the &lt;em&gt;X&lt;/em&gt;, &lt;em&gt;Y&lt;/em&gt;, and &lt;em&gt;Z&lt;/em&gt; fields of the vector, we can capture number values, which can be used in various computations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local x,y,z = myVector.X, myVector.Y, myVector.Z  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the most common calculations that's done with vectors is finding the distance between two positions. While you could use the distance formula using the math library, there is a more direct way to do this. As we mentioned previously, all vectors have magnitude; this magnitude can also be manually calculated. However, Roblox includes a magnitude property for all vectors that can be captured, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local magnitude = myVector.Magnitude 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To calculate the distance between two positional vectors, the vectors must be subtracted from each other; then, the resulting magnitude of the new vector will be the distance between them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local vector1 = Vector3.new(1,5,7) 
local vector2 = Vector3.new(2,4,6) 
local distance = (vector1 - vector2).Magnitude 
print(distance) -&amp;gt; 1.73205 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
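&lt;p&gt;As mentioned above, the magnitude can also be calculated manually with the distance formula. This plain-Lua sketch (no Roblox userdata involved) reproduces the distance between the two vectors from the previous example:&lt;/p&gt;

```lua
-- Component-wise difference of (1,5,7) and (2,4,6)
local dx, dy, dz = 1 - 2, 5 - 4, 7 - 6

-- Magnitude of the difference vector: sqrt(dx^2 + dy^2 + dz^2)
local distance = math.sqrt(dx * dx + dy * dy + dz * dz)
print(distance) --> approximately 1.73205
```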



&lt;p&gt;Another data type used in 3D environments is the &lt;strong&gt;CFrame&lt;/strong&gt;. We will discuss them in the next section. &lt;/p&gt;

&lt;h2&gt;
  
  
  CFrames
&lt;/h2&gt;

&lt;p&gt;The userdata CFrame is similar to a vector but has a wider range of uses and behaviors because of the additional information it carries. Declaring a CFrame variable with only positional data is the same as what you'd do with a vector; you can use the new constructor and must provide &lt;em&gt;X&lt;/em&gt;, &lt;em&gt;Y&lt;/em&gt;, and &lt;em&gt;Z&lt;/em&gt; coordinates, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local myCFrame = CFrame.new(0,0,0) --CFrame.new() also works 
--for creating blank CFrame identities. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What makes a CFrame distinct from a vector is its matrix of rotational information, which describes its orientation via directional vectors. While you will likely not deal with these matrix components individually, changing the orientation of &lt;strong&gt;CFrames&lt;/strong&gt; is something you will be doing constantly. A typical way to set the orientation of a CFrame is to use the LookAt portion of the CFrame.new() constructor. By providing an origin and a target, a new CFrame will be created at the provided position, with LookVector (the front of the part being manipulated) directed toward the specified LookAt position. The greatest advantage of this is making a part look at a different position for ease of movement relative to its forward-facing direction. To test this, add two parts to our &lt;strong&gt;Workspace&lt;/strong&gt;, named Part1 and Part2. Position Part1 wherever you would like and place Part2 where you want the first part to look. Once you've done this, execute the following code to see how the front face of Part1 now points directly at Part2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local Part1 = workspace.Part1 
local Part2 = workspace.Part2 
Part1.CFrame = CFrame.new(Part1.Position, Part2.Position)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we mentioned previously, one of the benefits of manipulating CFrame rotation is for relative movement. If you've already played around with the position property of our parts, you may have noticed that the position is global and there is not really a way for a part to move in the direction it is facing. This global view is called the &lt;strong&gt;world space&lt;/strong&gt;. By using CFrame arithmetic, you can move parts in relation to their orientation, which is important for systems such as projectiles, doors, and even vehicles; this relative view is called the &lt;strong&gt;object space&lt;/strong&gt;. Here is a simple application where you could move a part forward in the direction it is facing by one stud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myCFrame = myCFrame * CFrame.new(0,0,-1) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that multiplying two CFrame values does not actually multiply the components, and the operation that's occurring is conceptually more like addition. When the preceding code is implemented inside a loop, particularly a fast running one, you can simulate movement. This is a technique that's often used for projectile systems, as it saves many resources. This is because the &lt;em&gt;moving&lt;/em&gt; part is anchored, so physics calculations do not need to be made. This directional movement is used for almost any moving part and in most cases should be used instead of the Position property. &lt;/p&gt;

&lt;p&gt;While it may not be intuitive, you must be aware of which axes control the rotational behavior you seek in a three-dimensional environment. As shown in the following image, the &lt;em&gt;X&lt;/em&gt;, &lt;em&gt;Y&lt;/em&gt;, and &lt;em&gt;Z&lt;/em&gt; axes may control different directions of rotation than you may have expected: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QKnRAOFr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7e3380t6oe9glsnbszva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QKnRAOFr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7e3380t6oe9glsnbszva.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 4 – Looking at the front of this part, you can see which axes cause what rotational behavior &lt;/p&gt;

&lt;p&gt;You may be wondering why the handles for the &lt;em&gt;Y&lt;/em&gt; axis are horizontal and the handles for the &lt;em&gt;X&lt;/em&gt; axis are vertical. The reason is that the axes of rotation are defined by the object moving around the axis. Due to this, they will not stand in the same orientation that the axes lay on a graph. So, in the case of an object being rotated around the &lt;em&gt;Y&lt;/em&gt; axis, you would need horizontal handles: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HOPf-GvW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8msheq8vdnm1v4wp4tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HOPf-GvW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8msheq8vdnm1v4wp4tv.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 5 – Here, you can see that the handles correspond to the object rotating around the axes&lt;/p&gt;

&lt;p&gt;By using the CFrame.Angles() constructor, you can directly manipulate the orientation of a CFrame. Remember that parts use degrees for their Orientation property, but &lt;strong&gt;CFrames&lt;/strong&gt; use radians when working with their rotational matrix. While keeping the same behavior of other CFrame arithmetic cases, multiplying a CFrame by CFrame.Angles() effectively adds to each orientation component. Let's follow an example where we want a CFrame to fully turn around by 180 degrees. To achieve this, we will need to rotate around the &lt;em&gt;Y&lt;/em&gt; axis and use radians, as shown here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myCFrame = CFrame.new() --No rotation 
myCFrame = myCFrame * CFrame.Angles(0,math.pi,0) --you can also 
use math.rad(180) for math.pi 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CFrames&lt;/strong&gt; have many built-in functions but do not use a library like the Lua primitive types we have already covered. There are many functions and constructors – even some redundant ones – that all have their own uses for accomplishing complex operations for use in a variety of world-based computations. While these functions are abundant, you will not likely use many of them unless you're making some very math-intensive systems. However, say you wanted to obtain the orientation of the CFrame we just set. As stated previously, &lt;strong&gt;CFrames&lt;/strong&gt; use a matrix of directional vectors to describe their rotation; you are unable to just index this value like a part's orientation. Isolating this orientation for use with parts or calculations is not immediately apparent and requires the use of a special function. To extract this information, we must use the ToEulerAnglesYXZ() function. This function will return three numbers as a tuple in the normal &lt;em&gt;X&lt;/em&gt;, &lt;em&gt;Y&lt;/em&gt;, &lt;em&gt;Z&lt;/em&gt; order. These values represent a close approximation of the rotation of the CFrame in radians. Using the CFrame value from the previous example, we can capture all three components of the orientation within a Vector3 without assigning the values to variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local orientation = Vector3.new(myCFrame:ToEulerAnglesYXZ()) 
-&amp;gt; Vector3.new(-0,3.1415925,0) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are several ways you might apply this new value. For example, the orientation can be applied to a part in our &lt;strong&gt;Workspace&lt;/strong&gt; by multiplying the vector by a conversion factor in order to obtain the orientation in degrees:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;part.Orientation = orientation * (360 / (2 * math.pi)) 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are interested in exploring more information about other CFrame functions and constructors, you can view all of them by visiting the API reference for the topic on the developer website: &lt;a href="https://developer.roblox.com/en-us/api-reference/datatype/CFrame"&gt;https://developer.roblox.com/en-us/api-reference/datatype/CFrame&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Instances
&lt;/h2&gt;

&lt;p&gt;Instances are userdata values and, consequently, are also created by using the new constructor. There are hundreds of different instances, though only some can be created from scripts, due to security permissions. For now, let's simply make a new part in our &lt;strong&gt;Workspace&lt;/strong&gt; and color it a bright cyan. To do this, we can use Instance.new() and provide a string of the class name as its argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;local part = Instance.new("Part") 
part.BrickColor = BrickColor.new("Cyan") 
part.Parent = workspace 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;In this article, you’ve learned about the different data types defined in Roblox Lua, and have seen how to create and manipulate variables. These are foundational topics, and a natural next step would be to learn about the language’s other main programming constructs. With that knowledge, you will be in a good position to begin your Roblox game development journey.  &lt;/p&gt;

&lt;p&gt;Get started with your first Roblox Lua game development project with Zander Brumbaugh's &lt;a href="https://packt.live/3efXhyx"&gt;Coding Roblox Games Made Easy&lt;/a&gt;. With Zander's expert guidance, this book will walk you through the basics of Roblox game development and help you develop two complete games in no time. &lt;/p&gt;

</description>
      <category>robloxgame</category>
      <category>robloxstudio</category>
      <category>luaprogramming</category>
      <category>gameprogramming</category>
    </item>
    <item>
      <title>Introducing Tor </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Fri, 30 Apr 2021 12:32:10 +0000</pubDate>
      <link>https://dev.to/packt/introducing-tor-1ob9</link>
      <guid>https://dev.to/packt/introducing-tor-1ob9</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyhq36qp58w7c2uq5yig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyhq36qp58w7c2uq5yig.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In recent years, privacy has become one of the fundamentals of security and information technology. &lt;strong&gt;The Onion Router (Tor)&lt;/strong&gt; Project can help us achieve what many users have been asking for in terms of an assurance of online anonymity. Tor is a global network of computers run by volunteers to provide online anonymity to anyone who needs it. &lt;/p&gt;

&lt;p&gt;This article will explain how Tor can help us research and develop tools for the online anonymity and privacy of its users while they're surfing the internet. Tor does this by setting up virtual circuits between the various nodes that make up the Tor network. We will also look at how Tor works from an anonymity point of view, stopping websites from tracking you.  &lt;/p&gt;

&lt;p&gt;Let's get started! &lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding the Tor Project and hidden services
&lt;/h1&gt;

&lt;p&gt;The internet is arguably the largest source of mass surveillance in the world but is also one of the safest ways to send anonymous messages. Most internet users use the default applications and settings, making it possible to track, log, and analyze almost all of their communications. This is exemplified by the large-scale data collection performed by large companies, which aim to gain economic advantage from their users’ data. &lt;/p&gt;

&lt;p&gt;There are different types of anonymous browsing. Browsing through a single proxy offers a level of anonymity at the network level.  &lt;/p&gt;

&lt;p&gt;Another widely used anonymization system is sending traffic through a VPN. Broadly, this resembles Tor in that your traffic is routed through an intermediary, but there is no anonymization between your computer and the VPN provider. In Tor, by contrast, only the "exit node" contacts your actual destination – for example, the website you are trying to view anonymously – which makes it far more difficult to track the user and discover their origin address. &lt;/p&gt;

&lt;p&gt;All this requires the use of programs that aim to hide the user's identity. Perhaps the biggest anonymization device in use at the moment is Tor. This system facilitates anonymous communication by routing the messages on the Tor network through other computers. &lt;/p&gt;

&lt;p&gt;Thanks to the Tor network, we can connect completely anonymously due to it being an encrypted connection where the IP changes with each request that is made to each of the nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring the Tor network
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tor&lt;/strong&gt; is a network of virtual tunnels that protect you or your corporation from being placed at a specific location in the network. The objective of this network is to change the traditional routing mode, which we all use, so as to maintain the anonymity and privacy of our data. &lt;/p&gt;

&lt;p&gt;Tor provides anonymity by routing all your packets in an encrypted way through a complex web of repeaters. These communicate with each other to help transport your messages to the right destination, without anyone knowing who made the request or actually sent it. &lt;/p&gt;

&lt;p&gt;From a privacy point of view, Tor has two distinct purposes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hiding the locations of users who are browsing the web&lt;/strong&gt;: Your computer can normally be traced through its IP address; Tor prevents your IP address from being linked to your activity. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encrypting your browsing traffic&lt;/strong&gt;: Tor encrypts your browsing traffic by mixing it with other users' traffic using a technique called onion routing, which hides your IP address from the websites you visit. It also hides the traffic from your ISP, which can see when you're connected to the Tor network but cannot determine what sites you are accessing through it. Now would be a good time to briefly highlight the use of the DNS servers provided by our ISP for the resolution of domain names. If we have access to the configuration of our router, we can change the DNS servers that we use and opt for a DNS service that offers additional features, such as anonymity or protection against fraudulent or potentially dangerous destinations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we've understood the purpose of Tor networking, let's look at how it works. &lt;/p&gt;

&lt;h3&gt;
  
  
  Onion routing
&lt;/h3&gt;

&lt;p&gt;The Tor network is based on the principle of &lt;strong&gt;onion routing&lt;/strong&gt;. This means that a connection goes through several encrypted layers, and the router at each layer only knows what is essential to perform the work at that layer. &lt;/p&gt;

&lt;p&gt;When you connect to the Tor network, the following process occurs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client downloads a list of all available Tor relays and selects three: one guard node, one middle (relay) node, and one exit node. &lt;/li&gt;
&lt;li&gt;If you then send information through the Tor network to the internet, it's first encrypted so that only the exit relay can see what website you're requesting. The exit node can see the contents of the network packets it forwards but, in most cases, it cannot tell who originally sent them. &lt;/li&gt;
&lt;li&gt;Then, this already encrypted layer is further encrypted so that only the middle relay node knows that it should be sent to the exit relay. This doubly encrypted layer is encrypted so that only the guard relay can see who the middle relay is: 
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpg3r630wki48uoeuoei.png" alt="image"&gt;
Figure 1 – Onion routing connection flow between the client and server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All this encryption is done before the network traffic leaves your computer, which means the following for us: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anyone monitoring your internet connection can only see you exchanging encrypted information with the &lt;strong&gt;guard&lt;/strong&gt; relay. &lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;guard&lt;/strong&gt; relay only knows your IP address and who the middle relay is. &lt;/li&gt;
&lt;li&gt;The middle relay only knows the &lt;strong&gt;guard&lt;/strong&gt; relay and the &lt;strong&gt;exit&lt;/strong&gt; relay, but not who you are or what website you're requesting.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;exit node&lt;/strong&gt; knows what you're requesting off the internet, as well as who the middle relay is, but not who you are or who the guard relay is.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This process completely separates the content you're requesting from anything that can be used to establish your identity. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt;&lt;br&gt;
The source code for the Tor Project is available at the project's website at &lt;a href="https://www.torproject.org/download/tor/" rel="noopener noreferrer"&gt;https://www.torproject.org/download/tor/&lt;/a&gt; and the project's GitHub repository at &lt;a href="https://github.com/torproject/tor" rel="noopener noreferrer"&gt;https://github.com/torproject/tor&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, how does the network work? Let's suppose that we have two computers: computer A and computer B. A wants to send a message to B and makes a connection to a server that contains the addresses of the Tor nodes. &lt;/p&gt;

&lt;p&gt;You can see this process in a graphical way on the official Tor website: &lt;a href="https://2019.www.torproject.org/about/overview.html.en" rel="noopener noreferrer"&gt;https://2019.www.torproject.org/about/overview.html.en&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Let's take a look at how this works, step by step: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first step is getting a directory listing from the central server. &lt;/li&gt;
&lt;li&gt;After receiving the directory list from this server, our Tor client will connect to a random node through an encrypted connection. This node will pick another random node with another encrypted connection, and so on, up to the last node inside the Tor network. This exit node (the penultimate hop of the communication) makes an unencrypted connection to computer B. All Tor nodes are chosen at random, and no node can be used twice. &lt;/li&gt;
&lt;li&gt;Using asymmetric encryption, computer A encrypts the message in layers, giving the data the onion-like structure that onion routing is named after. First, it encrypts the message with the public key of the last node of the route so that only that node can decrypt it. Alongside the message, it includes (also encrypted) directions to the destination, B. This entire package, along with directions to the last node on the list, is encrypted again so that it can only be decrypted by the penultimate node on the route, and so on for every node in the path. &lt;/li&gt;
&lt;li&gt;To avoid third-party analysis of our communications, every 10 minutes, the Tor connection nodes are changed, with new nodes being chosen. &lt;/li&gt;
&lt;li&gt;The nodes of the Tor network are public. If we ourselves run a node, we will increase our privacy. Although this sounds contradictory, here is why: if Alice uses the Tor network to connect to Bob, she will need to connect to another Tor node. However, if she also acts as a node for Jane or Dave, she will likewise be connected to other nodes. A third party therefore cannot tell whether traffic leaving Alice's machine was initiated by her as a user or merely relayed by her as a node. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This makes it more complex for a third party to extract information. If Alice were to function as a node for hundreds of users, it would be difficult to single out her own traffic among theirs. &lt;/p&gt;

&lt;p&gt;This process is repeated until we're finished with all the nodes of the route. With this, we already have the data package ready, so it's time to send it. Computer A connects to the first node on the route and sends the packet to it. This node decrypts it and follows the instructions it has decrypted to send the rest of the packet to the next node. This one will be decrypted again and resent to the next one, and so on. The data will finally arrive at the output node, which will send the message to its destination. &lt;/p&gt;
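&lt;p&gt;The layered wrapping and peeling described above can be sketched in Python. This is only a toy illustration of the structure – base64 encoding stands in for the real public-key encryption, and the relay names are made up: &lt;/p&gt;

```python
import base64
import json

def wrap(message, route, destination):
    """Wrap `message` in one encrypted layer per relay, innermost first.

    base64 stands in for real public-key crypto, purely to show structure:
    each relay can only open its own layer, and learns only the next hop.
    """
    # The layer peeled by relay route[i] reveals route[i+1] (or the destination).
    next_hops = route[1:] + [destination]
    packet = message
    for hop in reversed(next_hops):
        layer = json.dumps({"next": hop, "payload": packet})
        packet = base64.b64encode(layer.encode()).decode()
    return packet

def peel(packet):
    """One relay removes its layer: returns (next_hop, inner_packet)."""
    layer = json.loads(base64.b64decode(packet))
    return layer["next"], layer["payload"]

route = ["guard", "middle", "exit"]
packet = wrap("GET /", route, "example.com")
hop1, packet = peel(packet)   # performed by the guard  -> "middle"
hop2, packet = peel(packet)   # performed by the middle -> "exit"
hop3, request = peel(packet)  # performed by the exit   -> "example.com"
print(hop1, hop2, hop3, request)  # middle exit example.com GET /
```

&lt;p&gt;Each call to peel corresponds to one relay removing its own layer: it learns the next hop and nothing else. &lt;/p&gt;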

&lt;p&gt;The Tor protocol works by multiplexing multiple circuits over a single node-to-node TLS connection. Each circuit is a path through the Tor network created by the client, consisting of randomly selected nodes. Tor traffic is routed through three nodes by default: &lt;strong&gt;Guard&lt;/strong&gt;, &lt;strong&gt;Relay&lt;/strong&gt;, and &lt;strong&gt;Exit&lt;/strong&gt;. To carry traffic across these relays, Tor has flow-multiplexing capabilities, whereby the following occurs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single Tor circuit can transport multiple TCP connections. &lt;/li&gt;
&lt;li&gt;Each node knows only its immediate predecessor and successor in the circuit; i.e., no single node knows the entire route. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, we'll look at hidden services. &lt;/p&gt;

&lt;h2&gt;
  
  
  What are hidden services?
&lt;/h2&gt;

&lt;p&gt;Tor allows a website to hide its IP address from its users. Such sites are called &lt;strong&gt;onion services&lt;/strong&gt; or &lt;strong&gt;hidden services&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Hidden services are sites that can only be accessed while connected to Tor, because they are hosted within the Tor network itself. Many of these sites are illegal, since the protection of operating inside the Tor network attracts the people who set up such sites. &lt;/p&gt;

&lt;p&gt;According to the Tor Project's statistics, there are over 60,000 onion services running at the time of writing: &lt;a href="https://metrics.torproject.org/hidserv-dironions-seen.html" rel="noopener noreferrer"&gt;https://metrics.torproject.org/hidserv-dironions-seen.html&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Hidden services provide a mechanism that preserves both anonymity and the confidentiality of data. However, this comes at a cost in performance, since building the circuits between the client and the server is quite expensive. For this reason, hidden services in Tor are slow. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt;&lt;br&gt;
To maintain proper use of the Tor network, the user and the onion service that they wish to access must assemble complete Tor circuits. For this reason, there will be six nodes between the user and the service provider. This makes the connection slower and explains why onion services generally use very simple and lightweight websites. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that you understand the basics of the Tor Project and what hidden services are, let's move on and learn about the main tools we can use to connect to the Tor network. &lt;/p&gt;

&lt;h1&gt;
  
  
  Tools for anonymity in the Tor network
&lt;/h1&gt;

&lt;p&gt;In this section, you will learn about the main tools that provide anonymity in the Tor network. We'll do this by learning how to connect to the &lt;strong&gt;Tor Browser&lt;/strong&gt; and introducing other tools for controlling our Tor instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to the Tor network
&lt;/h2&gt;

&lt;p&gt;The easiest way to navigate through the Tor network is to use the Tor Browser, a modified version of Firefox that includes extensions such as &lt;strong&gt;Torbutton&lt;/strong&gt;, &lt;strong&gt;NoScript&lt;/strong&gt;, and &lt;strong&gt;HTTPS Everywhere&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The Tor Browser is configured to obtain the available routes and servers automatically. In addition to allowing you to browse with a high degree of anonymity, it automatically deletes confidential user data, such as cookies and browsing history, when you close a browsing session. &lt;/p&gt;

&lt;p&gt;To connect to the Tor network, all you need to do is the following: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the Tor Browser Bundle from &lt;a href="https://www.torproject.org" rel="noopener noreferrer"&gt;https://www.torproject.org&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Unzip it. &lt;/li&gt;
&lt;li&gt;Run the start-tor-browser script in the unzipped directory. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In Debian-based distributions such as Ubuntu and Linux Mint, we can also install it through the torbrowser-launcher package to get the latest version of the browser. For example, the package for Ubuntu can be found at &lt;a href="https://packages.ubuntu.com/bionic/torbrowser-launcher" rel="noopener noreferrer"&gt;https://packages.ubuntu.com/bionic/torbrowser-launcher&lt;/a&gt;. We can install and launch it with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install torbrowser-launcher 
$ torbrowser-launcher 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can execute torbrowser-launcher to download the Tor Browser and follow the auto installer's instructions. &lt;/p&gt;

&lt;p&gt;Once installed and connected successfully, the Tor Browser will launch and point to &lt;a href="http://check.torproject.org" rel="noopener noreferrer"&gt;http://check.torproject.org&lt;/a&gt;, which will confirm you are browsing anonymously. If you see something similar to the following, then this means you have successfully configured Tor and can navigate through the internet anonymously: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9nxmsc8r20cxfkjhvi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9nxmsc8r20cxfkjhvi4.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 2 – Prompt that shows the connection to the Tor Browser was successful &lt;/p&gt;

&lt;p&gt;The initial Tor check page not only validates that you are using the Tor network but also displays your current IP address. Remember that you may be exiting the Tor network from an exit node in another country, in which case some sites may serve their content in the native language of that country. &lt;/p&gt;

&lt;p&gt;An interesting feature offered by the Tor Browser is the &lt;strong&gt;Use new identity&lt;/strong&gt; option. This functionality allows us to browse with a different IP. Remember that when you use Tor, your traffic leaves the network through an exit node, and it is that node's IP address the websites you visit will see. This exit IP stays the same for the current circuit unless you change it with the aforementioned option, which builds a new circuit and therefore gives you a new exit IP. &lt;/p&gt;

&lt;p&gt;When browsing with the Tor Browser, our visible IP will be that of the last router we pass through within the Tor network, which will remain the same as long as we do not use the option to change it. Moreover, once we enter the Tor network, the path that the packets follow to that last node will always be different, so tracking a user's data flow is almost impossible. In addition to this, connection data is only stored for a certain amount of time (less than an hour). &lt;/p&gt;

&lt;p&gt;The Tor community develops various projects, some of which can be found at &lt;a href="https://2019.www.torproject.org/projects/projects" rel="noopener noreferrer"&gt;https://2019.www.torproject.org/projects/projects&lt;/a&gt;. Let's take a brief look at two of the most popular ones: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tails&lt;/strong&gt;, &lt;a href="https://tails.boum.org" rel="noopener noreferrer"&gt;https://tails.boum.org&lt;/a&gt;, is an operating system that you can carry on a USB stick that makes all its connections through Tor, preserving the anonymity of its users. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orbot&lt;/strong&gt;, &lt;a href="https://guardianproject.info/apps/orbot" rel="noopener noreferrer"&gt;https://guardianproject.info/apps/orbot&lt;/a&gt;, is the official application for Android. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are several others, the main one being the Tor Browser: &lt;a href="https://www.torproject.org/projects/torbrowser.html.en" rel="noopener noreferrer"&gt;https://www.torproject.org/projects/torbrowser.html.en&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Node types in the Tor network
&lt;/h2&gt;

&lt;p&gt;None of the intermediate nodes know the origin or destination of the message, nor the position they occupy in the network. These nodes, resources donated by anonymous people from all over the world, are geographically dispersed, which is part of how anonymity is achieved. If we look at the &lt;strong&gt;TorMap&lt;/strong&gt; service, &lt;a href="https://tormap.void.gr" rel="noopener noreferrer"&gt;https://tormap.void.gr&lt;/a&gt;, we'll see a map showing all these nodes. &lt;/p&gt;

&lt;p&gt;Due to the way the Tor network works, not all the nodes that make it up are the same. Depending on its characteristics and configuration, a node can fulfill certain functions: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Entry nodes (guard relays)&lt;/strong&gt;: These communicate with Tor clients and connect users to the rest of the Tor network. They have generally been in use for a long time and have generous bandwidths. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Middle nodes (middle relays)&lt;/strong&gt;: These only communicate with other nodes, so their traffic never leaves the Tor network; they are the easiest, fastest, and safest type of node to run. &lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Output nodes (exit relays)&lt;/strong&gt;: These are the endpoints within the Tor network. They take the requests, send them to their recipients, receive their responses, and send them back to the network so that they reach the original requestor. They are usually maintained by institutions and other actors and have the capacity to face the possible legal consequences of what users look up using the Tor network if their connections leave through these nodes. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bridge nodes (bridge relays)&lt;/strong&gt;: These are normal relays that are not listed within the Tor directory, which means they can be considerably more difficult to block. We can use bridge relays when our ISP is blocking the use of Tor but we still want to connect to our network. The only difference between normal and bridge relays is that normal relays are listed in a public directory, whereas bridge relays are not. You can get a list of bridge nodes at the following URL: &lt;a href="https://bridges.torproject.org" rel="noopener noreferrer"&gt;https://bridges.torproject.org&lt;/a&gt;. We can access &lt;a href="https://bridges.torproject.org/bridges" rel="noopener noreferrer"&gt;https://bridges.torproject.org/bridges&lt;/a&gt; to get random bridge data. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we understand how the Tor network works, let's learn how to install the Tor service on our machines. &lt;/p&gt;
&lt;h2&gt;
  
  
  Installing the Tor service
&lt;/h2&gt;

&lt;p&gt;One of the ways we can control a Tor instance is through a service that we can install on our machine. The objective of installing this service is to allow us to customize the way in which we can control our instance and send commands to, for example, change our identity when we are surfing anonymously. &lt;/p&gt;

&lt;p&gt;Installing the Tor service in Debian/Ubuntu-based distributions is easy – just run the following Terminal commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt-get update 
$ sudo apt-get install tor 
$ sudo /etc/init.d/tor restart 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To start the Tor service from a Terminal, enter the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo service tor start 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can verify that the Tor service has been started correctly with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ service tor status 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command should give us the following output: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb4m78liyabityyxryuy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb4m78liyabityyxryuy.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 3 – Checking the Tor service's status &lt;/p&gt;

&lt;p&gt;We can also verify that the Tor network works and provides anonymous connectivity. To do this, we can route our traffic through Tor using the following proxychains command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ proxychains firefox www.whatismyip.com 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ProxyChains&lt;/strong&gt; (&lt;a href="https://github.com/haad/proxychains" rel="noopener noreferrer"&gt;https://github.com/haad/proxychains&lt;/a&gt;) is a tool with the ability to connect to various proxies through the HTTP(S), SOCKS4, and SOCKS5 protocols. It also has the ability to resolve DNS addresses through the proxy server. By using this application with Tor, it becomes very difficult for others to detect our real IP. &lt;/p&gt;
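&lt;p&gt;Under the hood, tools such as ProxyChains speak the SOCKS5 protocol (RFC 1928) to Tor's local proxy on port 9050. The following Python sketch shows the two client messages involved; it assumes a Tor instance listening on the default 127.0.0.1:9050 and uses the domain-name address type, so that DNS resolution also happens inside the Tor network rather than at our ISP: &lt;/p&gt;

```python
import socket
import struct

def socks5_greeting():
    """Client greeting: SOCKS version 5, one auth method offered (0x00 = none)."""
    return b"\x05\x01\x00"

def socks5_connect(host, port):
    """CONNECT request using address type 0x03 (domain name), so the
    hostname is resolved by the proxy itself, i.e., inside the Tor network."""
    name = host.encode()
    return (b"\x05\x01\x00\x03"          # ver=5, cmd=CONNECT, reserved, atyp=domain
            + bytes([len(name)]) + name  # length-prefixed hostname
            + struct.pack(">H", port))   # destination port, big-endian

def connect_via_tor(host, port):
    """Open a TCP connection to (host, port) through the local Tor SOCKS
    proxy. Requires a running Tor instance on 127.0.0.1:9050."""
    s = socket.create_connection(("127.0.0.1", 9050))
    s.sendall(socks5_greeting())
    if s.recv(2) != b"\x05\x00":          # server must accept "no auth"
        raise OSError("SOCKS5 greeting rejected")
    s.sendall(socks5_connect(host, port))
    if s.recv(10)[1] != 0:                # reply byte 1: 0 means success
        raise OSError("SOCKS5 connect failed")
    return s

# Example (needs a running Tor service):
# sock = connect_via_tor("check.torproject.org", 80)
```

&lt;p&gt;This is only a sketch of the protocol exchange; in practice, libraries such as PySocks handle this handshake for us. &lt;/p&gt;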

&lt;p&gt;A whois search of that IP address from a Terminal window indicates that the transmission is now leaving a Tor exit node. You can also verify that Tor is working properly by accessing the &lt;a href="https://check.torproject.org" rel="noopener noreferrer"&gt;https://check.torproject.org&lt;/a&gt; and &lt;a href="https://browserleaks.com/ip" rel="noopener noreferrer"&gt;https://browserleaks.com/ip&lt;/a&gt; services. &lt;/p&gt;

&lt;p&gt;You can control the Tor service by configuring the torrc file to enable the ControlPort option. In this way, we can control the Tor service from our Python programs. &lt;/p&gt;
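&lt;p&gt;The Tor Control Protocol is line-oriented text, so a minimal sketch needs nothing beyond Python's standard library (in practice, a library such as stem wraps this protocol). The helper below assumes a torrc with ControlPort 9051 and no authentication configured; with cookie or password authentication, the AUTHENTICATE command needs the corresponding credential: &lt;/p&gt;

```python
import socket

def command(name, *args):
    """Format a Tor control-protocol command line (CRLF-terminated text)."""
    return (" ".join((name,) + args) + "\r\n").encode()

def ok(reply):
    """Tor prefixes successful control replies with status code 250."""
    return reply.startswith(b"250")

def new_identity(port=9051):
    """Ask a local Tor instance for a fresh circuit (a 'new identity').

    Assumes torrc contains `ControlPort 9051` with no authentication set up.
    """
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(command("AUTHENTICATE", '""'))
        if not ok(s.recv(1024)):
            return False
        s.sendall(command("SIGNAL", "NEWNYM"))   # request a new circuit
        result = ok(s.recv(1024))
        s.sendall(command("QUIT"))
        return result

# Example (needs a running Tor service with an open ControlPort):
# print(new_identity())
```

&lt;p&gt;The SIGNAL NEWNYM command asks Tor to switch to clean circuits, which is the same mechanism behind the Tor Browser's new-identity feature. &lt;/p&gt;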

&lt;p&gt;In the following screenshot, we can see the SOCKSPort configuration located in this torrc file: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijmk7nk2far6s2uhx6v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijmk7nk2far6s2uhx6v0.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 4 – Torrc file configuration &lt;/p&gt;

&lt;p&gt;In the preceding image, we can see how the service is listening on port 9050. By default, the Tor client uses port 9050 for SOCKS traffic. If we need a special configuration, we need to change the configuration of the torrc file. The Tor Project documentation (&lt;a href="https://support.torproject.org/tbb/tbb-47/" rel="noopener noreferrer"&gt;https://support.torproject.org/tbb/tbb-47/&lt;/a&gt;) shows the SOCKS proxy configuration we can establish in the Tor Browser's network settings.  &lt;/p&gt;

&lt;p&gt;Depending on the Tor configuration, the Tor client will listen on two ports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ControlPort 9051: The port on which Tor accepts control connections, allowing the Tor process to be managed using the Tor Control Protocol. &lt;/li&gt;
&lt;li&gt;SocksPort 9050: The port on which Tor's SOCKS proxy listens for incoming connections from external applications. &lt;/li&gt;
&lt;/ul&gt;
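&lt;p&gt;A minimal torrc sketch exposing both ports might look as follows (the option names are the documented ones; the cookie-authentication line is an optional hardening choice, not a requirement): &lt;/p&gt;

```
# torrc sketch: local SOCKS proxy plus control port
SocksPort 9050
ControlPort 9051
# Optional hardening: require the control-port cookie to authenticate
# CookieAuthentication 1
```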

&lt;p&gt;Equivalently, instead of setting these options in the torrc file, you can pass them as arguments when launching the Tor service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ tor --SocksPort 9050 --ControlPort 9051 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the following screenshot, we can see the startup process for the Tor service in more detail:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr18jraesslwh1998g7v5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr18jraesslwh1998g7v5.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 5 – Starting the Tor service &lt;/p&gt;

&lt;p&gt;In the following screenshot, we can see the startup process and the different steps that must be taken to initialize Tor to establish a circuit in more detail: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5q2e8wkdcsmws9cm5mvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5q2e8wkdcsmws9cm5mvs.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 6 – Initializing Tor to establish a circuit &lt;/p&gt;

&lt;p&gt;As we can see, the process of establishing a circuit follows four different phases, as follows: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the first phase, the machine tries to connect to the directory server that is responsible – through a non-encrypted link – for providing you with a complete list of nodes that make up the Tor network. &lt;/li&gt;
&lt;li&gt;Next, a handshake with the directory server is attempted and an encrypted directory connection is established. &lt;/li&gt;
&lt;li&gt;In the third step, the network status consensus is loaded and authorization to load certificate keys is provided. &lt;/li&gt;
&lt;li&gt;Finally, information related to the relay descriptors is gathered before the Tor circuit is established.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, we'll take a look at two different services: ExoneraTor and Nyx. &lt;/p&gt;

&lt;h2&gt;
  
  
  ExoneraTor and Nyx
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;ExoneraTor&lt;/strong&gt; service (&lt;a href="https://exonerator.torproject.org" rel="noopener noreferrer"&gt;https://exonerator.torproject.org&lt;/a&gt;) maintains a database of IP addresses that have been part of the Tor network. It offers a service where, by entering an IP address and a date, you can find out if that address has been used as a relay node in the Tor network. &lt;/p&gt;

&lt;p&gt;This service can store more than one IP address per relay if a node uses a different IP address to go out to the internet than the one it registered with the Tor network, and it also records whether a node allows Tor traffic to exit to the internet. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nyx&lt;/strong&gt; (&lt;a href="https://nyx.torproject.org" rel="noopener noreferrer"&gt;https://nyx.torproject.org&lt;/a&gt;) is another interesting project that allows you to gather detailed real-time information about relays, such as their bandwidth usage, event logs, and connections.&lt;/p&gt;

&lt;p&gt;The following screenshot shows some output from a Tor configuration. Here, we can see the parameters associated with the Tor instance: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbwnpoizu9usk583zgsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbwnpoizu9usk583zgsr.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 7 – Tor configuration and parameters &lt;/p&gt;

&lt;p&gt;Nyx also allows us to view the connections and circuits that have been established from the Tor instance, the instance's options and their configuration, and the content of the torrc file: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flicxsvdf9x12o4223om2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flicxsvdf9x12o4223om2.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 8 – Tor connections and circuits established &lt;/p&gt;

&lt;p&gt;The connection data provided by Nyx is similar to the netstat or top commands but is correlated with the information in the Tor relays. &lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;In this article, we've looked at how Tor provides online users with a high degree of anonymity by setting up virtual circuits between the nodes that make up the Tor network. As well as giving you insight into what Tor is and how it works, the knowledge you now have makes a solid foundation for learning about tools that can help with, for example, automating the process of searching for and finding hidden services. &lt;/p&gt;

&lt;p&gt;This article is part of José Manuel Ortega's book &lt;a href="https://packt.live/3cE15Kj" rel="noopener noreferrer"&gt;Mastering Python for Networking and Security&lt;/a&gt;, a guide to overcoming security and networking issues using Python scripts and libraries. Check it out now to read more about leveraging Python packages to build a secure network. &lt;/p&gt;

</description>
      <category>python</category>
      <category>cybersecurity</category>
      <category>networkautomation</category>
      <category>pythonhacking</category>
    </item>
    <item>
      <title>Integration Concepts for Technical Architects </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Tue, 27 Apr 2021 13:02:04 +0000</pubDate>
      <link>https://dev.to/packt/integration-concepts-for-technical-architects-487i</link>
      <guid>https://dev.to/packt/integration-concepts-for-technical-architects-487i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PCDfBG9---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvmufafg4ldwctohfuh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PCDfBG9---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvmufafg4ldwctohfuh9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
In today's world, we rarely come across completely isolated Salesforce CRM implementations. In most cases, the fact that Salesforce will become the heart of the enterprise's business transformation process means that it has to be connected with dozens of other applications. Integration cost is sometimes overlooked or underestimated, despite the various studies that point out that around 25-35% of the total project cost will likely be spent on integration. The Salesforce Architect is expected to be able to design a secure and scalable integrated solution. &lt;/p&gt;

&lt;p&gt;In this article, we will review key architectural principles, concepts, and options for Salesforce integration, and indeed for integration more generally. &lt;/p&gt;

&lt;h1&gt;
  
  
  Integration in the enterprise
&lt;/h1&gt;

&lt;p&gt;The digital enterprise landscape is becoming ever more sophisticated. Today's enterprises have hundreds, if not thousands, of different applications, in addition to all the legacy systems that still survive the axe. Nowadays, it is very common to find that an enterprise has dozens of websites, multiple instances of ERP systems, and many other departmental applications, in addition to several data warehouses or lakes. &lt;/p&gt;

&lt;p&gt;One of the reasons why enterprises end up in such situations is the complexity associated with building business applications. Building a single application that runs all business processes is nearly impossible. Spreading business functions across smaller applications gives the business enough flexibility and agility to move at the pace it needs, rather than being bound by the technical boundaries of a bigger all-in-one solution. Moreover, it gives the business the opportunity to pick and choose the applications that best serve its needs – the best &lt;strong&gt;customer relationship management (CRM)&lt;/strong&gt; system, the best &lt;strong&gt;order management system (OMS)&lt;/strong&gt;, and the best &lt;strong&gt;enterprise resource planning (ERP)&lt;/strong&gt; solution. &lt;/p&gt;

&lt;p&gt;In the past 20 or so years, we've seen vendors offering applications focusing on specific core functions. We’ve also noticed the continuous addition of functionalities to applications, causing &lt;em&gt;functionality spillover&lt;/em&gt;. For example, we have seen customer care software getting extensions to include a limited billing functionality, due to the difficulty in drawing clear functional separations between systems. (If a customer raises a dispute for a bill, will that be considered something to be handled by customer care or the billing applications?) &lt;/p&gt;

&lt;p&gt;On the other hand, users do not really care about these boundaries or the systems that are involved behind the scenes. They just expect a business function to be executed, regardless of the system or systems involved in delivering it. For example, when a customer places an order online, this will likely require coordination between several systems to deliver several functionalities, such as checking the history or credit score of the customer, checking inventory, computing tax, creating the order, fulfilling the order, handling shipment, and so on. These processes can span multiple systems, but from the customer's perspective, it is a single transaction. To support such distributed functionalities, which are expected to work as a coherent business process, applications need to be integrated in an efficient, secure, scalable, and reliable fashion. &lt;/p&gt;

&lt;p&gt;Typical &lt;em&gt;enterprise integration needs&lt;/em&gt; are as follows: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Get the right information&lt;/strong&gt;: Get precise knowledge of a particular piece of information created by different systems and enterprise business processes, structured in a consumable way that can support other business needs. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get that information to the right place&lt;/strong&gt;: This requires mechanisms to handle information transactions across heterogeneous environments. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get that information at the right time&lt;/strong&gt;: Ideally, this requires distributing the information in real-time to reflect the actual state of a particular data entity. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility and change-readiness&lt;/strong&gt;: To adapt to external factors, such as market demand, a shift in customer behavior, new legislation, or a shift in social philosophy. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordinate business processes&lt;/strong&gt;: This is a challenging operation that may require modeling the enterprise business's processes, how they are interlinked, and what kind of information they exchange. This may require a deep understanding of the business. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good integration architecture depends on understanding and applying a number of design principles and concepts. Let’s now look at those. &lt;/p&gt;

&lt;h2&gt;
  
  
  Integration architecture design principles
&lt;/h2&gt;

&lt;p&gt;As with any complex technical architectural topic, there are various considerations and consequences that you need to keep in mind while designing the target integration strategy and architecture. &lt;/p&gt;

&lt;p&gt;The main decision points are usually as follows: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Native integration&lt;/strong&gt;: If you can develop a single standalone application that fulfills all business needs on its own, then you can avoid a lot of the complexity driven by integration requirements. However, in reality, this is not something you can normally achieve. Many attempts to extend an application so that it includes other functionalities end up creating a complex system that is hard to manage, maintain, or develop to meet new business requirements. This problem exists in the Salesforce world as well, although in a more limited way. Many Salesforce products are natively integrated, with many considered the best in the market. For example, Salesforce Communities provides a native solution for exposing a customer portal that is natively integrated with your CRM. Salesforce Communities offers a very good sharing model and an easy way to control the look and feel of the community itself. It makes sense to favor that over a solution where you need to build a custom-made customer portal on some technology, then figure out a way to integrate it with Salesforce in a secure and compliant way. Moreover, using the native integration keeps the door open for using other natively integrated features in the future. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt;: This goes beyond the integration architecture. Avoid complicated solutions as much as possible and always keep the golden &lt;strong&gt;80-20 rule&lt;/strong&gt; in mind. Fulfilling 80% of use cases using a simplified architecture should be preferred over targeting a solution that covers 100% of the use cases using an over-complicated architecture. Keep your integration code simple and tidy. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application dependencies&lt;/strong&gt;: Integrated applications should have minimal dependencies on each other. This allows solutions to evolve independently, and also allows us to replace an application completely without it impacting the other integrated systems. Tightly coupled applications rely on many assumptions regarding how each of them works. When an application is modified, the assumptions could change, which would, in turn, break the integration. In a loosely coupled integration, the integration interface is specific enough to deliver a particular functionality but generic enough to allow for change if needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timing&lt;/strong&gt;: Ideally, the integration architecture should aim to minimize the duration between the moment an application is sending data and another application is receiving it. The aim should be to share small chunks of data as frequently as possible, rather than wait to exchange a huge block of data that may not necessarily be related. Data sharing latency should be taken into consideration while designing the architecture. The longer a data exchange process takes, the more likely it will become prone to other challenges, such as a change in the data's state. Bulk data exchanges can still be used for the right use cases, such as archiving operational data. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous versus asynchronous&lt;/strong&gt;: In a synchronous process, a procedure waits until all its sub-procedures finish executing. However, in an integrated environment, where the integrated applications might be on different networks or might not necessarily be available at the same time, you may find more use cases where the procedure doesn't have to wait for all of its sub-procedures to conclude. It simply invokes the sub-procedure and then lets it execute asynchronously in the background, making use of the multi-threading ability of many of today's applications. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration technology&lt;/strong&gt;: Selecting the right technology is essential. Depending on the integration techniques available, there might be higher dependencies on specific skillsets, hardware, or software, which could impact the speed and agility of your project. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data formats&lt;/strong&gt;: Ideally, data exchanged between different applications follows a pre-agreed format. In the enterprise world, this is unlikely, so the integration process must have an intermediate step where the data is translated from one format into another. Another related challenge is the natural evolution of data formats. Flexibility to accommodate changes to a data format is a key aspect of the overall flexibility of an integration architecture. &lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Data versus functionality&lt;/strong&gt;: Integration is not necessarily about sharing data. The integrated applications could be looking to share functionality. Think of the use case where one application needs to invoke a particular functionality in another system, such as checking for a particular customer's credit score. There will likely be a set of parameters that is sent to satisfy the logic of the remote process. Invoking remote functionalities can be difficult and may have a significant impact on how reliable the integration is. As a Salesforce architect, you need to be aware of specific integration patterns, and you need to understand the limitations of the platform.
&lt;/li&gt;
&lt;/ul&gt;
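&lt;p&gt;The synchronous-versus-asynchronous trade-off above can be sketched in a few lines of Python. This is a minimal illustration – the &lt;code&gt;check_credit_score&lt;/code&gt; function and its simulated latency are invented for the example: &lt;/p&gt;

```python
import threading
import time

def check_credit_score(customer_id, results):
    """Hypothetical slow sub-procedure, e.g. a remote credit check."""
    time.sleep(0.1)  # simulate network latency
    results[customer_id] = 750

# Synchronous: the caller blocks until the sub-procedure finishes.
results = {}
start = time.time()
check_credit_score("CUST-1", results)
sync_elapsed = time.time() - start  # roughly the full 0.1s latency

# Asynchronous: the caller fires the sub-procedure in the background
# and continues immediately; the result arrives later.
results = {}
start = time.time()
worker = threading.Thread(target=check_credit_score, args=("CUST-1", results))
worker.start()
async_elapsed = time.time() - start  # near zero: the caller did not wait
worker.join()  # later, collect the result

print(f"sync wait: {sync_elapsed:.3f}s, async wait: {async_elapsed:.3f}s")
```

&lt;p&gt;The asynchronous caller pays almost nothing to invoke the sub-procedure, which is exactly why this style suits integrated environments where the other system may be slow or temporarily unavailable. &lt;/p&gt;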

&lt;p&gt;Having considered some important general principles and issues, let's now have a look at modern integration approaches and the kinds of middleware tool that are available so that as an architect you can select the right options for your particular project. &lt;/p&gt;

&lt;h1&gt;
  
  
  Modern integration tools
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Extract, transform, and load (ETL)
&lt;/h2&gt;

&lt;p&gt;In this method of data integration, the data is &lt;em&gt;copied&lt;/em&gt; from one or more data sources into a destination data store that does not necessarily share the same structure as the data source(s): &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data extraction&lt;/strong&gt; involves accessing one or more data sources and extracting data from them. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data transformation&lt;/strong&gt; includes all the data processing that takes place before the data is delivered to its final destination, including data cleansing, data formatting, data enrichment, data validation, and data augmentation. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data loading&lt;/strong&gt; includes the processes required to access and load the data into the final target data store. &lt;/li&gt;
&lt;/ul&gt;
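&lt;p&gt;The three steps above can be sketched as a toy pipeline. All the source rows, cleansing rules, and the target store below are invented for illustration: &lt;/p&gt;

```python
# Toy source: raw CRM rows with inconsistent formatting (invented data).
source_rows = [
    {"email": " Alice@Example.COM ", "country": "de"},
    {"email": "bob@example.com", "country": "DE"},
    {"email": " Alice@Example.COM ", "country": "de"},  # duplicate
]

def extract(rows):
    """Extraction: read records from the source."""
    return list(rows)

def transform(rows):
    """Transformation: cleanse, normalize, and de-duplicate."""
    seen, out = set(), []
    for row in rows:
        email = row["email"].strip().lower()   # cleansing
        if email in seen:                      # de-duplication
            continue
        seen.add(email)
        out.append({"email": email, "country": row["country"].upper()})
    return out

def load(rows, target):
    """Loading: write the processed records into the target store."""
    target.extend(rows)
    return len(rows)

warehouse = []  # stand-in for the destination data store
loaded = load(transform(extract(source_rows)), warehouse)
print(loaded, warehouse)
```

&lt;p&gt;A real ETL job applies the same shape – extract, transform, load – just with connectors, staging areas, and far larger volumes in place of the in-memory lists used here. &lt;/p&gt;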

&lt;p&gt;ETL tools may &lt;em&gt;stage&lt;/em&gt; the data into a &lt;em&gt;staging area&lt;/em&gt; or &lt;em&gt;staging data store&lt;/em&gt; to run complex transformations on it, such as de-duplication, custom logic, or data enrichment by looking up external reference data. Normally, the staging data store would co-exist with the ETL tool on the same server to provide the quickest possible response times. The three ETL processes take time, so it is common to have them scheduled or running in an asynchronous fashion. Most of the modern ETL tools can be scheduled to run a particular job every few minutes. Some ETL tools can also expose a triggerable endpoint, which is simply an &lt;em&gt;HTTP listener&lt;/em&gt; that can receive a message from specific authorized senders in order to trigger one or more ETL jobs. For example, a listener can be exposed to receive a specific type of outbound message from a particular Salesforce instance. Once that outbound message is received, the listener triggers one or more ETL jobs to retrieve or update data in Salesforce, as well as other systems. &lt;/p&gt;
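&lt;p&gt;A triggerable endpoint of this kind can be sketched with Python's standard library. This is a simplified stand-in – the job name and payload shape are invented, and a real ETL tool would also authenticate the sender: &lt;/p&gt;

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

triggered_jobs = []  # records which ETL jobs were kicked off

class TriggerHandler(BaseHTTPRequestHandler):
    """Minimal HTTP listener: a sender POSTs a message, and the
    listener reacts by triggering the mapped ETL job."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        message = json.loads(self.rfile.read(length))
        # A real tool would enqueue/start the job; here we just record it.
        triggered_jobs.append(message["job"])
        self.send_response(202)  # accepted: the job runs asynchronously
        self.end_headers()

    def log_message(self, *args):  # silence default request logging
        pass

server = HTTPServer(("127.0.0.1", 0), TriggerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate an outbound message arriving from an authorized sender.
payload = json.dumps({"job": "sync-accounts-to-warehouse"}).encode()
request = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/trigger", data=payload)
with urllib.request.urlopen(request) as response:
    status = response.status

server.shutdown()
print(status, triggered_jobs)
```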

&lt;p&gt;Most of today's ETL tools come with &lt;em&gt;built-in connectors&lt;/em&gt; for different types of application databases, such as Salesforce, Microsoft Azure, Amazon Redshift, Amazon S3, SAP, and many others. In addition, they come with adapters to generic database APIs, such as &lt;strong&gt;Open Database Connectivity (ODBC)&lt;/strong&gt; and &lt;strong&gt;Java Database Connectivity (JDBC)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Some even provide connectors for the &lt;strong&gt;File Transfer Protocol (FTP)&lt;/strong&gt; and &lt;strong&gt;SSH File Transfer Protocol (SFTP)&lt;/strong&gt;. These connectors allow us to access a particular application database in an optimized fashion. For example, the Salesforce connector could be built to automatically switch between using Salesforce's standard REST API, the Salesforce SOAP API, or the Salesforce BULK API, depending on the operation being executed and the amount of data being dealt with. &lt;/p&gt;

&lt;p&gt;Today, several ETL products are provided in a SaaS fashion. In this case, you need to understand how the ETL tool can connect to a database behind a firewall. The enterprise's hosted applications and databases would normally reside behind the enterprise firewall, and most enterprises have strict regulations that prevent such resources from being exposed directly. Instead, enterprises use &lt;strong&gt;demilitarized zones (DMZs)&lt;/strong&gt;. A DMZ is a physical or logical subnetwork that is used by the enterprise to expose external-facing materials and content – mainly to the public, who are untrusted users. Resources in these DMZs can be accessed by cloud-based applications. However, this is not how cloud-based ETL tools get access to the enterprise's locally hosted applications. One of the most popular ways to achieve this is by installing a &lt;em&gt;client application&lt;/em&gt; on the enterprise's local network. This is a trusted application provided by the ETL product vendor, and its main duty is to facilitate communication between the enterprise's local applications and databases and the cloud-based ETL tool. The security team will still need to configure the firewall to allow the client to communicate back and forth with the cloud-based ETL tool. &lt;/p&gt;

&lt;p&gt;ETL tools are very suitable for &lt;em&gt;data replication&lt;/em&gt; operations. They are designed and built to provide a robust and scalable service since they can deal with millions – even billions – of records. ETL tools are also ideal for data replications that require a lot of time, such as replicating media files. They are very flexible and easy to work with. &lt;/p&gt;

&lt;p&gt;As a Salesforce architect, you need to know about some of the popular ETLs that are used today. You also need to understand the limitations of out-of-the-box tools such as Salesforce Data Loader, which is too simple to be categorized as an ETL tool. Some of the most popular ETL tools that are used with Salesforce today are Informatica PowerCenter, Informatica Cloud, Talend, Jitterbit, and MuleSoft. &lt;/p&gt;

&lt;h2&gt;
  
  
  Enterprise Service Bus
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Enterprise Service Bus (ESB)&lt;/strong&gt; is a name given to a particular method of data integration where the different applications are integrated via a communication bus. Each different application communicates with the bus only. This decouples the applications and reduces dependencies; systems can communicate without knowing the details of how other systems operate. ESB tools support different architectural concepts such as microservices, API-led connectivity, and event-driven architectures. We will cover all these concepts later on in this article. &lt;/p&gt;

&lt;p&gt;ESBs support both synchronous and asynchronous types of communication, which makes them ideal for integrations operating on the business logic layer, where &lt;strong&gt;remote process invocation (RPI)&lt;/strong&gt; is a key capability to look for. ESBs also utilize &lt;em&gt;built-in connectors&lt;/em&gt; to connect to different types of applications and data stores, similar to ETL tools. The connector here would also transform the data from the source system format into the bus format. Considering that ESBs are usually stateless, the state of each message in the bus is included as part of the message itself. While the data is traveling through the bus, it is considered to be in a &lt;strong&gt;canonical data format&lt;/strong&gt;. A canonical data format is simply a model of the data that is a superset of all the other models of the same data in the landscape. This canonical data is normally translated into target data models. The &lt;strong&gt;Cloud Information Model (CIM)&lt;/strong&gt; is a good example of a canonical data model. Describing CIM is beyond the scope of this article, but becoming familiar with it is strongly recommended. &lt;/p&gt;

&lt;p&gt;ESBs can handle complex orchestrations. For example, an application A might send customer details to the ESB, which, in turn, would communicate with multiple external applications to do a real-time credit check, followed by an invocation to the CRM system to start a customer onboarding journey. The customer onboarding journey then generates a unique customer ID that is returned to application A with a success message. ESBs can also use a supporting database as temporary storage or as a cache for some data. The database would normally co-exist with the ESB tool on the same server to provide the quickest possible response time. &lt;/p&gt;

&lt;p&gt;The ESB also handles any kind of required data cleansing, data formatting, data enrichment, data validation, and data augmentation, as well as translations from/to different data formats. For example, you can imagine an application A sending data in the &lt;strong&gt;Intermediate Document (IDoc)&lt;/strong&gt; format to the ESB, which receives it, augments it with other data coming from a lookup/reference data source, and then translates that into the formats expected by the recipients, such as XML, CSV, JSON, and others. &lt;/p&gt;
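&lt;p&gt;The enrich-and-translate flow described above can be sketched as follows. The canonical model, the IDoc-like field names, and the lookup table are all invented for illustration: &lt;/p&gt;

```python
import csv
import io
import json

# Invented lookup/reference data the bus uses for enrichment.
COUNTRY_NAMES = {"DE": "Germany", "FR": "France"}

def to_canonical(source_record):
    """Connector step: map the source system's field names onto the
    canonical (superset) model used while data travels on the bus."""
    return {
        "customerId": source_record["KUNNR"],   # source-specific fields
        "countryCode": source_record["LAND1"],
    }

def enrich(canonical):
    """Bus step: augment the record with reference data."""
    canonical["countryName"] = COUNTRY_NAMES[canonical["countryCode"]]
    return canonical

def to_json(canonical):
    """Translate the canonical record into one recipient's format."""
    return json.dumps(canonical, sort_keys=True)

def to_csv(canonical):
    """Translate the same canonical record into another format."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(canonical))
    writer.writeheader()
    writer.writerow(canonical)
    return buf.getvalue()

incoming = {"KUNNR": "0000012345", "LAND1": "DE"}  # IDoc-like source fields
record = enrich(to_canonical(incoming))
print(to_json(record))
print(to_csv(record))
```

&lt;p&gt;Note how the enrichment happens once, on the canonical form, and each recipient format is produced from that single model – the source and targets never need to know about each other. &lt;/p&gt;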

&lt;p&gt;ESBs can also provide multiple interfaces for the same component, which is particularly useful for providing backward compatibility, especially for web services. ESBs are normally designed to be very scalable and capable of handling a very high load of traffic, and several modern ESBs are offered today in a SaaS fashion with an option to host them locally. Due to their stateless nature, ESBs are not considered ideal for long-running operations such as replicating a massive amount of data between systems or moving large media files. &lt;/p&gt;

&lt;p&gt;As a Salesforce architect, you need to know some of the popular ESBs that are in use today. You also need to understand when and why to propose utilizing an ESB. ESBs and ETLs are very common in Salesforce solutions, so make sure you fully understand the differences between ESBs and ETLs, and which is suited to what. Also, make sure you understand why, in most cases, enterprises should utilize middleware of some sort instead of P2P connections. And make sure you understand the &lt;em&gt;ideal&lt;/em&gt; use cases for ESBs in order to recognize whether they are utilized optimally in a given implementation or not. Some of the popular ESB tools that are used with Salesforce today are MuleSoft, webMethods Integration Server, IBM Integration Bus, TIBCO ActiveMatrix Service Bus, and WSO2 Enterprise Integrator. &lt;/p&gt;

&lt;h2&gt;
  
  
  Reverse proxies
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;reverse proxy&lt;/strong&gt; is the opposite of a forward proxy: while a forward proxy is used as an intermediary the client uses to connect to a server, a reverse proxy is something the server (or servers) would put between itself and potential clients. For the end client, any retrieved resources, in this case, would appear as if they were &lt;em&gt;originated by the proxy server itself&lt;/em&gt;, rather than the server or servers that sit behind it. A reverse proxy is often used to provide a more secure interface to deal with untrusted clients (such as unauthorized internet users), as well as shield the other applications behind it that might lack the ability to handle excessive load or be unable to provide the required security measures (such as an inability to support HTTPS). A reverse proxy can provide capabilities such as transforming HTTPS requests into HTTP, handling cookies and session data, transforming one request into multiple requests behind the scenes, and then combining the responses and buffering incoming requests to protect the shielded servers from excessive load. Some of the providers of reverse proxy products are VMware, Citrix Systems, and F5 Networks. &lt;/p&gt;
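&lt;p&gt;The core behavior – responses appearing to originate from the proxy rather than the shielded server – can be sketched with Python's standard library. Both servers below are toy stand-ins, not production proxy code: &lt;/p&gt;

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The shielded backend server (plain HTTP, invented content).
class Backend(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the shielded backend")

    def log_message(self, *args):  # silence default request logging
        pass

backend = HTTPServer(("127.0.0.1", 0), Backend)
threading.Thread(target=backend.serve_forever, daemon=True).start()

# The reverse proxy: clients only ever talk to it; it fetches from the
# backend, and responses appear to originate from the proxy itself.
class ReverseProxy(BaseHTTPRequestHandler):
    def version_string(self):
        return "edge-proxy"  # the Server header the client will see

    def do_GET(self):
        upstream = f"http://127.0.0.1:{backend.server_port}{self.path}"
        with urllib.request.urlopen(upstream) as resp:
            body = resp.read()  # buffer the backend's response
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

proxy = HTTPServer(("127.0.0.1", 0), ReverseProxy)
threading.Thread(target=proxy.serve_forever, daemon=True).start()

# The client addresses the proxy, never the backend directly.
with urllib.request.urlopen(f"http://127.0.0.1:{proxy.server_port}/") as resp:
    served_by = resp.headers["Server"]
    content = resp.read()

proxy.shutdown()
backend.shutdown()
print(served_by, content)
```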

&lt;h2&gt;
  
  
  API gateways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;API gateways&lt;/strong&gt; are historically used to protect your internal web services (or APIs – remember that we are using the two terms interchangeably in the web context as most modern APIs are offered as web services). The enterprise's internal APIs might not be designed to handle issues such as authentication and scalability, and an API gateway can provide a layer on top to protect the APIs, as well as enable other functionalities, such as monetizing the APIs, providing real-time analytics, and protecting against &lt;strong&gt;denial of service (DoS)&lt;/strong&gt; attacks. &lt;/p&gt;

&lt;p&gt;API gateways are very similar in concept to reverse proxies. In fact, you can think of an API gateway as a special type of reverse proxy. On some occasions, you might have both of them in your landscape, with the API gateway sitting behind the reverse proxy, which handles load balancing. API gateways can normally be configured via an API or UI. On the other hand, reverse proxies are normally configured via a config file and require a restart so that they can use a new set of configurations. API gateways also provide advanced API functionalities such as rate limiting, quotas, and service discovery. As a Salesforce architect, you need to know about some of the popular API gateways in use today. Some of the most popular ones used with Salesforce today are MuleSoft, Microsoft's Azure API Management, Google (Apigee), and IBM API Management. &lt;/p&gt;
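&lt;p&gt;Rate limiting, one of the gateway capabilities mentioned above, is commonly implemented as a token bucket. The sketch below uses invented limits and client identifiers: &lt;/p&gt;

```python
import time

class TokenBucket:
    """Allows `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the gateway would answer with HTTP 429 here

# One bucket per API consumer, much as a gateway keeps one per client key.
buckets = {"client-a": TokenBucket(rate=1, capacity=3)}

# Five requests arriving back to back: the burst capacity absorbs three,
# and the remaining two are throttled.
results = [buckets["client-a"].allow() for _ in range(5)]
print(results)
```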

&lt;h2&gt;
  
  
  Stream-processing platforms
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stream-processing platforms&lt;/strong&gt; are systems designed to make the most out of parallel processing, to fully utilize the computational capabilities of their servers. Ideally, they are utilized in event-driven integrations. We will cover event-driven integration shortly.&lt;/p&gt;

&lt;p&gt;Stream-processing platforms can handle huge amounts of incoming data since they are designed to make use of elastic scaling. They are also normally easy to encapsulate in containers, which makes them easy to deploy on different platforms, including Cloud, on-premises, or hybrid environments. Stream-processing platforms are often ideal when there is a need for a massively scalable messaging platform, such as an IoT server. Some of the most popular stream-processing tools in use today are Apache Kafka, Amazon Kinesis, Redis, and RabbitMQ. Salesforce Heroku supports some of these technologies, such as Kafka. &lt;/p&gt;

&lt;p&gt;With that, we have covered the different middleware options, each offering different value. We will now outline some of the main modern approaches to application integration, such as service-oriented architecture, microservices, API-led connectivity, and event-driven architectures. &lt;/p&gt;

&lt;h1&gt;
  
  
  Exploring modern integration approaches
&lt;/h1&gt;

&lt;p&gt;As a Salesforce architect, you are dealing with modern and changing tools and technologies every day. It is very important to align the knowledge we covered earlier with today's modern integration approaches. Some of these approaches are becoming less popular, but their concepts are still the basis of other more modern approaches. To fully understand modern integration approaches and be able to lead discussions with your client, enterprise architects, and integration architects about the most appropriate integration strategy, you need to have a wide knowledge of modern integration approaches, in addition to a solid understanding of their basis. In my experience, technology enthusiasts can sometimes get carried away with new concepts and terminologies. While staying up to date with the latest and greatest market trends is very important, it is your duty as a senior architect to understand which of these approaches are most suitable for your client. &lt;/p&gt;

&lt;h2&gt;
  
  
  Service-oriented architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Service-oriented architecture (SOA)&lt;/strong&gt; is an approach to software development that aims to encapsulate business logic into a &lt;em&gt;service&lt;/em&gt; that makes the most out of reusable code. Each service contains the code and data integrations required to fulfill a particular business use case; for example, placing a shopping order or onboarding a new customer.  &lt;/p&gt;

&lt;p&gt;These services are loosely coupled and utilize an &lt;em&gt;enterprise service bus&lt;/em&gt; to communicate with each other. This means that developers can save time by reusing existing SOA services across the enterprise.&lt;/p&gt;

&lt;p&gt;SOA services are logical representations of particular business activities with clear specific outcomes. They are provided more or less as a &lt;em&gt;black box&lt;/em&gt; for consumers who don't need to worry about how these services are working. Services can consist of multiple other underlying services. &lt;/p&gt;

&lt;p&gt;SOA emerged in the late 1990s and formed the basis for other modern integration approaches such as microservices and event-driven architecture. Some critics of SOA mention challenges regarding its performance, maintainability, and the difficulties associated with designing it to the right level of granularity. &lt;/p&gt;

&lt;p&gt;A simplified SOA-based architecture would look as follows: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--waAstI3L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vuzzbkd3xqhc0poz1ilw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--waAstI3L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vuzzbkd3xqhc0poz1ilw.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1 – An example of a SOA-based architecture &lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Microservice architectures&lt;/strong&gt; are a modern interpretation of SOA. They are also made up of loosely coupled and reusable components with clear functionality and outcome. Rather than communicating through an ESB, microservices communicate with each other directly. The services can use different technologies and protocols.  &lt;/p&gt;

&lt;p&gt;The microservices architecture is very much geared toward the Cloud. It utilizes &lt;strong&gt;Development and Operations (DevOps)&lt;/strong&gt; concepts to allow small decentralized teams to take complete ownership of a particular service, deliver its functionality using their preferred technology, and rapidly release it to the enterprise using lightweight containers.  &lt;/p&gt;

&lt;p&gt;Microservices are typically used as building blocks for other enterprise applications. They are finely grained services, and they have access to their own data stores that provide access to all the data they need. Two microservices are never supposed to share the same data store/database, as this would create a dependency between them. The microservices principles favor encapsulation and independence over reusability; redundant code is considered an acceptable side effect. &lt;/p&gt;

&lt;p&gt;Microservices have become popular since their introduction around 2014, due to their relationship with DevOps concepts. Given the similarity between SOA and microservices, it is good to understand some of the key differences between these two integration approaches: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous calls&lt;/strong&gt;: Reusable SOA services should be available throughout the enterprise via the use of synchronous protocols such as SOAP or REST APIs. Synchronous calls are less preferred in the microservices architecture as they may create real-time dependencies, which may cause latency. An asynchronous approach is preferred, such as publish/subscribe, which would enhance the resilience and availability of the services. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communication&lt;/strong&gt;: SOA services utilize the ESB to communicate, which can make the ESB a performance bottleneck. Microservices are developed independently, and they communicate directly using different protocols and standards. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reuse&lt;/strong&gt;: SOA is all about increasing reuse, whereas in the microservices architecture, this is less important – especially considering that achieving some reusability at runtime could create dependencies between the microservices, which reduces agility and resilience. With microservices, duplicating code by copying and pasting it is considered an accepted side effect of avoiding dependencies. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data duplication&lt;/strong&gt;: In SOA, the services can directly access and change data in a particular data source or application. This means that multiple SOA services would likely be accessing the same data store. Microservices always aim to reduce dependencies. A microservice should ideally have local access to all the data it needs to deliver its expected functionality. This means that there might be a need to duplicate some data, and also that the data could be out of sync between the different services. Data duplication adds a considerable amount of complexity to the design and potential usage of microservices, which has to be balanced against the expected gains from the microservice's independence. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Granularity&lt;/strong&gt;: Microservices are designed to do one specific task; they are very specialized and therefore finely grained. On the other hand, SOA services reflect business services, so they can range from small to bigger enterprise-wide services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: As we mentioned earlier, speed is one of the weaker sides of SOA, due to several factors. Microservices are lightweight, more specialized, and usually utilize lightweight communication protocols such as REST. They generally run faster than SOA services. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simplified microservices-based architecture would look as follows: &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MHSk1bFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7beoift0st3fgszd4k4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MHSk1bFC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7beoift0st3fgszd4k4.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 2 – An example of a microservices-based architecture  &lt;/p&gt;

&lt;h2&gt;
  
  
  API-led architecture
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;API-led architecture&lt;/strong&gt; is a strategy where all external and internal services are exposed as managed APIs, regardless of how they were implemented (microservices, SOA services, web services driven out of a monolithic application, or based on other architectures). &lt;strong&gt;Managed APIs&lt;/strong&gt; in today's terms do more than just provide governance capabilities such as security policies, throttling, versioning, and automatic service discovery. The principle has extended beyond that to include developer portals where developers can experiment with APIs before using them, productivity tools, and a mechanism to register and pay for API usage. In this approach, APIs are usually organized on three different layers: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System APIs&lt;/strong&gt; are meant to access core systems and services. They provide a simplified insulating layer between the service consumer and the underlying system or service. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process APIs&lt;/strong&gt; are meant to interact, transform, and shape the data coming from the underlying system APIs or from other process APIs, effectively breaking down data silos. They have no dependency on the source systems where the data came from, nor on the target systems where the data will be delivered. Both system APIs and process APIs can be used to connect to existing microservices, as well as other enterprise services, depending on the use case. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experience APIs&lt;/strong&gt; are meant to allow easy access and data consumption for the end user or application. They typically communicate with one or more process APIs to deliver a specific functionality. &lt;/li&gt;
&lt;/ul&gt;
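&lt;p&gt;The three layers can be illustrated with plain functions, where each layer only calls the layer beneath it. All the systems, field names, and the record ID below are invented stubs: &lt;/p&gt;

```python
# System APIs: thin, insulating access to core systems (stubbed here).
def system_crm_get_contact(contact_id):
    return {"Id": contact_id, "FirstName": "Ada", "LastName": "Lovelace"}

def system_billing_get_balance(contact_id):
    return {"contactId": contact_id, "balance": 42.50, "currency": "EUR"}

# Process API: shapes and combines data from the system APIs,
# independent of both the source and target systems.
def process_customer_summary(contact_id):
    contact = system_crm_get_contact(contact_id)
    balance = system_billing_get_balance(contact_id)
    return {
        "name": f"{contact['FirstName']} {contact['LastName']}",
        "balance": balance["balance"],
        "currency": balance["currency"],
    }

# Experience API: tailors the process API's output for one channel,
# e.g. a compact payload for a mobile app.
def experience_mobile_customer_card(contact_id):
    summary = process_customer_summary(contact_id)
    return {"title": summary["name"],
            "subtitle": f"{summary['balance']:.2f} {summary['currency']}"}

print(experience_mobile_customer_card("003XX000004TMM2"))
```

&lt;p&gt;Because the experience layer only depends on the process layer, either core system could be swapped out behind its system API without the mobile channel noticing. &lt;/p&gt;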

&lt;p&gt;Microservices are known to create many endpoints, which are normally difficult to control and monetize. The API-led architecture aims to create an API strategy that governs the way the different enterprise services interact with each other, as well as with external consumers, by utilizing the capabilities of lightweight standards such as REST and combining them with modern API gateway capabilities. &lt;/p&gt;

&lt;p&gt;Previously, we mentioned that microservices are typically &lt;em&gt;consumed by applications&lt;/em&gt;. The API-led architecture aims to turn these applications into a smaller and lighter set of APIs. This can help enterprises take steps toward the &lt;em&gt;API economy&lt;/em&gt;. For example, an enterprise could create a set of APIs on top of their rich set of services, which are built with different technologies and based on different architectures, and then utilize an API manager to expose these services externally and internally with different applicable policies and subscription mechanisms. Moreover, this approach is seen as an enabler for rapid application development, since you can reuse APIs that are built on top of different business processes. MuleSoft Anypoint Platform is a tool that enables enterprises to deliver API-led integration architecture. &lt;/p&gt;

&lt;h2&gt;
  
  
  Event-driven architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Event-driven architecture&lt;/strong&gt; is an approach to software development that utilizes &lt;em&gt;events&lt;/em&gt; to communicate and trigger actions in integrated and decoupled applications. An &lt;em&gt;event&lt;/em&gt; is simply a change in the status of a particular object. For example, a change in the customer status value could fire a &lt;em&gt;customer status change&lt;/em&gt; event that would, in turn, trigger a set of actions in integrated systems, such as starting a particular marketing journey.&lt;/p&gt;

&lt;p&gt;The event-driven architecture inherits some principles from the &lt;em&gt;messaging&lt;/em&gt; integration style, as we mentioned earlier. It has three main components: event producers, event routers, and event consumers. Producers publish events to the router; the router filters the events and pushes them to the subscribed consumers; and each consumer receives an event, parses and transforms it into a format suitable for its needs, and then uses it, typically to update its own version of the data or to fire subsequent logic. Stream-processing platforms, modern ESBs, or event routing buses such as CometD are usually used as routers. &lt;/p&gt;
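&lt;p&gt;As an illustration, the following Python sketch wires the three components together for the customer status change example above. The in-process router and the event fields are hypothetical stand-ins for a real streaming platform or ESB: &lt;/p&gt;

```python
from collections import defaultdict

class EventRouter:
    """Minimal in-process event router: filters by topic and pushes to subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

router = EventRouter()
journeys = []

# Consumer: transforms the event into its own format, then fires follow-up logic.
def start_marketing_journey(event):
    journeys.append({"customer": event["id"], "trigger": event["new_status"]})

router.subscribe("customer.status.changed", start_marketing_journey)

# Producer: fires a "customer status change" event; the consumer reacts immediately.
router.publish("customer.status.changed", {"id": 42, "new_status": "VIP"})
print(journeys)
```

&lt;p&gt;The producer and consumer never reference each other directly; only the router and the event topic couple them, which is what keeps the applications decoupled. &lt;/p&gt;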

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;That concludes our overview of modern application integration. Hopefully, the understanding you have acquired of some of the main integration architecture principles and tools will stand you in good stead when, in your capacity as a technical architect, you come to your next big integration project.   &lt;/p&gt;

&lt;p&gt;This article is part of Tameem Bahri's book &lt;a href="https://packt.live/3m3nrYI"&gt;Becoming a Salesforce Certified Technical Architect&lt;/a&gt;. Tameem condenses his Salesforce wisdom into an easy-to-follow guide to help you get started on your journey as a CTA. Check it out to discover how the book will help you develop architectural knowledge and the soft skills required to create end-to-end solution presentations. &lt;/p&gt;

</description>
      <category>salesforce</category>
      <category>salesforcecta</category>
      <category>salesforcearchitects</category>
      <category>trailblazercommunity</category>
    </item>
    <item>
      <title>Robotic Process Automation with Automation Anywhere </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Tue, 20 Apr 2021 14:51:26 +0000</pubDate>
      <link>https://dev.to/packtpub/robotic-process-automation-with-automation-anywhere-1o30</link>
      <guid>https://dev.to/packtpub/robotic-process-automation-with-automation-anywhere-1o30</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0awy6byl24gn3fwnbo8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0awy6byl24gn3fwnbo8v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Robotic Process Automation (RPA)&lt;/strong&gt; has attracted significant investment from many corporate organizations in recent years. This has opened up many opportunities for using RPA, whether you are an experienced developer wanting to gain additional valuable skills or you're thinking about starting your career as an RPA developer.  &lt;/p&gt;

&lt;p&gt;In this overview, we explain what Robotic Process Automation is. You’ll learn about Automation Anywhere (AA) and what it does, and get some initial insights into AA’s RPA tool. A number of versions of AA are available, and you will learn about their differences. Our focus will be on the latest Community Edition, A2019. Besides being the latest version, it has several other advantages for learning RPA, and we will explain why it is ideal for gaining actual hands-on experience and starting your journey in building software robots (bots). &lt;/p&gt;

&lt;p&gt;Along with building bots, AA has a number of additional features and components. These include IQ Bot, Bot Insight, Bot Store, Mobile Bot, and Automation Anywhere University. We will show how you can benefit from these features and components. &lt;/p&gt;

&lt;h1&gt;
  
  
  Technical requirements
&lt;/h1&gt;

&lt;p&gt;To use AA A2019 Community Edition, you will need the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Windows OS version 7 or higher &lt;/li&gt;
&lt;li&gt;A processor with a minimum speed of 3 GHz &lt;/li&gt;
&lt;li&gt;A minimum of 4 GB RAM &lt;/li&gt;
&lt;li&gt;Internet Explorer v10 or higher, or Chrome v49 or higher &lt;/li&gt;
&lt;li&gt;An internet connection with a minimum speed of 10 Mb/second &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What is robotic process automation?
&lt;/h1&gt;

&lt;p&gt;You probably already know what RPA is, but we will quickly review it here. The words &lt;em&gt;automation&lt;/em&gt; or &lt;em&gt;robot&lt;/em&gt; usually conjure up images of a physical machine performing repetitive tasks. We began to see this type of robotic automation years ago, particularly in manufacturing. Physical robotic machines were built to help automate tasks usually done by humans. This form of industrial manufacturing automation was later adopted by many other industries, including logistics, distribution, and packaging. It also led to automation being taught in universities at the postgraduate level. Many new technology jobs were created as a result, including roles such as robotics engineer, designer, and maintenance operative, and it gave rise to automated programmable manufacturing tools such as CNC machinery. Since the widespread adoption of the internet, we have seen the concept of web-controlled automation being introduced. As an example, large buildings often deploy internet-enabled CCTV, heating controls, and security systems, all of which can be managed remotely over the internet. You could have a very fulfilling career as a developer or engineer working in automation.&lt;/p&gt;

&lt;p&gt;We can see the same thing happening with RPA. RPA is specifically designed to automate tasks that are performed by humans on desktops. Most jobs have an element that involves tasks that are high-volume, repetitive, and tedious. Such tasks tend to drain the enjoyment out of our jobs. RPA can be applied to automate these types of tasks. &lt;/p&gt;

&lt;p&gt;We can build bots to perform these types of tasks, and this is specifically what RPA bots have been designed for. Having a bot can give you more time to spend on the tasks that you actually enjoy and excel at. This in turn would deliver more job satisfaction. &lt;/p&gt;

&lt;p&gt;You may be thinking, &lt;em&gt;well, what's the difference between RPA and traditional software development?&lt;/em&gt; With traditional development, the developer needs to be proficient in the technology of the application being automated. For example, to automate a task in Excel, you would expect the developer to have skills in VBA; to develop web applications, the developer may need skills in Java or HTML. The developer must understand how the application executes the task as well as what the user needs to do, which usually means a steep learning curve and writing lines of code to build the solution. RPA is different. Because it interacts with the user interface, it doesn't really matter which application you are working with. The developer only needs to understand how to operate the application, not how the application executes the task internally. So, no application-specific expertise is needed to automate multiple applications. Nor does RPA require writing lines of code, as you can build a solution by designing a workflow using pre-defined drag-and-drop commands. This makes RPA an ideal technology to learn rapidly; it doesn't require years of study to become a bot developer. See the following comparison: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgcr99uhlvph23fohg1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgcr99uhlvph23fohg1v.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1 – Comparison of traditional automation against RPA &lt;/p&gt;

&lt;p&gt;You can clearly see the benefits of having an RPA bot as opposed to building a new traditional-style software solution. So, what sorts of tasks can a bot perform? Bots can pretty much do most tasks that involve a human using the desktop. This includes the automation of the tasks shown in the following diagram: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo75xpg8723cpp0t2tsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo75xpg8723cpp0t2tsd.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 2 – Tasks that can be performed with RPA &lt;/p&gt;

&lt;p&gt;You should now have a good understanding of what RPA is. This is a growing market with great demand for RPA skills. We know we can learn these skills far more quickly and easily than those required for traditional development. The range of tasks that can be automated with RPA is vast and not limited to specific industries. &lt;/p&gt;

&lt;p&gt;The number of RPA vendors on the market is growing. As in most industries, only a few become recognized and reputable as market leaders, although we have seen a handful of industry leaders emerging over the last few years. One of the key players has been Automation Anywhere. &lt;/p&gt;

&lt;h1&gt;
  
  
  Overview of Automation Anywhere
&lt;/h1&gt;

&lt;p&gt;The list of vendors that provide RPA tools is growing constantly. There are three main leaders in this automation technology. These are UiPath, Blue Prism, and &lt;strong&gt;Automation Anywhere (AA)&lt;/strong&gt;. All these vendors provide RPA tools with pretty much the same functionality. You can see the top 10 RPA vendors of 2020 at the following link, created by Horses for Sources: &lt;a href="https://www.horsesforsources.com/RPA_Top10_2020_012920" rel="noopener noreferrer"&gt;https://www.horsesforsources.com/RPA_Top10_2020_012920&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Although the aforementioned top three do provide similar functionalities, there are some key differences. The following table shows a breakdown of the features available from each provider: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysd4qb9w8zjqoqwu91f7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysd4qb9w8zjqoqwu91f7.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 3 – Top vendors' features comparison &lt;/p&gt;

&lt;p&gt;We can see that AA and UiPath have the most comprehensive tools and features when compared to Blue Prism. &lt;/p&gt;

&lt;p&gt;We will use AA, as they were the first to release a fully cloud-based RPA tool. This eliminates the need to install AA on your desktops to build, manage, and deploy bots. AA has won a number of prestigious technology awards and was recently named the &lt;em&gt;market leader&lt;/em&gt; in RPA by a Forrester report. &lt;/p&gt;

&lt;p&gt;AA also runs a number of annual events, including the &lt;em&gt;Bot Games&lt;/em&gt;. Here, developers from around the world are challenged against each other to build specific bots. Maybe, once you have gained enough confidence in your own bot development skills, you can be part of these Bot Games. &lt;/p&gt;

&lt;p&gt;The mission statement of AA, as published on their website at &lt;a href="https://www.automationanywhere.com/company/about-us" rel="noopener noreferrer"&gt;https://www.automationanywhere.com/company/about-us&lt;/a&gt;, is: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To enable companies to operate with unprecedented productivity and efficiency by automating any part of the enterprise that can be automated with the most intelligent and intuitive robotic process automation platform we call - The Intelligent Digital Workforce&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We can break this statement down into three distinct elements: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What AA offers&lt;/strong&gt;: Giving organizations the opportunity to increase productivity and efficiency. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How they can offer this&lt;/strong&gt;: Creating the opportunity to automate any process within the organization by the deployment of intelligent RPA. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The outcome&lt;/strong&gt;: This results in building bots that make up the &lt;strong&gt;Digital Workforce&lt;/strong&gt;.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When designing and building an RPA solution, it is essential to have a clear statement of why RPA is needed. The Digital Workforce has to add value within the organization. This can be measured in terms of cost savings, time reduction, or the reduction of effort. As a developer, understanding why automation is needed can help in designing a robust, intelligent automation solution. &lt;/p&gt;

&lt;p&gt;We will now take a closer look at some of the additional features and components available with AA. This will show how AA stands out from the crowd of its competitors. We will look at the following features and components: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Digital Workforce &lt;/li&gt;
&lt;li&gt;IQ Bot &lt;/li&gt;
&lt;li&gt;Bot Insight &lt;/li&gt;
&lt;li&gt;Bot Store &lt;/li&gt;
&lt;li&gt;Mobile Bot &lt;/li&gt;
&lt;li&gt;Automation Anywhere University &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's take a look at these in more detail. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Digital Workforce
&lt;/h2&gt;

&lt;p&gt;A bot is referred to by AA as a &lt;strong&gt;Digital Worker&lt;/strong&gt;, as it clones the actions of a human to perform a given task. A Digital Worker is a member of the team, designed to carry out a process just the same as any human worker. As more bots are built within an organization, a Digital Workforce is created. These bots can work side by side with a human or can be deployed to run on their own. Decision-making is a key aspect of using RPA. RPA can make condition-based decisions, where the outcome is based purely on a single condition or a set of conditions. &lt;/p&gt;

&lt;p&gt;For example, a condition-based decision could be, &lt;em&gt;do we order some keyboards?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We would check our stock levels in the stock database, and if it is below our re-ordering threshold, then yes, we do; otherwise, we don't. &lt;/p&gt;
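&lt;p&gt;That decision boils down to a single comparison, which a bot can evaluate directly. The following Python sketch uses a hypothetical re-ordering threshold in place of the stock database lookup: &lt;/p&gt;

```python
REORDER_THRESHOLD = 20  # hypothetical threshold; a real bot would read this from the stock database

def should_order_keyboards(stock_level, threshold=REORDER_THRESHOLD):
    """Condition-based decision: reorder only when stock falls below the threshold."""
    return stock_level < threshold

print(should_order_keyboards(12))  # below the threshold, so we order
print(should_order_keyboards(35))  # enough stock, so we don't
```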

&lt;p&gt;In some cases, condition-based decisions are not sufficient to get the correct outcome. There are occasions when decisions have to be made using &lt;strong&gt;Artificial Intelligence (AI)&lt;/strong&gt; or by applying machine learning algorithms. This is where RPA needs to be used in conjunction with AI. AA allows us to train an RPA bot to perform complex decisions involving AI and machine learning algorithms. This is achievable using the IQ Bot feature of AA. &lt;/p&gt;

&lt;h2&gt;
  
  
  IQ Bot
&lt;/h2&gt;

&lt;p&gt;As well as utilizing condition-based decisions, more and more processes require a certain level of cognitive intelligence to make decisions. An example of this would be when dealing with unstructured data. A common scenario involves invoices, which all tend to have the same type of data such as supplier, items, costings, and dates, but the layout and format vary between different suppliers. AA has developed a product called &lt;strong&gt;IQ Bot&lt;/strong&gt;. This bot uses cognitive automation with RPA to learn how to handle unstructured data. This enables such processes to be automated from end to end without human intervention. It integrates AI technologies such as fuzzy logic, &lt;strong&gt;Natural Language Processing (NLP)&lt;/strong&gt;, computer vision, and &lt;strong&gt;Machine Learning (ML)&lt;/strong&gt;, all without the help of data scientists or highly trained experts. &lt;/p&gt;

&lt;h2&gt;
  
  
  Bot Insight
&lt;/h2&gt;

&lt;p&gt;Designing and building bots is not the complete story. AA has also developed a platform that produces real-time analytics about your Digital Workforce, its processes, and business-level processes. This is all part of Bot Insight, AA's RPA analytics tool. Bot Insight is broken down into two categories: operational analytics and business intelligence. &lt;/p&gt;

&lt;p&gt;As bots are deployed, as well as executing tasks, they also process data. This data is related to each specific process and can provide valuable insight. Bot Insight analyzes this data and transforms it into meaningful insights. It also captures operational data such as how well the bot is performing, tracking data as it is being processed. All this data can be presented in various formats including graphs, charts, and tables. It can also predict possible bot failures. It can be integrated seamlessly with other leading business intelligence platforms such as Tableau, ThoughtSpot, and QlikView. As an independent tool, Bot Insight provides a complete analytics solution without the need to integrate with other tools. It's simple to use; all it requires is tagging the data items that need to be analyzed and Bot Insight will do the rest for you. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can learn more about Bot Insight at &lt;a href="https://www.automationanywhere.com/products/bot-insight" rel="noopener noreferrer"&gt;https://www.automationanywhere.com/products/bot-insight&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Bot Store
&lt;/h2&gt;

&lt;p&gt;AA is the first RPA vendor to have a fully operational Bot Store. Bot Store is an online store with a collection of Digital Workers. The bots available here are built by independent developers from all around the world, as well as by AA themselves. AA Bot Store won the Silver award in the 2019 Edison Awards for developing the world's first and largest enterprise automation marketplace. &lt;/p&gt;

&lt;p&gt;These are complete bots out of the box that will perform a specific task or role. They are available as bots for specific applications, categories, or business processes. These applications include Microsoft, Google Cloud, CyberArk, and LinkedIn. You can pick specific bots for particular tasks, such as NLP bots for converting speech to text or bots for converting a QR code image to text. The bots on offer are continuously growing as more of them are added. Many of these bots are available for free, but there are some you will have to pay for. &lt;/p&gt;

&lt;p&gt;Once you have mastered bot development, maybe you can submit your bots to be hosted on Bot Store. This is a great way to promote your skills as well as having the opportunity to sell your bots. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can learn more about Bot Store at &lt;a href="https://www.automationanywhere.com/products/botstore" rel="noopener noreferrer"&gt;https://www.automationanywhere.com/products/botstore&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Mobile Bot
&lt;/h2&gt;

&lt;p&gt;AA has also released a mobile app to work with your bots. It allows you to manage your Digital Workers from your mobile device. Bot Insight is available on the mobile app. This app will give you live alerts on bot performance as well as business insights on bot data. You can control your bots from the app including starting and stopping them. It also provides a platform for you to connect with the wider AA RPA community. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can learn more about Mobile Bot at &lt;a href="https://www.automationanywhere.com/products/apps" rel="noopener noreferrer"&gt;https://www.automationanywhere.com/products/apps&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Automation Anywhere University
&lt;/h2&gt;

&lt;p&gt;AA also has an online university that provides many learning paths and opportunities to earn a globally recognized certificate. You can gain many accreditation badges approved by AA by completing the online assessments. These assessments usually consist of multiple-choice questionnaires. To gain the Certified Master Professional accreditation, you will have to build three bots and submit them to the university. These will then be assessed to determine whether you qualify or not. There are many areas of AA that you can gain accreditation badges for, including Bot Developer, Business Analyst, IQ Bot Developer, Control Room Administrator, Solutions Architect, Technical Support Specialist, and RPA Program Manager. &lt;/p&gt;

&lt;p&gt;You can attempt the accreditation badge assessments for free, but there is a cost for certification. These range from 50 USD to 100 USD depending on the certificate. These certifications are great ways to promote your RPA skills; I would recommend the Automation Anywhere Certified Advanced RPA Professional certification.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can learn more about the AA University at &lt;a href="https://university.automationanywhere.com/" rel="noopener noreferrer"&gt;https://university.automationanywhere.com/&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hopefully, you now have a better insight into AA’s features. There is a distinct advantage to using AA for RPA over its competitors. We know that the AA platform offers far more than just bot development. It allows data analytics, a platform to showcase and generate revenue from our bots, and a tool specifically designed to incorporate AI in our bots, as well as a path to gain recognized certifications for our skills. &lt;/p&gt;

&lt;p&gt;Along with these features, three versions of AA are available. We will now look at the differences between them. &lt;/p&gt;

&lt;h1&gt;
  
  
  Automation Anywhere versions
&lt;/h1&gt;

&lt;p&gt;Each AA version is designed with a different user in mind. The following table summarizes the main differences: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqirawdfqc6cv8fc5jzrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqirawdfqc6cv8fc5jzrm.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 4 – AA versions&lt;/p&gt;

&lt;p&gt;Community Edition A2019 is totally free. The other two versions come with a 30-day free trial, after which you have to purchase an AA license to continue using them. Community Edition A2019 is specifically designed for students and developers. There is no limit to the number of bots you can build, nor is any functionality limited. &lt;/p&gt;

&lt;p&gt;You can now see the benefits of using Community Edition A2019, as well as understand what additional capabilities the other versions have to offer. In the next section, we’ll take a closer look at Community Edition A2019 as well as walk through how to register with AA in order to start using it. &lt;/p&gt;

&lt;h2&gt;
  
  
  Community Edition A2019
&lt;/h2&gt;

&lt;p&gt;AA Community Edition A2019 is the latest free version and was released in November 2019. The version prior to this, AA v11.x, used a client-server architecture where the management was done through the web-based Control Room app while bot development was done through a client application installed on the desktop. &lt;/p&gt;

&lt;p&gt;Community Edition A2019 is a fully cloud-based solution. Bot management and building are all done through the web application; no development client is installed on your desktop. You need to download and install a &lt;strong&gt;Bot agent&lt;/strong&gt; on each device that is to run a bot. Once it is installed, you build your bot, connect to your device through its Bot agent, and then deploy. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Registration with Automation Anywhere&lt;/strong&gt;&lt;br&gt;
As Community Edition A2019 is free, you can start using it once you’ve registered with AA. &lt;/p&gt;

&lt;p&gt;To register, follow these instructions: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;a href="https://www.automationanywhere.com/products/community-edition" rel="noopener noreferrer"&gt;https://www.automationanywhere.com/products/community-edition&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Complete the appropriate details, including your &lt;strong&gt;First Name&lt;/strong&gt;, &lt;strong&gt;Last Name&lt;/strong&gt;, &lt;strong&gt;Email Address&lt;/strong&gt;, &lt;strong&gt;Country&lt;/strong&gt;, &lt;strong&gt;Phone Number&lt;/strong&gt;, and &lt;strong&gt;Company Name&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;Then submit your details. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You will shortly get a welcome email including your login credentials. The key details to note are the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your Control Room URL &lt;/li&gt;
&lt;li&gt;Your username &lt;/li&gt;
&lt;li&gt;Your password &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will need these credentials every time you launch AA, so keep a note of them. You will need to change the password when you first log in. &lt;/p&gt;

&lt;p&gt;You are now ready to start your RPA journey using AA. &lt;/p&gt;

&lt;h1&gt;
  
  
  In conclusion
&lt;/h1&gt;

&lt;p&gt;You now have a good understanding of what RPA is and how AA is positioned in the RPA space. You also have some understanding of AA’s capabilities. Having registered with AA to use the free Community Edition A2019, you are all set to get AA up and running on your machine. &lt;/p&gt;

&lt;p&gt;This article on robotic process automation with Automation Anywhere is part of Husan Mahey's book of the same name. To continue reading about RPA and AA and to learn more about the recent developments in process automation, check out the book &lt;a href="https://packt.live/3cARTW5" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>rpa</category>
      <category>clourpa</category>
      <category>automation360</category>
      <category>processautomation</category>
    </item>
    <item>
      <title>From Gradient Boosting to XGBoost  </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Thu, 15 Apr 2021 14:51:03 +0000</pubDate>
      <link>https://dev.to/packt/from-gradient-boosting-to-xgboost-2ba2</link>
      <guid>https://dev.to/packt/from-gradient-boosting-to-xgboost-2ba2</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdz3yhg6woa7fni34zzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdz3yhg6woa7fni34zzc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gradient boosting is a machine learning method for combining, in an iterative fashion, a number of weak predictive models into a single strong predictive model. XGBoost is a unique form of gradient boosting with several distinct advantages. To understand the advantages of XGBoost over traditional gradient boosting, you must first learn how traditional gradient boosting works. The general structure and hyperparameters of traditional gradient boosting are incorporated in XGBoost.  &lt;/p&gt;

&lt;p&gt;In this article, you will discover the power behind gradient boosting, which is at the core of XGBoost. You will build gradient boosting models from scratch before comparing gradient boosting models and errors with previous results. In particular, you will focus on the &lt;strong&gt;learning rate&lt;/strong&gt; hyperparameter to build powerful gradient boosting models that include XGBoost. Finally, you will preview a case study on exoplanets which highlights the need for faster algorithms, a critical need in the world of big data that is satisfied by XGBoost. &lt;/p&gt;

&lt;p&gt;We will be covering the following topics: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From bagging to boosting &lt;/li&gt;
&lt;li&gt;How gradient boosting works &lt;/li&gt;
&lt;li&gt;Modifying gradient boosting hyperparameters &lt;/li&gt;
&lt;li&gt;Approaching big data – gradient boosting versus XGBoost with Python&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Technical requirements
&lt;/h1&gt;

&lt;p&gt;The code for this article is available at &lt;a href="https://github.com/PacktPublishing/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn/tree/master/Chapter04" rel="noopener noreferrer"&gt;https://github.com/PacktPublishing/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn/tree/master/Chapter04&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;Some familiarity with Python programming is assumed.&lt;/p&gt;

&lt;h1&gt;
  
  
  From bagging to boosting
&lt;/h1&gt;

&lt;p&gt;Ensemble machine learning algorithms such as &lt;strong&gt;random forests&lt;/strong&gt; make better predictions by combining many machine learning models into one. Random forests are classified as bagging algorithms because they aggregate the predictions of many decision trees built on bootstrapped samples. &lt;/p&gt;

&lt;p&gt;Boosting, by contrast, learns from the mistakes of individual trees. The general idea is to adjust new trees based on the errors of previous trees. &lt;/p&gt;

&lt;p&gt;Boosting's way of correcting errors with each new tree is a distinct approach from bagging. In a bagging model, new trees pay no attention to previous trees; they are built from scratch using bootstrapping, and the final model aggregates all individual trees. In boosting, however, each new tree is built from the errors of the previous trees. The trees do not operate in isolation; instead, they are built on top of one another. &lt;/p&gt;
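&lt;p&gt;The core boosting loop can be sketched in a few lines of Python with scikit-learn: each new tree is fit to the residual errors left by the ensemble built so far. This is a simplified illustration of the residual-correction idea on made-up data, not XGBoost itself: &lt;/p&gt;

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression data: a noisy sine wave.
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.1, size=200)

learning_rate = 0.5            # shrinks each tree's contribution
prediction = np.zeros_like(y)  # the ensemble starts from nothing

# Each new tree is fit to the residuals (errors) left by the previous trees.
for _ in range(30):
    residuals = y - prediction
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)

mse = np.mean((y - prediction) ** 2)
print(f"training MSE: {mse:.4f}")
```

&lt;p&gt;Each weak tree on its own fits the data poorly, but because every iteration targets what the ensemble still gets wrong, the training error shrinks steadily, which is exactly the small edge that boosting compounds over hundreds of iterations. &lt;/p&gt;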

&lt;h2&gt;
  
  
  Introducing AdaBoost
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AdaBoost&lt;/strong&gt; is one of the earliest and most popular boosting models. In AdaBoost, each new tree adjusts its weights based on the errors of the previous trees: more attention is paid to predictions that went wrong by increasing the weights of the mispredicted samples. By learning from its mistakes, AdaBoost can transform weak learners into strong learners. A weak learner is a machine learning algorithm that barely performs better than chance. By contrast, a strong learner has learned a considerable amount from the data and performs quite well.&lt;/p&gt;

&lt;p&gt;The general idea behind boosting algorithms is to transform weak learners into strong learners. A weak learner is hardly better than random guessing. But there is a purpose behind the weak start. Building on this general idea, boosting works by focusing on iterative error correction, not by establishing a strong baseline model. If the base model is too strong, the learning process is necessarily limited, thereby undermining the general strategy behind boosting models. &lt;/p&gt;

&lt;p&gt;Weak learners are transformed into strong learners through hundreds of iterations. In this sense, a small edge goes a long way. In fact, for the past couple of decades boosting has been one of the best general machine learning strategies in terms of producing optimal results. &lt;/p&gt;

&lt;p&gt;Like many scikit-learn models, AdaBoost is straightforward to implement in practice. The AdaBoostRegressor and AdaBoostClassifier algorithms may be imported from the sklearn.ensemble module and fit to any training set. The most important AdaBoost hyperparameter is n_estimators, the number of trees (iterations) required to create a strong learner.&lt;/p&gt;
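&lt;p&gt;As a minimal sketch of this usage (not part of the bike rentals workflow that follows; the synthetic dataset and hyperparameter values here are illustrative assumptions), AdaBoostRegressor can be fit and scored like any scikit-learn model:&lt;/p&gt;

```python
# Hedged sketch: AdaBoostRegressor on a synthetic dataset (the dataset
# and hyperparameter values are illustrative, not from this chapter).
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, n_informative=5,
                       noise=10, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# n_estimators is the key hyperparameter: the number of boosted trees
ada = AdaBoostRegressor(n_estimators=50, random_state=2)
ada.fit(X_train, y_train)
score = ada.score(X_test, y_test)  # R^2 on the test set
print(round(score, 3))
```

&lt;p&gt;Raising or lowering n_estimators is the first lever to try when tuning AdaBoost.&lt;/p&gt;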

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For further information on AdaBoost, check out the official documentation: &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html" rel="noopener noreferrer"&gt;Classifiers&lt;/a&gt; and &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostRegressor.html" rel="noopener noreferrer"&gt;Regressors&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We will now move on to gradient boosting, a strong alternative to AdaBoost with a slight edge in performance. &lt;/p&gt;

&lt;h2&gt;
  
  
  Distinguishing gradient boosting
&lt;/h2&gt;

&lt;p&gt;Gradient boosting uses a different approach than AdaBoost. While gradient boosting also adjusts based on incorrect predictions, it takes this idea one step further: gradient boosting fits each new tree entirely on the basis of the errors of the previous tree's predictions. That is, for each new tree, gradient boosting looks at the mistakes and then builds a new tree completely around these mistakes. The new tree doesn't care about the predictions that are already correct. &lt;/p&gt;

&lt;p&gt;Building a machine learning algorithm that solely focuses on the errors requires a comprehensive method that sums errors to make accurate final predictions. This method leverages residuals, the difference between the model's predictions and actual values. Here is the general idea: &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Gradient boosting computes the residuals of each tree's predictions and sums all the residuals to score the model.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;It's essential to understand &lt;strong&gt;computing and summing residuals&lt;/strong&gt; as this idea is at the core of XGBoost, an advanced version of gradient boosting. When you build your own version of gradient boosting, the process of computing and summing residuals will become clear. In the next section, you will build your own version of a gradient boosting model. First, let's learn in detail how gradient boosting works. &lt;/p&gt;

&lt;h1&gt;
  
  
  How gradient boosting works
&lt;/h1&gt;

&lt;p&gt;In this section, we’ll look under the hood of gradient boosting and build a gradient boosting model from scratch by training new trees on the errors of the previous trees. The key mathematical idea here is the residual. Next, we will obtain the same results using scikit-learn's gradient boosting algorithm. &lt;/p&gt;

&lt;h2&gt;
  
  
  Residuals
&lt;/h2&gt;

&lt;p&gt;The residuals are the differences between a model's predictions and the actual values. In statistics, residuals are commonly analyzed to determine how well a given linear regression model fits the data. &lt;/p&gt;

&lt;p&gt;Consider the following examples: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Bike rentals&lt;br&gt;
a. &lt;em&gt;Prediction&lt;/em&gt;: 759&lt;br&gt;
b. &lt;em&gt;Result&lt;/em&gt;: 799&lt;br&gt;
c. &lt;em&gt;Residual&lt;/em&gt;: 799 - 759 = 40&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Income&lt;br&gt;
a. &lt;em&gt;Prediction&lt;/em&gt;: 100,000&lt;br&gt;
b. &lt;em&gt;Result&lt;/em&gt;: 88,000&lt;br&gt;
c. &lt;em&gt;Residual&lt;/em&gt;: 88,000 - 100,000 = -12,000&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, residuals tell you how far the model's predictions are from reality, and they may be positive or negative. &lt;/p&gt;
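&lt;p&gt;The two examples above can be computed directly; here is a quick check with NumPy:&lt;/p&gt;

```python
import numpy as np

# residual = actual value - predicted value
y_true = np.array([799, 88_000])   # bike rentals result, income result
y_pred = np.array([759, 100_000])  # the model's predictions
residuals = y_true - y_pred
print(residuals.tolist())  # [40, -12000]
```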

&lt;p&gt;Here is a visual example displaying the residuals of a &lt;strong&gt;linear regression&lt;/strong&gt; line: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxgh4ucd5u3srwk7d2u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxgh4ucd5u3srwk7d2u4.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1 – Residuals of a linear regression line&lt;/p&gt;

&lt;p&gt;The goal of linear regression is to minimize the square of the residuals. As the graph reveals, a visual of the residuals indicates how well the line fits the data. In statistics, linear regression analysis is often performed by graphing the residuals to gain deeper insight into the data.&lt;/p&gt;

&lt;p&gt;In order to build a gradient boosting algorithm from scratch, we will compute the residuals of each tree and fit a new model to the residuals. Let's do this now.&lt;/p&gt;
&lt;h2&gt;
  
  
  Learning how to build gradient boosting models from scratch
&lt;/h2&gt;

&lt;p&gt;Building a gradient boosting model from scratch will provide you with a deeper understanding of how gradient boosting works in code. Before building a model, we need to access data and prepare it for machine learning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Processing the bike rentals dataset&lt;/strong&gt;&lt;br&gt;
We continue with the bike rentals dataset to compare new models with the previous models:&lt;/p&gt;

&lt;p&gt;1) We will start by importing pandas and numpy. We will also add a line to silence any warnings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Now, load the bike_rentals_cleaned dataset and view the first five rows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df_bikes = pd.read_csv('bike_rentals_cleaned.csv')
df_bikes.head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your output should look like this:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F824vhwdyk6fytgajwhol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F824vhwdyk6fytgajwhol.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 2 – First five rows of Bike Rental Dataset&lt;/p&gt;

&lt;p&gt;3) Now, split the data into X and y. Then, split X and y into training and test sets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X_bikes = df_bikes.iloc[:,:-1]
y_bikes = df_bikes.iloc[:,-1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_bikes, y_bikes, random_state=2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's time to build a gradient boosting model from scratch!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building a gradient boosting model from scratch&lt;/strong&gt;&lt;br&gt;
Here are the steps for building a gradient boosting machine learning model from scratch:&lt;/p&gt;

&lt;p&gt;1) Fit the data to the decision tree: You may use a decision tree stump, which has a max_depth value of 1, or a decision tree with a max_depth value of 2 or 3. The initial decision tree, called a &lt;strong&gt;base learner&lt;/strong&gt;, should not be fine-tuned for accuracy. We want a model that focuses on learning from errors, not a model that relies heavily on the base learner. Initialize a decision tree with max_depth=2 and fit it on the training set as tree_1, since it's the first tree in our ensemble:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.tree import DecisionTreeRegressor
tree_1 = DecisionTreeRegressor(max_depth=2, random_state=2)
tree_1.fit(X_train, y_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Make predictions with the training set: Instead of making predictions with the test set, predictions in gradient boosting are initially made with the training set. Why? To compute the residuals, we need to compare the predictions while still in the training phase. The test phase of the model build comes at the end, after all the trees have been constructed. The predictions of the training set for the first round are obtained by adding the predict method to tree_1 with X_train as the input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y_train_pred = tree_1.predict(X_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Compute the residuals: The residuals are the differences between the predictions and the target column. The predictions of X_train, defined here as y_train_pred, are subtracted from y_train, the target column, to compute the residuals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y2_train = y_train - y_train_pred
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The residuals are defined as y2_train because they are the new target column for the next tree.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;4) Fit the new tree on the residuals: Fitting a new tree on the residuals is different from fitting a model on the original training targets. The primary difference is in the predictions: in the bike rentals dataset, when fitting a new tree on the residuals, the predicted values should progressively get smaller as the residuals shrink.&lt;/p&gt;

&lt;p&gt;Initialize a new tree and fit it on X_train and the residuals, y2_train:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tree_2 = DecisionTreeRegressor(max_depth=2, random_state=2)
tree_2.fit(X_train, y2_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) Repeat steps 2-4: As the process continues, the residuals should gradually approach 0 from the positive and negative direction. The iterations continue for the number of estimators, n_estimators.&lt;/p&gt;

&lt;p&gt;Let's repeat the process for a third tree as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y2_train_pred = tree_2.predict(X_train)
y3_train = y2_train - y2_train_pred
tree_3 = DecisionTreeRegressor(max_depth=2, random_state=2)
tree_3.fit(X_train, y3_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This process may continue for dozens, hundreds, or thousands of trees. Under normal circumstances, you would keep going: it takes more than a few trees to transform a weak learner into a strong learner. Since our goal is to understand how gradient boosting works behind the scenes, however, we will move on now that the general idea has been covered.&lt;/p&gt;

&lt;p&gt;6) Sum the results: Summing the results requires making predictions for each tree with the test set as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y1_pred = tree_1.predict(X_test)
y2_pred = tree_2.predict(X_test)
y3_pred = tree_3.predict(X_test)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the predictions are positive and negative differences, summing the predictions should result in predictions that are closer to the target column as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y_pred = y1_pred + y2_pred + y3_pred
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7) Lastly, let's take the square root of the &lt;strong&gt;mean squared error (MSE)&lt;/strong&gt; to obtain the root mean squared error and score the model as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.metrics import mean_squared_error as MSE
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;911.0479538776444
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not bad for a weak learner that isn't yet strong! Now let's try to obtain the same result using scikit-learn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a gradient boosting model in scikit-learn
&lt;/h2&gt;

&lt;p&gt;Let's see whether we can obtain the same result as in the previous section using scikit-learn's GradientBoostingRegressor. This may be done through a few hyperparameter adjustments. The advantage of using GradientBoostingRegressor is that it's much faster to build and easier to implement:&lt;/p&gt;

&lt;p&gt;1) First, import the regressor from the sklearn.ensemble library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.ensemble import GradientBoostingRegressor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) When initializing GradientBoostingRegressor, there are several important hyperparameters. To obtain the same results, it's essential to match max_depth=2 and random_state=2. Furthermore, since there are only three trees, we must have n_estimators=3. Finally, we must set the learning_rate=1.0 hyperparameter. We will have much to say about learning_rate shortly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gbr = GradientBoostingRegressor(max_depth=2, n_estimators=3, random_state=2, learning_rate=1.0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Now that the model has been initialized, it can be fit on the training data and scored against the test data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gbr.fit(X_train, y_train)
y_pred = gbr.predict(X_test)
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;911.0479538776439
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is the same to 11 decimal places!&lt;/p&gt;

&lt;p&gt;Recall that the point of gradient boosting is to build a model with enough trees to transform a weak learner into a strong learner. This is easily done by changing n_estimators, the number of iterations, to a much larger number.&lt;/p&gt;

&lt;p&gt;4) Let's build and score a gradient boosting regressor with 30 estimators:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gbr = GradientBoostingRegressor(max_depth=2, n_estimators=30, random_state=2, learning_rate=1.0)
gbr.fit(X_train, y_train)
y_pred = gbr.predict(X_test)
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;857.1072323426944
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The score is an improvement. Now let's look at 300 estimators:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gbr = GradientBoostingRegressor(max_depth=2, n_estimators=300, random_state=2, learning_rate=1.0)
gbr.fit(X_train, y_train)
y_pred = gbr.predict(X_test)
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;936.3617413678853
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a surprise! The score has gotten worse! Have we been misled? Is gradient boosting not all that it's cracked up to be?&lt;/p&gt;

&lt;p&gt;Whenever you get a surprising result, it's worth double-checking the code. Recall that we set learning_rate=1.0 without saying much about it. So, what happens if we remove learning_rate=1.0 and use the scikit-learn default?&lt;/p&gt;

&lt;p&gt;Let's find out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gbr = GradientBoostingRegressor(max_depth=2, n_estimators=300, random_state=2)
gbr.fit(X_train, y_train)
y_pred = gbr.predict(X_test)
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;653.7456840231495
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Incredible! By using the scikit-learn default for the learning_rate hyperparameter, the score has improved from 936 to 654.&lt;/p&gt;

&lt;p&gt;In the next section, we will learn more about the different gradient boosting hyperparameters with a focus on the learning_rate hyperparameter.&lt;/p&gt;

&lt;h1&gt;
  
  
  Modifying gradient boosting hyperparameters
&lt;/h1&gt;

&lt;p&gt;In this section, we will focus on the learning_rate, the most important gradient boosting hyperparameter, with the possible exception of n_estimators, the number of iterations or trees in the model. We will also survey some tree hyperparameters, and subsample, which results in &lt;em&gt;stochastic gradient boosting&lt;/em&gt;. In addition, we will use RandomizedSearchCV and compare results with XGBoost.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;learning_rate&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the last section, changing the learning_rate value of GradientBoostingRegressor from 1.0 to scikit-learn's default, which is 0.1, resulted in enormous gains.&lt;/p&gt;

&lt;p&gt;learning_rate, also known as the &lt;em&gt;shrinkage&lt;/em&gt;, shrinks the contribution of individual trees so that no tree has too much influence when building the model. If an entire ensemble is built from the errors of one base learner, without careful adjustment of hyperparameters, early trees in the model can have too much influence on subsequent development. learning_rate limits the influence of individual trees. Generally speaking, as n_estimators, the number of trees, goes up, learning_rate should go down.&lt;/p&gt;
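&lt;p&gt;To see exactly where shrinkage enters, here is the from-scratch loop with learning_rate added: only a fraction of each tree's prediction is kept when updating the residuals. As before, a synthetic dataset stands in for the bike rentals set in this sketch:&lt;/p&gt;

```python
# Shrinkage in the from-scratch loop: each tree's contribution is scaled
# by learning_rate, so no single tree dominates. Synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=8, n_informative=5,
                       noise=20, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

learning_rate = 0.1
residuals = y_train.astype(float).copy()
trees = []
for _ in range(300):
    tree = DecisionTreeRegressor(max_depth=2, random_state=2)
    tree.fit(X_train, residuals)
    # keep only learning_rate (10%) of each tree's prediction
    residuals = residuals - learning_rate * tree.predict(X_train)
    trees.append(tree)

# the final prediction scales every tree's contribution the same way
y_pred = learning_rate * np.sum([t.predict(X_test) for t in trees], axis=0)
```

&lt;p&gt;With the smaller per-tree step, many more trees are needed, which is why learning_rate and n_estimators must be tuned together.&lt;/p&gt;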

&lt;p&gt;Determining an optimal learning_rate value requires varying n_estimators. First, let's hold n_estimators constant and see what learning_rate does on its own. learning_rate ranges from 0 to 1. A learning_rate value of 1 means that no adjustments are made. The default value of 0.1 means that the tree's influence is weighted at 10%.&lt;/p&gt;

&lt;p&gt;Here is a reasonable range to start with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;learning_rate_values = [0.001, 0.01, 0.05, 0.1, 0.15, 0.2, 0.3, 0.5, 1.0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we will loop through the values by building and scoring a new GradientBoostingRegressor to see how the scores compare:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for value in learning_rate_values:
    gbr = GradientBoostingRegressor(max_depth=2, n_estimators=300, random_state=2, learning_rate=value)
    gbr.fit(X_train, y_train)
    y_pred = gbr.predict(X_test)
    rmse = MSE(y_test, y_pred)**0.5
    print('Learning Rate:', value, ', Score:', rmse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The learning rate values and scores are as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Learning Rate: 0.001 , Score: 1633.0261400367258
Learning Rate: 0.01 , Score: 831.5430182728547
Learning Rate: 0.05 , Score: 685.0192988749717
Learning Rate: 0.1 , Score: 653.7456840231495
Learning Rate: 0.15 , Score: 687.666134269379
Learning Rate: 0.2 , Score: 664.312804425697
Learning Rate: 0.3 , Score: 689.4190385930236
Learning Rate: 0.5 , Score: 693.8856905068778
Learning Rate: 1.0 , Score: 936.3617413678853
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the output, the default learning_rate value of 0.1 gives the best score for 300 trees.&lt;/p&gt;

&lt;p&gt;Now let's vary n_estimators. Using the preceding code, we can generate learning_rate plots for n_estimators values of 30, 300, and 3,000 trees. Here is the plot for 30 trees:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7rj914wy5urm0lhors6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7rj914wy5urm0lhors6.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 3 – learning_rate plot for 30 trees&lt;/p&gt;

&lt;p&gt;As you can see, with 30 trees, the learning_rate value peaks at around 0.3.&lt;/p&gt;

&lt;p&gt;Now, let's take a look at the learning_rate plot for 3,000 trees:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud3kdsos2n6fj7ohwg10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud3kdsos2n6fj7ohwg10.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 4 – learning_rate plot for 3,000 trees&lt;/p&gt;

&lt;p&gt;With 3,000 trees, the learning_rate value peaks at the second value, which is given as 0.05.&lt;/p&gt;

&lt;p&gt;These graphs highlight the importance of tuning learning_rate and n_estimators together.&lt;/p&gt;
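&lt;p&gt;Plots like Figures 3 and 4 can be generated along these lines; this is a sketch that assumes matplotlib is available and uses a synthetic dataset (and a shortened range of n_estimators values) in place of the bike rentals set:&lt;/p&gt;

```python
# Sketch: score a range of learning_rate values per n_estimators setting
# and plot RMSE against learning rate. Synthetic stand-in data.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error as MSE
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, n_informative=5,
                       noise=20, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

learning_rates = [0.01, 0.05, 0.1, 0.3, 0.5, 1.0]
for n_est in [30, 300]:
    rmses = []
    for lr in learning_rates:
        gbr = GradientBoostingRegressor(max_depth=2, n_estimators=n_est,
                                        learning_rate=lr, random_state=2)
        gbr.fit(X_train, y_train)
        rmses.append(MSE(y_test, gbr.predict(X_test)) ** 0.5)
    plt.plot(learning_rates, rmses, marker='o', label=f'n_estimators={n_est}')

plt.xscale('log')
plt.xlabel('learning_rate')
plt.ylabel('RMSE')
plt.legend()
plt.savefig('learning_rate_plot.png')
```

&lt;p&gt;Comparing the two curves on one figure makes the interaction visible: more trees shift the best learning_rate lower.&lt;/p&gt;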
&lt;h2&gt;
  
  
  Base learner
&lt;/h2&gt;

&lt;p&gt;The initial decision tree in the gradient boosting regressor is called the &lt;strong&gt;base learner&lt;/strong&gt; because it's at the base of the ensemble. It's the first learner in the process. The term &lt;em&gt;learner&lt;/em&gt; here is indicative of a &lt;em&gt;weak learner&lt;/em&gt; transforming into a &lt;em&gt;strong learner&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Although base learners need not be fine-tuned for accuracy, it's certainly possible to tune base learners for gains in accuracy.&lt;/p&gt;

&lt;p&gt;For instance, we can select a max_depth value of 1, 2, 3, or 4 and compare results as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;depths = [None, 1, 2, 3, 4]
for depth in depths:
    gbr = GradientBoostingRegressor(max_depth=depth, n_estimators=300, random_state=2)
    gbr.fit(X_train, y_train)
    y_pred = gbr.predict(X_test)
    rmse = MSE(y_test, y_pred)**0.5
    print('Max Depth:', depth, ', Score:', rmse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Max Depth: None , Score: 867.9366621617327
Max Depth: 1 , Score: 707.8261886858736
Max Depth: 2 , Score: 653.7456840231495
Max Depth: 3 , Score: 646.4045923317708
Max Depth: 4 , Score: 663.048387855927
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A max_depth value of 3 gives the best results.&lt;/p&gt;

&lt;p&gt;Other base learner hyperparameters may be tuned in a similar manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  subsample
&lt;/h2&gt;

&lt;p&gt;subsample is the fraction of samples used to build each tree. Since samples are the rows, a subsample value below 1.0 means that not all rows are included when building each tree. For example, subsample=0.8 selects 80% of the samples for each tree.&lt;/p&gt;

&lt;p&gt;Continuing with max_depth=3, we try a range of subsample percentages to improve results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;samples = [1, 0.9, 0.8, 0.7, 0.6, 0.5]
for sample in samples:
    gbr = GradientBoostingRegressor(max_depth=3, n_estimators=300, subsample=sample, random_state=2)
    gbr.fit(X_train, y_train)
    y_pred = gbr.predict(X_test)
    rmse = MSE(y_test, y_pred)**0.5
    print('Subsample:', sample, ', Score:', rmse)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subsample: 1 , Score: 646.4045923317708
Subsample: 0.9 , Score: 620.1819001443569
Subsample: 0.8 , Score: 617.2355650565677
Subsample: 0.7 , Score: 612.9879156983139
Subsample: 0.6 , Score: 622.6385116402317
Subsample: 0.5 , Score: 626.9974073227554
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A subsample value of 0.7 with 300 trees and max_depth of 3 produces the best score yet.&lt;/p&gt;

&lt;p&gt;When subsample is not equal to 1.0, the model is classified as &lt;strong&gt;stochastic gradient boosting&lt;/strong&gt;, where &lt;em&gt;stochastic&lt;/em&gt; indicates that some randomness is inherent in the model.&lt;/p&gt;

&lt;h2&gt;
  
  
  RandomizedSearchCV
&lt;/h2&gt;

&lt;p&gt;We have a good working model, but we have not yet performed a grid search. Our preliminary analysis indicates that a grid search centered around max_depth=3, subsample=0.7, n_estimators=300, and learning_rate = 0.1 is a good place to start. We have already shown that as n_estimators goes up, learning_rate should go down:&lt;/p&gt;

&lt;p&gt;1) Here is a possible starting point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;params={'subsample':[0.65, 0.7, 0.75],
        'n_estimators':[300, 500, 1000],
        'learning_rate':[0.05, 0.075, 0.1]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since n_estimators is going up from the starting value of 300, learning_rate is going down from the starting value of 0.1. Let's keep max_depth=3 to limit the variance.&lt;/p&gt;

&lt;p&gt;With 27 possible combinations of hyperparameters, we use RandomizedSearchCV to try 10 of these combinations in the hopes of finding a good model.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While 27 combinations are feasible with GridSearchCV, at some point, you will end up with too many possibilities and RandomizedSearchCV will become essential. We use RandomizedSearchCV here for practice and to speed up computations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;2) Let's import RandomizedSearchCV and initialize a gradient boosting model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.model_selection import RandomizedSearchCV
gbr = GradientBoostingRegressor(max_depth=3, random_state=2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) Next, initialize RandomizedSearchCV with gbr and params as inputs in addition to the number of iterations, the scoring, and the number of folds. Recall that n_jobs=-1 may speed up computations and random_state=2 ensures the consistency of results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rand_reg = RandomizedSearchCV(gbr, params, n_iter=10,
scoring='neg_mean_squared_error', cv=5, n_jobs=-1,
random_state=2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4) Now fit the model on the training set and obtain the best parameters and scores:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rand_reg.fit(X_train, y_train)
best_model = rand_reg.best_estimator_
best_params = rand_reg.best_params_
print("Best params:", best_params)
best_score = np.sqrt(-rand_reg.best_score_)
print("Training score: {:.3f}".format(best_score))
y_pred = best_model.predict(X_test)
rmse_test = MSE(y_test, y_pred)**0.5
print('Test set score: {:.3f}'.format(rmse_test))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Best params: {'learning_rate': 0.05, 'n_estimators': 300,
'subsample': 0.65}
Training score: 636.200
Test set score: 625.985
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, it's worth experimenting by changing parameters individually or in pairs. Even though the best model currently has n_estimators=300, it's certainly possible that raising this hyperparameter will obtain better results with careful adjustment of the learning_rate value. subsample may be experimented with as well.&lt;/p&gt;

&lt;p&gt;5) After a few rounds of experimentation, we obtained the following model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gbr = GradientBoostingRegressor(max_depth=3, n_estimators=1600, subsample=0.75, learning_rate=0.02, random_state=2)
gbr.fit(X_train, y_train)
y_pred = gbr.predict(X_test)
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;596.9544588974487
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a larger value for n_estimators at 1600, a smaller learning_rate value at 0.02, a comparable subsample value of 0.75, and the same max_depth value of 3, we obtained the best &lt;strong&gt;Root Mean Square Error (RMSE)&lt;/strong&gt; yet at 597.&lt;/p&gt;

&lt;p&gt;It may be possible to do better. We encourage you to try!&lt;/p&gt;

&lt;p&gt;Now, let's see how XGBoost differs from gradient boosting using the same hyperparameters covered thus far.&lt;/p&gt;

&lt;h2&gt;
  
  
  XGBoost
&lt;/h2&gt;

&lt;p&gt;XGBoost is an advanced version of gradient boosting with the same general structure, meaning that it transforms weak learners into strong learners by training each new tree on the residuals of the previous trees and summing the results.&lt;/p&gt;
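&lt;p&gt;To make that structure concrete, here is a minimal from-scratch sketch of the boosting loop on a synthetic dataset: each new tree is fit to the residuals of the current ensemble, and its shrunken predictions are added to the running total. This illustrates the general idea, not XGBoost's actual implementation:&lt;/p&gt;

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=15, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

learning_rate = 0.1
prediction = np.zeros(len(y_train))      # start from a prediction of 0
test_prediction = np.zeros(len(y_test))
trees = []
for _ in range(100):
    residuals = y_train - prediction     # errors left by the ensemble so far
    tree = DecisionTreeRegressor(max_depth=2, random_state=2)
    tree.fit(X_train, residuals)         # next weak learner fits the residuals
    prediction += learning_rate * tree.predict(X_train)
    test_prediction += learning_rate * tree.predict(X_test)
    trees.append(tree)

rmse = np.sqrt(np.mean((y_test - test_prediction) ** 2))
print(round(rmse, 3))
```

&lt;p&gt;Each weak learner only needs to correct what the ensemble still gets wrong, which is why summing many shallow trees can produce a strong learner.&lt;/p&gt;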

&lt;p&gt;The only difference in hyperparameters from the last section is that XGBoost refers to learning_rate as eta.&lt;/p&gt;

&lt;p&gt;Let's build an XGBoost regressor with the same hyperparameters to compare the results.&lt;/p&gt;

&lt;p&gt;Import XGBRegressor from xgboost, and then initialize and score the model as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from xgboost import XGBRegressor
xg_reg = XGBRegressor(max_depth=3, n_estimators=1600, eta=0.02,
subsample=0.75, random_state=2)
xg_reg.fit(X_train, y_train)
y_pred = xg_reg.predict(X_test)
MSE(y_test, y_pred)**0.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;584.339544309016
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The score is better. &lt;/p&gt;

&lt;p&gt;Accuracy and speed are the two most important concepts when building machine learning models, and we have shown multiple times that XGBoost is very accurate. XGBoost is preferred over gradient boosting in general because it consistently delivers better results and because it's faster, as demonstrated by the following case study.&lt;/p&gt;

&lt;h1&gt;
  
  
  Approaching big data – gradient boosting versus XGBoost
&lt;/h1&gt;

&lt;p&gt;In the real world, datasets can be enormous, with trillions of data points. Limiting work to a single computer can be disadvantageous due to its limited resources. When working with big data, the cloud is often used to take advantage of parallel computing.&lt;/p&gt;

&lt;p&gt;Datasets are big when they push the limits of computation. In this section, we examine &lt;strong&gt;exoplanets&lt;/strong&gt; over time. The dataset has 5,087 rows and 3,198 columns that record light flux at different times of a star's life cycle. Multiplying rows and columns together results in over 16 million data points. Using a baseline of 100 trees, we need more than 1.6 billion data points to build a model.&lt;/p&gt;

&lt;p&gt;On my 2013 MacBook Air, this section involved wait times of about 5 minutes. Newer computers should be faster. I have chosen the exoplanet dataset so that wait times play a significant role without tying up your computer for very long.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing the exoplanet dataset
&lt;/h2&gt;

&lt;p&gt;The exoplanet dataset is taken from Kaggle and dates from around 2017: &lt;a href="https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data" rel="noopener noreferrer"&gt;https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data&lt;/a&gt;. The dataset contains information about the light of stars. Each row is an individual star and the columns reveal different light patterns over time. In addition to light patterns, an exoplanet column is labeled 2 if the star hosts an exoplanet; otherwise, it is labeled 1.&lt;/p&gt;

&lt;p&gt;The dataset records the light flux from thousands of stars. &lt;strong&gt;Light flux&lt;/strong&gt;, often referred to as &lt;strong&gt;luminous flux&lt;/strong&gt;, is the perceived brightness of a star.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The perceived brightness is different from the actual brightness. For instance, an incredibly bright star that is very far away may have a small luminous flux (it looks dim), while a moderately bright star that is very close, like the Sun, may have a large luminous flux (it looks bright).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When the light flux of an individual star changes periodically, it is possible that the star is being orbited by an exoplanet. The assumption is that when an exoplanet orbits in front of a star, it blocks a small fraction of the light, reducing the perceived brightness by a very slight amount.&lt;/p&gt;
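&lt;p&gt;A quick back-of-the-envelope calculation shows just how slight. The fraction of light blocked is roughly the squared ratio of the planet's radius to the star's radius; the radii below are approximate published values, used purely for illustration:&lt;/p&gt;

```python
# Fraction of starlight blocked by a transiting planet: (R_planet / R_star) ** 2
# Approximate radii in kilometers (illustrative values)
r_earth = 6_371
r_jupiter = 69_911
r_sun = 696_340

earth_depth = (r_earth / r_sun) ** 2
jupiter_depth = (r_jupiter / r_sun) ** 2
print(f"Earth-sized planet: {earth_depth:.6%} dip")
print(f"Jupiter-sized planet: {jupiter_depth:.4%} dip")
```

&lt;p&gt;An Earth-sized planet dims a Sun-like star by less than 0.01%, which is why detecting these dips requires such precise flux measurements.&lt;/p&gt;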

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finding exoplanets is rare. The predictive column, on whether a star hosts an exoplanet or not, has very few positive cases, resulting in an imbalanced dataset. Imbalanced datasets require extra precautions. &lt;/p&gt;
&lt;/blockquote&gt;
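&lt;p&gt;One simple precaution is to check the class balance, and the accuracy of a majority-class baseline, before training anything. The counts below are hypothetical, chosen only to mirror the dataset's rarity of positive cases:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical labels mirroring an imbalanced dataset:
# 2 = exoplanet star (rare), 1 = non-exoplanet star (common)
labels = [2] * 37 + [1] * 5050

counts = Counter(labels)
# Accuracy obtained by always predicting the majority class
majority_accuracy = counts.most_common(1)[0][1] / len(labels)
print(counts)
print(f"Majority-class baseline accuracy: {majority_accuracy:.4f}")
```

&lt;p&gt;Any model must beat this baseline to be useful; on data like this, accuracy alone can be deeply misleading.&lt;/p&gt;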

&lt;p&gt;Next, let's access the exoplanet dataset and prepare it for machine learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preprocessing the exoplanet dataset
&lt;/h2&gt;

&lt;p&gt;The exoplanet dataset has been uploaded to our GitHub page at &lt;a href="https://github.com/PacktPublishing/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn/tree/master/Chapter04" rel="noopener noreferrer"&gt;https://github.com/PacktPublishing/Hands-On-Gradient-Boosting-with-XGBoost-and-Scikit-learn/tree/master/Chapter04&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here are the steps to load and preprocess the exoplanet dataset for machine learning:&lt;/p&gt;

&lt;p&gt;1) Download exoplanets.csv in the same folder as your Jupyter Notebook. Then, open the file and take a look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df = pd.read_csv('exoplanets.csv')
df.head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The DataFrame will look as shown in the following figure:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83rulm333rg3qgzvp0vm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83rulm333rg3qgzvp0vm.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Figure 5 – Exoplanet DataFrame&lt;/p&gt;

&lt;p&gt;Not all columns are shown due to space limitations. The flux columns are floats, while the Label column is 2 for an exoplanet star and 1 for a non-exoplanet star.&lt;/p&gt;

&lt;p&gt;2) Let's confirm that all columns are numerical with df.info():&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df.info()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;class 'pandas.core.frame.DataFrame'&amp;gt;
RangeIndex: 5087 entries, 0 to 5086
Columns: 3198 entries, LABEL to FLUX.3197
dtypes: float64(3197), int64(1)
memory usage: 124.1 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the output, 3197 columns are floats and 1 column is an int, so all columns are numerical.&lt;/p&gt;

&lt;p&gt;3) Now, let's confirm the number of null values with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df.isnull().sum().sum()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output reveals that there are no null values.&lt;/p&gt;

&lt;p&gt;4) Since all columns are numerical with no null values, we may split the data into training and test sets. Note that the 0th column is the target column, y, and all other columns are the predictor columns, X:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X = df.iloc[:,1:]
y = df.iloc[:,0]
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's time to build a gradient boosting classifier to predict whether stars host exoplanets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building gradient boosting classifiers
&lt;/h2&gt;

&lt;p&gt;Gradient boosting classifiers work in the same manner as gradient boosting regressors. The difference is primarily in the scoring.&lt;/p&gt;

&lt;p&gt;Let's start by importing GradientBoostingClassifier and XGBClassifier, in addition to accuracy_score, so that we may compare both models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need a way to compare models using a timer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Timing models
&lt;/h2&gt;

&lt;p&gt;Python comes with a time library that can be used to mark time. The general idea is to mark the time before and after a computation. The difference between these times tells us how long the computation took.&lt;/p&gt;

&lt;p&gt;The time library is imported as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the time library, the .time() method marks time in seconds.&lt;/p&gt;

&lt;p&gt;As an example, see how long it takes to run df.info() by assigning start and end times before and after the computation using time.time():&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;start = time.time()
df.info()
end = time.time()
elapsed = end - start
print('\nRun Time: ' + str(elapsed) + ' seconds.')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;class 'pandas.core.frame.DataFrame'&amp;gt;
RangeIndex: 5087 entries, 0 to 5086
Columns: 3198 entries, LABEL to FLUX.3197
dtypes: float64(3197), int64(1)
memory usage: 124.1 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The runtime is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run Time: 0.0525362491607666 seconds.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your results will differ from ours, but hopefully they're in the same ballpark.&lt;/p&gt;

&lt;p&gt;Let's now compare the speed of GradientBoostingClassifier and XGBClassifier on the exoplanet dataset, using the preceding code to mark time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Jupyter Notebooks come with magic functions, denoted by the % sign before a command. %timeit is one such magic function. Instead of computing how long it takes to run the code once, %timeit computes how long it takes to run code over multiple runs. See &lt;a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html" rel="noopener noreferrer"&gt;https://ipython.readthedocs.io/en/stable/interactive/magics.html&lt;/a&gt; for more information on magic functions.&lt;/p&gt;
&lt;/blockquote&gt;
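&lt;p&gt;Outside a notebook, the standard library's timeit module provides a similar averaged measurement; the snippet below is a generic sketch, not tied to the exoplanet code:&lt;/p&gt;

```python
import timeit

# time.time() measures a single run; timeit averages the cost of many runs,
# which is closer to what %timeit reports in Jupyter.
total = timeit.timeit("sum(range(1000))", number=10_000)
print(f"Average per run: {total / 10_000:.8f} seconds")
```

&lt;p&gt;Averaging over many runs smooths out the noise that a single time.time() measurement can suffer from.&lt;/p&gt;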

&lt;h2&gt;
  
  
  Comparing speed
&lt;/h2&gt;

&lt;p&gt;It's time to race GradientBoostingClassifier and XGBClassifier on the exoplanet dataset. We have set max_depth=2 and n_estimators=100 to limit the size of the model. Let's start with GradientBoostingClassifier:&lt;/p&gt;

&lt;p&gt;1) First, we will mark the start time. After building and scoring the model, we will mark the end time. The following code may take around 5 minutes to run depending on the speed of your computer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;start = time.time()
gbr = GradientBoostingClassifier(n_estimators=100,
max_depth=2, random_state=2)
gbr.fit(X_train, y_train)
y_pred = gbr.predict(X_test)
score = accuracy_score(y_pred, y_test)
print('Score: ' + str(score))
end = time.time()
elapsed = end - start
print('\nRun Time: ' + str(elapsed) + ' seconds')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Score: 0.9874213836477987
Run Time: 317.6318619251251 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GradientBoostingClassifier took over 5 minutes to run on my 2013 MacBook Air. Not bad for more than a billion data points on an older computer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While an accuracy score of 98.7% is usually outstanding, this is not the case with imbalanced datasets.&lt;/p&gt;
&lt;/blockquote&gt;
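&lt;p&gt;To see why, consider a toy example with made-up labels and predictions (not the exoplanet results): a classifier that never predicts an exoplanet can still score 97% accuracy while achieving zero recall on the positive class:&lt;/p&gt;

```python
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

# Made-up example: 100 stars, 3 hosting exoplanets (label 2),
# and a classifier that predicts "no exoplanet" (label 1) every time.
y_true = [2] * 3 + [1] * 97
y_pred = [1] * 100

print(accuracy_score(y_true, y_pred))             # 0.97
print(recall_score(y_true, y_pred, pos_label=2))  # 0.0: every exoplanet missed
print(confusion_matrix(y_true, y_pred, labels=[1, 2]))
```

&lt;p&gt;Metrics such as recall, precision, or the confusion matrix reveal what accuracy hides on imbalanced data.&lt;/p&gt;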

&lt;p&gt;2) Next, we will build an XGBClassifier model with the same hyperparameters and mark the time in the same manner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;start = time.time()
xg_reg = XGBClassifier(n_estimators=100, max_depth=2,
random_state=2)
xg_reg.fit(X_train, y_train)
y_pred = xg_reg.predict(X_test)
score = accuracy_score(y_pred, y_test)
print('Score: ' + str(score))
end = time.time()
elapsed = end - start
print('Run Time: ' + str(elapsed) + ' seconds')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Score: 0.9913522012578616
Run Time: 118.90568995475769 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On my 2013 MacBook Air, XGBoost took under 2 minutes, making it more than twice as fast. It's also more accurate by half a percentage point.&lt;/p&gt;

&lt;p&gt;When it comes to big data, an algorithm twice as fast can save weeks or months of computational time and resources. This advantage is huge in the world of big data.&lt;/p&gt;

&lt;p&gt;In the world of boosting, XGBoost is the model of choice due to its unparalleled speed and impressive accuracy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I recently purchased a 2020 MacBook Pro and updated all software. The difference in time using the same code is staggering:&lt;/p&gt;

&lt;p&gt;Gradient Boosting Run Time: 197.38 seconds&lt;/p&gt;

&lt;p&gt;XGBoost Run Time: 8.66 seconds&lt;/p&gt;

&lt;p&gt;More than a 10-fold difference!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;In this article, you learned the difference between bagging and boosting. You learned how gradient boosting works by building a gradient boosting regressor from scratch. You implemented a variety of gradient boosting hyperparameters, including learning_rate, n_estimators, max_depth, and subsample, the last of which results in stochastic gradient boosting. Finally, you used big data to predict whether stars host exoplanets by comparing the run times of GradientBoostingClassifier and XGBClassifier, with XGBClassifier emerging as twice to more than ten times as fast, and more accurate.&lt;/p&gt;

&lt;p&gt;The advantage of learning these skills is that you now understand when to apply XGBoost rather than similar machine learning algorithms such as gradient boosting. You can now build stronger XGBoost and gradient boosting models by properly taking advantage of core hyperparameters, including n_estimators and learning_rate. Furthermore, you have developed the capacity to time all computations instead of relying on intuition.&lt;/p&gt;

&lt;p&gt;Learn how to build powerful XGBoost models with Python and scikit-learn, and discover expert insights from XGBoost Kaggle masters in Corey Wade's book &lt;a href="https://packt.live/2P3oz2j" rel="noopener noreferrer"&gt;Hands-On Gradient Boosting with XGBoost&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>dataanalytics</category>
      <category>python</category>
    </item>
    <item>
      <title>Editing with VS Code </title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Sat, 13 Feb 2021 20:58:21 +0000</pubDate>
      <link>https://dev.to/packt/editing-with-vs-code-1m6c</link>
      <guid>https://dev.to/packt/editing-with-vs-code-1m6c</guid>
      <description>&lt;p&gt;&lt;a href="http://packt.live/37g6IeO"&gt;Visual Studio Code&lt;/a&gt; or, as it is generally known, VS Code, is one of the most popular coding tools today. Being neither a pure editor nor a full-blown IDE, but something in between, it manages to be lightweight and fast, yet provides support for several languages through a set of easily installable VS Code extensions. It's cross-platform too, supporting Windows, macOS, and Linux operating systems, and provides a range of features covering the complete development life-cycle.  &lt;/p&gt;

&lt;p&gt;The philosophy underlying VS Code could be summed up as: take what you need. The extension framework provides flexibility and has created a marketplace for an extensive range of tooling features. It also supports the automation of build processes, offering strong integration with Git for version control management. All this allows developers to install and build an environment tailored to their specific needs. &lt;/p&gt;

&lt;p&gt;In this short introduction, we’ll take you through just some of the features that make VS Code such a compelling proposition for professional software developers.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up VS Code&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;To start using VS Code visit &lt;a href="https://code.visualstudio.com"&gt;https://code.visualstudio.com&lt;/a&gt;, from where you can download the version for your OS. If you would like to download for another OS, you can click on other platforms or press the down arrow button that shows next to the OS name, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---3hc7ago--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6bxegw4md12vmrzl2tnd.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---3hc7ago--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6bxegw4md12vmrzl2tnd.JPG" alt="1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 1 – Different versions of VS Code for different OSes&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;There are two versions available for download. The &lt;strong&gt;stable&lt;/strong&gt; version is updated every month with new features, whereas the &lt;strong&gt;insiders&lt;/strong&gt; version is a nightly build that provides an early peek at upcoming features. Both versions can run side by side on the same machine. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launching VS Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve downloaded VS Code, follow the guided installation wizard to set up your environment. Next, let's look at some of the different ways of launching VS Code and explore some command-line options. &lt;/p&gt;

&lt;p&gt;The simplest way to start VS Code is by running the code . command.  &lt;/p&gt;

&lt;p&gt;This will open up a new instance of VS Code. If this command does not work in your macOS installation, you can follow the next steps. For Linux, visit &lt;a href="https://code.visualstudio.com"&gt;https://code.visualstudio.com&lt;/a&gt; and look for &lt;strong&gt;Setup | Linux&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up the command line for macOS&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;If you already have a Bash profile, you can skip Steps 1 and 2. Otherwise, proceed as follows: &lt;/p&gt;

&lt;p&gt;1) Write the cd ~/ command to go to your home folder. &lt;/p&gt;

&lt;p&gt;2) Write the touch .bash_profile command to create a new file. &lt;/p&gt;

&lt;p&gt;3) Then, on the terminal window, write the following commands: &lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile &lt;/p&gt;

&lt;p&gt;#Add Visual Studio Code (code) &lt;/p&gt;

&lt;p&gt;Export PATH=''\$PATH:/Applications/Visual Studio Code. &lt;/p&gt;

&lt;p&gt;app/Contents/Resources/app/bin &lt;/p&gt;

&lt;p&gt;EOF &lt;/p&gt;

&lt;p&gt;4) Close the terminal window and reopen to check whether the code . command works. &lt;/p&gt;

&lt;p&gt;Now that your command line is set up and working, let's look at some different ways of launching VS Code. &lt;/p&gt;

&lt;p&gt;The following command launches VS Code and opens the directory from which it is run: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;code .&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The -r variant opens the specified workspace file in an already running VS Code instance; you can replace the workspace file with any file you would like to edit, as illustrated in the following code snippet: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;code -r "c:\My VS Code Projects\my project workspace.code-workspace"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The -n flag opens a new instance of VS Code, as illustrated in the following code snippet: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;code -n "c:\My VS Code Projects\Project 1"&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;If you would like to open a particular file and go to a specific row and column, use the -g flag. The following command will launch a new instance of VS Code, open the launch.json file, and place the cursor on row 5, column 10: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;code -n -g "c:\My VS Code Projects\Project 1\.vscode\launch.json":5:10&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring VS Code Layout&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;In this section, we will explore the editor's layout and the different panes. To get you familiar with the editor, we will go through the different sections of the editor and explain their utility. The layout of the editor can be seen in the following screenshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lszLYhX---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1f40gi9bpp6xnv22yozu.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lszLYhX---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1f40gi9bpp6xnv22yozu.JPG" alt="2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 2 – VS Code editor layout&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The most prominent section of the editor is the big pane on the right. This is where you edit code. Files selected from the EXPLORER are opened in this pane. You can open multiple files for editing at the same time. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Editing basics&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;VS Code comes with some great options to enable you to code faster. Let's look at some of them. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generating HTML&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The Emmet 2.0 extension is built into the editor, helping you quickly write HTML code. For example, to create a table with five rows of two columns each, you can use the following abbreviation: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;table&amp;gt;tr*5&amp;gt;td*2&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This is what the table looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aV_vykWB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/920msmi5tmiwmgmpe7mf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aV_vykWB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/920msmi5tmiwmgmpe7mf.JPG" alt="3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 3 – Generate HTML using the Emmet extension&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;As shown in the preceding screenshot, you will notice that VS Code starts showing you the HTML it will generate. Press &lt;em&gt;Enter&lt;/em&gt; to generate the HTML. &lt;/p&gt;

&lt;p&gt;In case you want to add a class attribute to the tag, use a dot (.) after the tag name. An example of this can be seen in the following code snippet: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;table&amp;gt;tr*5.myclass&amp;gt;td*2&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This will generate the same table as in &lt;em&gt;Figure 3&lt;/em&gt;, with myclass placed as a class attribute on each &amp;lt;tr&amp;gt; tag. The cursor will be placed inside the first &amp;lt;td&amp;gt; tag; use &lt;em&gt;Tab&lt;/em&gt; to navigate to the next &amp;lt;td&amp;gt; tags for faster editing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-cursor editing&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;One of the most important features of VS Code is the multi-select cursor. It has several variants, which we will explore now: &lt;/p&gt;

&lt;p&gt;1) &lt;em&gt;Alt/Option + Ctrl/Command + Down Arrow Key&lt;/em&gt;: This option will allow you to place a cursor on multiple lines at the same place. In the following screenshot, notice how the cursor spans five lines:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---C9fR8tc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gids1vhuumtflt55mwxb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---C9fR8tc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gids1vhuumtflt55mwxb.JPG" alt="4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 4 – Multi-cursor editing on the same position&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;2) &lt;em&gt;Ctrl/Command + Shift + L&lt;/em&gt;: This command will allow you to select multiple occurrences of the same text and edit at the same time. Pressing the arrow key will keep the cursors active and allow quick multi-cursor editing. Press &lt;em&gt;Esc&lt;/em&gt; to remove the cursors. This is illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rpyh3AIF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ns9jo1flhc5ll2s81tn3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rpyh3AIF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ns9jo1flhc5ll2s81tn3.JPG" alt="5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 5 – Multi-cursor editing of the same text&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;3) &lt;em&gt;Alt/Option + mouse click&lt;/em&gt;: This option allows you to place cursors at specific points and edit at the same time. Press &lt;em&gt;Esc&lt;/em&gt; to remove the cursor. This is illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lEbNv2RC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3davfj81pvfeqm743kgf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lEbNv2RC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3davfj81pvfeqm743kgf.JPG" alt="6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 6 – Multi-cursor editing by placing the cursor on mouse clicks&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;4) &lt;em&gt;Ctrl/Command + D&lt;/em&gt;: Highlight a text item and then, on each press of &lt;em&gt;Ctrl/Command+D&lt;/em&gt;, the system moves the cursor to select the same text item. To skip a specific selection, press &lt;em&gt;Ctrl/Command + K&lt;/em&gt; and then continue with &lt;em&gt;Ctrl/Command + D&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;5) &lt;em&gt;Alt/Option + Shift + Down Arrow Key&lt;/em&gt;: Place the cursor on a particular line and press the command to duplicate the same line. &lt;/p&gt;

&lt;p&gt;6) &lt;em&gt;Alt/Option + Down Arrow Key&lt;/em&gt;: Place the cursor on a particular line and press the command to move the line down. This also works for multiple selected lines. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code refactoring&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;While writing code, there are usually opportunities to further optimize the code you’ve written to increase readability and maintainability. Readable code is easier to change. It allows other developers to easily find the required code section and make changes as required. In a software development life-cycle, having to frequently maintain or change code is more time consuming than first writing it. &lt;/p&gt;

&lt;p&gt;&lt;a href="http://packt.live/37g6IeO"&gt;VS Code&lt;/a&gt; provides some nice code refactoring features. Support for JavaScript and TypeScript is built into the tool. Let's go through a few examples. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extracting to a constant&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Once you select a section of code, a light bulb appears on the left. Clicking it opens a context menu with some refactoring options. If you choose the Extract to constant in module scope option, VS Code will create a constant from the selected string literal (for example, 'Product List!') and replace the literal with it, shown as follows: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hszPQkDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fs2v8ud7o1inwxwtkow3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hszPQkDq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fs2v8ud7o1inwxwtkow3.JPG" alt="7"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 7 – Extract to constant&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The following screenshot shows the refactored code. Here, string literals have been replaced with a constant variable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cYGkU40r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1b6uck5jmmey0lcxvwav.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cYGkU40r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1b6uck5jmmey0lcxvwav.JPG" alt="8"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 8 – Extract to constant refactored code&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Apart from extract to constant, another option is extracting code to a method. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extracting to a method&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Often while writing code, you will see a piece of code that can be reused. Extracting that code into a method avoids duplication and enhances maintainability, since future changes are required in just one place rather than several. &lt;/p&gt;

&lt;p&gt;VS Code provides a very easy-to-use and quick way to extract your code into reusable methods. &lt;/p&gt;

&lt;p&gt;In the following example, the displayed success message is selected. A light bulb appears on the left and shows an &lt;strong&gt;Extract to method in class&lt;/strong&gt; option: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lIxS7EEE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xk8urh4goxugjwwgj2av.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lIxS7EEE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xk8urh4goxugjwwgj2av.JPG" alt="9"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 9 – Reusable code can be extracted to a method&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Once you select this option, VS Code will ask for a method name as an input, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y1vFsF1j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r7bnd9qpp7gmvuav7sa9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y1vFsF1j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r7bnd9qpp7gmvuav7sa9.JPG" alt="10"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 10 – Reusable code can be extracted to a method&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Once you enter the method name, as shown in the preceding screenshot, the selected piece of code is extracted into a method, and a call to that method is placed where the code used to be. &lt;/p&gt;
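&lt;p&gt;The following hypothetical sketch shows the shape of the result; the function name and message are illustrative, not the code in the screenshots:&lt;/p&gt;

```javascript
// After "Extract to method in class": the selected success-message code
// now lives in one reusable function (illustrative names)
function showSuccessMessage(itemName) {
  // Extracted method: builds the message once, callable from anywhere
  return `'${itemName}' has been saved successfully.`;
}

function saveProduct(product) {
  // ...persist the product...
  // A method call replaces the code that was selected for extraction
  return showSuccessMessage(product.name);
}

console.log(saveProduct({ name: 'Laptop' }));
```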

&lt;p&gt;&lt;strong&gt;Renaming symbols&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;While writing code, you might feel the need to rename a variable. Such variables are often used in many places, so Find and Replace can be laborious and error-prone. &lt;/p&gt;

&lt;p&gt;For this, VS Code provides a quick &lt;strong&gt;Rename Symbol&lt;/strong&gt; option. &lt;/p&gt;

&lt;p&gt;Select the variable to be renamed and press &lt;em&gt;F2&lt;/em&gt;. VS Code will ask for the new variable name, as illustrated in the following screenshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CpVdmkWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b6ptuf4cvho7jxpb7ktx.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CpVdmkWF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b6ptuf4cvho7jxpb7ktx.JPG" alt="11"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 11 – Rename variable using F2&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;On pressing &lt;em&gt;Enter&lt;/em&gt;, the variable name is replaced throughout the file: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rJb-jb0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pfgsx1emm3v6zxgw182h.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rJb-jb0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pfgsx1emm3v6zxgw182h.JPG" alt="12"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 12 – Variable name changed to newVariableName&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The preceding screenshot shows pageTitle changed to newVariableName. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactoring extensions&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Furthermore, through extensions, VS Code supports refactoring for many other languages. You can search for a refactoring extension for your language on the VS Code Extension Marketplace. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snippets&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;To help you code faster, VS Code offers code snippets: predefined code templates that VS Code suggests while you write code. The following screenshot shows an example of a method template; on pressing &lt;em&gt;Tab&lt;/em&gt;, the code on the right will be inserted: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M4CAbzMY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jfgomnc4yctinzlk9al9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M4CAbzMY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jfgomnc4yctinzlk9al9.JPG" alt="15"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 13 – Code snippets&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Apart from the predefined templates, several extensions are available that come with their own code snippets. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom snippets&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Snippets help you write repetitive code quickly, and if the out-of-the-box snippets do not satisfy your requirements, VS Code provides the option to define your own custom snippets, called &lt;strong&gt;User Snippets&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Let's create a user snippet for HTML code. &lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;File&lt;/strong&gt; | &lt;strong&gt;Preferences&lt;/strong&gt; | &lt;strong&gt;User Snippets&lt;/strong&gt;. The command palette will pop up. Enter html and press &lt;em&gt;Enter&lt;/em&gt;, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xjg-T_o9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pir6zep7bwpzy8u205lo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xjg-T_o9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pir6zep7bwpzy8u205lo.JPG" alt="16"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 14 – Create custom HTML code snippets&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The html.json file contains the HTML user snippets. We will create a snippet to generate a table. The list that follows details the different parts of a user snippet. &lt;/p&gt;

&lt;p&gt;In our example, the following applies: &lt;/p&gt;

&lt;p&gt;1) table-snippet: This is the name of the snippet. &lt;/p&gt;

&lt;p&gt;2) prefix: The name mentioned in this section is used to call the snippet. &lt;/p&gt;

&lt;p&gt;3) description: This will be displayed in the pop-up window. &lt;/p&gt;

&lt;p&gt;4) body: This is where you write your code template. It's an array containing the lines of code to be inserted when you call the snippet. \t is used for code indentation. &lt;/p&gt;

&lt;p&gt;Copy and paste the following code into an html.json file: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mN46zPHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fpkq2bwd4y3i5yfgvhtn.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mN46zPHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fpkq2bwd4y3i5yfgvhtn.JPG" alt="Capturxxe"&gt;&lt;/a&gt;&lt;/p&gt;
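&lt;p&gt;The screenshot's exact contents aren't reproduced here, but a user snippet consistent with the parts described above (name table-snippet, prefix mytable) would look something like the following sketch; the table markup in the body is illustrative:&lt;/p&gt;

```json
{
  "table-snippet": {
    "prefix": "mytable",
    "description": "Insert a basic HTML table",
    "body": [
      "&lt;table&gt;",
      "\t&lt;tr&gt;",
      "\t\t&lt;th&gt;Header 1&lt;/th&gt;",
      "\t\t&lt;th&gt;Header 2&lt;/th&gt;",
      "\t&lt;/tr&gt;",
      "\t&lt;tr&gt;",
      "\t\t&lt;td&gt;Data 1&lt;/td&gt;",
      "\t\t&lt;td&gt;Data 2&lt;/td&gt;",
      "\t&lt;/tr&gt;",
      "&lt;/table&gt;"
    ]
  }
}
```

&lt;p&gt;Typing the prefix mytable in an HTML file and pressing &lt;em&gt;Tab&lt;/em&gt; expands the body array line by line, with each \t rendered as one level of indentation.&lt;/p&gt;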

&lt;p&gt;Our snippet is now created; next, we will call it. If the snippet does not appear for you, restart VS Code. &lt;/p&gt;

&lt;p&gt;As soon as you start typing mytable, VS Code suggests the code snippet you created, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cyD4q1zy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o0jzuqiv6lb8as7povoq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cyD4q1zy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o0jzuqiv6lb8as7povoq.JPG" alt="17"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 15 – Calling the custom HTML code snippet created earlier&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;On pressing &lt;em&gt;Tab&lt;/em&gt;, the code will be inserted, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ix1f1SLc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dou1y5kcbhe2pr4mnxkq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ix1f1SLc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dou1y5kcbhe2pr4mnxkq.JPG" alt="18"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 16 – Code created from the custom HTML code snippet&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The preceding screenshot shows the table generated from the &lt;strong&gt;User Snippet&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code navigation&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;While writing code, developers are often working with several files at the same time. Quickly navigating between files or within the same file is an important capability.  &lt;/p&gt;

&lt;p&gt;VS Code offers multiple options for code navigation; let's explore them one by one. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go to line&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;To move the cursor to a particular line and column, use the &lt;strong&gt;Go to Line (:)&lt;/strong&gt; command, as illustrated in the following screenshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k7nvG4-5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1207vlk699tqjpex3abp.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k7nvG4-5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1207vlk699tqjpex3abp.JPG" alt="19"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 17 – Using the command palette to move cursor to a line and column&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go to symbol&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;To show the symbols used in your code and to navigate directly to a selected symbol, you can use &lt;strong&gt;Go to Symbol (@) in a File.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Also, while you are traversing, VS Code will highlight the code section, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kT4VFuko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h7wn3xve0n34c3yq8awk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kT4VFuko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h7wn3xve0n34c3yq8awk.JPG" alt="20"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 18 – Using the command palette to jump to a section of code&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;In the case of JavaScript, the command palette will show methods, as illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xsVn3BkH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/31pr422x7sig6md3jdl2.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xsVn3BkH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/31pr422x7sig6md3jdl2.JPG" alt="22"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 19 – Jump to code section in a JavaScript example&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using commands for navigation&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Here are some useful commands for quick code navigation: &lt;/p&gt;

&lt;p&gt;1) &lt;em&gt;Ctrl/Command + F12:&lt;/em&gt; Use this command to select an object and jump to its implementation. &lt;/p&gt;

&lt;p&gt;2) &lt;em&gt;Ctrl/Command + Shift + F12:&lt;/em&gt; This is a quick way to peek into an implementation. The pop-up window also allows code editing. This is illustrated in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kiVkMAiV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/go6eexjqulkruac6rueq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kiVkMAiV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/go6eexjqulkruac6rueq.JPG" alt="24"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 20 – Peek into an implementation&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The preceding screenshot shows an example where the ProductDetailComponent implementation is open in &lt;strong&gt;Quick Peek&lt;/strong&gt; mode. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Navigating between files&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;VS Code provides multiple options for navigating between different files. Let's assume you have a bunch of files open in your editor. To select a particular file from currently opened files, hold &lt;em&gt;Ctrl&lt;/em&gt; and then press &lt;em&gt;Tab.&lt;/em&gt; VS Code will open a list of files, and you can move your selection by holding &lt;em&gt;Ctrl&lt;/em&gt; and pressing &lt;em&gt;Tab.&lt;/em&gt; An example list of files can be seen in the following screenshot: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z0a07EPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wyf2ytmmzbvjz28aexfz.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z0a07EPl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wyf2ytmmzbvjz28aexfz.JPG" alt="27"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 21 – Files listed for navigation&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Apart from using &lt;em&gt;Ctrl&lt;/em&gt; (hold) + &lt;em&gt;Tab&lt;/em&gt;, you can also sequentially switch between multiple files opened in the editor. &lt;/p&gt;

&lt;p&gt;Use &lt;em&gt;Alt/Option&lt;/em&gt; + &lt;em&gt;Right Arrow&lt;/em&gt; or &lt;em&gt;Alt/Option&lt;/em&gt; + &lt;em&gt;Left Arrow&lt;/em&gt; to switch between the opened files. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commenting code&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Last but definitely not least, you can quickly comment and uncomment code by using the following keyboard shortcuts: &lt;/p&gt;

&lt;p&gt;1) &lt;em&gt;Ctrl/Command + K&lt;/em&gt; then &lt;em&gt;C&lt;/em&gt;: to comment a line &lt;/p&gt;

&lt;p&gt;2) &lt;em&gt;Ctrl/Command + K&lt;/em&gt; then &lt;em&gt;U&lt;/em&gt;: to uncomment a line &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;In this guide, we’ve looked at the basic features of VS Code. We explored multiple ways of launching VS Code and also covered several command-line options. We then moved on to showcase some of its editing and code refactoring features, and finished by looking at code navigation functionality. Now that you know what’s involved in using VS Code as an editor, and what it can offer, why not try it out for your next coding project? &lt;/p&gt;

&lt;p&gt;Buy Developing Multi-platform apps with Visual Studio Code: &lt;a href="http://packt.live/37g6IeO"&gt;http://packt.live/37g6IeO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GIbF1jOR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dgnfati9228roy6hpcc8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GIbF1jOR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dgnfati9228roy6hpcc8.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Flaws in the System: An Inventory of Network Vulnerabilities to Avoid</title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Mon, 08 Feb 2021 09:33:45 +0000</pubDate>
      <link>https://dev.to/packt/flaws-in-the-system-an-inventory-of-network-vulnerabilities-to-avoid-2k4b</link>
      <guid>https://dev.to/packt/flaws-in-the-system-an-inventory-of-network-vulnerabilities-to-avoid-2k4b</guid>
      <description>&lt;p&gt;One question students frequently ask at the beginning of their cybersecurity learning journey is: how are hackers able to break into a system or network? The simple answer is that hackers and other threat actors look for vulnerabilities on a target system. A vulnerability is a security weakness or flaw in a system that could be exploited by a threat. The competition between security researchers and hackers is an ongoing one — a race to discover security flaws first. Security researchers are always looking for new vulnerabilities to help software and product vendors to fix and close security weaknesses, while hackers are looking to exploit and gain access to their victims' systems.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Tip &lt;br&gt;
Nessus is one of the most popular vulnerability assessment tools within the cybersecurity industry. Further information on Nessus can be found at &lt;a href="https://www.tenable.com/products/nessus"&gt;https://www.tenable.com/products/nessus&lt;/a&gt;. &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;A vulnerability may take the form of a weakness or flaw in a configuration, security policy, or even something technological in nature. For example, a network device such as a router is configured to use Telnet and not SSH as the preferred method for remote access management. Telnet is an unsecured protocol that transfers data in plaintext whereas SSH encrypts all traffic. TCP/IP is the language all devices speak when connected to an Ethernet network, so you may think the TCP/IP protocol suite offers good security. In reality, however, this is not necessarily the case.&lt;/p&gt;

&lt;p&gt;Many vulnerabilities exist in the various protocols within TCP/IP. Those protocols include Internet Protocol (IP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol (HTTP), and even Simple Network Management Protocol (SNMP). IP was not designed with security in mind, so an attacker can simply spoof the IP address of another device on the network. SNMPv1 does not support user authentication, which means an attacker is able to remotely connect to an SNMP-enabled device and gather sensitive information. Attackers can take advantage of various weaknesses within these protocols and capture sensitive information while network traffic is traveling along a network.&lt;/p&gt;

&lt;p&gt;Hackers are always looking for a way inside your network and devices, and your network components provide an easy way in if they are not updated and secured properly.&lt;br&gt;
The following screenshot shows the Nmap tool, which has found the EternalBlue vulnerability on a Windows system:&lt;/p&gt;

&lt;p&gt;Figure 1 – EternalBlue&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LEkcYcoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/42hcgf9ir8sygdn54gsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LEkcYcoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/42hcgf9ir8sygdn54gsb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this screenshot, Nmap reports that the target system is vulnerable to the EternalBlue exploit, which allows an attacker to exploit a flaw in Server Message Block (SMB) version 1 to execute remote code. Furthermore, Nmap reports that the risk on the target is high, and it provides reference URLs for additional research.&lt;/p&gt;

&lt;p&gt;Also, some enterprise network devices such as routers and switches support network security functions to help prevent various malicious threats and attacks on your network. Sometimes, a misconfiguration on a router can give an attacker remote access into the management pane of the device.&lt;/p&gt;

&lt;p&gt;Each device requires firmware or an operating system in order to work and perform its functions. Operating system vendors are always researching vulnerabilities within their products so that they can quickly release updates and security patches that fix any issues for their customers. However, many organizations do not update their computers' operating systems for months, and this increases the risk of being compromised. Imagine a new threat emerges and the operating system vendor releases a security patch to fix the issue, but the organization ignores the vendor's updates and patches; its systems would remain vulnerable to the threat until security patching occurred on the network. Remember, hackers are looking for ways into your systems every day; operating system vendors release updates frequently to help protect you.&lt;/p&gt;

&lt;p&gt;Many configuration vulnerabilities exist on networks. They can exist within user account management, network services configuration, and default device configurations. When logging in to a system, your user credentials may be sent across the network via an unsecured protocol.&lt;/p&gt;

&lt;p&gt;The following screenshot shows that a Windows user credential was captured as it was sent to the Active Directory server on the network:&lt;/p&gt;

&lt;p&gt;Figure 2 – User account details&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3765VfbA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jjlplg4jc61oyje2f4ot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3765VfbA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jjlplg4jc61oyje2f4ot.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the preceding screenshot, we can see that user Bob has entered his username and password on a Windows 10 system to authenticate himself on the network. However, in this scenario, the Active Directory server (Windows Server) is using the default directory query protocol, Lightweight Directory Access Protocol (LDAP). LDAP does not encrypt user information by default; only the user's password is hashed, using NTLMv2, and sent across the network. Here, the hash was captured, allowing the attacker to perform offline cracking of the hash to retrieve Bob's password. This is an example of an unsecured user account and insecure protocols on a network.&lt;/p&gt;

&lt;p&gt;Configuration vulnerabilities also exist when an administrator configures weak or insecure passwords for user accounts. Such a vulnerability enables a hacker to easily compromise user accounts and quickly gain access. Another vulnerability occurs if default configurations are used on a system or network device. Default configurations are applied to a device at the point it leaves the manufacturer; they let us get the device up and running quickly without spending too much time on setup. However, default configurations often contain many weaknesses, such as security features being absent and remote access being enabled for all. It's important to ensure that default configurations are never used on systems and devices on a production network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human vulnerabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One major vulnerability we often overlook when designing a secure network is the human factor. Humans are vulnerable to various online and offline cyber-attacks, such as so-called social engineering attacks. Social engineering is when an attacker manipulates a person into revealing sensitive information or performing a certain task.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Important note&lt;br&gt;
Social engineering is usually non-technical in nature. This means a computer is not required to perform various types of social engineering attacks on a victim. The attack usually exploits the trust and social behavior of the victim.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The following are various types of cyber-attack that target human vulnerabilities:&lt;br&gt;
• Phishing is a form of social engineering carried out using a computer; the attacker creates and sends a fake email to a potential victim. The email is crafted to look and sound as if it comes from a legitimate source, such as a financial institution. The message usually contains some instructions and an embedded malicious link. The instructions might say something like "Your user account has been hacked; click the following link to reset it." If the user follows these instructions, they'll end up downloading malware that infects the system, or visiting a site that allows the attacker to capture their username and password.&lt;/p&gt;

&lt;p&gt;• Another type of social engineering is spear phishing. In a spear phishing attack, the attacker makes a fake message or email look more legitimate and believable. This type of attack usually focuses on a specific group of people. An example would be an attacker who crafts an email that looks like it originates from Bank X and sends it to everyone associated with that bank. People who have an account with Bank X will be more susceptible to the scam and more likely to click the malicious links or follow the instructions within the message, whereas a person who does not have an account with Bank X will simply block, delete, or ignore it.&lt;/p&gt;

&lt;p&gt;• Whaling is a type of phishing attack that focuses on high-profile people within an organization, such as the CEO or a director. The objective of the attack is to compromise a high-profile person's account and use the account to conduct transactions. Imagine if the CEO's email account were compromised: the attacker could send emails to the accounting department requesting confidential financial details about the organization. People within the accounting department would see the email as originating from the CEO and therefore trust it to be the actual CEO requesting the information. In such an attack, trust between the employee and the CEO is being exploited.&lt;/p&gt;

&lt;p&gt;• Social engineering attacks can be carried out during a telephone conversation—this is known as vishing. In vishing, the attacker calls the potential victim while pretending to be someone with authority or a person the victim may trust. During the conversation, the attacker may also try to build or improve the trust between the victim and the attacker and take advantage of that trust. In vishing attacks, the attacker may pretend to be calling from the victim's bank and request the victim's online banking user credentials, or perhaps request their credit card number and pin.&lt;/p&gt;

&lt;p&gt;• Social engineering can also be done using Short Message Service (SMS), a form of attack known as smishing. This is when an attacker attempts to perform social engineering using the text messaging service on mobile phones.&lt;/p&gt;

&lt;p&gt;• Sometimes an attacker will take a more aggressive approach to getting victims to visit a compromised website. Hackers are able to compromise vulnerable Domain Name System (DNS) servers and modify their DNS records, for example, by changing the DNS A record for a hostname to point to a compromised website's IP address rather than the legitimate one. This means that any device requesting the IP address of a certain website will be redirected to a malicious website. This type of social engineering is known as pharming.&lt;/p&gt;

&lt;p&gt;• It is important to build a fortress around and within your organization to protect it from both internal and external cyber-attacks and threats. Sometimes, when an attacker realizes he/she is unable to compromise the target's network, the attacker may attempt a waterhole attack. In a waterhole attack, the attacker tries to compromise a site or location that employees of the target organization are known to visit regularly, such as a local coffee shop. By compromising the coffee shop's Wi-Fi network, any device that connects to it will download a payload and become infected. When an employee connects their infected mobile device to the corporate network, the organization is compromised. Note, however, that anyone who connects to the Wi-Fi network, or the waterhole, will be infected, not just the target users who belong to the organization.&lt;/p&gt;

&lt;p&gt;Next, let's learn about password vulnerabilities and management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Password vulnerabilities and management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To prove our identity to a system, we must provide a valid username and password. People often create simple and easy-to-remember passwords for their online accounts. While easy to remember, a simple password is a security vulnerability that may allow a hacker to gain access to the victim's account. Creating a secure and complex password is important and helps prevent hackers and other threat actors from compromising user accounts and gaining access to sensitive information.&lt;/p&gt;

&lt;p&gt;When creating secure and complex passwords, follow these guidelines:&lt;br&gt;
• Passwords should be at least 8 characters in length.&lt;br&gt;
• Ensure that the password includes a combination of uppercase and lowercase letters, numbers, special characters, and symbols.&lt;br&gt;
• Ensure the password is not used on another account you own.&lt;br&gt;
• Passwords should not be regular words you would find in the dictionary.&lt;br&gt;
• Passwords should not contain personal details such as birthdays or relatives' names.&lt;br&gt;
• Passwords should be changed frequently.&lt;br&gt;
• Passwords should not be written down anywhere around the workplace.&lt;/p&gt;

&lt;p&gt;Using a password manager can help you create, store, and manage secure passwords. There are many free password managers available on the internet. The following screenshot shows a secure password generated by the LastPass password manager:&lt;/p&gt;

&lt;p&gt;Figure 3 – Secure password&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jtq69RiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1d4us7ln1yv2k5opcwe6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jtq69RiF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1d4us7ln1yv2k5opcwe6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
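&lt;p&gt;The guidelines above can be sketched as a simple complexity check. The rule set and banned-word list below are illustrative simplifications; a real checker would use a full dictionary and the user's personal details:&lt;/p&gt;

```javascript
// A minimal password-strength sketch based on the guidelines listed above.
// bannedWords is an illustrative stand-in for dictionary/personal-details checks.
function isStrongPassword(password, bannedWords = ['password', 'qwerty']) {
  const rules = [
    password.length >= 8,            // at least 8 characters
    /[a-z]/.test(password),          // contains a lowercase letter
    /[A-Z]/.test(password),          // contains an uppercase letter
    /[0-9]/.test(password),          // contains a number
    /[^A-Za-z0-9]/.test(password),   // contains a special character or symbol
    bannedWords.every(
      (w) => password.toLowerCase().indexOf(w) === -1  // no common words
    ),
  ];
  // Strong only if every rule passes
  return rules.every(Boolean);
}

console.log(isStrongPassword('Summer2024!')); // true: passes all rules
console.log(isStrongPassword('password1'));   // false: common word, no uppercase
```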

&lt;p&gt;Passwords can still be broken by a hacker with enough time and computing power. Using Multifactor Authentication (MFA) adds an extra layer of security to our user accounts; the user has to provide multiple pieces of information to prove his/her identity.&lt;br&gt;
Sometimes, after a username and password combination has been validated by a system, it requests a second form of authentication to validate your identity. This is referred to as 2-Factor Authentication (2FA). Authenticator apps on your smartphone can be associated with a supported website. Cisco is an example of this: its user accounts support 2FA, which allows you to add a third-party authenticator, such as Google Authenticator, to your Cisco user account. Each time you attempt to log in to the Cisco website, a unique code is required from the authenticator app. This code changes approximately every 30 seconds, making it difficult for a hacker to guess the sequence of codes being generated.&lt;br&gt;
Rather than using passwords, you can use biometrics. Biometrics allows you to use a part of your body to authenticate to a system. Most new smartphones support biometric authentication, which allows a person to unlock their smartphone using their fingerprint. On Microsoft Windows 10, Windows Hello uses facial recognition technology.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Note&lt;br&gt;
Other forms of biometrics are voice, iris, and retina scans.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Digital certificates are an alternative method of authenticating to a system. Digital certificates are granted by a Certificate Authority (CA), which verifies the identity and authenticity of the requester. The CA functions as a trusted third party who can verify that the holder of the certificate is who they claim to be.&lt;/p&gt;

&lt;p&gt;Exploits&lt;br&gt;
Exploits are the malicious code or actions an attacker uses to take advantage of a vulnerability on a system. Every operating system, application, and device has both known and unknown vulnerabilities. Once a hacker has discovered a vulnerability on their target system, the next step is to acquire an exploit that will leverage the security flaw. One popular website for finding exploits is Exploit Database (&lt;a href="http://www.exploit-db.com"&gt;www.exploit-db.com&lt;/a&gt;). This website is maintained by Offensive Security, the creators of the popular penetration testing Linux distribution, Kali Linux. The purpose of such a website is to share information with other cybersecurity professionals, such as penetration testers, who require exploits during their jobs.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Tip&lt;br&gt;
To understand how threats, vulnerabilities, and exploits all fit together, remember the following sentence: a threat uses an exploit to take advantage of a vulnerability on a system.&lt;br&gt;
One such vulnerability is known as EternalBlue (MS17-010); this vulnerability is a weakness found in Microsoft Windows operating systems with Microsoft Server Message Block 1.0 (SMBv1). An attacker with an exploit for EternalBlue will be able to perform remote code execution on a vulnerable machine. &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Note&lt;br&gt;
Further information about the MS17-010 security bulletin can be found at &lt;a href="https://docs.microsoft.com/en-us/security-updates/securitybulletins/2017/ms17-010"&gt;https://docs.microsoft.com/en-us/security-updates/securitybulletins/2017/ms17-010&lt;/a&gt;&lt;br&gt;
The following are the search results for the EternalBlue (MS17-010) vulnerability on Exploit Database:&lt;/p&gt;

&lt;p&gt;Figure 4 – Search results for EternalBlue&lt;br&gt;
Additionally, the attacker or the penetration tester can use an exploitation development framework such as Metasploit to create a custom payload and deliver it to the target. Metasploit allows a cybersecurity professional to build custom payloads that leverage the weaknesses found in applications and operating systems; however, an attacker can do this as well.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Tip&lt;br&gt;
If you want to learn more about Metasploit, please see&lt;br&gt;
 &lt;a href="https://www.offensive-security.com/metasploitunleashed/"&gt;https://www.offensive-security.com/metasploitunleashed/&lt;/a&gt; &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Once an attacker has gained access to a system, they can escalate their user privileges on the victim's system and even pivot the attack through the compromised machine to other internal devices on the network.&lt;/p&gt;

&lt;p&gt;Summary&lt;br&gt;
Systems, networks, and people present a range of security weaknesses, or vulnerabilities, through which malicious actors may seek to gain access to an organization’s systems and information. Understanding the different kinds of vulnerability, along with the steps that can be taken to mitigate them, allows the cybersecurity professional to greatly reduce the risk of unwanted access to networked information systems and the disruptive consequences that can follow. &lt;/p&gt;

&lt;p&gt;To purchase the book on Amazon, click on: &lt;a href="https://packt.live/3rxd9Sp"&gt;https://packt.live/3rxd9Sp&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating SPAs using ASP.NET Core and React</title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Mon, 01 Feb 2021 11:05:51 +0000</pubDate>
      <link>https://dev.to/packt/creating-spas-using-asp-net-core-and-react-59a0</link>
      <guid>https://dev.to/packt/creating-spas-using-asp-net-core-and-react-59a0</guid>
      <description>&lt;p&gt;React was Facebook's answer to helping more people work on the Facebook code base and deliver features quicker. React worked so well for Facebook that it was eventually open sourced on &lt;a href="https://github.com/facebook/react" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Today, React is a mature library for building component-based frontends (client-side code that runs in the browser). It is extremely popular and has a massive – and growing – community and ecosystem. &lt;br&gt;
ASP.NET Core was first released in 2016 and is now a mature open source and cross-platform web application framework. An excellent choice for building backends (application code that runs on the server) that interact with databases such as SQL Server, it also works well in cloud platforms such as Microsoft Azure.&lt;br&gt;
In this overview we'll show you how to create a single-page application (SPA) using &lt;a href="http://packt.live/2MMhB0g" rel="noopener noreferrer"&gt; ASP.NET Core and React &lt;/a&gt; via the standard template in Visual Studio. You'll learn where the entry points of both the ASP.NET Core and React apps are and how they integrate with each other. You'll also learn how Visual Studio runs both the frontend and backend together in development mode, as well as how it packages them up, ready for production. By the end, you'll have gained fundamental knowledge that you can apply to building your own apps using these awesome technologies.&lt;/p&gt;

&lt;p&gt;&lt;b&gt; Technical requirements &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The tools we’ll be using are:&lt;br&gt;
• Visual Studio 2019: This can be downloaded and installed from &lt;a href="https://visualstudio.microsoft.com/vs/" rel="noopener noreferrer"&gt;https://visualstudio.microsoft.com/vs/&lt;/a&gt;. Make sure that the following features are selected in the installer:&lt;br&gt;
a) ASP.NET and web development&lt;br&gt;
b) Azure development&lt;br&gt;
c) Node.js development&lt;br&gt;
• .NET 5.0: This can be downloaded from &lt;a href="https://dotnet.microsoft.com/download/dotnet/5.0" rel="noopener noreferrer"&gt;https://dotnet.microsoft.com/download/dotnet/5.0&lt;/a&gt;.&lt;br&gt;
• Node.js and npm: These can be downloaded from &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;https://nodejs.org/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Check out the following video to see the code in action:&lt;br&gt;
&lt;a href="http://bit.ly/2ZpsqaZ" rel="noopener noreferrer"&gt; &lt;/a&gt;&lt;a href="http://bit.ly/2ZpsqaZ" rel="noopener noreferrer"&gt;http://bit.ly/2ZpsqaZ&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;b&gt; SPA architecture &lt;/b&gt;:&lt;/p&gt;

&lt;p&gt;An SPA is a web app that loads a single HTML page that is dynamically updated by JavaScript as the user interacts with the app. Imagine a simple sign-up form where a user can enter a name and an email address. Once the user fills out and submits the form, a whole page refresh doesn't occur. Instead, some JavaScript in the browser handles the form submission with an HTTP POST request and then updates the page with the result of the request. Refer to the following diagram:&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnksu4gsc9j3i9i9x9na8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnksu4gsc9j3i9i9x9na8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.1: Form in a SPA&lt;/p&gt;

&lt;p&gt;So, after the first HTTP request that returns the single HTML page, subsequent HTTP requests are only for data and not HTML markup. All the pages are rendered in the client's browser by JavaScript.&lt;br&gt;
To build our SPA, we'll use React to render our frontend and ASP.NET Core for the backend API. &lt;/p&gt;

&lt;p&gt;&lt;b&gt; Understanding the backend &lt;/b&gt;&lt;br&gt;
We are going to start by creating an ASP.NET Core and React app using the standard template in Visual Studio. This template is perfect for us to review and understand basic backend components in an ASP.NET Core SPA.&lt;/p&gt;

&lt;p&gt;&lt;b&gt; Creating an ASP.NET Core and React templated app &lt;/b&gt;&lt;br&gt;
Let's open Visual Studio and carry out the following steps to create our templated app:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the start-up dialog, choose Create a new project:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuasxuy12g19lzqbro75x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuasxuy12g19lzqbro75x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.2: Visual Studio Start-up dialog&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, choose ASP.NET Core Web Application in the wizard that opens and click the Next button:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F51r9w1hs1py0zz3y6bcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F51r9w1hs1py0zz3y6bcy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.3: Creating a new web app in Visual Studio&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give the project a name of your choice and choose an appropriate location to save the project to. Click the Create button to create the project:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fidwy84wcuii4hxgpwfjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fidwy84wcuii4hxgpwfjy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.4: Specifying the new project name and location&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Another dialog will appear that allows us to specify the version of ASP.NET Core we want to use, as well as the specific type of project we want to create. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Select ASP.NET Core 5.0 as the version and React.js in the dialog, and then click the Create button, which will create the project: 
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhd1kta5b20pvemf4h1y0.png" alt="Alt Text"&gt;
Figure 1.5: The project template and ASP.NET Core version&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Important note&lt;br&gt;
If ASP.NET Core 5.0 isn't listed, make sure the latest version of Visual Studio is installed. This can be done by choosing the Check for Updates option on the Help menu.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Now that the project has been created, press F5 to run the app. After a minute or so, the app will appear in a browser:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnwggptqwqyncj58p570h.png" alt="Alt Text"&gt;
Figure 1.6: The home page of the app&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We'll find out later why the app took so long to run the first time. Great—we've created the ASP.NET Core React SPA. Now, let's inspect the backend code. &lt;/p&gt;

&lt;p&gt;&lt;b&gt; Understanding the backend entry point &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;An ASP.NET Core app is a console app that creates a web server. The entry point for the app is a method called Main in a class called Program, which can be found in the Program.cs file in the root of the project:&lt;br&gt;
public class Program&lt;br&gt;
{&lt;br&gt;
  public static void Main(string[] args)&lt;br&gt;
  {&lt;br&gt;
    CreateHostBuilder(args).Build().Run();&lt;br&gt;
  }&lt;/p&gt;

&lt;p&gt;public static IHostBuilder CreateHostBuilder(string[] args) =&amp;gt;&lt;br&gt;
    Host.CreateDefaultBuilder(args)&lt;br&gt;
      .ConfigureWebHostDefaults(webBuilder =&amp;gt;&lt;br&gt;
      {&lt;br&gt;
          webBuilder.UseStartup&amp;lt;Startup&amp;gt;();&lt;br&gt;
      });&lt;br&gt;
}&lt;br&gt;
This method creates a web host using Host.CreateDefaultBuilder, which configures items such as the following:&lt;br&gt;
• The location of the root of the web content&lt;br&gt;
• Where the settings are for items such as the database connection string&lt;br&gt;
• The logging level and where the logs are output&lt;br&gt;
We can override the default builder using fluent APIs, which start with Use. For example, to adjust the root of the web content, we can add the UseContentRoot line shown in the following snippet:&lt;br&gt;
public static IHostBuilder CreateHostBuilder(string[] args) =&amp;gt;&lt;br&gt;
  Host.CreateDefaultBuilder(args)&lt;br&gt;
    .ConfigureWebHostDefaults(webBuilder =&amp;gt;&lt;br&gt;
    {&lt;br&gt;
      webBuilder.UseContentRoot("some-path");&lt;br&gt;
      webBuilder.UseStartup&amp;lt;Startup&amp;gt;();&lt;br&gt;
    });&lt;br&gt;
The last thing that is specified in the builder is the Startup class, which we'll look at in the following section.&lt;/p&gt;
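The fluent Use... style works because each configuration method returns the builder itself, so calls can be chained before a final build step. As a rough, language-agnostic sketch (in Python rather than C#, with invented setting names):

```python
class HostBuilder:
    """Toy builder mirroring the fluent Use* pattern (illustrative names only)."""

    def __init__(self):
        # defaults that a CreateDefaultBuilder-style helper would configure
        self._settings = {"content_root": ".", "log_level": "info"}

    def use_content_root(self, path: str) -> "HostBuilder":
        self._settings["content_root"] = path
        return self  # returning self is what makes the chaining work

    def use_log_level(self, level: str) -> "HostBuilder":
        self._settings["log_level"] = level
        return self

    def build(self) -> dict:
        return dict(self._settings)

# Each Use* call overrides a default and hands back the same builder.
host = HostBuilder().use_content_root("some-path").use_log_level("debug").build()
print(host)
```

Calls that come later in the chain simply overwrite the defaults set earlier, which is why the order of overrides matters only when two calls touch the same setting.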

&lt;p&gt;&lt;b&gt; Understanding the Startup class &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The Startup class is found in Startup.cs and configures the services that the app uses, as well as the request/response pipeline.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;The ConfigureServices method &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Services are configured using a method called ConfigureServices. This method is used to register items such as the following:&lt;br&gt;
• Our controllers that will handle requests&lt;br&gt;
• Our authorization policies&lt;br&gt;
• Our CORS policies&lt;br&gt;
• Our own classes that need to be available in dependency injection&lt;br&gt;
Services are added by calling methods on the services parameter and, generally, start with Add. Notice the call to the AddSpaStaticFiles method in the following code snippet:&lt;br&gt;
public void ConfigureServices(IServiceCollection services)&lt;br&gt;
{&lt;br&gt;
  services.AddControllersWithViews();&lt;/p&gt;

&lt;p&gt;services.AddSpaStaticFiles(configuration =&amp;gt;&lt;br&gt;
  {&lt;br&gt;
     configuration.RootPath = "ClientApp/build";&lt;br&gt;
  });&lt;br&gt;
}&lt;br&gt;
This is a key part of how the React app is integrated into ASP.NET Core in production, because this specifies the location of the React app.&lt;br&gt;
Important note&lt;br&gt;
It is important to understand that the ASP.NET Core app runs on the server, with the React app running on the client in the browser. The ASP.NET Core app simply serves the files in the ClientApp/build folder without any interpretation or manipulation.&lt;br&gt;
The ClientApp/build files are only used in production mode, though. Next, we'll find out how the React app is integrated into ASP.NET Core in development mode.&lt;/p&gt;
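The production behaviour described above — serve a real file from the build folder if one matches the request, otherwise fall back to the single HTML page — can be sketched in a few lines. This is a simplification in Python (real servers also handle path traversal, content types, and caching):

```python
def resolve_spa_request(known_files: set, request_path: str) -> str:
    """Return which file from the SPA build output should answer a request.

    Real asset requests (bundles, images) are served directly; any other
    path falls back to index.html so client-side routing can handle it.
    """
    path = request_path.lstrip("/")
    return path if path in known_files else "index.html"

# Hypothetical build output listing for illustration.
build_output = {"index.html", "static/js/main.js", "static/css/main.css"}
print(resolve_spa_request(build_output, "/static/js/main.js"))  # the asset itself
print(resolve_spa_request(build_output, "/fetch-data"))         # falls back to index.html
```

The fallback is what lets a user refresh the browser on a client-side route and still receive the SPA's single page.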

&lt;p&gt;&lt;b&gt; The Configure method &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;When a request comes into ASP.NET Core, it goes through what is called the request/response pipeline, where some middleware code is executed. This pipeline is configured using a method called Configure. It is this method we will use to define exactly which middleware is executed and in what order. Middleware code is invoked by methods that generally start with Use in the app parameter. So, we would typically specify middleware such as authentication early in the Configure method, and MVC middleware toward the end. The pipeline that the template created is as follows:&lt;br&gt;
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)&lt;br&gt;
{&lt;br&gt;
    ...&lt;br&gt;
    app.UseStaticFiles();&lt;br&gt;
    app.UseSpaStaticFiles();&lt;br&gt;
    app.UseRouting();&lt;br&gt;
    app.UseEndpoints( ... );&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.UseSpa(spa =&amp;gt;
{
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;br&gt;
Notice that a method called UseSpaStaticFiles is called in the pipeline, just before the routing and endpoints are set up. This allows the host to serve the React app, as well as the web API.&lt;br&gt;
Also, notice that a UseSpa method is called after the endpoint middleware. This is the middleware that will handle requests to the React app, which will simply serve the single page in the React app. It is placed after UseEndpoints so that requests to the web API take precedence over requests to the React app.&lt;br&gt;
The UseSpa method has a parameter that is actually a function that executes when the app is first run. This function contains a branch of logic that calls spa.UseReactDevelopmentServer(npmScript: "start") if you're in development mode. This tells ASP.NET Core to use a development server by running npm start. We'll delve into the npm start command a little later. So, in development mode, the React app will be run on a development server rather than ASP.NET Core serving the files from ClientApp/Build. We'll learn more about this development server later.&lt;br&gt;
Custom middleware&lt;br&gt;
We can create our own middleware using a class such as the following one. This middleware logs information about every single request that is handled by the ASP.NET Core app:&lt;br&gt;
public class CustomLogger&lt;br&gt;
{&lt;br&gt;
    private readonly RequestDelegate _next;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public CustomLogger(RequestDelegate next)
{
    _next = next ?? throw new ArgumentNullException(nameof(next));
}

public async Task Invoke(HttpContext httpContext)
{
    if (httpContext == null)
        throw new ArgumentNullException(nameof(httpContext));

    // TODO - log the request

    await _next(httpContext);

    // TODO - log the response 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;br&gt;
This class contains a method called Invoke, which is the code that is executed in the request/response pipeline. The next method to call in the pipeline is passed into the class and held in the _next variable, which we need to invoke at the appropriate point in our Invoke method. The preceding example is a skeleton class for a custom logger. We would log the request details at the start of the Invoke method and log the response details after the _next delegate has been executed, which will be when the rest of the pipeline has been executed.&lt;br&gt;
The following diagram is a visualization of the request/response pipeline and shows how each piece of middleware in the pipeline is invoked:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0v4engdasytrunuyqtgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0v4engdasytrunuyqtgn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Figure 1.7: Visualization of the request / response pipeline&lt;/p&gt;
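The nesting shown in Figure 1.7 can be modelled simply: each middleware wraps the next handler, doing work before and after calling it, so the first-registered middleware sees the request first and the response last. A language-agnostic sketch in Python (the handler and context shapes here are invented for illustration):

```python
def logging_middleware(next_handler):
    def invoke(ctx):
        ctx["log"].append("request seen by logger")
        response = next_handler(ctx)  # hand off to the rest of the pipeline
        ctx["log"].append("response seen by logger")
        return response
    return invoke

def auth_middleware(next_handler):
    def invoke(ctx):
        ctx["log"].append("auth checked")
        return next_handler(ctx)
    return invoke

def endpoint(ctx):
    ctx["log"].append("endpoint executed")
    return "200 OK"

def build_pipeline(middlewares, endpoint):
    handler = endpoint
    for middleware in reversed(middlewares):  # first-registered runs outermost
        handler = middleware(handler)
    return handler

pipeline = build_pipeline([logging_middleware, auth_middleware], endpoint)
ctx = {"log": []}
print(pipeline(ctx), ctx["log"])
```

The log order shows the onion shape: the logger brackets everything else, just as a logging middleware registered first in Configure would.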

&lt;p&gt;We make our middleware available as an extension method on the IApplicationBuilder interface in a new source file:&lt;br&gt;
public static class MiddlewareExtensions&lt;br&gt;
{&lt;br&gt;
    public static IApplicationBuilder UseCustomLogger(this IApplicationBuilder app)&lt;br&gt;
    {&lt;br&gt;
        return app.UseMiddleware&amp;lt;CustomLogger&amp;gt;();&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
The UseMiddleware method in IApplicationBuilder is used to register the middleware class. The middleware will now be available in an instance of IApplicationBuilder in a method called UseCustomLogger.&lt;br&gt;
So, the middleware can be added to the pipeline in the Configure method in the Startup class, as follows:&lt;br&gt;
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)&lt;br&gt;
{&lt;br&gt;
    app.UseCustomLogger();&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
else
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseSpaStaticFiles();

app.UseMvc(...);

app.UseSpa(...);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;br&gt;
In the previous example, the custom logger is invoked at the start of the pipeline so that the request is logged before it is handled by any other middleware. The response that is logged in our middleware will have been handled by all the other middleware as well.&lt;br&gt;
So, the Startup class allows us to configure how all requests are generally handled. How can we specify exactly what happens when requests are made to a specific resource in a web API? Let's find out.&lt;/p&gt;

&lt;p&gt;&lt;b&gt; Understanding controllers &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Web API resources are implemented using controllers. Let's have a look at the controller the template project created by opening WeatherForecastController.cs in the Controllers folder. This contains a class called WeatherForecastController that inherits from ControllerBase with a Route annotation:&lt;br&gt;
[ApiController]&lt;br&gt;
[Route("[controller]")]&lt;br&gt;
public class WeatherForecastController : ControllerBase&lt;br&gt;
{&lt;br&gt;
  ...&lt;br&gt;
}&lt;br&gt;
The annotation specifies the web API resource URL that the controller handles. The [controller] object is a placeholder for the controller name, minus the word Controller. This controller will handle requests to weatherforecast.&lt;br&gt;
The method called Get in the class is called an action method. Action methods handle specific requests to the resource for a specific HTTP method and subpath. We decorate the method with an attribute to specify the HTTP method and subpath the method handles. In our example, we are handling an HTTP GET request to the root path (weatherforecast) on the resource:&lt;br&gt;
[HttpGet]&lt;br&gt;
public IEnumerable&amp;lt;WeatherForecast&amp;gt; Get()&lt;br&gt;
{&lt;br&gt;
  ...&lt;br&gt;
}&lt;br&gt;
Let's have a closer look at the web API at runtime by carrying out the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Run the app in Visual Studio by pressing F5.&lt;/li&gt;
&lt;li&gt; When the app has opened in our browser, press F12 to open the browser developer tools and select the Network panel.&lt;/li&gt;
&lt;li&gt; Select the Fetch data option on the top navigation bar. An HTTP GET request to weatherforecast will be shown:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9y8wcl3t8pmo6dw81lki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9y8wcl3t8pmo6dw81lki.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.8: Request to weatherforecast endpoint in the browser developer tools&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; An HTTP response with a 200 status code is returned with JSON content:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fua34y3p81qzdwewkb2d2.png" alt="Alt Text"&gt;
Figure 1.9: Response body for weatherforecast endpoint in the browser developer tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If we look back at the Get action method, we are returning an object of the IEnumerable&amp;lt;WeatherForecast&amp;gt; type. The MVC middleware automatically converts this object into JSON and puts it in the response body with a 200 status code for us.&lt;br&gt;
So, that was a quick look at the backend that the template scaffolded for us. Now let’s walk through the React frontend.&lt;/p&gt;
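Before moving on, the [controller] token substitution mentioned earlier is easy to model: take the class name, drop the Controller suffix, and lowercase what remains. A quick sketch (in Python for brevity):

```python
def resolve_route(template: str, controller_class: str) -> str:
    """Expand the [controller] token from a route template, as described above."""
    name = controller_class
    if name.endswith("Controller"):
        name = name[: -len("Controller")]  # strip the conventional suffix
    return template.replace("[controller]", name.lower())

print(resolve_route("[controller]", "WeatherForecastController"))  # weatherforecast
print(resolve_route("api/[controller]", "ProductsController"))     # api/products
```

This is why renaming a controller class silently changes the URL it handles unless the route template is written out explicitly.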

&lt;p&gt;&lt;b&gt; Understanding the frontend &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;In this section, we'll discover where the single HTML page that hosts the React app is. We'll also understand why it took over a minute to run the app for the first time.&lt;br&gt;
Understanding the frontend entry point&lt;br&gt;
We have a good clue as to where the entry point is from our examination of the Startup class in the ASP.NET Core backend. In the Configure method, the SPA middleware is set up with the source path as ClientApp:&lt;br&gt;
app.UseSpa(spa =&amp;gt;&lt;br&gt;
{&lt;br&gt;
  spa.Options.SourcePath = "ClientApp";&lt;/p&gt;

&lt;p&gt;if (env.IsDevelopment())&lt;br&gt;
  {&lt;br&gt;
    spa.UseReactDevelopmentServer(npmScript: "start");&lt;br&gt;
  }&lt;br&gt;
});&lt;br&gt;
If we look in the ClientApp folder, we'll see a file called package.json. This is a file that is often used in React apps and contains information about the project, its npm dependencies, and the scripts that can be run to perform tasks. &lt;br&gt;
Important note&lt;br&gt;
npm is a popular package manager for JavaScript. The dependencies in package.json reference packages in the npm registry.&lt;br&gt;
If we open the package.json file, we will see react listed as a dependency:&lt;br&gt;
"dependencies": {&lt;br&gt;
  "react": "^16.0.0",&lt;br&gt;
  ...&lt;br&gt;
  "react-scripts": "^3.4.1",&lt;br&gt;
  ...&lt;br&gt;
},&lt;br&gt;
A version is specified against each package name. The versions in your package.json may be different to the above code snippet. The ^ symbol in front of the version means that the latest minor version can be safely installed, according to semantic versioning. &lt;br&gt;
Important Note&lt;br&gt;
A semantic version has three parts, Major.Minor.Patch. A major version increment happens when an API breaking change is made. A minor version increment happens when backwards compatible features are added. A patch version happens when backwards compatible bug fixes are added. More information can be found at &lt;a href="https://semver.org" rel="noopener noreferrer"&gt;https://semver.org&lt;/a&gt;. &lt;br&gt;
So, react 16.14.0 could be safely installed, because this is the latest minor version of React 16 at the time of writing. &lt;br&gt;
The react-scripts dependency gives us a big clue as to how exactly React was scaffolded. react-scripts  is a set of scripts from the popular Create React App (CRA) tool that was built by the developers at Facebook. This tool has done a huge amount of configuration for us, including a development server, bundling, linting, and unit testing. &lt;br&gt;
The root HTML page for an app scaffolded by CRA is index.html, which can be found in the public folder in the ClientApp folder. It is this page that hosts the React app. The root JavaScript file that is executed for an app scaffolded by CRA is index.js, which is in the ClientApp folder. We'll examine both the index.html and index.js files later. &lt;/p&gt;
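The semantic versioning caret rule mentioned above can be captured in a few lines. This sketch covers the common case of a major version of 1 or higher (npm treats 0.x caret ranges more strictly):

```python
def satisfies_caret(installed: str, spec: str) -> bool:
    """True if `installed` is allowed by an npm caret range like '^16.0.0'.

    A caret permits any version with the same major number that is
    greater than or equal to the base version (for major >= 1).
    """
    base = tuple(int(part) for part in spec.lstrip("^").split("."))
    version = tuple(int(part) for part in installed.split("."))
    return version[0] == base[0] and version >= base

print(satisfies_caret("16.14.0", "^16.0.0"))  # newer minor of the same major: allowed
print(satisfies_caret("17.0.0", "^16.0.0"))   # a major bump may break the API: rejected
```

So a fresh npm install against "react": "^16.0.0" can legitimately pull a later 16.x release than the one the project was originally built with.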

&lt;p&gt;&lt;b&gt; Running in development mode &lt;/b&gt;&lt;/p&gt;

&lt;p&gt;In the following steps, we'll examine the ASP.NET Core project file to see what happens when the app runs in development mode:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; We can open the project file by right-clicking on the web application project in Solution Explorer and selecting the Edit Project File option:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnymb589gd9n0z9un6vbn.png" alt="Alt Text"&gt;
Figure 1.10: Opening the project file in Visual Studio&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is an XML file that contains information about the Visual Studio project.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Let's look at the Target element, which has a Name attribute of DebugEnsureNodeEnv:

&amp;lt;Target Name="DebugEnsureNodeEnv" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('$(SpaRoot)node_modules') "&amp;gt;
  &amp;lt;!-- Ensure Node.js is installed --&amp;gt;
  &amp;lt;Exec Command="node --version" ContinueOnError="true"&amp;gt;
    &amp;lt;Output TaskParameter="ExitCode" PropertyName="ErrorCode" /&amp;gt;
  &amp;lt;/Exec&amp;gt;
  &amp;lt;Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." /&amp;gt;
  &amp;lt;Message Importance="high" Text="Restoring dependencies using 'npm'. This may take several minutes..." /&amp;gt;
  &amp;lt;Exec WorkingDirectory="$(SpaRoot)" Command="npm install" /&amp;gt;
&amp;lt;/Target&amp;gt;

This executes tasks when the ClientApp/node_modules folder doesn't exist and the Visual Studio project is run in debug mode, which is the mode that's used when we press F5.&lt;/li&gt;
&lt;li&gt; The first task that is run in the Target element is the execution of the following command via an Exec task:
&amp;gt; node --version
This command returns the version of Node that is installed. This may seem like an odd thing to do, but its purpose is to determine whether Node is installed. If Node is not installed, the command will error and be caught by the Error task, which informs the user that Node needs to be installed and where to install it from.&lt;/li&gt;
&lt;li&gt; The next task in the Target element uses a Message command, which outputs Restoring dependencies using 'npm'. This may take several minutes... to the Output window. We'll see this message when running the project for the first time:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flu01258mtnk0fbdlp3n6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flu01258mtnk0fbdlp3n6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.11: Restoring npm dependencies message when running a project for the first time&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The final task that is carried out when the project is run in debug mode is another Exec task that executes the following npm command:
&amp;gt; npm install
This command downloads all the packages that are listed as dependencies in package.json into a folder called node_modules:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwon9idwuhtjlq6jfeipi.png" alt="Alt Text"&gt; 
Figure 1.12: node_modules folder
We can see this in Solution Explorer if the Show All Files option is on. Notice that there are a lot more folders in node_modules than dependencies listed in package.json. This is because the dependencies will have dependencies. So, the packages in node_modules are all the dependencies in the dependency tree.
At the start of this section, we asked ourselves the question: Why did it take such a long time for the project to run the app for the first time? The answer is that this last task takes a while because there are a lot of dependencies to download and install. On subsequent runs, node_modules will have been created, so these sets of tasks won't get invoked.
Earlier we learned that ASP.NET Core invokes an npm start command when the app is in development mode. If we look at the scripts section in package.json, we'll see the definition of this command:
"scripts": {
"start": "rimraf ./build &amp;amp;&amp;amp; react-scripts start",
...
}
This command deletes a folder called build and runs a Webpack development server.
Important note
Webpack is a tool that transforms, bundles, and packages up files for use in a browser. Webpack also has a development server. The CRA tool has configured Webpack for us so that all the transformation and the bundling configuration are already set up for us.
Why would we want to use the Webpack development server when we already have our ASP.NET Core backend running in IIS Express? The answer is a shortened feedback loop, which will increase our productivity. Later, we'll see that we can make a change to a React app running in the Webpack development server and that those changes are automatically loaded. There’s no stopping and restarting the application, making for a really quick feedback loop and great productivity.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;b&gt; Publishing process &lt;/b&gt;&lt;br&gt;
The publishing process is the process of building artifacts to run an application in a production environment. &lt;br&gt;
Let's carry on and inspect the ASP.NET Core project file, which is in XML format, and look at the Target element that has the following Name attribute: PublishRunWebPack. This element defines the tasks that execute when the Visual Studio project is published, building the React app in production mode and copying the output into the publish folder, as its comments describe:&lt;br&gt;
&lt;br&gt;
&amp;lt;!-- As part of publishing, ensure the JS resources are freshly built&lt;br&gt;
in production mode --&amp;gt;&lt;br&gt;
&amp;lt;!-- Include the newly-built files in the publish output --&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Let's publish our app by right-clicking the project in Solution Explorer, choosing Publish, and selecting Folder as the publish target:&lt;br&gt;
Figure 1.13: Publishing to a folder&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Enter a folder to output the content to and click Finish.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fehdtovoggdqwrlfkybls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fehdtovoggdqwrlfkybls.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.14: Publish location&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A publish profile is then created. Click the Publish button to start the publishing process on the screen that appears.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9vpesk3mdo8fba2p0xzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9vpesk3mdo8fba2p0xzx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.15: Publish profile screen&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After a while, we'll see the content appear in the folder we specified, including a ClientApp folder. If we look in this ClientApp folder, we'll see a build folder containing the React app, ready to be run in a production environment. Notice that the build folder contains index.html, which is the single page that will host the React app in production.&lt;br&gt;
Important note&lt;br&gt;
It is important to note that publishing from a developer's machine is not ideal. Instead, it is good practice to carry out this process on a build server to make sure that built applications are consistent, and code committed to the repository goes into the build. &lt;/p&gt;

&lt;p&gt;&lt;b&gt; Understanding the frontend dependencies &lt;/b&gt;&lt;br&gt;
Earlier, we learned that frontend dependencies are defined in package.json. Why not just list all the dependencies as script tags in index.html? Why do we need the extra complexity of npm package management in our project? The answer is that a long list of dependencies is hard to manage. If we used script tags, we'd need to make sure these are ordered correctly. We'd also be responsible for downloading the packages, placing them locally in our project, and keeping them up to date. We have a huge list of dependencies in our scaffolded project already, without starting work on any functionality in our app. For these reasons, managing dependencies with npm has become an industry standard.&lt;br&gt;
Let's open package.json again and look at the dependencies section:&lt;br&gt;
"dependencies": {&lt;br&gt;
  "bootstrap": "^4.1.3",&lt;br&gt;
  "jquery": "3.4.1",&lt;br&gt;
  "merge": "^1.2.1",&lt;br&gt;
  "oidc-client": "^1.9.0", &lt;br&gt;
  "react": "^16.0.0",&lt;br&gt;
  "react-dom": "^16.0.0",&lt;br&gt;
  "react-router-bootstrap": "^0.24.4",&lt;br&gt;
  "react-router-dom": "^4.2.2",&lt;br&gt;
  "react-scripts": "^3.0.1",&lt;br&gt;
  "reactstrap": "^6.3.0",&lt;br&gt;
  "rimraf": "^2.6.2"&lt;br&gt;
},&lt;br&gt;
We've already observed the react dependency, but what is the react-dom dependency? Well, React doesn't just target the web; it also targets native mobile apps. So, react is the core React library used for both web and mobile, and react-dom is the library specific to targeting the web.&lt;br&gt;
The react-router-dom package is the npm package for React Router and helps us to manage the different pages in our app in the React frontend without a round-trip to the server. The react-router-bootstrap package allows Bootstrap to work nicely with React Router.&lt;br&gt;
We can see that this React app has a dependency for Bootstrap 4.1 with the bootstrap npm package. So, Bootstrap CSS classes and components can be referenced to build the frontend in our project. The reactstrap package is an additional package that allows us to consume Bootstrap nicely in React apps. Bootstrap 4.1 has a dependency on jQuery, which is the reason for the jquery package dependency. &lt;br&gt;
The merge package contains a function that merges objects together and oidc-client is a package for interacting with OpenID Connect (OIDC) and OAuth2. &lt;br&gt;
The final dependency that we haven't covered yet is rimraf. This simply allows files to be deleted, regardless of the host operating system. We can see that this is referenced in the start script:&lt;br&gt;
"scripts": {&lt;br&gt;
  "start": "rimraf ./build &amp;amp;&amp;amp; react-scripts start",&lt;br&gt;
  ...&lt;br&gt;
}&lt;br&gt;
Earlier we learned that this script is invoked when our app is running in development mode. So, rimraf ./build deletes the build folder and its contents before the development server starts.&lt;br&gt;
If we look further down, we'll see a section called devDependencies. These are dependencies that are only used during development and not in production:&lt;br&gt;
"devDependencies": {&lt;br&gt;
  "ajv": "^6.9.1",&lt;br&gt;
  "cross-env": "^5.2.0",&lt;br&gt;
  "eslint": "^6.8.0",&lt;br&gt;
  "eslint-config-react-app": "^5.2.1",&lt;br&gt;
  "eslint-plugin-flowtype": "^4.6.0",&lt;br&gt;
  "eslint-plugin-import": "^2.20.0",&lt;br&gt;
  "eslint-plugin-jsx-a11y": "^6.2.3",&lt;br&gt;
  "eslint-plugin-react": "^7.18.3"&lt;br&gt;
},&lt;br&gt;
The following is a brief description of these dependencies:&lt;br&gt;
• ajv allows us to validate JSON files.&lt;br&gt;
• cross-env allows us to set environment variables, regardless of the host operating system. If you look at the test script in the scripts section of the package.json file, it uses cross-env to set a CI environment variable.&lt;br&gt;
• The remaining dependencies are all designed to enable linting with ESLint. The linting process checks for problematic patterns in code according to a set of rules. &lt;br&gt;
Let's move on and learn how the single page is served and how the React app is injected into it.&lt;/p&gt;

&lt;p&gt;&lt;b&gt; Understanding how the single page is served &lt;/b&gt; &lt;br&gt;
We know that the single page that hosts the React app is index.html, so let's examine this file. This file is found in the public folder in the ClientApp folder. The React app will be injected into the div tag that has an id of root:&lt;/p&gt;
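As a rough sketch (abbreviated and with our own placeholder title; CRA's real index.html also contains meta tags and a noscript fallback), the key part of the file is the empty root div:

```html
&lt;!-- Sketch of ClientApp/public/index.html (abbreviated) --&gt;
&lt;!DOCTYPE html&gt;
&lt;html lang="en"&gt;
  &lt;head&gt;
    &lt;title&gt;React app&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;!-- the React app is injected into this element at runtime --&gt;
    &lt;div id="root"&gt;&lt;/div&gt;
  &lt;/body&gt;
&lt;/html&gt;
```

The div is empty in the source file because all visible content is rendered into it by JavaScript.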

&lt;p&gt;Let's press F5 to run our app again in Visual Studio and confirm that this is the case. If we open the developer tools in the browser page that opens and inspect the DOM in the Elements panel, we'll see this div with the React content inside it:&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwldqcqz7wz4dazt5hi0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwldqcqz7wz4dazt5hi0r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.16: Root div element and script elements&lt;/p&gt;

&lt;p&gt;Notice the script elements at the bottom of the body element. These contain all the JavaScript code for our React app, including the React library itself. However, these script elements don't exist in the source index.html file, so how did they get into the served page? Webpack added them after bundling all the JavaScript together and splitting it into optimal chunks that can be loaded on demand. If we look in the ClientApp folder and its subfolders, though, neither the static folder nor these JavaScript files exist. What's going on? They are virtual files created by the Webpack development server. Remember that when we run the app with the Visual Studio debugger, the Webpack development server serves index.html, so the JavaScript files it references are virtual too.&lt;br&gt;
Now, what happens in production mode, when the Webpack development server isn't running? Let's have a closer look at the app we published earlier and open the index.html file in the build folder inside the ClientApp folder. The script elements at the bottom of the body element will look something like the following: &lt;/p&gt;

&lt;p&gt;&amp;lt;script&amp;gt;!function(e){...}([])&amp;lt;/script&amp;gt;&lt;br&gt;
&amp;lt;script src="/static/js/2.[hash].chunk.js"&amp;gt;&amp;lt;/script&amp;gt;&lt;br&gt;
&amp;lt;script src="/static/js/main.[hash].chunk.js"&amp;gt;&amp;lt;/script&amp;gt;&lt;/p&gt;

&lt;p&gt;Carriage returns have been added in the preceding snippet to make it more readable. The hash part of each filename may vary every time the app is published; the filenames are unique in order to break browser caching. If we look for these JavaScript files in the published output, we'll find that they do exist. So, in production mode, the web server serves these physical JavaScript files.&lt;br&gt;
If we open one of these JavaScript files, we'll see that it contains all the JavaScript for our app. The JavaScript is minified so that the file can be downloaded to the browser quickly. &lt;br&gt;
Important note&lt;br&gt;
Minification is the process of removing unnecessary characters from files without affecting how they are processed by the browser. This includes removing code comments and formatting, removing unused code, shortening variable and function names, and so on. &lt;br&gt;
However, the file isn't small and contains a lot of JavaScript. What's going on here? Well, the file contains not only our JavaScript app code but also the code from all the dependencies, including React itself.&lt;/p&gt;
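As a toy illustration of what minification does (our own example, not taken from the actual bundle):

```javascript
// The same function before and after minification.
// Before: readable names, comments, and whitespace.
function addNumbers(firstNumber, secondNumber) {
  // comments like this one are stripped by the minifier
  return firstNumber + secondNumber;
}

// After: a minifier shortens names and removes comments and whitespace.
function a(n,r){return n+r}

// Both behave identically
console.log(addNumbers(2, 3) === a(2, 3)); // logs true
```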

&lt;p&gt;&lt;b&gt; Understanding how components fit together &lt;/b&gt;&lt;br&gt;
Now it's time to start looking at the React app code and how components are implemented. Remember that the root JavaScript file is index.js in the ClientApp folder. Let's open this file and look closely at the following block of code:&lt;br&gt;
const rootElement = document.getElementById('root');&lt;/p&gt;

&lt;p&gt;ReactDOM.render(&lt;br&gt;
  &amp;lt;BrowserRouter&amp;gt;&lt;br&gt;
    &amp;lt;App /&amp;gt;&lt;br&gt;
  &amp;lt;/BrowserRouter&amp;gt;,&lt;br&gt;
  rootElement);&lt;br&gt;
The first statement selects the div element we discovered earlier, which has the root ID, and stores it in a variable called rootElement.&lt;br&gt;
The next statement extends over multiple lines and calls the render function from the React DOM library. It is this function that injects the React app content into the root div element. The rootElement variable, which contains a reference to the root div element, is passed into this function as the second parameter.&lt;br&gt;
The first parameter that is passed into the render function is more interesting. In fact, it doesn't even look like legal JavaScript! This is, in fact, JSX.&lt;br&gt;
Important note&lt;br&gt;
JSX is transformed into regular JavaScript by Webpack using a tool called Babel. This is one of many tasks that CRA configured for us when our app was scaffolded.&lt;br&gt;
So, the first parameter passes in the root React component called BrowserRouter, which comes from the React Router library. &lt;br&gt;
Nested inside the BrowserRouter component is a component called App. If we look at the top of the index.js file, we can see that the App component is imported from a file called App.js:&lt;br&gt;
import App from './App';&lt;br&gt;
Important note&lt;br&gt;
import statements are used to import items that are exported by another JavaScript module. The module is specified by its file location, with the js extension omitted. &lt;/p&gt;

&lt;p&gt;The import statements that import items from npm packages don't need the path to be specified because CRA has configured a resolver in Webpack to automatically look in the node_modules folder during the bundling process.&lt;br&gt;
So, the App component is contained in the App.js file. Let's have a look at this file. A class called App is defined in this file: &lt;br&gt;
export default class App extends Component {&lt;br&gt;
  static displayName = App.name;&lt;/p&gt;

&lt;p&gt;render () {&lt;br&gt;
    return (&lt;br&gt;
      &amp;lt;Layout&amp;gt;&lt;br&gt;
        &amp;lt;Route exact path='/' component={Home} /&amp;gt;&lt;br&gt;
        &amp;lt;Route path='/counter' component={Counter} /&amp;gt;&lt;br&gt;
        &amp;lt;Route path='/fetch-data' component={FetchData} /&amp;gt;&lt;br&gt;
      &amp;lt;/Layout&amp;gt;&lt;br&gt;
    );&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
Notice the export and default keywords before the class keyword.&lt;br&gt;
Important note&lt;br&gt;
The export keyword is used to export an item from a JavaScript module. The default keyword defines the export as the default export, which means it can be imported without curly braces. So, a default export can be imported as import App from './App' rather than import {App} from './App'.&lt;br&gt;
A method called render defines the output of the component. This method returns JSX, which in this case references a Layout component in our app code and a Route component from React Router.&lt;br&gt;
So, we are starting to understand how React components can be composed together to form a UI.&lt;br&gt;
Let's now start to go through the React development experience by making a simple change:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Run the app in Visual Studio by pressing F5 if it's not already running.&lt;/li&gt;
&lt;li&gt; Open up the Home.js file, which can be found at ClientApp\src\components. This contains the component that renders the home page.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;With the app still running, in the render method, change the h1 tag in the JSX to render a different string:&lt;br&gt;
render () {&lt;br&gt;
  return (&lt;br&gt;
    &amp;lt;div&amp;gt;&lt;br&gt;
      &amp;lt;h1&amp;gt;Hello, React!&amp;lt;/h1&amp;gt;&lt;br&gt;
      &amp;lt;p&amp;gt;Welcome to your new single-page application, built with:&amp;lt;/p&amp;gt;&lt;br&gt;
      ...&lt;br&gt;
    &amp;lt;/div&amp;gt;&lt;br&gt;
  );&lt;br&gt;
}&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the file and look at the running app:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fifn1420ark9xcwy555ld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fifn1420ark9xcwy555ld.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.17: Home page is automatically updated in the browser&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The app is automatically updated with our change: the Webpack development server refreshed the running app the moment the file was saved. Seeing our changes applied almost immediately makes for a really productive experience when developing our React frontend.&lt;/p&gt;

&lt;p&gt;&lt;b&gt; Understanding how components access the backend web API &lt;/b&gt;&lt;br&gt;
The final topic in this overview is how the React frontend consumes the backend web API. If the app isn't running, run it by pressing F5 in Visual Studio. If we click on the Fetch data option in the top navigation bar in the app that opens in the browser, we'll see a page showing weather forecasts: &lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgoin7bs43cnflnzlgytg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgoin7bs43cnflnzlgytg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1.18: Weather forecast data&lt;br&gt;
If we cast our minds back to the Understanding controllers section, we looked at the ASP.NET Core controller that surfaced a web API exposing this data at weatherforecast. So, this is a good opportunity to have a quick look at how a React app can call an ASP.NET Core web API.&lt;br&gt;
The component that renders this page is in FetchData.js. Let's open this file and look at the class constructor:&lt;br&gt;
constructor (props) {&lt;br&gt;
  super(props);&lt;br&gt;
  this.state = { forecasts: [], loading: true };&lt;br&gt;
}&lt;br&gt;
The constructor in a JavaScript class is a special method that automatically gets invoked when a class instance is created. So, it's a great place to initialize class-level variables.&lt;br&gt;
The constructor initializes some component state which contains the weather forecast data and a flag to indicate whether the data is being fetched. &lt;br&gt;
Let's have a look at the componentDidMount method:&lt;br&gt;
componentDidMount() {&lt;br&gt;
  this.populateWeatherData();&lt;br&gt;
}&lt;br&gt;
This method gets invoked by React when the component is inserted into the tree and is the perfect place to load data. This method calls a populateWeatherData method:&lt;br&gt;
async populateWeatherData() {&lt;br&gt;
  const response = await fetch('weatherforecast');&lt;br&gt;
  const data = await response.json();&lt;br&gt;
  this.setState({ forecasts: data, loading: false });&lt;br&gt;
}&lt;br&gt;
Notice the async keyword before the populateWeatherData function name. Notice also the await keywords within the function.&lt;br&gt;
Important note&lt;br&gt;
An await keyword is used to wait for an asynchronous function to complete. A function must be declared as asynchronous in order to use the await keyword within. This is done by placing an async keyword in front of the function name. This is very much like async and await in .NET.&lt;br&gt;
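As a minimal, self-contained sketch of the pattern (the names here are illustrative, not from the book's code):

```javascript
// delay returns a Promise that resolves with a value after ms milliseconds
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

// The async keyword lets us use await inside the function body
async function getGreeting() {
  // execution pauses here until the promise resolves, without blocking
  const word = await delay(10, 'Hello');
  return word + ', React!';
}

getGreeting().then(greeting => console.log(greeting)); // logs "Hello, React!"
```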
We can see that a function called fetch is used within this method.&lt;br&gt;
Important note&lt;br&gt;
The fetch function is a native JavaScript function for interacting with web APIs. The fetch function supersedes XMLHttpRequest and works a lot nicer with JSON-based web APIs.&lt;br&gt;
The parameter that's passed into the fetch function is the path to the web API resource: weatherforecast. A relative path can be used because the React app and web API are in the same origin.&lt;br&gt;
Once the weather forecast data has been fetched from the web API and the response has been parsed, the data is placed in the component's state.&lt;br&gt;
Hang on a minute, though: the native fetch function isn't implemented in Internet Explorer (IE). Does that mean our app won't work in IE? Well, CRA has set up a polyfill for fetch, so it works perfectly fine. &lt;br&gt;
Important note&lt;br&gt;
A polyfill is a piece of code that implements a feature we expect the browser to provide natively. Polyfills allow us to develop against features that aren't supported in all browsers yet.&lt;br&gt;
Let's now turn our attention to the render method:&lt;br&gt;
render () {&lt;br&gt;
  let contents = this.state.loading&lt;br&gt;
    ? &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Loading...&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;&lt;br&gt;
    : FetchData.renderForecastsTable(this.state.forecasts);&lt;br&gt;
&lt;br&gt;
  return (&lt;br&gt;
    &amp;lt;div&amp;gt;&lt;br&gt;
      &amp;lt;h1 id="tabelLabel"&amp;gt;Weather forecast&amp;lt;/h1&amp;gt;&lt;br&gt;
      &amp;lt;p&amp;gt;This component demonstrates fetching data from the server.&amp;lt;/p&amp;gt;&lt;br&gt;
      {contents}&lt;br&gt;
    &amp;lt;/div&amp;gt;&lt;br&gt;
  );&lt;br&gt;
}&lt;/p&gt;
The code may contain concepts you aren't yet familiar with, but don't worry if it doesn't all make sense at this point. &lt;br&gt;
We already know that the render method in a React component returns JSX, and we can see that JSX is returned in this render method as well. Notice the {contents} reference in the JSX, which injects the contents JavaScript variable into the markup below the p tag at the bottom of the div tag. The contents variable is set in the first statement of the render method: Loading... is displayed while the web API request is taking place, and the result of FetchData.renderForecastsTable is displayed when the request has finished. We'll have a quick look at that function now:&lt;br&gt;
&lt;p&gt;static renderForecastsTable (forecasts) {&lt;br&gt;
  return (&lt;br&gt;
    &amp;lt;table&amp;gt;&lt;br&gt;
      &amp;lt;thead&amp;gt;&lt;br&gt;
        &amp;lt;tr&amp;gt;&lt;br&gt;
          &amp;lt;th&amp;gt;Date&amp;lt;/th&amp;gt;&lt;br&gt;
          &amp;lt;th&amp;gt;Temp. (C)&amp;lt;/th&amp;gt;&lt;br&gt;
          &amp;lt;th&amp;gt;Temp. (F)&amp;lt;/th&amp;gt;&lt;br&gt;
          &amp;lt;th&amp;gt;Summary&amp;lt;/th&amp;gt;&lt;br&gt;
        &amp;lt;/tr&amp;gt;&lt;br&gt;
      &amp;lt;/thead&amp;gt;&lt;br&gt;
      &amp;lt;tbody&amp;gt;&lt;br&gt;
        {forecasts.map(forecast =&amp;gt;&lt;br&gt;
          &amp;lt;tr key={forecast.dateFormatted}&amp;gt;&lt;br&gt;
            &amp;lt;td&amp;gt;{forecast.dateFormatted}&amp;lt;/td&amp;gt;&lt;br&gt;
            &amp;lt;td&amp;gt;{forecast.temperatureC}&amp;lt;/td&amp;gt;&lt;br&gt;
            &amp;lt;td&amp;gt;{forecast.temperatureF}&amp;lt;/td&amp;gt;&lt;br&gt;
            &amp;lt;td&amp;gt;{forecast.summary}&amp;lt;/td&amp;gt;&lt;br&gt;
          &amp;lt;/tr&amp;gt;&lt;br&gt;
        )}&lt;br&gt;
      &amp;lt;/tbody&amp;gt;&lt;br&gt;
    &amp;lt;/table&amp;gt;&lt;br&gt;
  );&lt;br&gt;
}&lt;/p&gt;
This function returns JSX, which contains an HTML table with the data from the forecasts data array injected into it. The map method on the forecasts array is used to iterate through the items in the array and render tr tags in the HTML table containing the data.&lt;br&gt;
Important note&lt;br&gt;
The map method is a native JavaScript method that is available in an array. It takes in a function parameter that is called for each array element. The return values of the function calls then make up a new array. The map method is commonly used in JSX when iteration is needed.&lt;br&gt;
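Outside of JSX, the same pattern can be shown with a plain array (our own example, producing strings instead of tr elements):

```javascript
// A small array shaped like the forecast data
const forecasts = [
  { dateFormatted: '01/01/2021', temperatureC: 4 },
  { dateFormatted: '02/01/2021', temperatureC: 7 }
];

// map calls the arrow function once per element; the return values
// form a new array, just as JSX elements would in the render method
const rows = forecasts.map(forecast =>
  forecast.dateFormatted + ': ' + forecast.temperatureC + ' C'
);

console.log(rows[0]); // logs "01/01/2021: 4 C"
```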
Notice that we have applied a key attribute to each tr tag. What is this for? This isn't a standard attribute on an HTML table row, is it?&lt;br&gt;
Important note&lt;br&gt;
The key attribute helps React detect when an element changes, or is added or removed. So, it's not a standard HTML table row attribute. Where we output content in a loop, it is good practice to apply this attribute and set it to a unique value within the loop so that React can distinguish it from the other elements. Omitting keys can also lead to performance problems on large datasets as React will unnecessarily update the DOM when it doesn't need to.&lt;br&gt;
Summary&lt;br&gt;
We started off this overview by learning that all pages in an SPA are rendered in JavaScript with the help of a framework such as React, along with requests for data handled by a backend API with the help of a framework such as ASP.NET Core. We now understand that a class called Startup configures services that are used in the ASP.NET Core backend, as well as the request/response pipeline. Requests to specific backend API resources are handled by controller classes.&lt;br&gt;
We also saw how CRA was leveraged by the ASP.NET Core React template to create the React app. This tool did a huge amount of setup and configuration for us, including a development server, bundling, linting, and even key polyfills for IE. We learned that the React app lives in the ClientApp folder in an ASP.NET Core React templated project, with a file called index.html being the single page. A file called package.json defines key project information for the React app, including its dependencies and the tasks that are used to run and build the React app.&lt;br&gt;
Now you’ve seen how all the basic parts of an ASP.NET Core React app work together. Equipped with this knowledge, you are ready to embark on your own app-building adventure with React and ASP.NET Core.&lt;br&gt;
Further reading&lt;br&gt;
The following are some useful links so that you can learn more about the topics that were covered:&lt;br&gt;
• ASP.NET Core startup: &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/startup" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/aspnet/core/fundamentals/startup&lt;/a&gt;&lt;br&gt;
• ASP.NET Core web API controllers: &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/web-api" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/aspnet/core/web-api&lt;/a&gt;&lt;br&gt;
• Create React app: &lt;a href="https://facebook.github.io/create-react-app/" rel="noopener noreferrer"&gt;https://facebook.github.io/create-react-app/&lt;/a&gt;&lt;br&gt;
• Webpack development server: &lt;a href="https://webpack.js.org/configuration/dev-server/" rel="noopener noreferrer"&gt;https://webpack.js.org/configuration/dev-server/&lt;/a&gt;&lt;br&gt;
• npm: &lt;a href="https://docs.npmjs.com/" rel="noopener noreferrer"&gt;https://docs.npmjs.com/&lt;/a&gt;&lt;br&gt;
• JSX: &lt;a href="https://reactjs.org/docs/introducing-jsx.html" rel="noopener noreferrer"&gt;https://reactjs.org/docs/introducing-jsx.html&lt;/a&gt;&lt;br&gt;
• JavaScript module import: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import" rel="noopener noreferrer"&gt;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import&lt;/a&gt;&lt;br&gt;
• JavaScript module export: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/export" rel="noopener noreferrer"&gt;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/export&lt;/a&gt;&lt;br&gt;
• JavaScript fetch: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API" rel="noopener noreferrer"&gt;https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API&lt;/a&gt;&lt;br&gt;
• JavaScript array map: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map" rel="noopener noreferrer"&gt;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map&lt;/a&gt;&lt;br&gt;
• React lists and keys: &lt;a href="https://reactjs.org/docs/lists-and-keys.html" rel="noopener noreferrer"&gt;https://reactjs.org/docs/lists-and-keys.html&lt;/a&gt;

&lt;p&gt;Purchase your copy on &lt;a href="http://packt.live/2MMhB0g" rel="noopener noreferrer"&gt; Amazon.com &lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Azure Architect Certification - Exam Changes</title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Fri, 29 Jan 2021 06:40:44 +0000</pubDate>
      <link>https://dev.to/packtpub/azure-architect-certification-exam-changes-1cml</link>
      <guid>https://dev.to/packtpub/azure-architect-certification-exam-changes-1cml</guid>
      <description>&lt;p&gt;The Azure Architect certification is perhaps one of the most sought-after qualifications, certainly in Microsoft’s cloud space.  For the Azure Certification accreditation, you need to have passed TWO exams – AZ300 and AZ301.&lt;/p&gt;

&lt;p&gt;However, as Azure is an ever changing and improving platform, this means the requirements for certification must also evolve.  And many of Microsoft’s certifications have been updated to reflect the ever-changing world of technology.&lt;/p&gt;

&lt;p&gt;With this in mind, the AZ300 and AZ301 exams were updated in September 2020 and replaced by AZ303 and AZ304.&lt;/p&gt;

&lt;p&gt;The question that will come to most people’s minds is ‘what’s different?’&lt;/p&gt;

&lt;p&gt;&lt;i&gt;&lt;b&gt;AZ300 versus AZ303 Differences&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;

&lt;p&gt;The actual requirements are available on the Azure Certification AZ300 page, which currently shows BOTH sets of requirements.&lt;/p&gt;

&lt;p&gt;What’s interesting is that not much is actually different – in fact, more has been removed than added, with a greater focus on infrastructure, identity, and security, and much less on programming patterns and, to a certain extent, networking.&lt;/p&gt;

&lt;p&gt;Networking is perhaps the easiest to understand – configuring VPNs, Express Route, Virtual WAN and onsite connectivity have now all been removed – however VNETs, peering and routing are still there.  I suspect this is more about the fact that it is harder to practice and simulate these in lab environments than any real desire to deprecate the knowledge itself. &lt;/p&gt;

&lt;p&gt;However, I must admit, as a course creator and author this really has made things simpler!&lt;/p&gt;

&lt;p&gt;To me, though, the most notable change is the removal of message-based architecture knowledge. For AZ300, there were a number of requirements around the use of Event Hub, Service Bus, Event Grid, and autoscaling patterns (such as understanding singleton applications and transient state).&lt;/p&gt;

&lt;p&gt;I don’t think for one minute that this is because Microsoft no longer feels these are important. Rather, I believe it’s because the AZ300 exam is more about using and building components in Azure, and the message-based architecture requirements leaned much more towards software development patterns, which is why they are now covered in the associated design exam – AZ304: Microsoft Azure Architect Design.&lt;/p&gt;

&lt;p&gt;To help you prepare for the newly updated Microsoft accreditations, check out this book from Packt Publishing, authored by Brett Hargreaves and Sjoukje Zaal: &lt;a href="https://packt.live/3ouOawY"&gt;https://packt.live/3ouOawY&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Importing CSV Files into Pandas</title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Mon, 25 Jan 2021 10:47:54 +0000</pubDate>
      <link>https://dev.to/packtpub/importing-csv-files-into-pandas-4hk5</link>
      <guid>https://dev.to/packtpub/importing-csv-files-into-pandas-4hk5</guid>
      <description>&lt;p&gt;Scientific distributions of Python (Anaconda, WinPython, Canopy, and so on) provide analysts with an impressive range of data manipulation, exploration, and visualization tools. One important tool is pandas. Developed by Wes McKinney in 2008, but really gaining in popularity after 2012, pandas is now an essential library for data analysis in Python, along with other popular packages such as numpy, matplotlib, and scipy.&lt;/p&gt;

&lt;p&gt;A key pandas object is the data frame, which represents data as a tabular structure, with rows and columns. In this way, it is similar to the other data stores used by a variety of other applications, including spreadsheets, databases and statistical packages. However, a pandas data frame also has indexing functionality that makes selecting, combining, and transforming data relatively straightforward. In this tutorial we show you how to import data from CSV files.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Technical requirements&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The code and notebooks described here are available on GitHub at &lt;a href="https://github.com/PacktPublishing/Python-Data-Cleaning-Cookbook"&gt;https://github.com/PacktPublishing/Python-Data-Cleaning-Cookbook&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Importing CSV files&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The read_csv method of the pandas library can be used to read a file with comma-separated values (CSV) and load it into memory as a pandas data frame. In the following recipe, we read a CSV file and address some common issues: creating column names that make sense to us, parsing dates, and dropping rows with critical data missing.&lt;/p&gt;

&lt;p&gt;Raw data is often stored as CSV files. These files have a line break at the end of each line of data to demarcate a row, and a comma between each data value to delineate columns. Something other than a comma, such as a tab, can be used as the delimiter. Quotation marks may be placed around values, which is helpful when the delimiter occurs naturally within certain values, as sometimes happens with commas.&lt;/p&gt;

&lt;p&gt;All data in a CSV file are text characters, regardless of the logical data type. This is why it is easy to view a CSV file, presuming it is not too large, in a text editor. The pandas read_csv method will make an educated guess about the data type of each column, but you will need to help it along to ensure that these guesses are on the mark.&lt;/p&gt;
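&lt;p&gt;To see the kind of guesses read_csv makes, and how to steer them, here is a minimal sketch; the tiny two-row sample is made up for illustration:&lt;/p&gt;

```python
import io

import pandas as pd

# A small CSV: everything below is text until read_csv infers types.
csv_text = """stationid,year,avgtemp
USS0010K01S,2000,5.27
CI000085406,1940,18.04
"""

# Left to its own devices, read_csv guesses int64 for year
# and float64 for avgtemp.
guessed = pd.read_csv(io.StringIO(csv_text))
print(guessed.dtypes)

# The dtype parameter overrides a guess, for example keeping year
# as text (useful for codes with leading zeros).
explicit = pd.read_csv(io.StringIO(csv_text), dtype={'year': str})
print(explicit['year'].dtype)  # object
```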

&lt;p&gt;&lt;b&gt;Getting ready&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Create a folder for this tutorial, and create a new Python script or Jupyter Notebook file in that folder. Create a data subfolder and place the landtempssample.csv file (found in the data subfolder of the Chapter01 folder of the GitHub repo mentioned above) in it. Alternatively, you could retrieve all the files from the GitHub repository. Here are the first few rows of the CSV file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;locationid,year,month,temp,latitude,longitude,stnelev,station,countryid,country&lt;br&gt;
USS0010K01S,2000,4,5.27,39.9,-110.75,2773.7,INDIAN_CANYON,US,United States&lt;br&gt;
CI000085406,1940,5,18.04,-18.35,-70.333,58.0,ARICA,CI,Chile&lt;br&gt;
USC00036376,2013,12,6.22,34.3703,91.1242,61.0,SAINT_CHARLES,US,United States&lt;br&gt;
ASN00024002,1963,2,22.93,-34.2833,140.6,65.5,BERRI_IRRIGATION,AS,Australia&lt;br&gt;
ASN00028007,2001,11,,-14.7803,143.5036,79.4,MUSGRAVE,AS,Australia&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;b&gt;Note&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;This dataset, taken from the Global Historical Climatology Network integrated database, is made available for public use by the United States National Oceanic and Atmospheric Administration at &lt;a href="https://www.ncdc.noaa.gov/data-access/land-based-station-data/land-baseddatasets/global-historical-climatology-networkmonthly-version-4"&gt;https://www.ncdc.noaa.gov/data-access/land-based-station-data/land-baseddatasets/global-historical-climatology-networkmonthly-version-4&lt;/a&gt;. This is just a 100,000-row sample of the full dataset, which is also available in the repository.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;How to do it…&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;We will import a CSV file into pandas, taking advantage of some very useful read_csv options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the pandas library and set up the environment to make viewing the output easier:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; import pandas as pd&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; pd.options.display.float_format = '{:,.2f}'.format&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; pd.set_option('display.width', 85)&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; pd.set_option('display.max_columns', 8)&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the data file, set new names for the headings, and parse the date columns. Pass an argument of 1 to the skiprows parameter to skip the first row, pass a list of columns to parse_dates to create a pandas datetime column from those columns, and set low_memory to False so that pandas reads the whole file in one pass when inferring column data types (this uses more memory during import, but avoids mixed-type guesses):&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; landtemps = pd.read_csv('data/landtempssample.csv',&lt;br&gt;
...     names=['stationid','year','month','avgtemp','latitude',&lt;br&gt;
...       'longitude','elevation','station','countryid','country'],&lt;br&gt;
...     skiprows=1,&lt;br&gt;
...     parse_dates=[['month','year']],&lt;br&gt;
...     low_memory=False)&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; type(landtemps)&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get a quick glimpse of the data. View the first few rows. Show the data type for all columns, as well as the number of rows and columns:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; landtemps.head(7)&lt;br&gt;
    month_year stationid    ...     countryid   country&lt;br&gt;
 0  2000-04-01 USS0010K01S  ...     US      United States&lt;br&gt;
 1  1940-05-01 CI000085406  ...     CI      Chile&lt;br&gt;
 2  2013-12-01 USC00036376  ...     US      United States&lt;br&gt;
 3  1963-02-01 ASN00024002  ...     AS      Australia&lt;br&gt;
 4  2001-11-01 ASN00028007  ...     AS      Australia&lt;br&gt;
 5  1991-04-01 USW00024151  ...     US      United States&lt;br&gt;
 6  1993-12-01 RSM00022641  ...     RS      Russia&lt;br&gt;
[7 rows x 9 columns]&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; landtemps.dtypes&lt;br&gt;
month_year    datetime64[ns]&lt;br&gt;
stationid             object&lt;br&gt;
avgtemp              float64&lt;br&gt;
latitude             float64&lt;br&gt;
longitude            float64&lt;br&gt;
elevation            float64&lt;br&gt;
station               object&lt;br&gt;
countryid             object&lt;br&gt;
country               object&lt;br&gt;
dtype: object&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; landtemps.shape&lt;br&gt;
(100000, 9)&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give the date column a better name and view the summary statistics for average monthly temperature:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; landtemps.rename(columns={'month_year':'measuredate'}, inplace=True)&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; landtemps.dtypes&lt;br&gt;
measuredate   datetime64[ns]&lt;br&gt;
stationid             object&lt;br&gt;
avgtemp              float64&lt;br&gt;
latitude             float64&lt;br&gt;
longitude            float64&lt;br&gt;
elevation            float64&lt;br&gt;
station               object&lt;br&gt;
countryid             object&lt;br&gt;
country               object&lt;br&gt;
dtype: object&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; landtemps.avgtemp.describe()&lt;br&gt;
count  85,554.00&lt;br&gt;
mean       10.92&lt;br&gt;
std        11.52&lt;br&gt;
min       -70.70&lt;br&gt;
25%         3.46&lt;br&gt;
50%        12.22&lt;br&gt;
75%        19.57&lt;br&gt;
max        39.95&lt;br&gt;
Name: avgtemp, dtype: float64&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Look for missing values in each column. Use isnull, which returns True for each value that is missing and False otherwise, and chain it with sum to count the missing values per column. (When working with Boolean values, sum treats True as 1 and False as 0; method chaining is discussed in There's more... below.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; landtemps.isnull().sum()&lt;br&gt;
measuredate        0&lt;br&gt;
stationid          0&lt;br&gt;
avgtemp        14446&lt;br&gt;
latitude           0&lt;br&gt;
longitude          0&lt;br&gt;
elevation          0&lt;br&gt;
station            0&lt;br&gt;
countryid          0&lt;br&gt;
country            5&lt;br&gt;
dtype: int64&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Remove rows with missing data for avgtemp. Use the subset parameter to tell dropna to drop rows where avgtemp is missing, and set inplace to True. (Leaving inplace at its default of False would display the resulting data frame, but the changes would not be retained.) Use the shape attribute to get the number of rows and columns:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; landtemps.dropna(subset=['avgtemp'], inplace=True)&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; landtemps.shape&lt;br&gt;
(85554, 9)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That's it! Importing CSV files into pandas is as simple as that.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;How it works...&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;In &lt;b&gt;step 1&lt;/b&gt; we import the pandas library, and refer to it as pd to make it easier to reference later. This is customary. We also use float_format to display float values in a readable way and set_option to make the terminal output wide enough to accommodate the number of variables.&lt;/p&gt;

&lt;p&gt;Much of the work is done by the first line in &lt;b&gt;step 2&lt;/b&gt;. We use read_csv to load a pandas data frame in memory and call it landtemps. In addition to passing a filename, we set the names parameter to a list of our preferred column headings. We also tell read_csv to skip the first row, by setting skiprows to 1, since the original column headings are in the first row of the CSV file. If we do not tell it to skip the first row, read_csv will treat the header row in the file as actual data.&lt;/p&gt;

&lt;p&gt;read_csv also solves a date conversion issue for us. We use the parse_dates parameter to ask it to convert the month and year columns to a date value.&lt;/p&gt;
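&lt;p&gt;The same combination can be done by hand with pd.to_datetime, which is worth knowing since newer pandas versions steer you away from the list-of-lists form of parse_dates. A small sketch with made-up rows:&lt;/p&gt;

```python
import io

import pandas as pd

csv_text = """year,month,avgtemp
2000,4,5.27
1940,5,18.04
"""

df = pd.read_csv(io.StringIO(csv_text))

# Assemble one datetime column from the year and month columns,
# defaulting the day to the first of the month.
df['month_year'] = pd.to_datetime(df[['year', 'month']].assign(day=1))
print(df['month_year'].dtype)  # datetime64[ns]
```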

&lt;p&gt;&lt;b&gt;Step 3&lt;/b&gt; runs through a few standard data checks. We use head(7) to print out all columns for the first 7 rows. We use the dtypes attribute of the data frame to show the data type of all columns. Each column has the expected data type. In pandas, character data has the object data type, a data type that allows for mixed values. shape returns a tuple, whose first element is the number of rows in the data frame (100,000 in this case) and whose second element is the number of columns (9).&lt;/p&gt;

&lt;p&gt;When we used read_csv to parse the month and year columns, it gave the resulting column the name month_year. We use the rename method in &lt;b&gt;step 4&lt;/b&gt; to give that column a better name. We need to specify inplace=True to replace the old column name with the new column name in memory. The describe method provides summary statistics on the avgtemp column.&lt;br&gt;
Notice that the count for avgtemp indicates that there are 85,554 rows that have valid values for avgtemp. This is out of 100,000 rows for the whole data frame, as provided by the shape attribute. The listing of missing values for each column in &lt;b&gt;step 5&lt;/b&gt; (landtemps.isnull().sum()) confirms this: 100,000 – 85,554 = 14,446.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Step 6&lt;/b&gt; drops all rows where avgtemp is NaN. (The NaN value, not a number, is the pandas representation of missing values.) subset is used to indicate which column to check for missings. The shape attribute for landtemps now indicates that there are 85,554 rows, which is what we would expect given the previous count from describe.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;There's more...&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;If the file you are reading uses a delimiter other than a comma, such as a tab, this can be specified in the sep parameter of read_csv. When the pandas data frame was created, an index was also created: the numbers to the far left of the output from head are index values. Any number of rows can be specified for head; the default value is 5.&lt;/p&gt;
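&lt;p&gt;For instance, the same kind of data in tab-separated form could be read like this (a small sketch with made-up rows, not the recipe's actual file):&lt;/p&gt;

```python
import io

import pandas as pd

tsv_text = "stationid\tavgtemp\nUSS0010K01S\t5.27\nCI000085406\t18.04\n"

# sep tells read_csv which delimiter separates the columns.
temps = pd.read_csv(io.StringIO(tsv_text), sep='\t')
print(temps.shape)  # (2, 2)
```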

&lt;p&gt;Leaving low_memory at its default value of True causes read_csv to parse the data in chunks. This is easier on systems with less memory when working with larger files, but it can produce inconsistent data type guesses. Setting low_memory to False makes read_csv examine the whole file before settling on column data types. Either way, the full data frame is loaded into memory once read_csv completes successfully.&lt;/p&gt;

&lt;p&gt;The landtemps.isnull().sum() statement is an example of chaining methods. First, isnull returns a data frame of True and False values, resulting from testing whether each column value is null. sum takes that data frame and sums the True values for each column, interpreting the True values as 1 and the False values as 0. We would have obtained the same result if we had used the following two steps:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; checknull = landtemps.isnull()&lt;br&gt;
&amp;gt;&amp;gt;&amp;gt; checknull.sum()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;There’s no hard and fast rule for when to chain methods and when not to. I find it helpful to chain when I think of what I am doing as conceptually a single step, even if it is two or more steps mechanically speaking. Chaining also has the side benefit of not creating extra objects that I might not need.&lt;/p&gt;

&lt;p&gt;The dataset used in this recipe is just a sample from the full land temperatures database with almost 17 million records. You can run the larger file if your machine can handle it, with the following code:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; landtemps = pd.read_csv('data/landtemps.zip',&lt;br&gt;
...     compression='zip',&lt;br&gt;
...     names=['stationid','year','month','avgtemp','latitude',&lt;br&gt;
...       'longitude','elevation','station','countryid','country'],&lt;br&gt;
...     skiprows=1,&lt;br&gt;
...     parse_dates=[['month','year']],&lt;br&gt;
...     low_memory=False)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;read_csv can read a compressed ZIP file. We get it to do this by passing the name of the ZIP file and the type of compression.&lt;/p&gt;
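&lt;p&gt;A self-contained sketch of the same idea, writing a throwaway single-file archive to a temporary directory and reading it back:&lt;/p&gt;

```python
import os
import tempfile
import zipfile

import pandas as pd

# Write a tiny CSV into a ZIP archive, then let read_csv decompress it.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, 'temps.zip')
with zipfile.ZipFile(zip_path, 'w') as zf:
    zf.writestr('temps.csv', 'stationid,avgtemp\nUSS0010K01S,5.27\n')

# compression='zip' expects the archive to contain a single file.
temps = pd.read_csv(zip_path, compression='zip')
print(temps.shape)  # (1, 2)
```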

&lt;p&gt;&lt;b&gt;Points to note&lt;/b&gt;&lt;br&gt;
A significant amount of reshaping of the Global Historical Climatology Network raw data was performed before using it in this recipe. &lt;/p&gt;

&lt;p&gt;You can read a text file that is not delimited, one with fixed-width fields, by using the pandas read_fwf function.&lt;/p&gt;
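&lt;p&gt;A minimal sketch of read_fwf, using made-up fixed-width rows; colspecs gives the half-open (start, end) character positions of each field:&lt;/p&gt;

```python
import io

import pandas as pd

# Two fixed-width rows: characters 0-10 hold the station id,
# characters 12-16 hold the temperature.
fwf_text = (
    "USS0010K01S  5.27\n"
    "CI000085406 18.04\n"
)

stations = pd.read_fwf(
    io.StringIO(fwf_text),
    colspecs=[(0, 11), (12, 17)],
    names=['stationid', 'avgtemp'],
)
print(stations.dtypes)
```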

&lt;p&gt;Purchase your copy on &lt;a href="http://packt.live/39ezuhm"&gt; Amazon &lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Testing Made Easy with ASP.NET Core</title>
      <dc:creator>Packt</dc:creator>
      <pubDate>Fri, 15 Jan 2021 06:33:48 +0000</pubDate>
      <link>https://dev.to/packt/testing-made-easy-with-asp-net-core-34ai</link>
      <guid>https://dev.to/packt/testing-made-easy-with-asp-net-core-34ai</guid>
      <description>&lt;p&gt;It’s no secret that automated testing can be immensely helpful in the creation of better software. Perhaps less widely appreciated, however, are the features that ASP.NET Core offers to support testing. Here we discuss some of them, and see how enjoyable they make testing ASP.NET Core applications compared to old ASP.NET MVC, for instance. But first let’s review some of the different types of test we might wish to automate. &lt;/p&gt;

&lt;p&gt;Automated testing and how it applies to ASP.NET Core&lt;br&gt;
Testing is an integral part of the development process, and automated testing becomes crucial in the long run. You can always run your ASP.NET website, open a browser, and click everywhere to test your features. That's legit, but it is harder to test individual units of code that way. Another downside is the lack of automation. When you first start with a small app containing a few pages, a few endpoints, or a few features, it may be fast to manually run those tests. However, as your app grows, it takes longer, and the likelihood of making a mistake increases. Granted, you need real users to test out your applications, but you may want those tests to focus on the UX, the features' content, or on some experimental features that you are building, rather than on bug reports that automated tests could have caught early on.&lt;br&gt;
There are multiple types of test, and developers are very creative at finding new ways to test things. Three broad categories of automated test, however, are:&lt;br&gt;
• Unit tests&lt;br&gt;
• Integration tests&lt;br&gt;
• Functional tests&lt;br&gt;
Unit tests focus on individual units, like testing the outcome of a method. Unit tests should be fast and should not rely on infrastructure such as a database. These are the kind of test that you want the most, because they run fast. Each one should test a precise code path. They should also help you design your application better because you use your code in the tests, so you are becoming your first customer (or first consumer), leading you to find some design flaws and make your classes better. (If you don’t like how you are using your system in your tests, that’s a good indicator that nobody else will.)&lt;/p&gt;

&lt;p&gt;Integration tests focus on interactions between components. Integration tests often require some infrastructure, such as a database, to interact with, which can make them slower to run. You want integration tests, but as a rule of thumb (to which there are sometimes exceptions) you want fewer of them than unit tests. &lt;br&gt;
Functional tests focus on application-wide behaviors, like what happens when a user clicks on a specific button, navigates to a specific page, posts a form, or sends a PUT request to some Web API endpoint. Functional tests focus on testing the whole application from the user’s perspective, from a functional point of view. Usually, functional tests should be run in-memory, using an in-memory database or other resource. This helps to speed things up, but you could run end-to-end (e2e) tests on real infrastructure as well in order to test your application and your deployment.&lt;/p&gt;

&lt;p&gt;There are other types of automated test, and some sub-genres, as we could call them. For example, we could do load testing, performance testing, end-to-end testing, regression testing, contract testing, penetration testing, and more. You can automate tests for almost anything that you want to validate, but some kinds of tests are harder to automate, or more fragile, than others, such as UI tests. That said, if you can automate a test in a reasonable timeframe: do it! In the long run it should pay off.&lt;br&gt;
One more thing: don’t blindly rely on metrics like code coverage. Those metrics make for cute badges in your GitHub project’s readme.md file, but can lead you off track into writing useless tests. Sure, code coverage is a great metric when used correctly, but remember that one good test can be better than a bad test suite covering 100% of your codebase. &lt;br&gt;
Writing good tests is not easy and only comes with practice. &lt;br&gt;
Note&lt;br&gt;
Keep your test suite healthy by periodically adding missing test cases and removing obsolete or useless tests. Think in terms of use-case coverage, not about how many lines of code are covered by your tests.&lt;br&gt;
Testing made easy through ASP.NET Core&lt;/p&gt;

&lt;p&gt;The ASP.NET Core team has made our life easier by designing ASP.NET Core for testability; in general, testing is way easier than before the ASP.NET Core era. Internally, they use xUnit to test .NET 5 (and .NET Core before it) and EF Core. And xUnit happens to be my favorite testing framework; what a happy coincidence! &lt;br&gt;
Creating an xUnit test project&lt;br&gt;
To create a new xUnit test project, you can run the dotnet new xunit command, and the CLI does the job for you by creating a project containing a UnitTest1 class. That command does the same as creating a new xUnit project from Visual Studio, like this:&lt;/p&gt;

&lt;p&gt;Figure 1: Create a xUnit project&lt;br&gt;
Basic features of xUnit&lt;br&gt;
In xUnit, the [Fact] attribute is the way to create unique test cases, while the [Theory] attribute is the way to create data-driven test cases.&lt;br&gt;
Any method with no parameter can become a test method by decorating it with a [Fact] attribute, like this:&lt;br&gt;
public class FactTest&lt;br&gt;
{&lt;br&gt;
    [Fact]&lt;br&gt;
    public void Should_be_equal()&lt;br&gt;
    {&lt;br&gt;
        var expectedValue = 2;&lt;br&gt;
        var actualValue = 2;&lt;br&gt;
        Assert.Equal(expectedValue, actualValue);&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
From the Visual Studio Test Explorer, that fact looks like:&lt;/p&gt;

&lt;p&gt;Figure 2: Tests result&lt;br&gt;
Then, for more complex test cases, xUnit offers three options to define the data that a [Theory] should use: [InlineData], [MemberData], and [ClassData]. You are not limited to only one; you can use as many as it suits you to feed a theory with data. You must make sure that the number of values matches the number of parameters defined in the test method.&lt;br&gt;
[InlineData] is the most suitable for constant, literal values, like this:&lt;br&gt;
public class InlineDataTest&lt;br&gt;
{&lt;br&gt;
    [Theory]&lt;br&gt;
    [InlineData(1, 1)]&lt;br&gt;
    [InlineData(2, 2)]&lt;br&gt;
    [InlineData(5, 5)]&lt;br&gt;
    public void Should_be_equal(int value1, int value2)&lt;br&gt;
    {&lt;br&gt;
        Assert.Equal(value1, value2);&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
That yields three test cases in the test explorer, where each can pass or fail individually.&lt;/p&gt;

&lt;p&gt;Figure 3: Tests result&lt;br&gt;
[MemberData] and [ClassData] can be used to simplify the declaration of the test method, to reuse the data in multiple test methods, or to encapsulate the data away from the test class. Here are a few examples of [MemberData] usage:&lt;br&gt;
public class MemberDataTest&lt;br&gt;
{&lt;br&gt;
    public static IEnumerable&amp;lt;object[]&amp;gt; Data =&amp;gt; new[]&lt;br&gt;
    {&lt;br&gt;
        new object[] { 1, 2, false },&lt;br&gt;
        new object[] { 2, 2, true },&lt;br&gt;
        new object[] { 3, 3, true },&lt;br&gt;
    };&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static TheoryData&amp;lt;int, int, bool&amp;gt; TypedData =&amp;gt; new TheoryData&amp;lt;int, int, bool&amp;gt;
{
    { 3, 2, false },
    { 2, 3, false },
    { 5, 5, true },
};

[Theory]
[MemberData(nameof(Data))]
[MemberData(nameof(TypedData))]
[MemberData(nameof(ExternalData.GetData), 10, MemberType = typeof(ExternalData))]
[MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))]
public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
{
    if (shouldBeEqual)
    {
        Assert.Equal(value1, value2);
    }
    else
    {
        Assert.NotEqual(value1, value2);
    }
}

public class ExternalData
{
    public static IEnumerable&amp;lt;object[]&amp;gt; GetData(int start) =&amp;gt; new[]
    {
        new object[] { start, start, true },
        new object[] { start, start + 1, false },
        new object[] { start + 1, start + 1, true },
    };

    public static TheoryData&amp;lt;int, int, bool&amp;gt; TypedData =&amp;gt; new TheoryData&amp;lt;int, int, bool&amp;gt;
    {
        { 20, 30, false },
        { 40, 50, false },
        { 50, 50, true },
    };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;br&gt;
That test case should yield 12 results. If we break that down, the code starts by loading three sets of data from the IEnumerable Data property by decorating the test method with the [MemberData(nameof(Data))] attribute. &lt;br&gt;
Then, to make it clearer, we replace the IEnumerable with a TheoryData&amp;lt;…&amp;gt; class, making it more readable; this is my preferred way of defining member data. We feed those three sets of data to the test method by decorating it with the [MemberData(nameof(TypedData))] attribute. &lt;br&gt;
Then, three more sets of data are passed to the test method. Those originate from a method on an external type taking 10 as an argument (the start parameter). We specify the MemberType where the method is located, so xUnit knows where to look, which is represented by the [MemberData(nameof(ExternalData.GetData), 10, MemberType = typeof(ExternalData))] attribute.&lt;br&gt;
Finally, we are doing the same for the ExternalData.TypedData property, which is represented by the [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))] attribute.&lt;br&gt;
When running the tests, those [MemberData] attributes yield the following result in the test explorer:&lt;/p&gt;

&lt;p&gt;Figure 4: Tests result&lt;br&gt;
These are only a few examples of what we can do with the [MemberData] attribute.&lt;br&gt;
Now for the [ClassData] attribute: it gets its data from a class implementing IEnumerable&amp;lt;object[]&amp;gt; or inheriting from TheoryData&amp;lt;…&amp;gt;. Here is an example:&lt;br&gt;
public class ClassDataTest&lt;br&gt;
{&lt;br&gt;
    [Theory]&lt;br&gt;
    [ClassData(typeof(TheoryDataClass))]&lt;br&gt;
    [ClassData(typeof(TheoryTypedDataClass))]&lt;br&gt;
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)&lt;br&gt;
    {&lt;br&gt;
        if (shouldBeEqual)&lt;br&gt;
        {&lt;br&gt;
            Assert.Equal(value1, value2);&lt;br&gt;
        }&lt;br&gt;
        else&lt;br&gt;
        {&lt;br&gt;
            Assert.NotEqual(value1, value2);&lt;br&gt;
        }&lt;br&gt;
    }&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class TheoryDataClass : IEnumerable&amp;lt;object[]&amp;gt;
{
    public IEnumerator&amp;lt;object[]&amp;gt; GetEnumerator()
    {
        yield return new object[] { 1, 2, false };
        yield return new object[] { 2, 2, true };
        yield return new object[] { 3, 3, true };
    }

    IEnumerator IEnumerable.GetEnumerator() =&amp;gt; GetEnumerator();
}

public class TheoryTypedDataClass : TheoryData&amp;lt;int, int, bool&amp;gt;
{
    public TheoryTypedDataClass()
    {
        Add(102, 104, false);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;br&gt;
These are very similar to [MemberData], but instead of pointing to a member, we point to a type. Here is the result in the test explorer:&lt;/p&gt;

&lt;p&gt;Figure 5: Tests result&lt;br&gt;
Now that [Fact] and [Theory] are out of the way, xUnit offers test fixtures to allow developers to inject dependencies into a test class constructor, as parameters. Fixtures allow those dependencies to be reused by all of the test methods of a test class by implementing the IClassFixture interface. That is very helpful for costly dependencies like accessing a database: created once, reused many times.&lt;br&gt;
You can also share a fixture (a dependency) between multiple test classes by using ICollectionFixture, [Collection], and [CollectionDefinition] instead. We won’t get too deep into the details here, but you will know where to look if you ever need something similar.&lt;br&gt;
Finally, if you’ve worked with other testing frameworks, you might have encountered Setup and Teardown methods. In xUnit, there are no particular Attribute or mechanisms to handle setup and teardown code. Instead, xUnit uses existing OOP concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  To set up your tests, use the class constructor.&lt;/li&gt;
&lt;li&gt;  To tear down (clean up) your tests, implement IDisposable and dispose of your resources there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it! xUnit is very simple, yet powerful, which is the main reason why I adopted it as my main testing framework several years ago. Now let’s see how to organize those tests.&lt;br&gt;
Organizing tests&lt;br&gt;
There are many ways of organizing test projects inside a solution. One that I like is to create a unit test project for each project in the solution, one or more integration test projects, and a single functional test project. Since most of the time we have more unit tests, and the unit tests are directly related to single units of code, it makes sense to organize them in a one-to-one relationship. Then we should have fewer integration and functional tests, so a single project for each should be enough to keep our code organized.&lt;br&gt;
Note&lt;br&gt;
Some people may recommend creating a single unit test project per solution instead of one per project. This approach could save discovery time. I think that for most solutions, it is only a matter of preference. If you need performance and find a better way for yours, by all means, use that approach instead! That said, I find that one unit test project per assembly is more portable and easier to navigate.&lt;br&gt;
Most of the time, at the solution level, I create my application and its related libraries in an src directory, and I create my test projects in a test directory, like this:&lt;/p&gt;

&lt;p&gt;Figure 6: The Automated Testing solution explorer, displaying how the projects are organized&lt;br&gt;
In my Automated Testing solution, I don't have any integration tests, so I haven't created an integration test project. I could have named one IntegrationTests or MyApp.IntegrationTests, depending on my approach.&lt;br&gt;
One more detail that I’ve found helps keep tests and code perfectly aligned is to create unit tests in the same namespace as the subject under test. To make this easier in Visual Studio, you can change the default namespace used when creating a new class in your test project by adding a RootNamespace element, set to the namespace of the project under test, to a PropertyGroup of the test project file (*.csproj), like this:&lt;br&gt;
&amp;lt;PropertyGroup&amp;gt;&lt;br&gt;
  &amp;lt;TargetFramework&amp;gt;net5.0&amp;lt;/TargetFramework&amp;gt;&lt;br&gt;
  &amp;lt;IsPackable&amp;gt;false&amp;lt;/IsPackable&amp;gt;&lt;br&gt;
  &amp;lt;RootNamespace&amp;gt;MyApp&amp;lt;/RootNamespace&amp;gt;&lt;br&gt;
&amp;lt;/PropertyGroup&amp;gt;&lt;br&gt;
Then I name my test classes [class under test]Test.cs and create them in the same directory as in the original project, like this:&lt;/p&gt;

&lt;p&gt;Figure 7: The Automated Testing solution explorer, displaying how tests are organized&lt;br&gt;
Finding tests is trivial when you follow that simple rule. Sometimes it is not possible to do this for integration or functional tests; in those cases, use your specifications to help you create clear naming conventions that make sense for your tests. Remember, we are testing use cases.&lt;br&gt;
Finally, for each class, I nest one test class per method, each inheriting from the outer test class, and I create my test cases inside it using the [Fact] and [Theory] attributes. This helps organize tests efficiently by method, ending with a test hierarchy like this:&lt;br&gt;
namespace MyApp.Controllers&lt;br&gt;
{&lt;br&gt;
    public class ValuesControllerTest&lt;br&gt;
    {&lt;br&gt;
        public class Get : ValuesControllerTest&lt;br&gt;
        {&lt;br&gt;
            [Fact]&lt;br&gt;
            public void Should_return_the_expected_strings()&lt;br&gt;
            {&lt;br&gt;
                // Arrange&lt;br&gt;
                var sut = new ValuesController();&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            // Act
            var result = sut.Get();

            // Assert
            Assert.Collection(result.Value,
                x =&amp;gt; Assert.Equal("value1", x),
                x =&amp;gt; Assert.Equal("value2", x)
            );
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;br&gt;
That technique allows you to set up tests step by step. For example, you can create top-level private mocks; then, for each method, you can modify the setup or create other private test elements; then you can do the same per test case, inside the test method. Don’t go too hard on reusability, however; it can make tests hard to follow for an external eye, such as a reviewer or another developer who needs to work there. Unit tests should remain clear, small, and easy to read.&lt;br&gt;
How is it easier?&lt;br&gt;
Microsoft built .NET Core (now .NET 5) from the ground up, fixing and improving so many things, including testability, that I cannot enumerate them all here. Not everything is perfect, but it is way better than it ever was.&lt;br&gt;
Let’s start by talking about the Program and the Startup classes. Those two classes are the place to define how the application boots and its composition. Based on that model, the ASP.NET Core team created a test server class that allows you to run your application in memory. &lt;br&gt;
They also added WebApplicationFactory&amp;lt;TEntryPoint&amp;gt; in .NET Core 2.1 to make integration and functional testing even easier than before. With that class, you can boot up your ASP.NET Core application in-memory and query it with the supplied HttpClient, all in only a few lines of code. There are extension points to configure it, such as replacing implementations with mocks, stubs, or any other test-specific elements that you may require. TEntryPoint should be the Startup or Program class of your project under test.&lt;br&gt;
I created a few test cases in the Automated Testing project that exposes this functionality:&lt;br&gt;
namespace FunctionalTests.Controllers&lt;br&gt;
{&lt;br&gt;
    public class ValuesControllerTest : IClassFixture&amp;lt;WebApplicationFactory&amp;lt;Startup&amp;gt;&amp;gt;&lt;br&gt;
    {&lt;br&gt;
        private readonly HttpClient _httpClient;&lt;br&gt;
        public ValuesControllerTest(WebApplicationFactory&amp;lt;Startup&amp;gt; webApplicationFactory)&lt;br&gt;
        {&lt;br&gt;
            _httpClient = webApplicationFactory.CreateClient();&lt;br&gt;
        }&lt;br&gt;
Here we are injecting a WebApplicationFactory into the constructor by implementing the IClassFixture interface. We could use the factory to configure the test server, but since that was not required here, we simply keep a reference to the HttpClient that is configured to connect to the in-memory test server running the application.&lt;br&gt;
        public class Get : ValuesControllerTest&lt;br&gt;
        {&lt;br&gt;
            public Get(WebApplicationFactory webApplicationFactory) : base(webApplicationFactory) { }&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Act
            var result = await _httpClient.GetAsync("/api/values");

            // Assert
            Assert.Equal(HttpStatusCode.OK, result.StatusCode);
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the test case above, we use the test HttpClient to query the &lt;a href="http://localhost/api/values"&gt;http://localhost/api/values&lt;/a&gt; URI, which is accessible through the in-memory server. Then we test the status code of the HTTP response to make sure it was a success (200 OK).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            [Fact]
            public async Task Should_respond_the_expected_strings()
            {
                // Act
                var result = await _httpClient.GetAsync("/api/values");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            // Assert
            var contentText = await result.Content.ReadAsStringAsync();
            var content = JsonSerializer.Deserialize&amp;lt;string[]&amp;gt;(contentText);
            Assert.Collection(content,
                x =&amp;gt; Assert.Equal("value1", x),
                x =&amp;gt; Assert.Equal("value2", x)
            );
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This last test does the same, but it deserializes the body's content as a string[] to make sure the values are the same as expected. If you have worked with an HttpClient before, this should look very familiar.&lt;br&gt;
When running those tests, an in-memory web server starts; then, HTTP requests are sent to that server, testing the full application. In that case, the tests are trivial, but you can create more complex test cases as well.&lt;br&gt;
You can run .NET 5 tests from Visual Studio or from the CLI by running the dotnet test command. In VS Code, you can use the CLI or find an extension to help you with test runs.&lt;br&gt;
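The CLI workflow can be sketched as follows; the project path below is hypothetical, while --filter is a standard dotnet test option:&lt;br&gt;

```shell
# Run every test discovered in the current solution or project.
dotnet test

# Run only one test project (this path is illustrative).
dotnet test FunctionalTests/FunctionalTests.csproj

# Run a single test class using the standard --filter option.
dotnet test --filter "FullyQualifiedName~ValuesControllerTest"
```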
Conclusion&lt;br&gt;
Automated testing is an indispensable element of modern software development, and an effective test regime must encompass a wide variety of test types. Here we looked at three of the fundamental test categories: unit tests, integration tests, and functional tests. We also had a quick look at the xUnit testing framework, which provides a highly effective mechanism for both implementing and organizing tests. Finally, we saw how ASP.NET Core, by allowing us to mount and run our application in memory, makes it easier than ever to test our web applications.&lt;/p&gt;

&lt;p&gt;Purchase your copy on &lt;a href="https://packt.live/3isEKAK"&gt; Amazon.com &lt;/a&gt; &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
