<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hammad Tahir</title>
    <description>The latest articles on DEV Community by Hammad Tahir (@hammadtahirch).</description>
    <link>https://dev.to/hammadtahirch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1392627%2Fd1fa2931-b1a9-4703-9f93-31ee94d47544.jpg</url>
      <title>DEV Community: Hammad Tahir</title>
      <link>https://dev.to/hammadtahirch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hammadtahirch"/>
    <language>en</language>
    <item>
      <title>Real-Time Example: Using Goroutines and Channels</title>
      <dc:creator>Hammad Tahir</dc:creator>
      <pubDate>Tue, 02 Apr 2024 13:47:37 +0000</pubDate>
      <link>https://dev.to/hammadtahirch/real-time-example-using-goroutines-and-channels-3aha</link>
      <guid>https://dev.to/hammadtahirch/real-time-example-using-goroutines-and-channels-3aha</guid>
      <description>&lt;p&gt;In Go, &lt;strong&gt;goroutines&lt;/strong&gt; and &lt;strong&gt;channels&lt;/strong&gt; play a key role in making concurrent programming smooth and efficient. &lt;strong&gt;Goroutines&lt;/strong&gt; are like lightweight threads that let you perform tasks simultaneously, while channels help these &lt;strong&gt;goroutines&lt;/strong&gt; communicate seamlessly. Together, they bring a powerful way to write concurrent programs in Go that feels intuitive and straightforward.&lt;/p&gt;

&lt;p&gt;Imagine you're building a web scraper—a fantastic real-world application! Web scraping is all about gathering data from various websites, which can sometimes take a lot of time. But with &lt;strong&gt;goroutines&lt;/strong&gt;, you can simultaneously fetch data from multiple sites, making the whole process much faster. And with channels, you can easily collect and process all that data from the different websites you’re scraping.&lt;/p&gt;

&lt;p&gt;Let’s dive into a simple example of a web scraper in Go that uses &lt;strong&gt;goroutines&lt;/strong&gt; and channels to make this task efficient and fun:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "io"
    "net/http"
)

// fetch downloads the contents of url and sends the response body
// (or an empty string on failure) on the channel ch.
func fetch(url string, ch chan&amp;lt;- string) {
    resp, err := http.Get(url)
    if err != nil {
        fmt.Println("Error fetching URL:", err)
        ch &amp;lt;- ""
        return
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        fmt.Println("Error reading response body:", err)
        ch &amp;lt;- ""
        return
    }

    ch &amp;lt;- string(body)
}

func main() {
    urls := []string{"https://example.com", "https://google.com", "https://github.com"}
    ch := make(chan string)

    // Launch one goroutine per URL; each sends its result on ch.
    for _, url := range urls {
        go fetch(url, ch)
    }

    // Receive exactly one result per URL, in completion order.
    for range urls {
        fmt.Println(&amp;lt;-ch)
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the fetch function retrieves the content of a URL with an HTTP GET request and sends it on a channel. We launch one goroutine per URL in the urls slice, so all of the fetches run concurrently. Finally, we read one result per URL from the channel and print it. Note that the results arrive in the order the goroutines finish, not the order the URLs were listed.&lt;/p&gt;
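
&lt;p&gt;Because the results arrive in completion order, the output alone doesn’t tell you which body came from which URL. One way to keep them associated (a sketch of one possible approach, not the only one) is to send a small struct on the channel that carries the URL alongside its result:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "io"
    "net/http"
)

// result pairs a URL with what fetching it produced.
type result struct {
    url  string
    size int
    err  error
}

// fetch downloads url and sends a result describing the outcome on ch.
func fetch(url string, ch chan&amp;lt;- result) {
    resp, err := http.Get(url)
    if err != nil {
        ch &amp;lt;- result{url: url, err: err}
        return
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        ch &amp;lt;- result{url: url, err: err}
        return
    }
    ch &amp;lt;- result{url: url, size: len(body)}
}

func main() {
    urls := []string{"https://example.com", "https://google.com", "https://github.com"}
    ch := make(chan result)

    for _, url := range urls {
        go fetch(url, ch)
    }

    // Each received result knows which URL it belongs to.
    for range urls {
        r := &amp;lt;-ch
        if r.err != nil {
            fmt.Println(r.url, "failed:", r.err)
            continue
        }
        fmt.Println(r.url, "returned", r.size, "bytes")
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here each send carries its origin, so the receiving loop can report per-URL successes and failures without caring about the order in which the goroutines finish.&lt;/p&gt;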

</description>
      <category>go</category>
      <category>beginners</category>
      <category>programming</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
