<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: FDiaz</title>
    <description>The latest articles on DEV Community by FDiaz (@sty6x).</description>
    <link>https://dev.to/sty6x</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F917694%2F8bdb66f8-b6bb-4453-8768-765f3b7da2d6.jpg</url>
      <title>DEV Community: FDiaz</title>
      <link>https://dev.to/sty6x</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sty6x"/>
    <language>en</language>
    <item>
      <title>Scaling Zensearch's capabilities to query the whole database</title>
      <dc:creator>FDiaz</dc:creator>
      <pubDate>Fri, 08 Nov 2024 15:46:30 +0000</pubDate>
      <link>https://dev.to/sty6x/scaling-zensearchs-capabilities-to-query-the-whole-database-2bf5</link>
      <guid>https://dev.to/sty6x/scaling-zensearchs-capabilities-to-query-the-whole-database-2bf5</guid>
      <description>&lt;p&gt;Previously I've been able to crawl and index web pages for my search engine without a problem, until my database grew more than what RabbitMQ's message queue was capable of holding. If a message in a message queue exceeds its default size, RabbitMQ will throw an error and panic, I could change the default size but that would not scale if my database grows, so in order for users to crawl web pages without having to worry about the message broker crashing.&lt;/p&gt;

&lt;h2&gt;Creating Segments&lt;/h2&gt;

&lt;p&gt;I've implemented a function that creates segments with a maximum segment size, or MSS, borrowing the idea from how TCP segments data. Each segment starts with an 8-byte header: the first 4 bytes hold the sequence number and the next 4 bytes hold the total segment count. The rest of the body is the payload, a slice of the serialized database results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// MSS is number in bytes
function createSegments(
  webpages: Array&amp;lt;Webpage&amp;gt;, // webpages queried from database
  MSS: number,
): Array&amp;lt;ArrayBufferLike&amp;gt; {
  const text_encoder = new TextEncoder();
  const encoded_text = text_encoder.encode(JSON.stringify(webpages));
  const data_length = encoded_text.byteLength;
  let currentIndex = 0;
  let segmentCount = Math.trunc(data_length / MSS) + 1; // + 1 to store the remainder
  let segments: Array&amp;lt;ArrayBufferLike&amp;gt; = [];
  let pointerPosition = MSS;

  for (let i = 0; i &amp;lt; segmentCount; i++) {
    let currentDataLength = Math.abs(currentIndex - data_length);

    let slicedArray = encoded_text.slice(currentIndex, pointerPosition);

    currentIndex += slicedArray.byteLength;
    // Add to offset MSS to point to the next segment in the array
    // manipulate pointerPosition to adjust to lower values using Math.min()

    // Is current data length enough to fit MSS?
    // if so add from current position + MSS
    // else get remaining of the currentDataLength
    pointerPosition += Math.min(MSS, currentDataLength);
    const payload = new Uint8Array(slicedArray.length);
    payload.set(slicedArray);
    segments.push(newSegment(i, segmentCount, Buffer.from(payload)));
  }
  return segments;
}

function newSegment(
  sequenceNum: number,
  segmentCount: number,
  payload: Buffer,
): ArrayBufferLike {
  // 4 bytes for sequenceNum 4 bytes for totalSegmentsCount
  const sequenceNumBuffer = convertIntToBuffer(sequenceNum);
  const segmentCountBuffer = convertIntToBuffer(segmentCount);
  const headerBuffer = new ArrayBuffer(8);
  const header = new Uint8Array(headerBuffer);
  header.set(Buffer.concat([sequenceNumBuffer, segmentCountBuffer]));
  return Buffer.concat([header, payload]);
}

function convertIntToBuffer(int: number): Buffer {
  const bytes = Buffer.alloc(4);
  bytes.writeIntLE(int, 0, 4);
  console.log(bytes);
  return bytes;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
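&lt;p&gt;To make the byte layout concrete, here's a minimal, self-contained round trip of the header format (the function names here are mine for illustration, not from the zensearch codebase):&lt;/p&gt;

```typescript
// Write an 8-byte header (little-endian sequence number followed by
// the total segment count) in front of a payload, then read it back.
function makeSegment(seq: number, total: number, payload: Buffer): Buffer {
  const header = Buffer.alloc(8);
  header.writeUInt32LE(seq, 0);   // bytes 0-3: sequence number
  header.writeUInt32LE(total, 4); // bytes 4-7: total segment count
  return Buffer.concat([header, payload]);
}

function readHeader(segment: Buffer): { seq: number; total: number } {
  return { seq: segment.readUInt32LE(0), total: segment.readUInt32LE(4) };
}

const payload = Buffer.from(JSON.stringify([{ url: "https://example.com" }]));
const segment = makeSegment(3, 10, payload);
const header = readHeader(segment);
// header.seq is 3, header.total is 10, and the payload starts at byte 8
```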



&lt;h2&gt;Parsing incoming segments&lt;/h2&gt;

&lt;p&gt;Creating small segments out of a large dataset like this lets the database query scale even as the database grows.&lt;/p&gt;

&lt;p&gt;Now, how does the search engine parse the buffer and transform each segment back into a web page array?&lt;/p&gt;

&lt;h3&gt;Reading from segment buffers&lt;/h3&gt;

&lt;p&gt;First, extract the segment header. It contains two fields, the &lt;code&gt;Sequence number&lt;/code&gt; and the &lt;code&gt;Total Segments&lt;/code&gt; count:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func GetSegmentHeader(buf []byte) (*SegmentHeader, error) {
    byteReader := bytes.NewBuffer(buf)
    headerOffsets := []int{0, 4}
    newSegmentHeader := SegmentHeader{}

    for i := range headerOffsets {
        buffer := make([]byte, 4)
        _, err := byteReader.Read(buffer)
        if err != nil {
            return &amp;amp;SegmentHeader{}, err
        }
        value := binary.LittleEndian.Uint32(buffer)

        // this feels disgusting but i dont feel like bothering with this
        if i == 0 {
            newSegmentHeader.SequenceNum = value
            continue
        }
        newSegmentHeader.TotalSegments = value
    }
    return &amp;amp;newSegmentHeader, nil
}

func GetSegmentPayload(buf []byte) ([]byte, error) {
    headerOffset := 8
    byteReader := bytes.NewBuffer(buf[headerOffset:])
    return byteReader.Bytes(), nil

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Handling retransmission and requeuing of segments&lt;/h3&gt;

&lt;p&gt;The sequence number is used for retransmission/requeuing of the segments: if the received sequence number is not the expected one, re-queue every segment starting from the current one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    // for retransmission/requeuing
        if segmentHeader.SequenceNum != expectedSequenceNum {
            ch.Nack(data.DeliveryTag, true, true)
            log.Printf("Expected Sequence number %d, got %d\n",
                expectedSequenceNum, segmentHeader.SequenceNum)
            continue
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Appending segment payloads&lt;/h3&gt;

&lt;p&gt;The total segment count is used to break out of listening to the producer (the database service): if the number of segments received by the search engine equals the total the database service said it would send, break out and parse the aggregated segment buffer; otherwise, keep listening and append each segment payload to a web page buffer that collects the bytes from all incoming segments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        segmentCounter++
        fmt.Printf("Total Segments : %d\n", segmentHeader.TotalSegments)
        fmt.Printf("current segments : %d\n", segmentCounter)
        expectedSequenceNum++
        ch.Ack(data.DeliveryTag, false)
        webpageBytes = append(webpageBytes, segmentPayload...)
        fmt.Printf("Byte Length: %d\n", len(webpageBytes))

        if segmentCounter == segmentHeader.TotalSegments {
            log.Printf("Got all segments from Database %d", segmentCounter)
            break
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;I use vim btw&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Thank you for coming to my ted talk. I will be implementing more features and fixes for zensearch.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>networking</category>
      <category>typescript</category>
      <category>go</category>
    </item>
    <item>
      <title>I built my own search engine</title>
      <dc:creator>FDiaz</dc:creator>
      <pubDate>Fri, 01 Nov 2024 15:02:50 +0000</pubDate>
      <link>https://dev.to/sty6x/my-search-engine-zensearch-35jp</link>
      <guid>https://dev.to/sty6x/my-search-engine-zensearch-35jp</guid>
      <description>&lt;h2&gt;
  
  
  The &lt;a href="https://www.youtube.com/watch?v=vacJSHN4ZmY" rel="noopener noreferrer"&gt;Beninging&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;From building React applications to building my own search engine and web crawler for indexing. I'm happy to introduce you to Zensearch, a search engine where you, as a user, have more control over what your searches are: you can create entries to crawl different websites, and you can keep using the search functionality on already-indexed data while the crawler does its work. Now, I know this may not be the most complex or sophisticated search engine in the world like Google or Brave Search, but I built this thing to gauge how much I can do on my own and to learn as much as I can while doing it, and oh boy, I've learned a lot.&lt;/p&gt;

&lt;p&gt;It all started when I was building my React web application, a sort of commonplace book for inserting your favorite quotes or adding notes to a specific page, as if you're conversing with the author or typing down what you were thinking at that moment on a page that corresponds to a page of your physical book. It's not a bad project, but I just got so bored of building React applications. Not that React is bad, but it felt like I was not going anywhere with it; there was no technical depth to what I was doing, and I was not learning anything from building those React projects.&lt;/p&gt;

&lt;p&gt;So I tried to study computer networking, operating systems, computer architecture, and so on. After a few months of studying, I built my own application-layer protocol, like a WebSocket, where I can handle multiple users, and each user can join different rooms or namespaces where they can communicate with each other. I felt ecstatic, alive even. I felt like I could do so many things as long as I understood how the computer works, e.g. threads, semaphores, processes, memory layout, interrupt signals, etc. So I thought to myself: what projects can I do to utilize some of the things that I've learned?&lt;/p&gt;

&lt;p&gt;Oh, and I'm self-taught, btw. I used &lt;a href="https://www.theodinproject.com/" rel="noopener noreferrer"&gt;The Odin Project&lt;/a&gt; to learn programming and web development, so shout out to those guys, because they taught me how to study independently by refusing to hand-hold programmers throughout the curriculum.&lt;/p&gt;

&lt;h2&gt;Challenges&lt;/h2&gt;

&lt;p&gt;I had only ever programmed in Node.js; that was my bread and butter along with TypeScript, so I built the web crawler using Node.js... pretty stupid, right? The plan was to create a crawler that can crawl an array of source URLs from the front-end and let each crawler send the extracted data to the database. And as we all know, Yabascript is single-threaded, and every asynchronous task is handled by the environment where Yavascript is running, e.g. the browser's APIs, node, deno, bun, and done.&lt;/p&gt;

&lt;p&gt;So doing multi-tasking operations using Node.js was a suicide mission, and it was. The webpage object had to be encoded into an 8-bit buffer, but the shared array buffer could only transport 64-bit array buffers due to data alignment, so I had to convert from the 8-bit buffer to 64-bit by adding some offset padding, then back from the 64-bit buffer to 8-bit after sending the data from the crawler to the main thread, and then finally parse it into a vajascript object... wow, that was fun. There is another way to do message passing, but it creates a copy of the crawler's data in the main thread, and I didn't want that since it would take so much memory.&lt;/p&gt;
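&lt;p&gt;The alignment dance looks roughly like this (a sketch of the idea, not the actual crawler code): pad the 8-bit buffer's length up to a multiple of 8 bytes so a 64-bit typed-array view can sit over it.&lt;/p&gt;

```typescript
// Round a byte buffer up to a multiple of 8 bytes by appending
// zero padding, so a BigUint64Array view over it is valid.
function padTo64Bit(bytes: Uint8Array): Uint8Array {
  const remainder = bytes.byteLength % 8;
  if (remainder === 0) return bytes;
  const padded = new Uint8Array(bytes.byteLength + (8 - remainder));
  padded.set(bytes); // original bytes first, zero padding at the end
  return padded;
}

const encoded = new TextEncoder().encode('{"title":"hello"}'); // 17 bytes
const aligned = padTo64Bit(encoded);
// aligned.byteLength is 24, so a 64-bit view has 3 elements
const view = new BigUint64Array(aligned.buffer);
```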

&lt;p&gt;I had to handle race conditions using Node.js' &lt;code&gt;atomics&lt;/code&gt; module, and to this day I still don't understand how that module even works, to be honest. It annoyed me so much that I had to turn to Golang. I love this language so much; it's so easy to create threads and handle race conditions using semaphores and wait groups. I haven't had the need to use a &lt;code&gt;mutex&lt;/code&gt; yet and I'm excited to learn it, so maybe in the future, and &lt;code&gt;context&lt;/code&gt; would be fun to learn as well.&lt;/p&gt;

&lt;p&gt;Let's move on to the front-end, shall we? Have any of you read this article from Frontend Masters? &lt;a href="https://frontendmasters.com/blog/you-might-not-need-that-framework/" rel="noopener noreferrer"&gt;You might not need that framework&lt;/a&gt;. Remember when I said I got bored of React? Well, this made me appreciate frameworks because of their reusability and their data-binding mechanisms.&lt;/p&gt;

&lt;p&gt;I don't want to get into too much detail about the front-end, but I used a PubSub pattern to push UI updates whenever data changes, and I used web components along with the shadow DOM to create reusable components. The shadow DOM was a pain to access and style from JavaScript, since it is isolated from the rest of the DOM tree, so reaching into it with CSS and the DOM API won't work. So yeah, those were the only challenges I had, but it was fun... it was fun when I was migrating the crawler from Node.js to Go.&lt;/p&gt;
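&lt;p&gt;The PubSub idea can be sketched in a few lines (the event name below is made up for illustration; this is not the actual zensearch front-end code): components subscribe to an event, and whoever changes the data publishes it.&lt;/p&gt;

```typescript
type Handler = (data: unknown) => void;

// Minimal publish/subscribe bus: no framework, just a map of
// event names to handler arrays.
class PubSub {
  private handlers: { [event: string]: Handler[] } = {};

  subscribe(event: string, handler: Handler): void {
    if (!this.handlers[event]) this.handlers[event] = [];
    this.handlers[event].push(handler);
  }

  publish(event: string, data: unknown): void {
    for (const handler of this.handlers[event] ?? []) handler(data);
  }
}

const bus = new PubSub();
let rendered = "";
bus.subscribe("search-results", (data) => {
  rendered = JSON.stringify(data); // a component would re-render here
});
bus.publish("search-results", ["first result"]);
// rendered is now '["first result"]'
```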

&lt;h2&gt;Things to consider&lt;/h2&gt;

&lt;p&gt;There are some functionalities that I have not yet implemented because I was so eager to show off the project, but that doesn't matter to me that much. This is an ongoing project, not a one-and-done, and I will keep improving zensearch in the future. For now, here are some key things that are missing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Implementing a list of already indexed websites to be displayed to the users on the front-end.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the most recently crawled web page for continuation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create cancellation for crawling but still save the indexed pages up to that point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RabbitMQ's message size limitation for scaling: if a database query result exceeds the default size set in RabbitMQ, the message broker will throw an error and crash. To avoid this, I will try to implement a &lt;code&gt;window frame algorithm&lt;/code&gt; like the one used in TCP, creating a pipelining mechanism where the array of webpages is broken into segments and sent to the search engine &lt;code&gt;N&lt;/code&gt; at a time, where &lt;code&gt;N&lt;/code&gt; is the size of the window. I still need to think about this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give users the ability to remove their Indexed websites.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
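&lt;p&gt;For the windowing bullet above, the core of the idea can be sketched like this (my illustration of a send window, not the eventual implementation):&lt;/p&gt;

```typescript
// Split an array of segments into windows of size N; each window
// would be published as a batch, waiting for acks before sliding on.
function slideWindow(segments: string[], windowSize: number): string[][] {
  const windows: string[][] = [];
  let start = 0;
  while (start !== segments.length) {
    const win = segments.slice(start, start + windowSize);
    windows.push(win);
    start += win.length; // slice clamps, so the last window may be smaller
  }
  return windows;
}

const windows = slideWindow(["s0", "s1", "s2", "s3", "s4"], 2);
// windows is [["s0","s1"], ["s2","s3"], ["s4"]]
```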

&lt;h2&gt;Epilogue&lt;/h2&gt;

&lt;p&gt;I would like to write more about what I learned and some nuances of my development journey, but I think this would get too long, so for now I want to show off my greatest project. I would be happy to get some feedback from you guys if you have the time, so let me know if there are any problems or improvements I could make to Zensearch. Oh, and this is all thanks to ThePrimeagen; this guy inspired me to go deeper into things and to learn the fundamentals instead of just running &lt;code&gt;npm create vite@latest my-vue-app -- --template react-ts&lt;/code&gt; in the terminal. Admittedly, that made me insecure about myself as a programmer and the things I know, but because of that insecurity I've learned new things, and now I'm always striving to learn more. I would be happy to learn from &lt;strong&gt;YOUR&lt;/strong&gt; feedback, so thank you for listening to my ted talk.&lt;/p&gt;

&lt;p&gt;Github repository for &lt;a href="https://github.com/francccisss/zensearch" rel="noopener noreferrer"&gt;Zensearch&lt;/a&gt;&lt;/p&gt;

</description>
      <category>searchengine</category>
      <category>go</category>
      <category>docker</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>What I've Learned This Week with TOP(The Odin Project)</title>
      <dc:creator>FDiaz</dc:creator>
      <pubDate>Sat, 03 Sep 2022 11:12:17 +0000</pubDate>
      <link>https://dev.to/sty6x/what-ive-learned-this-week-ob9</link>
      <guid>https://dev.to/sty6x/what-ive-learned-this-week-ob9</guid>
      <description>&lt;h1&gt;
  
  
  Learning Webpack And SOLID Principles
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;This is my first ever post on dev.to and I'm not very particular about formatting, so you'll have to forgive me.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So I've been studying web development for 5 months now, using The Odin Project as my guide for a structured program. It has taught me how to be more independent in looking things up, which made me more curious and made me want to learn even more about web development, so thanks, TOP. Concepts, along with backend development, are the things I look forward to learning. I started this journey because I wanted to do something with my life; I don't want to cause any more trouble or become a hindrance to my mother, who has always supported me throughout this whole journey of self-studying web development.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is not a success story but more of a journal to track my progression, and a blog to get people's opinions and perspectives on the topics I've learned and whether I've interpreted or understood them correctly. So please do tell me if I got some of it wrong and correct me; I don't want to be ignorant and oblivious about important concepts.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I've only started using dev.to recently and was amazed by how abundant the resources you can learn from are. It's crazy. So anyway, this is what I learned in the last 2 weeks of August 2022.&lt;/p&gt;

&lt;h2&gt;What I Learned This Week&lt;/h2&gt;

&lt;h3&gt;19th - 20th week of TOP:&lt;/h3&gt;

&lt;h4&gt;19th Week&lt;/h4&gt;

&lt;p&gt;I've just finished working on the Restaurant Page project provided by TOP, and I've learned a lot about webpack 5: how to bundle up modules so that we can decrease the size of our project (minifying it), and how to structure a project using webpack. For example, the dist (distribution) folder is where all of our modules are bundled up together into one output called 'main.js' (in my case; you can name the output whatever you want), and images and other resources are also emitted into the dist folder. Webpack takes our script at 'src/index.js' as the 'entry point'. There are still gaps in my knowledge about webpack; I don't know why the local URLs of resources used in our modules are converted to different names in the output. I might look it up in the future.&lt;/p&gt;

&lt;p&gt;The Asset Management section of the webpack documentation explains that if we want to use file types besides JavaScript, we can use a 'loader' or the built-in Asset Modules. Understanding it is pretty straightforward (I think): the 'test:' property looks for a specific file type, e.g. .css, .png, or .jpeg, and the second property, 'use:', applies the 'loader' that we installed with npm for that specific file type. Alternatively, 'type:' doesn't take a loader but instead uses webpack's built-in asset modules, e.g. 'type': 'asset/resource'.&lt;/p&gt;

&lt;h5&gt;Parsing different file types into JSON&lt;/h5&gt;

&lt;p&gt;We can use JSON files, and we can also read other file types (TOML, YAML, or JSON5) and convert them into JSON using the custom parser of a specific webpack loader.&lt;/p&gt;

&lt;p&gt;First we install the packages, and then inside webpack.config we use the require function on each one, e.g. 'require('toml')'. Then we do the same thing as when we want to use different file types: 'test' looks for the specific file extension, but instead of a loader we set 'type' to 'json' and use the 'parser' object with a 'parse:' property, whose value is that package's own '.parse' method. After that, we can just import the file wherever we want to use it.&lt;/p&gt;
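&lt;p&gt;Put together, such a rule looks roughly like this (a sketch following the webpack docs; the toml package is one example, and the same shape works for yaml and json5):&lt;/p&gt;

```javascript
// webpack.config.js (fragment)
const toml = require('toml');

module.exports = {
  module: {
    rules: [
      {
        test: /\.toml$/i,    // match .toml files
        type: 'json',        // treat the parsed result as a JSON module
        parser: {
          parse: toml.parse, // the package's own parse method
        },
      },
    ],
  },
};
```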

&lt;h5&gt;Script Automation&lt;/h5&gt;

&lt;p&gt;Script automation: every time we change our project, we need to rebuild it with webpack and refresh the page, but this is tedious, and no one wants to do a very small task just to see if some code actually works. 'watch' and 'webpack-dev-server' are helpful tools for that problem. Watch builds our project whenever we save, without us having to type 'npx webpack build'; it watches all the files within our dependency graph for changes, and if one of those files changes, 'watch' recompiles it automatically so we don't have to build it every time.&lt;/p&gt;

&lt;p&gt;In the scripts property of package.json, we can make a property called "watch" with the value "webpack --watch"; now we can run "npm run watch" from the command line.&lt;/p&gt;

&lt;p&gt;Now for webpack-dev-server: this one provides a web server and has the ability to live-reload ('watch' doesn't have live reloading). First we need to install the module with '--save-dev webpack-dev-server'. After that, we need to choose which files we want to serve (in this case the dist folder), so inside the webpack config we add a devServer object with a property called 'static' whose value is './dist'; this tells webpack to serve the files from the dist directory on localhost:8080 (the dev server serves bundled files from the directory defined in output.path). Then we go to our package.json and add a script called "start" with the value "webpack serve --open", and now we can just type 'npm start' to run the webpack dev server.&lt;/p&gt;
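&lt;p&gt;For reference, the scripts section of package.json described above ends up looking roughly like this:&lt;/p&gt;

```json
{
  "scripts": {
    "watch": "webpack --watch",
    "start": "webpack serve --open",
    "build": "webpack build"
  }
}
```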

&lt;h4&gt;20th Week&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;OOP Principles:&lt;/strong&gt;&lt;br&gt;
Currently I'm still trying to build up my knowledge to understand this concept, which is pretty simple, but some articles and blogs use so many techy, abstract words that it makes some of it harder for me to understand. Still, I'm slowly understanding the implementation of the SOLID principles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Responsibility&lt;/strong&gt;&lt;br&gt;
This tells us that every module, class, or function should have only one responsibility, which is very self-explanatory: it should not do anything else aside from that one thing. For example, if I make a Person class that makes a person, then that class should just MAKE a person, not make the person run, jump, or walk. Another example is a Book class that just makes books, or a basic calculator function that takes in 2 numbers and outputs something based on the operation the user chooses; this calculator function should ONLY calculate things, not display them to the user. You can do the displaying in a separate function.&lt;/p&gt;
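&lt;p&gt;Here's the calculator example as a tiny sketch (my own illustration): the calculate function only computes, and displaying is a separate function with its own responsibility.&lt;/p&gt;

```typescript
// calculate() only computes a result; it never touches the UI.
function calculate(a: number, b: number, operation: string): number {
  switch (operation) {
    case "add":
      return a + b;
    case "subtract":
      return a - b;
    case "multiply":
      return a * b;
    case "divide":
      return a / b;
    default:
      throw new Error("unknown operation: " + operation);
  }
}

// Displaying is a separate responsibility, so it gets its own function.
function display(result: number): string {
  return "Result: " + result;
}

const sum = calculate(2, 3, "add"); // 5
const output = display(sum);        // "Result: 5"
```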

&lt;p&gt;&lt;strong&gt;Open/Closed&lt;/strong&gt;&lt;br&gt;
This one is easy to understand too. It is about the extensibility of a class, module, or function: we shouldn't have to tinker with the innards of a fully functioning class, module, or function, because modifying an already-built one would probably break the code and require refactoring to cater to different functionality in the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Liskov Substitution&lt;/strong&gt;&lt;br&gt;
This tells us that a derived class should be able to substitute for its parent class; this is achieved by using inheritance.&lt;br&gt;
 --&lt;em&gt;This is all that I understood from this principle, because I haven't been able to apply it to any of my projects YET, but I think I'd be able to understand it easily and apply it to one of my projects with further research. There's probably more to it, or maybe this is all there is to it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interface Segregation&lt;/strong&gt;&lt;br&gt;
An easy one: this basically means that our classes or subclasses should not have any methods that they don't need and shouldn't be forced to use any such methods; it's one of the fundamentals of composition over inheritance.&lt;br&gt;
These extra methods should be removed or tucked away; a class should only carry the actions it requires.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency Inversion&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;This part is still pretty tough for me to comprehend. As I said under Liskov substitution, I haven't been able to implement this principle yet, but I would love to explain it as I understood it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What this one means is that when 2 classes or modules need to work together, for example a Class A that has a customer's data and a Class B that wants to output that data, Class B should not reference Class A directly; doing so violates this principle. What if we want to output different data from a Class D (different :D)? That would cause problems in our program, because Class B only references Class A, and if we want to use Class D we'd have to modify Class B to cater to the different data we want to use. To avoid this conundrum, we put a mediator, an interface if you will, between Classes A and D and Class B; this interface connects the data from Classes A and D to B without having to disrupt or modify the functionality or behavior of Class B.&lt;/p&gt;
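&lt;p&gt;As I understand it, the mediator can be sketched like this (hypothetical classes, just to mirror the A/B/D example above):&lt;/p&gt;

```typescript
// The interface is the mediator: B depends on it, never on A or D.
interface CustomerSource {
  getCustomer(): string;
}

class ClassA implements CustomerSource {
  getCustomer(): string {
    return "customer from A";
  }
}

class ClassD implements CustomerSource {
  getCustomer(): string {
    return "customer from D";
  }
}

// Class B only knows about the interface, so swapping A for D
// requires no modification to B at all.
class ClassB {
  constructor(private source: CustomerSource) {}
  output(): string {
    return "Displaying: " + this.source.getCustomer();
  }
}

const fromA = new ClassB(new ClassA()).output(); // "Displaying: customer from A"
const fromD = new ClassB(new ClassD()).output(); // "Displaying: customer from D"
```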

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
I've learned a lot in 2 weeks and still have a long way to go, since there are still a few gaps in my understanding of &lt;strong&gt;Webpack&lt;/strong&gt; and the &lt;strong&gt;SOLID&lt;/strong&gt; principles.&lt;br&gt;
&lt;em&gt;I'm hoping this community will help me understand more concepts in the future; I could really learn a lot from this community.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>yeet</title>
      <dc:creator>FDiaz</dc:creator>
      <pubDate>Tue, 30 Aug 2022 04:02:03 +0000</pubDate>
      <link>https://dev.to/sty6x/yeet-2467</link>
      <guid>https://dev.to/sty6x/yeet-2467</guid>
      <description>&lt;p&gt;yeet&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
