<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Krinskumar Vaghasia</title>
    <description>The latest articles on DEV Community by Krinskumar Vaghasia (@krinskumar).</description>
    <link>https://dev.to/krinskumar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2032589%2F71ba35ac-e782-4ed7-9065-767920ec2457.png</url>
      <title>DEV Community: Krinskumar Vaghasia</title>
      <link>https://dev.to/krinskumar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krinskumar"/>
    <language>en</language>
    <item>
      <title>SPO-GCC - Final post</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Thu, 12 Dec 2024 04:58:14 +0000</pubDate>
      <link>https://dev.to/krinskumar/spo-gcc-final-post-3jmp</link>
      <guid>https://dev.to/krinskumar/spo-gcc-final-post-3jmp</guid>
      <description>&lt;p&gt;Last week we saw how, I was able to make my own pass in a version of gcc and was able to get a dump of this file.&lt;/p&gt;

&lt;p&gt;In that version, I was able to generate a hash for every function such that a cloned function has the same hash as its original. I was also able to print this hash in the dump when the dump flag was passed. However, I was not able to compare these hashes with each other; I tried a bunch of things but nothing worked. In this blog I will talk about the things I did to try to fix this, and whether I was able to solve the problem.&lt;br&gt;
Note: you can find the changes I made in my fork of gcc &lt;a href="https://github.com/KrinsKumar/gcc" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Comparing the hashes
&lt;/h3&gt;

&lt;p&gt;Branch &lt;a href="https://github.com/KrinsKumar/gcc/tree/prune-hash" rel="noopener noreferrer"&gt;prune-hash&lt;/a&gt; contains my previous changes, where the dump file printed the hashes of the functions. I was unable to get the pass to compare the hashes to determine whether one hash is a copy of another. In other words, I was not able to share the calculated hashes across all of the runs. &lt;br&gt;
When we run this with the dump flag, we would see something like this.&lt;br&gt;
For a noprune version - See how the hashes of the functions are different.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3miipnug1mo2ou42s6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3miipnug1mo2ou42s6j.png" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a prune version - See how the highlighted hashes are the same, indicating duplicate functions.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajfuwjel41b8hpjcbqdx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajfuwjel41b8hpjcbqdx.png" alt="Image description" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I made a new branch called &lt;a href="https://github.com/KrinsKumar/gcc/tree/prune-compare-hash" rel="noopener noreferrer"&gt;prune-compare-hash&lt;/a&gt; where I experimented further to get the comparison working. I was not able to get it to work, but I ended up pushing all of my code to this branch. NOTE: the pass in this branch is not complete, and building gcc from this branch will result in errors. My idea was to create a static hash map that stores all the hashes and can be accessed across different runs because of its static property.&lt;/p&gt;
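&lt;p&gt;To make the idea concrete, here is a minimal standalone sketch of the static-map approach. This is illustrative code of my own, not the code from the branch: &lt;code&gt;record_hash&lt;/code&gt; and the use of &lt;code&gt;std::unordered_map&lt;/code&gt; are assumptions, and a real pass would use gcc's own hash utilities. A static map like this survives across the per-function invocations of &lt;code&gt;execute()&lt;/code&gt; within a single compilation, though not across separate compiler runs.&lt;/p&gt;

```cpp
#include <string>
#include <unordered_map>

// Hypothetical model of the static-map idea: one shared store of
// per-function hashes, filled in by every call to record_hash()
// (mimicking one execute() call per function in a compilation).
static std::unordered_map<std::string, unsigned long> seen_hashes;

// Returns true if this hash was already recorded under another
// function's name, i.e. the function is a candidate for pruning.
bool record_hash(const std::string &fn_name, unsigned long hash) {
  for (const auto &entry : seen_hashes)
    if (entry.second == hash && entry.first != fn_name)
      return true;
  seen_hashes[fn_name] = hash;
  return false;
}
```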

&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;There is not enough proper documentation to refer to, which makes it hard to reach conclusive decisions.&lt;/li&gt;
&lt;li&gt;If someone is not familiar with the build process, they can end up spending a lot of time just trying to build the compiler. Some errors are also very cryptic, and I ended up restarting my build from scratch (~2 hours). To save this time, make sure you are not making any manual changes in the build folder and are only using the make utility to interact with the build.&lt;/li&gt;
&lt;li&gt;Don't forget to build the compiler with parallel jobs, e.g. &lt;code&gt;make -j 8&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Next steps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;All of the above work was done on x86_64. Perhaps the result could be different if I were to use aarch64. I would like to test this theory after this course and make sure I understand the underlying problem. With my current understanding, I am unable to pinpoint the main cause of the errors I am facing, nor am I able to implement other solutions outside the scope of my knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I would also like to go back and rethink my hashing algorithm, which could be the root of my problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I did not get time to look into using dynamic memory allocation. This might very well be a dead end, but I would still like to give it a try in the future. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Reflection
&lt;/h3&gt;

&lt;p&gt;I started with making lines and patterns on a small 6502 emulator. Then I ended up making a whole game on that small chip. I wrote a small assembly program to print the numbers from 1 to 30 (this task is more challenging than you think!). And I ended up making an actual change/contribution to my personal version of gcc!!! This is one of the hardest courses I have taken in school, yet somehow the most rewarding. I now have a deeper knowledge of how our code behaves at a lower level and of the optimization techniques that can be incorporated for the best results. These techniques can differ based on the environment we are in. Even though my pass was not completely working, it was a great learning experience for me and expanded my perspective on how compilers work.&lt;/p&gt;

</description>
      <category>spo</category>
    </item>
    <item>
      <title>0.4 - week - 3 - Results</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Sun, 08 Dec 2024 22:52:16 +0000</pubDate>
      <link>https://dev.to/krinskumar/04-week-3-results-4f9a</link>
      <guid>https://dev.to/krinskumar/04-week-3-results-4f9a</guid>
      <description>&lt;p&gt;This is my final post of my contribution series and in this blog I will be talking about how I was able to complete my contribution(well kind of).&lt;/p&gt;

&lt;h3&gt;
  
  
  Context
&lt;/h3&gt;

&lt;p&gt;So as a part of my course, I was supposed to make an impactful contribution to any repo that we like. I worked on this &lt;a href="https://github.com/activitypods/activitypods/issues/348" rel="noopener noreferrer"&gt;issue&lt;/a&gt; and this is my &lt;a href="https://github.com/assemblee-virtuelle/semapps/pull/1337" rel="noopener noreferrer"&gt;PR&lt;/a&gt;. Notice how the two are in different repos. Yeah, I was surprised too. I took this issue because I mostly understood what was happening in the original repo, but the other repo had me stressed because of how hard it was for me to understand. Check out my &lt;a href="https://dev.to/krinskumar/04-week-1-objectives-3a00"&gt;Blog1&lt;/a&gt; and &lt;a href="https://dev.to/krinskumar/04-week-2-progress-1i3d"&gt;Blog2&lt;/a&gt; to get an idea of where things stand.&lt;/p&gt;

&lt;h3&gt;
  
  
  The PR
&lt;/h3&gt;

&lt;p&gt;I raised my first PR in my last blog, and after some days I got two reviews. One was to not throw anything in the deepest part of the function, because it's being used by a lot of other flows and we don't want to disturb them; that is not in the scope of the issue. The other was to use the mutation module they had in the repository to throw, so that the throws are consistent. Working with this module was very neat and made my code look better. It made me realize that one aspect of open source contribution is making sure that, in our first walk around the repository, we look at how the things we might have to implement are already done elsewhere. For example, I knew for sure that I had to throw, because that is what the story was about; instead, I used up a lot of time just understanding how the code works and not what the throwing syntax looks like. I fixed those changes and pushed them to be reviewed again.&lt;/p&gt;

&lt;p&gt;The changes got approved this time, and there was a suggestion to add tests. I had some time left before the deadline of this project, so I thought, why not? This was not as straightforward as I thought it would be. Before I started adding my tests, I wanted to run the existing ones. However, the tests were just not running for some reason, and on top of that there were no logs for me to debug, until I figured out that the package had its own local package.json file, which was new for me. Anyways, I installed the dependencies and ran the tests again. And guess what? They failed again. Fortunately, this time there were some logs. I looked at them and found out that the tests were trying to hit a port; was I supposed to run the server before the tests? I had a lot of questions, which made me go hunting for a readme file or something. While looking for a readme, I found a dockerfile and had an aha moment. This was it!! I had to run the docker file before the tests, and after that all of the tests started running. Adding the test was the easy part because I was already a little familiar with the code. I have pushed the tests with a screenshot of them passing. &lt;/p&gt;

&lt;h3&gt;
  
  
  My changes in detail
&lt;/h3&gt;

&lt;p&gt;So there was this route which returned 500 when the database for a user did not exist. The story wanted me to return 404 if the user does not exist and 410 if the user deleted their account recently; the latter got me scratching my head. I looked at the logs in the screenshot of the issue and traced the files where this 500 could be returned. I changed it to 404 (which I ended up reverting, because this code is used by a lot of other flows and we don't want to change that, remember?). Then I traced back a little and found the entry point of the route. I just added a small check there to retrieve the account using the WebID passed in the params. I used the ctx to run a function that gave me the account, and I returned 404 if the account was not there and 410 if it had a deletedAt property. I found out that the delete function adds a row to the deleted accounts, so I just leveraged that row.&lt;/p&gt;
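&lt;p&gt;The check described above boils down to something like this sketch. The names and the error shape here are hypothetical, not the actual semapps code; the real change throws through the repo's mutation module and looks the account up via the ctx.&lt;/p&gt;

```javascript
// Illustrative only: map a missing account to 404 and a soft-deleted
// account (one carrying a deletedAt property) to 410.
function checkAccountStatus(account) {
  if (!account) {
    const err = new Error('Not found');
    err.code = 404;
    throw err;
  }
  if (account.deletedAt) {
    const err = new Error('Gone');
    err.code = 410;
    throw err;
  }
  // Otherwise the account is live and the request can proceed.
}
```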

&lt;p&gt;When I wanted to hit this route, Postman was just not working; it was giving me a very cryptic error. After trying for a little bit, I had to consult with the other developers, and they asked me to comment out a check, which fixed my issue. I knew that this check was the issue, but I was unsure whether it could be removed for debugging. The resulting code block might be short, but trust me, this was definitely one of the hardest projects for me to work on, on par with the big repos I worked on at my co-op.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gq9off24djflsa2835i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gq9off24djflsa2835i.png" alt="Image description" width="655" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The journey
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpgdi1luf1x4pclfiyn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpgdi1luf1x4pclfiyn3.png" alt="Image description" width="637" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Honestly, this was the hardest repo that I have had to onboard myself onto. There was no TypeScript, and a ctx was being passed everywhere. It took me so much time just to wrap my head around this architecture built on ctx. Another hurdle was the fact that the code was split across two different repos, so I had to link the code before I ran the server. Booting the server was pretty straightforward because of the clear documentation, but coding was a little challenging. Every time I tested my route using Postman, I had to look at the logs in the first repo, but to find where the logs came from and debug further, I had to move to the other repo. Not a deal breaker, but inconvenient for sure. I was added to a Matrix server, and everyone there was super welcoming and helpful and answered a couple of questions that I asked. I enjoyed working on this repo and learned a thing or two about the context architecture. Thank you for reading.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyv080s33ctbcqfltczv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyv080s33ctbcqfltczv.png" alt="Image description" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GCC - A new pass</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Fri, 06 Dec 2024 01:16:29 +0000</pubDate>
      <link>https://dev.to/krinskumar/gcc-a-new-pass-2l29</link>
      <guid>https://dev.to/krinskumar/gcc-a-new-pass-2l29</guid>
      <description>&lt;p&gt;This is part two of my project where I will be implementing a custom pass in GCC compiler. In my last blog, I built my own version of GCC and made the version my default version. This can be checked using &lt;code&gt;which gcc&lt;/code&gt; which should display the path address of the build. The pass that I will be making a pass that will be responsible for finding duplicate functions and pruning them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the demo pass to work
&lt;/h3&gt;

&lt;p&gt;When I had my personal build running, I started by implementing the demo code that the professor gave us, to get things started and get familiar with how all of this works. I just plugged in the code, which included &lt;br&gt;
a new line in the Makefile.in file -&amp;gt; &lt;code&gt;tree-prunecheck.o&lt;/code&gt;, &lt;br&gt;
a new line in the passes.def file -&amp;gt; &lt;code&gt;NEXT_PASS (pass_prunecheck);&lt;/code&gt;, &lt;br&gt;
a new line in the tree-pass.h file -&amp;gt; &lt;code&gt;extern gimple_opt_pass *make_pass_prunecheck (gcc::context *ctxt);&lt;/code&gt; &lt;br&gt;
and finally the actual implementation in a file called tree-prunecheck.cc. All of this lives in the gcc/gcc folder of the gcc public repository.&lt;/p&gt;

&lt;p&gt;I plugged these into my local gcc tree on the aarch64 class server. I thought that I could run &lt;code&gt;make&lt;/code&gt; and &lt;code&gt;make install&lt;/code&gt; in my previous build and everything would work. I ended up doing that and compiling a simple hello_world program, which ended up not working. I was just not able to find my pass's dump among the dump files, and when I specifically passed my pass name as one of the dumps to be produced, it errored out. I thought the problem was the architecture, so I started building on &lt;code&gt;x86&lt;/code&gt;, but the problem continued. &lt;/p&gt;

&lt;p&gt;I made a build on my own local Mac and faced a different issue altogether. I think it was because of some compatibility issue with my macOS version or something. I made a decision not to waste a lot of time on this, because we had class servers for a reason, and there I can ask my friends and professor for help if something is not working, as opposed to chasing some random issue on my Mac.&lt;/p&gt;

&lt;p&gt;I eventually consulted with one of my friends taking the same course, and he told me that I am supposed to build in an entirely new directory and start everything from scratch; this makes sure everything is clean. I did this, and this time my pass started showing up as one of the dump files when I compiled my program. I don't know whether making a new directory from scratch fixed my problem or resetting the changes in my local gcc did. Whichever it was, I could finally start working on my own pass now.&lt;/p&gt;
&lt;h3&gt;
  
  
  Making changes to the pass
&lt;/h3&gt;

&lt;p&gt;I was also told not to run &lt;code&gt;make&lt;/code&gt; and &lt;code&gt;make install&lt;/code&gt; on the same build directory after making my changes, so every time I made a change I was supposed to do the whole build process. Regardless, I was sure from the start that I wanted to hash each function and compare the hashes. Here is what the pseudocode looks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check if the function is versioned: 

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;No&lt;/code&gt;: Return &lt;code&gt;NOPRUNE&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Yes&lt;/code&gt;: Search for the origin of the function

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Not found&lt;/code&gt;: Return &lt;code&gt;NOPRUNE&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Found&lt;/code&gt;: Calculate the hash of both of the functions and compare

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Hash same&lt;/code&gt;: Return &lt;code&gt;PRUNE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Hash not same&lt;/code&gt;: Return &lt;code&gt;NOPRUNE&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is how my code would look, or so I thought, but I was not aware of how I could share the context of the hashes across different executions. Ultimately I ended up just printing the hashes and looking at the dump file to compare them manually.&lt;br&gt;
This is what the code looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#define INCLUDE_MEMORY
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "backend.h"
#include "tree.h"
#include "gimple.h"
#include "tree-pass.h"
#include "ssa.h"
#include "gimple-iterator.h"
#include "gimple-walk.h"
#include "internal-fn.h"
#include "gimple-pretty-print.h"
#include "cgraph.h"
#include "hash-table.h" // GCC hash utilities
#include &amp;lt;string&amp;gt;        // For std::string

// Pass metadata
const pass_data pass_data_prune_check = {
  GIMPLE_PASS, /* type */
  "prune", /* name */
  OPTGROUP_NONE, /* optinfo_flags */
  TV_NONE, /* tv_id */
  PROP_cfg, /* properties_required */
  0, /* properties_provided */
  0, /* properties_destroyed */
  0, /* todo_flags_start */
  0, /* todo_flags_finish */
};

class pass_prunecheck : public gimple_opt_pass {
public:
  pass_prunecheck(gcc::context *ctxt) : gimple_opt_pass(pass_data_prune_check, ctxt) {}

  /* opt_pass methods */
  bool gate(function *) final override {
    return true; // Always execute this pass
  }

  unsigned int execute(function *) final override;
};

unsigned int
pass_prunecheck::execute(function *fun)
{
  // Check if the function is a clone
  int isClone = DECL_FUNCTION_VERSIONED(fun-&amp;gt;decl);

  if (dump_file) {
    fprintf(dump_file, "Function: %s\n", function_name(fun));
    if (isClone) {
      fprintf(dump_file, "Function is a clone/version: True\n");
    } else {
      fprintf(dump_file, "Function is a clone/version: False\n");
    }
  }

  // Initialize a hash value
  unsigned long hash = 5381;

  // Process all Gimple statements to compute the hash
  basic_block bb;
  FOR_EACH_BB_FN(bb, fun) {
    for (gimple_stmt_iterator gsi = gsi_start_bb(bb); !gsi_end_p(gsi); gsi_next(&amp;amp;gsi)) {
      gimple *g = gsi_stmt(gsi);
      int code = gimple_code(g);

      // Update hash using a simple algorithm (DJB2-inspired)
      hash = ((hash &amp;lt;&amp;lt; 5) + hash) + code; // hash * 33 + code
    }
  }

  // Print the hash
  if (dump_file) {
    fprintf(dump_file, "Hash for function %s: %lu\n", function_name(fun), hash);
  }

  if (dump_file) {
    fprintf(dump_file, "\n\n##### End function prune check, start regular dump of current gimple #####\n\n\n");
  }

  return 0;
}

// Entry point for GCC to register the pass
gimple_opt_pass *make_pass_prunecheck(gcc::context *ctxt) {
  return new pass_prunecheck(ctxt);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
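&lt;p&gt;The hashing step itself is just a DJB2-style fold over the sequence of gimple statement codes. A standalone model of it, with the codes reduced to plain ints since the GCC types aren't needed to see the behavior, looks like this:&lt;/p&gt;

```cpp
// Model of the hash used in the pass: start from the DJB2 seed and
// fold each statement code in with hash * 33 + code.
unsigned long hash_codes(const int *codes, int n) {
  unsigned long hash = 5381;                 // DJB2 starting value
  for (int i = 0; i < n; ++i)
    hash = ((hash << 5) + hash) + codes[i];  // hash * 33 + code
  return hash;
}
```

&lt;p&gt;Two functions whose bodies reduce to the same sequence of statement codes get the same hash. Note this also means two different functions can collide if their statement-code sequences happen to match.&lt;/p&gt;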



&lt;h3&gt;
  
  
  Testing the pass
&lt;/h3&gt;

&lt;p&gt;We were given a utility folder with two different runs, one of which has duplicate functions. This way we can test both cases. This is what the dump file looked like when a duplicate function was in the code. See how we have two hashes with the same value, indicating that the functions are the same. NOTE: There is more in this file; I cropped the screenshot to only show the main part with the hashes displayed.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnx8l104p5flzi55wx9po.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnx8l104p5flzi55wx9po.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this is the run where there was no duplicate code. Here you can notice that the hashes displayed are not the same, indicating that the functions are not the same and therefore no pruning is needed.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98k6excj5vh65wukup4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98k6excj5vh65wukup4s.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pain point
&lt;/h3&gt;

&lt;p&gt;One of the most challenging and annoying parts of this project was building the gcc compiler, primarily because of how long it takes. And I had to wait for the build to complete every time I made a change.&lt;br&gt;
There is very little documentation for gcc in general, and there is a very big learning curve for someone who wants to contribute to it. So doing spikes on some of my ideas was an absolute nightmare, and I spent a lot of hours researching potential solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I think this was one of the most challenging projects I have worked on, not just in this course but in my student journey at Seneca, partly because of so many gaps in my knowledge of gcc and its passes. Even after so many extensions, I was not able to fully complete this project. Still, I think this was a good starting point for me, and I am now much more confident with how passes work. I just need to learn more of the utilities provided by gcc that I can use in my passes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>0.4 - week - 2 - Progress</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Tue, 03 Dec 2024 18:57:07 +0000</pubDate>
      <link>https://dev.to/krinskumar/04-week-2-progress-1i3d</link>
      <guid>https://dev.to/krinskumar/04-week-2-progress-1i3d</guid>
      <description>&lt;p&gt;This is my part two of my contribution and you can check my part one &lt;a href="https://dev.to/krinskumar/04-week-1-objectives-3a00"&gt;here&lt;/a&gt;. Last time I talked about how I want to tackle this and in this blog I will be talking about my progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;So I am making this contribution on this &lt;a href="https://github.com/activitypods/activitypods/issues/348" rel="noopener noreferrer"&gt;issue&lt;/a&gt; where I am supposed to tweak the response based on the status of the user. Currently, the server is returning 500 if the user's dataset is not found. We want to return 404 if the user does not exist and 410 if the user is deleted.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I did
&lt;/h3&gt;

&lt;p&gt;I first tried to find the code that was actually returning the 500 status, but no matter where I looked in the repository, I could not find it. I am aware of what backend server code looks like in JavaScript, but this was different. The reason I found nothing was that all of the important code lives in another repository, and I had to download that and link all the files to make the server and the frontend work. They had some pretty good documentation on how to link these repos so that they work together, and I was also able to find all of the necessary code in that repo. &lt;a href="https://github.com/KrinsKumar/semapps" rel="noopener noreferrer"&gt;This&lt;/a&gt; is my fork of it. Then came understanding the repo. This part took me a lot of time, because the code was not similar to Express; everything worked through a context, which was a first for me.&lt;/p&gt;

&lt;p&gt;Another issue was that I was not able to call the /inbox route, the route I was working on, from the frontend. This is because the route is meant to be called from the backend. I reached out to the devs in a chat room they invited me to, and they recommended I use Postman. One of the issues I was facing was also because I had cloned a broken version of the code, which was breaking the inbox route I was working on. I pulled the new, correct version and started working. &lt;/p&gt;

&lt;p&gt;The context made it very difficult for me to understand the code, and I had to manually look for code that made sense and was used by my inbox route. I made a change at the deepest level of the code and that broke everything, which made me realize that this code is common and is being used by a lot of other routes. So moving forward I had to be very careful and make sure I was not changing the code at the deepest level, at least. Just getting used to the code and tweaking things to deepen my understanding took me about a week. Everyone in the chat room was very helpful and made sure I understood what I was doing. Finally, I added a code block with an additional check that verifies a user exists before we move further in the code, and returns 404 or 410 based on the status of the user. You can look at the change &lt;a href="https://github.com/assemblee-virtuelle/semapps/pull/1337/files" rel="noopener noreferrer"&gt;here&lt;/a&gt;, which is a draft PR. The reason I raised a draft PR was to run my changes by the developers before adding tests, to make sure that the code I added was correct. It's possible that by the time you look at this PR, it is either no longer a draft or already merged. I also wrote a bunch of questions in the draft PR, and currently I am waiting for them to be answered.&lt;/p&gt;

&lt;h3&gt;
  
  
  Moving forward
&lt;/h3&gt;

&lt;p&gt;Now I want to make sure I stay in the loop with the developers, then add tests and documentation, and finally get the PR merged. I have made the mistake before of writing all of the code and raising a PR, only to find out that something I am doing is not what the devs want. This time I am making sure that my changes are approved by the devs before I make a deeper dive. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>0.4 - week - 1 - Objectives</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Fri, 29 Nov 2024 03:25:53 +0000</pubDate>
      <link>https://dev.to/krinskumar/04-week-1-objectives-3a00</link>
      <guid>https://dev.to/krinskumar/04-week-1-objectives-3a00</guid>
      <description>&lt;p&gt;So I will be making my final contribution as a student of OSD and as per requirements I need it to be a bigger and more impactful contributions as compared to my older ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  So whats the plan?
&lt;/h2&gt;

&lt;p&gt;Well, throughout this course I have worked on documentation, added workflows on GitHub, and added or removed small features. This time around I want to make a contribution that actually forces me to learn the entirety of the codebase, and something that excites me. The way I define this is by asking myself some questions, like: does this issue make me want to add tests, or make changes in the CI/CD pipeline?&lt;/p&gt;

&lt;h2&gt;
  
  
  The find - &lt;a href="https://github.com/activitypods/activitypods/issues/348" rel="noopener noreferrer"&gt;Issue&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The issue I will be working on is to add a feature where we send a different response status based on the state of our data in the database. This app, called activitypods, is not a monolithic repo, and it made me download another repo, which is the one I will be contributing to. After a lot of figuring things out and reading a lot of docs, I was able to get the app running, which involved a lot of steps, and I got lost a couple of times. I have been added to a group chat where I can talk directly with the devs. I have not yet started the actual development, but I think this will be difficult because it is my first time working with such a structure. I would like to get the core development done by the end of the first week of December. This will give me enough time to review all the comments and make sure that the PR is ready to be merged by the deadline for release 0.4, which is December 11th.&lt;/p&gt;

&lt;p&gt;As always, thanks for sticking with me on my journey to learning open source, and stay tuned to find out how I did on this issue!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Releasing my open source project.</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Fri, 22 Nov 2024 16:19:04 +0000</pubDate>
      <link>https://dev.to/krinskumar/releasing-my-open-source-project-4aa</link>
      <guid>https://dev.to/krinskumar/releasing-my-open-source-project-4aa</guid>
      <description>&lt;p&gt;This week we will be releasing out open source project into the wild so that other developer can use it with their package manager as opposed to having to download our github project manually every time we release a new update.&lt;/p&gt;

&lt;p&gt;My code was in JavaScript, with dependencies managed by &lt;a href="https://www.npmjs.com/" rel="noopener noreferrer"&gt;npm&lt;/a&gt;, so I decided to publish my package on npm. First, I decided to groom my package by updating fields like the name and description in my &lt;code&gt;package.json&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;The release process was very intuitive: I just had to log in to npm from my terminal using &lt;code&gt;npm adduser&lt;/code&gt; and run &lt;code&gt;npm publish --access public&lt;/code&gt;. The first time I ran this command, it did not work; it gave me a cryptic error saying I don't have permission to push??!! Turns out that a package named scrappy already exists, so I changed my name to scrappy-cli. This time the publish worked. &lt;/p&gt;

&lt;p&gt;I installed this globally and tried to run it, annnnd, it did not work: the command scrappy-cli was not recognized. After a lot of digging I found out that the command name is taken from the &lt;code&gt;bin&lt;/code&gt; attribute in the package.json; I had forgotten to change it there from scrappy to scrappy-cli. I made the change, and I also found out that we can test our command line before publishing using &lt;code&gt;npm link&lt;/code&gt;. My command line worked fine with npm link, so I published again, and this time when I installed it globally, it worked!&lt;/p&gt;
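&lt;p&gt;For reference, the fix boils down to a &lt;code&gt;bin&lt;/code&gt; entry along these lines. This is only a sketch; the entry-file path here is a placeholder, not the actual file from my repo:&lt;/p&gt;

```json
{
  "name": "scrappy-cli",
  "bin": {
    "scrappy-cli": "./index.js"
  }
}
```

&lt;p&gt;Whatever key you put under &lt;code&gt;bin&lt;/code&gt; becomes the command name when the package is installed globally, which is why leaving it as scrappy broke the scrappy-cli command.&lt;/p&gt;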

&lt;p&gt;I also made a new npm script that bumps the version, publishes to npm, and pushes my changes to GitHub as a new tag. This is what the script looks like: &lt;code&gt;"patch-release": "npm version patch &amp;amp;&amp;amp; npm publish &amp;amp;&amp;amp; git push --follow-tags"&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  User testing.
&lt;/h2&gt;

&lt;p&gt;My tool is very intuitive and easy to use. I asked my friend to use it, and he was able to get it running just from the docs. It did take him some time to figure out the setup process for the LLM API keys.&lt;/p&gt;

&lt;p&gt;If you want to use scrappy-cli you can just run &lt;code&gt;npm i -g scrappy-cli&lt;/code&gt; and follow the docs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Proper Contributions - 2</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Wed, 20 Nov 2024 20:05:04 +0000</pubDate>
      <link>https://dev.to/krinskumar/proper-contributions-2-1b05</link>
      <guid>https://dev.to/krinskumar/proper-contributions-2-1b05</guid>
      <description>&lt;p&gt;This week we carries on our spirits from our last week and made two new contributions. &lt;/p&gt;

&lt;h1&gt;
  
  
  Issue #1 - &lt;a href="https://github.com/0xflotus/ip2d/issues/728" rel="noopener noreferrer"&gt;ip2d&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Again like last time, I will start with a dud. ip2d is a JavaScript library used to perform conversions on IP addresses. My issue did not concern the functionality of the app; it was to implement a pre-commit hook that runs checks, in our case linting. This makes sure that local changes abide by the linting rules before a commit is made. While this was a small issue to work on, it was still very interesting to me because I had never interacted with pre-commit hooks before, and learning about them was a rabbit hole. I looked at different frameworks like lint-staged, lefthook, and husky. After trying them all, I found that husky was the coolest and left a lot of room for expansion in case more checks are added to the pre-commit hook later. I think this was a great learning experience, and I will set up this pre-commit hook in all of my repos to save myself the linting commits later in the PR.&lt;/p&gt;
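&lt;p&gt;As a rough sketch of what a husky setup looks like (the exact script names depend on the husky version and the repo's lint command, so treat these as assumptions rather than the actual ip2d config), the package.json gains something like:&lt;/p&gt;

```json
{
  "scripts": {
    "lint": "eslint .",
    "prepare": "husky"
  }
}
```

&lt;p&gt;with a &lt;code&gt;.husky/pre-commit&lt;/code&gt; file that simply runs &lt;code&gt;npm run lint&lt;/code&gt;, so a commit is rejected whenever linting fails.&lt;/p&gt;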

&lt;h1&gt;
  
  
  Issue #2 - &lt;a href="https://github.com/tarasglek/chatcraft.org/issues/723" rel="noopener noreferrer"&gt;Chat Craft&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;If you read my previous blog, you already know what Chat Craft is; this will be my second proper contribution to it. In this issue we were removing the RSS feature completely from the app. I started off by removing the header icon responsible for triggering the RSS page. I was not sure whether the functions responsible for the RSS pages were to be removed, so, learning from my mistakes in previous contributions, I just asked. &lt;br&gt;
I started by pushing everything and making a &lt;a href="https://github.com/tarasglek/chatcraft.org/pull/744" rel="noopener noreferrer"&gt;PR&lt;/a&gt;, and I asked my question there. Turns out I was supposed to remove the functions, but I was under the impression that I should remove everything in the functions folder, and that is just what I did. I was wrong: I was only supposed to remove the code in the functions folder that was specifically responsible for the RSS feed. This was pointed out in the PR and was a little embarrassing; I should have checked before I flushed out the code. I then restored everything and removed only the code responsible for the RSS feed generation. All of my changes are pushed now. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This one was a little tough because I had to do a lot of digging to understand which parts I could remove and what needed to stay. Making that decision without being aware of all of the functionality of the app is more difficult than I thought. Also, ask when in doubt rather than removing everything you deem unnecessary lol.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Proper Contributions - 1</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Wed, 20 Nov 2024 17:41:25 +0000</pubDate>
      <link>https://dev.to/krinskumar/proper-contributions-1-1ifb</link>
      <guid>https://dev.to/krinskumar/proper-contributions-1-1ifb</guid>
      <description>&lt;p&gt;Now that we are done with hacktoberfest, we have some experience with contributing to the open source projects. So we were tasked with make bigger contributions as opposed to random line fixes or documentations. So I started looking for issues that I thought were cool and could be done by me.&lt;/p&gt;

&lt;p&gt;I was able to find two such issues. The first step was to assess whether I could complete each issue, after which I asked the maintainer to assign it to me. I worked on both issues in the first week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue #1 - &lt;a href="https://github.com/sahilsheikh-dev/YouTube-Video-Extension/issues/8" rel="noopener noreferrer"&gt;Youtube Chrome Extension&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Well, this was supposed to be an issue with some development, but it turns out that what we were trying to do was not possible. I still want to talk about it, because I poured a lot of hours into it. In short, the project is a Chrome extension that makes your YouTube experience more pleasant. I had never worked with extensions before, so there was definitely a bit of a learning curve, and a lot of the documentation was for the older manifest version, which is not supported by browsers these days. The most challenging part for me was understanding the architecture of the extension and how my changes would fit in.&lt;br&gt;
The issue was about getting the extension to detect the skip button when it appears and click it, so that we don't have to do it manually. I added some JavaScript that would run every 5 seconds, query the DOM for the skip button, and click it. I had two problems with this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;YouTube changes the class name of the skip button, so we cannot just use a constant selector to query for it.&lt;/li&gt;
&lt;li&gt;YouTube can distinguish between a user click and a click done using JavaScript, and it blocks the latter, which made it impossible for me to implement this issue.&lt;/li&gt;
&lt;/ol&gt;
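&lt;p&gt;For illustration, the polling approach I tried looks roughly like this. The selector below is a made-up placeholder, which is exactly problem #1: YouTube rotates its class names, and per problem #2 the synthetic click gets blocked anyway.&lt;/p&gt;

```javascript
// Sketch of the polling approach: look for a skip button and click it.
// NOTE: '.ytp-skip-ad-button' is a placeholder selector, not YouTube's
// actual class name, and YouTube blocks clicks triggered from script.
function clickSkipIfPresent(doc) {
  const button = doc.querySelector('.ytp-skip-ad-button');
  if (button) {
    button.click();   // a synthetic click, which YouTube detects and blocks
    return true;      // a button was found and clicked
  }
  return false;       // nothing to skip right now
}

// In the extension's content script this would run on an interval:
// setInterval(() => clickSkipIfPresent(document), 5000);
```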

&lt;h2&gt;
  
  
  Issue #2 - &lt;a href="https://github.com/tarasglek/chatcraft.org/issues/736" rel="noopener noreferrer"&gt;Chat Craft&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;I had a lot of fun working on this issue and learned a lot along the way. Chat Craft is a web chat bot with integrations for different LLMs, made specifically for programmers. My issue was to upgrade the HTML renderer to the current stable version of the library we were using. The library was &lt;code&gt;@iframe-resizer/react&lt;/code&gt; and we were on a really old version, so it was upon me to upgrade it. I started by looking at the documentation and found out that they had split the code into two new packages. I installed one using &lt;code&gt;pnpm i&lt;/code&gt;, then followed the documentation and changed the way we used the iframe. &lt;br&gt;
When I started, I could not get anything to work for the first two hours no matter what I tried. It turns out I was supposed to pass a license option to the package, which in our case was an open source license. I had ignored it at first because I thought I would add it at the end of my programming session. When I got everything working I pushed it and made a &lt;a href="https://github.com/tarasglek/chatcraft.org/pull/741" rel="noopener noreferrer"&gt;PR&lt;/a&gt;.&lt;br&gt;
I was told to use the child script from the package itself as opposed to hard-coding it. I had hard-coded it because the script from the previous version was hard-coded as well, which goes to show that old code is not set in stone and that, when in doubt, we should clarify with the maintainers. I added the other package and used the script from it. Lastly, we found out that the new package produced a lot of console spam, which was a turn-off, so now we are looking at other potential HTML-rendering libraries for React.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion.
&lt;/h2&gt;

&lt;p&gt;I learned a lot from my two contributions this week. I started my open source journey with documentation and small CI fixes, but now I am more confident when talking to people in issues on GitHub. I think this is just the start, and I shall slowly start working on bigger and more impactful issues soon.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Adding CI tests for my open source.</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Thu, 14 Nov 2024 22:07:26 +0000</pubDate>
      <link>https://dev.to/krinskumar/adding-ci-tests-for-my-open-source-3j6j</link>
      <guid>https://dev.to/krinskumar/adding-ci-tests-for-my-open-source-3j6j</guid>
      <description>&lt;p&gt;So last week we added tests in my open source project. So this week we will be running these tests on every PR and commit we make and make sure that the new changes to be merged are not breaking anything. So how did we do this. &lt;/p&gt;

&lt;h2&gt;
  
  
  Github Actions
&lt;/h2&gt;

&lt;p&gt;GitHub Actions lets us run these smoothly using a YAML file stored in the &lt;code&gt;.github/workflows&lt;/code&gt; directory. Adding the content of the file was not very difficult for me because I built a similar feature in &lt;a href="https://github.com/dscnsec/Stake-frontend-clone/issues/7" rel="noopener noreferrer"&gt;one of my contributions&lt;/a&gt; for Hacktoberfest. &lt;/p&gt;

&lt;p&gt;So the content of my YAML file is as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: PR Lint

on:
  pull_request:
    branches:
      - main
      - develop
      - feature/*
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm run test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tested this by making commits and also a PR and it worked in each instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Partner contribution
&lt;/h2&gt;

&lt;p&gt;Now it was my turn to test my classmate's CI pipeline and make sure their setup was correct and complete, so I raised this &lt;a href="https://github.com/Elisassa/Code-Formatter-Advisor/pull/9" rel="noopener noreferrer"&gt;PR&lt;/a&gt;. Besides the CI part, it was a learning curve for me because I had never written tests in Python before. I roughly copied the syntax of the tests that were already there, and I could see that my tests were passing when I raised the PR. The green check when CI passes is always satisfying. I had used Jest, a popular JavaScript testing framework, whereas they used something called pytest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I think CI is very cool and makes our lives easier. Now we don't have to download every branch and test it locally before merging the PR; that part is handled by GitHub, so reviewers can focus more on the main changes made in the PR. While researching CI and GitHub Actions, I found out that we can add more cool stuff to our workflow, like linting and deploying.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Adding tests in my first open source project.</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Sat, 09 Nov 2024 04:46:49 +0000</pubDate>
      <link>https://dev.to/krinskumar/adding-tests-in-my-first-open-source-project-4586</link>
      <guid>https://dev.to/krinskumar/adding-tests-in-my-first-open-source-project-4586</guid>
      <description>&lt;p&gt;To catch you up, I started my first open source project this fall which is called Scrappy. &lt;a href="https://github.com/KrinsKumar/Scrappy" rel="noopener noreferrer"&gt;Scrappy&lt;/a&gt; is a command line tool that will convert any website that can be scraped into a markdown. The code base is getting bigger and bigger each commit and its getting harder to manage it. So we decided to start adding unit tests to make sure we are not missing anything before we push commits.&lt;/p&gt;

&lt;p&gt;In this &lt;a href="https://github.com/KrinsKumar/Scrappy/commit/8ca4536a465c53319dd2d3df76bd34eaa9c3ba14" rel="noopener noreferrer"&gt;commit&lt;/a&gt;, I added the tests that were needed for now. So let's go over what stack I used and why.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit test - Jest
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://jestjs.io/" rel="noopener noreferrer"&gt;Jest&lt;/a&gt; is one of the most popular testing frameworks for Javascript. I had prior experience with working with jest, so I decided to use jest for this project. Setting up jest is very very easy, just run &lt;code&gt;npm install --save-dev jest&lt;/code&gt; and add the jest script onto your scripts in package.json with an appropriate command name. And you are done, you can start adding tests with proper name of the files and run your command in your terminal to get the tests running.&lt;br&gt;
Understanding jest is very simple, here is a snippet of one of the tests I made for this commit. Here I am only verifying that passing the -v command in the args in the function &lt;code&gt;validateArgs&lt;/code&gt; should return False. You can see how intuitive it is to use jest. While adding these tests I realized that a branch of my code which uses --api-key flag was broken. Which made me realize how we can get caught up in our new features and might accidentally break our old functionality. Now that I have tests I can rest assured that my changes are backwards compatible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;validate args with -v&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/usr/local/bin/scrappy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;files/input.txt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;-v&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;validateArgs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing the LLM capabilities
&lt;/h3&gt;

&lt;p&gt;For testing the LLM, we decided to mock the HTTP call to the LLM service so that we can base our tests on a known response. There are many libraries to achieve this, but I just hard-coded the HTTP response and mocked it with Jest's mocking. I decided not to use a framework because my project is small and it was not necessary; maybe in the future, if we are making too many HTTP calls, we can think about using a library to imitate a server with the URLs we need. I used the mockResolvedValue function offered by Jest to make sure I always get the same value when I call the Groq LLM function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;convert the body into markdown&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;groq&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Groq&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1234&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="nx"&gt;groq&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;jest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;mockResolvedValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mockGrokResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;the body of the webpage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;md&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getGroqResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;groq&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;md&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;not&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toBeNull&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;md&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Adding tests made me more confident in my code and less worried when others work on my open source project. I also found a bug in one of my branches, which will eventually save time for future contributors to the project. One of my main conclusions from this process is to add tests as you go and not push them off into a backlog task waiting to be completed; that can pile up and cause issues.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Compiler Passes: A Deep Dive into GCC Compilation</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Mon, 04 Nov 2024 00:29:01 +0000</pubDate>
      <link>https://dev.to/krinskumar/understanding-compiler-passes-a-deep-dive-into-gcc-compilation-5h5g</link>
      <guid>https://dev.to/krinskumar/understanding-compiler-passes-a-deep-dive-into-gcc-compilation-5h5g</guid>
      <description>&lt;p&gt;When delving into compiler passes, it's essential to first grasp the build process, even if it's not strictly necessary. We’ll be contributing to GCC, understanding the compilation process is invaluable as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compilation Process
&lt;/h2&gt;

&lt;p&gt;To explore the intricacies of compilation, we'll utilize the flags -fdump-tree-all and -fdump-rtl-all. You might be surprised to learn that compiling a simple program like "Hello, World!" involves approximately 180 steps, depending on the attributes in use. It's astonishing to realize the amount of processing required just to compile such a straightforward program!&lt;/p&gt;

&lt;h2&gt;
  
  
  Compilation passes in more detail
&lt;/h2&gt;

&lt;p&gt;I created a basic C program that prints "Hello, World!" and compiled it using GCC with the -fdump-tree-all flag, running the command &lt;code&gt;gcc -fdump-tree-all -o main main.c&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jsnolgguicjgpavfj85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jsnolgguicjgpavfj85.png" alt="Image description" width="584" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, we can utilize the -fdump-rtl-all flag to observe the RTL (Register Transfer Language) representation of the compilation process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favvycs35yog6vpulrtbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favvycs35yog6vpulrtbg.png" alt="Image description" width="604" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Reflections on My Experience
&lt;/h2&gt;

&lt;p&gt;Complexity of Compilation: I was genuinely surprised by the complexity of the compilation process. The sheer number of passes and transformations the code undergoes opened my eyes to the sophistication of compilers.&lt;/p&gt;

&lt;p&gt;Intermediate Representations: Understanding the tree and RTL representations was particularly fascinating. These representations are critical in optimizing the code and understanding how high-level constructs are translated into low-level instructions.&lt;/p&gt;

&lt;p&gt;The most intriguing aspect was observing how small changes in the source code affected the intermediate outputs. It was a powerful demonstration of the compiler's role in transforming high-level code into machine-level instructions. Additionally, the use of different dump options provided unique insights into various stages of the compilation process.&lt;/p&gt;

&lt;p&gt;While I enjoyed the learning process, I faced challenges in fully grasping the implications of certain compiler optimizations. Some of the intermediate outputs were complex, and understanding the rationale behind specific transformations required a deeper knowledge of compiler theory. Not to mention the long time it takes to build my own version of the GCC compiler.&lt;/p&gt;

&lt;p&gt;I realized there are gaps in my understanding of advanced compiler optimization techniques and the underlying principles of how compilers work.&lt;/p&gt;

</description>
      <category>spo</category>
    </item>
    <item>
      <title>GCC - Personal Version</title>
      <dc:creator>Krinskumar Vaghasia</dc:creator>
      <pubDate>Mon, 04 Nov 2024 00:03:21 +0000</pubDate>
      <link>https://dev.to/krinskumar/gcc-personal-version-o4f</link>
      <guid>https://dev.to/krinskumar/gcc-personal-version-o4f</guid>
      <description>&lt;p&gt;Now that we understand a little bit of assembly in difference architectures, we can play with compilers now. For this course we will be interacting with the GCC compiler. There is project associated with this on AArch64 systems. For this stage of the project, we were asked to make ourselves familiar with the gcc build process. In short, we were supposed to build the current development version of GCC and just experiment with it. &lt;/p&gt;

&lt;h2&gt;
  
  
  GCC
&lt;/h2&gt;

&lt;p&gt;Before we talk about what I did with the build, let's understand GCC. GCC started out as a popular compiler for C but has since expanded its support to more languages. It is also responsible for the optimization and portability options that we get while compiling our programs. Some of them are listed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;IFUNC: allows a program to provide multiple implementations of a function, plus a resolver function that determines which implementation will be used at run time. This way we can write multiple versions of a function, each optimized for a different platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;FMV (function multi-versioning): an improved version of IFUNC where the resolver function is not something we have to write ourselves. We only have to provide a target attribute so that the compiler-generated resolver knows which variant to use where.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AFMV (automatic function multi-versioning): you can imagine how the amount of work grows when writing variants for every architecture. This capability, still in the making, solves that problem, and it is what we will be building for this project. With AFMV, a compiler option will be used to specify the architectural variants, and the functions that would benefit will be cloned automatically. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The start of the project.
&lt;/h2&gt;

&lt;p&gt;To start off, I cloned the development version of GCC using &lt;code&gt;git clone git://gcc.gnu.org/git/gcc.git gcc&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now we can create a build directory and, from inside it, configure the build to install into its own prefix using &lt;code&gt;~/gcc/configure --prefix=$HOME/gcc-test-001&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally we can start the build process. GCC takes a long time to build, so we will use the &lt;code&gt;time&lt;/code&gt; command and the -j option to specify the number of jobs to run during the build. This is the command I used: &lt;code&gt;time make -j 4 |&amp;amp; tee build.log&lt;/code&gt;. Keep in mind this might take a lot of time depending on what system you are using; for me it took almost 2 hours!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6ocb4kx8xhvosav2b11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6ocb4kx8xhvosav2b11.png" alt="Image description" width="153" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, after the long wait, we can install the compiler using &lt;code&gt;make install&lt;/code&gt;. After the installation we don't want to use the system's gcc, we want to use our new compiler, so we put it first on the path using &lt;code&gt;PATH=$HOME/gcc-test-001/bin:$PATH&lt;/code&gt; and check which one is picked up using &lt;code&gt;which gcc&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We learned how to build our own version of GCC. Next, we will talk about the compilation passes that a normal compilation goes through!&lt;/p&gt;

</description>
      <category>spo</category>
    </item>
  </channel>
</rss>
