<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gerardo Enrique Arriaga Rendon</title>
    <description>The latest articles on DEV Community by Gerardo Enrique Arriaga Rendon (@jerryhue).</description>
    <link>https://dev.to/jerryhue</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F701066%2Ffae404ff-ae0f-47a9-aa2a-022de3b5d2af.png</url>
      <title>DEV Community: Gerardo Enrique Arriaga Rendon</title>
      <link>https://dev.to/jerryhue</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jerryhue"/>
    <language>en</language>
    <item>
      <title>Goodbye, OSD700</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sun, 24 Apr 2022 05:08:10 +0000</pubDate>
      <link>https://dev.to/jerryhue/goodbye-osd700-3ggm</link>
      <guid>https://dev.to/jerryhue/goodbye-osd700-3ggm</guid>
      <description>&lt;p&gt;And so the curtains fall on.&lt;/p&gt;

&lt;p&gt;With this last post, I will officially say goodbye to OSD700, a course full of discoveries and learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's talk about OSD700
&lt;/h2&gt;

&lt;p&gt;I have mentioned this several times before, but I would like to mention it again so that the post is self-contained: OSD700 is a course option offered at the college where I am studying, Seneca College. Despite being a course that counts toward graduation, we almost never discussed marks or how I would be graded. The professor in charge of the course stated one thing clearly: "as long as you are contributing and you are showing the effort, you will pass this. Do your best."&lt;/p&gt;

&lt;p&gt;The course description does not do justice to the responsibilities you are given. OSD700 is described as a course where you will have to maintain an open-source project, but there's an important aspect here that is never mentioned: what does it mean to maintain an open-source project?&lt;/p&gt;

&lt;h3&gt;
  
  
  Open-source projects and their challenges
&lt;/h3&gt;

&lt;p&gt;Some people may never be interested in maintaining an open-source project, due to the self-sacrifice required. Since it is an open-source project, you should not expect to get paid at all. Maybe you could get lucky and run a successful donation campaign, so that you can give full-time attention to the project. In most other situations, however, the project is a side thing, since you have to focus on your full-time job, right?&lt;/p&gt;

&lt;p&gt;In my case, I was able to focus on this course like a full-time job, since I had a light courseload. However, it doesn't matter if you are a full-time or part-time maintainer; the challenges are still the same, they just have to be prioritized quite differently.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are those challenges?
&lt;/h4&gt;

&lt;p&gt;For starters, you have to prioritize what you want to focus on at a given time. Assuming you are a full-time maintainer, the ideal is that you can give your best in all areas, but that's just an idealization. Most of the time, you have to give up more work than you thought.&lt;/p&gt;

&lt;p&gt;Maybe you had an idea you wanted to implement, so you, all excited, start implementing a prototype. Then you notice that realising your vision will take more time, so what do you do? If you fall for the sunk-cost fallacy, you might think that it is better to keep developing it until you reach that vision. However, all that time spent on developing that feature is time that could have gone to bug-fixing, paying off technical debt, finding bugs to file, or any other task that would still improve the overall health of your repository. So, in the end, you swallow your pride and say: "it is time to give up on this." It may sound somewhat defeatist, but I think acknowledging that other things have to be prioritised is part of what it means to be an open-source maintainer.&lt;/p&gt;

&lt;p&gt;Another challenge is that of not knowing what your end goal is. For a lot of people, not knowing where they will end up after embarking on an adventure can provoke anxiety; the uncertainty of it all keeps you asking, "am I on the right path?"&lt;/p&gt;

&lt;p&gt;However, instead of being scared of that adventure for the rest of your life, there are two opposite views on it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find a way to set an end goal, and focus on that end goal until you reach it. When you reach it, try to set another end goal.&lt;/li&gt;
&lt;li&gt;Let the adventure take you wherever it may lead you, and just enjoy it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first one may work for people who already have experience with a particular set of problems and would like to have something finished, while the second one is for people who enjoy the journey more than the treasure at the end.&lt;/p&gt;

&lt;p&gt;However, in an open-source project, you may need both: you want to get things done, so that others find your project useful, but you would also like to explore and enjoy what you learn along the way, since that will help stimulate your creativity and develop new ways of solving the problem (and it may help keep your mind from going insane out of boredom).&lt;/p&gt;

&lt;p&gt;One more challenge that one may encounter is having to communicate your ideas to your fellow maintainers, if you are in a team. The idea is that you are hopefully in a collaborative environment, where everybody is willing to listen to anybody. However, just being willing to listen is not enough. You gotta communicate your ideas, even if you think they are bad or that they don't solve the problem. Why? Well, they help you grow as a developer. If your teammates can justify why a certain solution may not be suitable for a specific problem, then you can use that to your advantage and learn from their way of thinking. Developers can create more robust code by considering several situations and cases, so I think that developers can build more robust critical and logical thinking by listening to other ways of solving the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  A little bit of retrospective
&lt;/h2&gt;

&lt;p&gt;Back when I started this semester, we were supposed to write a blog post about the areas we would like to contribute to the most, and being in charge of those areas, too.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/jerryhue/future-ideas-for-telescope-3a4"&gt;my post&lt;/a&gt;, I talked about documentation and dependencies. However, throughout the semester, I mainly focused on the dependency visualization project.&lt;/p&gt;

&lt;p&gt;At the start, I had this cool idea of a dependency tree that you could navigate through to discover all of the dependencies that Telescope uses, but this idea was just that: cool; in terms of functionality and usability, it was horrible.&lt;/p&gt;

&lt;p&gt;After all, the main purpose of the dependency visualization was to make it easier for other people to find GitHub links, so finding an easy way to navigate through hundreds of dependencies was the most important choice. However, before the &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3487"&gt;front-end arrived&lt;/a&gt;, we had to write a service that could provide the information the front-end would need. The service is not that big, and it actually does very little on its own, so it was a manageable project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Some opinions regarding the &lt;code&gt;dependency-discovery&lt;/code&gt; service
&lt;/h3&gt;

&lt;p&gt;If I have to be honest, I want to improve the dependency discovery service. I feel that the API can be improved and better defined. Also, the handling of exceptional cases is almost nonexistent, so there's that...&lt;/p&gt;

&lt;p&gt;In terms of projects, this is probably the first one where I was given total freedom over how to implement and design it. I was just given a set of requirements; the rest was left for me to figure out, which was somewhat difficult to deal with.&lt;/p&gt;

&lt;p&gt;Throughout my life I was always given a task to do, told how they wanted it done, and I was able to follow just that. However, in the real world, most people who tell you what they want you to do are speaking from their own area of expertise. Their solution might not be possible to realise, or it might not be a solution at all. This is why they tend to leave certain things vague: they just don't know what to do in a specific case, or they might not know that that specific case even exists. This is somewhat vexing for computer programs that could accept any kind of input, because essentially, you have what some might consider &lt;em&gt;undefined behaviour&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I am aware that the phrase &lt;em&gt;undefined behaviour&lt;/em&gt; has a technical meaning when discussing the safety of C programs, but I would like to take the phrase and view it more literally. When something that could happen in a program actually happens and you are not sure what the program will do, that is what I mean by &lt;em&gt;undefined behaviour&lt;/em&gt;. It's behaviour of a program that is never documented nor expected, and so it ends up being undefined. It is not like this behaviour does not exist; it's just that it is hidden, arising from the consequences of your program. This is where a lot of bugs can occur (in fact, all hidden bugs in a program are due to this phenomenon).&lt;/p&gt;

&lt;p&gt;I hate that type of &lt;em&gt;undefined behaviour&lt;/em&gt;. Why? Because I hate unreliable programs. If computers are fast, why can't they be correct, too? If I am going to type random words in my text editor, I don't want it to crash on me because I accidentally typed too many keys at once. As the user, I don't know how the program behaves, so I expect that, as long as I don't do anything that is obviously unsafe for the program (like turning off my entire computer during an update), I am fine with how the program does things. Of course, if the program can prevent bad consequences even in those unlikely situations, even better, but that's not a strict requirement.&lt;/p&gt;

&lt;p&gt;However, as a developer, when you are discovering what your program has to do, an important question always lingers in your head: "will I need this in the future?" Some people say yes, some say no. Either way, the answer to this question cannot be boiled down to a simple yes or no; instead, it is reduced to the conclusion that a developer can make after years of experience, and even then, that conclusion might turn out wrong.&lt;/p&gt;

&lt;p&gt;In terms of my set of experiences, I cannot provide an answer yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's left of the &lt;code&gt;dependency-discovery&lt;/code&gt;?
&lt;/h3&gt;

&lt;p&gt;A lot of stuff, actually.&lt;/p&gt;

&lt;p&gt;First of all, we gotta improve how the service itself works. Maybe find a way to improve memory usage, since we cannot store so much information at a time, even though we would like to in order to save on GitHub calls...&lt;/p&gt;

&lt;p&gt;We could improve on the current API so that it is easier to use. For example, the &lt;code&gt;/projects&lt;/code&gt; route does not provide pagination, so you will get all names at once, which can be annoying for interfaces implementing pagination on their end.&lt;/p&gt;
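&lt;p&gt;As a sketch, pagination for such a route could boil down to slicing the full list of names; the function and parameter names here are hypothetical, not the actual service API:&lt;/p&gt;

```javascript
// Minimal pagination sketch (hypothetical, not the real /projects handler):
// slice a full list of project names using 1-based `page` and a `perPage` size.
function paginate(names, page, perPage) {
  const start = (page - 1) * perPage;
  return {
    page: page,
    perPage: perPage,
    total: names.length,
    projects: names.slice(start, start + perPage),
  };
}

// Example: 5 names, 2 per page, asking for page 2
const result = paginate(['a', 'b', 'c', 'd', 'e'], 2, 2);
console.log(result.projects); // ['c', 'd']
```

&lt;p&gt;A client could then walk the pages until &lt;code&gt;page * perPage&lt;/code&gt; reaches &lt;code&gt;total&lt;/code&gt;, instead of receiving every name at once.&lt;/p&gt;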

&lt;p&gt;Another thing that could be done is to research what other functionality might be useful for the service. This might not be necessary, since the service had a single purpose, but if this is an API that other clients could consume, maybe we could try to expand more on what could be possible with this service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final words
&lt;/h2&gt;

&lt;p&gt;What's left to say? Probably a lot, but I don't want to make this a 20-minute read, since that'd just be me rambling on and on about certain topics related to my experiences.&lt;/p&gt;

&lt;p&gt;I would like to end this post by thanking everybody who participated in the OSD700 course and gave their support to bring Telescope to version 3.0. Best wishes to everybody!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>telescope</category>
    </item>
    <item>
      <title>Before the finale...</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sat, 23 Apr 2022 09:47:18 +0000</pubDate>
      <link>https://dev.to/jerryhue/before-the-finale-gl7</link>
      <guid>https://dev.to/jerryhue/before-the-finale-gl7</guid>
      <description>&lt;p&gt;Although I should have written this post a week ago, I'm glad I am writing it now, since I feel that I can properly divide the topics I would like to talk about in this and next blog post.&lt;/p&gt;

&lt;p&gt;For the people who lack context: OSD700 is a course option offered to students taking certain programs at Seneca College. As part of the course work, you are supposed to contribute to an open source project that the school supports: Telescope.&lt;/p&gt;

&lt;p&gt;I would like to talk more about this, but I want to leave it for the next post, as it will be the final one I write on Telescope directly. That does not mean I will stop writing about Telescope; rather, I will stop writing about it in the context of OSD700. I may approach talking about Telescope in different ways, and experiment a little bit more!&lt;/p&gt;

&lt;p&gt;Either way, this post and the next one are opposite sides of the same coin, that coin representing the finale of this "emotional" character development arc of mine (not really). In the last post, I would like to recap my adventure in this course, what I hope I have learned, what I managed to contribute to Telescope as a whole, and my aspirations going forward.&lt;/p&gt;

&lt;p&gt;However, we gotta talk about what we are going to ship in Telescope 3.0, right? Since old habits die hard, we are still going to talk about the PRs that I managed to contribute for 3.0, as well as what else went into release 3.0. Also, it is not like this release is going to be the last one for Telescope; there's still &lt;a href="https://github.com/Seneca-CDOT/telescope/issues"&gt;plenty of work to be done&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what's up with the release?!
&lt;/h2&gt;

&lt;p&gt;Well, this release was wild! It is kind of unfortunate that we couldn't have a calm release for the final one (instead, the alpha release was much calmer...). There are a couple of problems that are going to be addressed over the weekend, because the team was starting to feel tired after a long meeting session where we prepared the remaining PRs for merging.&lt;/p&gt;

&lt;h2&gt;
  
  
  What did you manage to submit for 3.0.0?
&lt;/h2&gt;

&lt;p&gt;Most of the PRs I did for this release were small, since I was taking a step back to focus on other courses I had to pay attention to.&lt;/p&gt;

&lt;p&gt;The most &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3520"&gt;remarkable one&lt;/a&gt; would be moving the star field element that @dbelokon worked on in &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3149"&gt;#3149&lt;/a&gt; to Docusaurus. This one was fairly straightforward, since I had done something similar in the past (throwback to what I had to do related to WebAssembly!). I did not add any new code; instead, I adapted it to Docusaurus. I had to follow up with a few fixes, since the original PR was missing &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3467"&gt;something that nobody&lt;/a&gt; noticed until it was time to build and deploy the docs.&lt;/p&gt;

&lt;p&gt;You can also include the &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3542"&gt;documentation for the beloved &lt;code&gt;dependency-discovery&lt;/code&gt; service&lt;/a&gt;, that describes the API of the service in a more detailed manner.&lt;/p&gt;

&lt;p&gt;And that's pretty much it. I did work on other PRs, but they were small fixes to stuff I had to fix so I could get on with other tasks.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>telescope</category>
      <category>osd700</category>
    </item>
    <item>
      <title>Telescope 3.0.0-alpha</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sat, 23 Apr 2022 02:49:41 +0000</pubDate>
      <link>https://dev.to/jerryhue/telescope-300-alpha-598n</link>
      <guid>https://dev.to/jerryhue/telescope-300-alpha-598n</guid>
      <description>&lt;p&gt;&lt;em&gt;This post should have been posted two weeks ago, but I didn't write it, so let's imagine that I wrote it two weeks ago :)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;And yes, this post is a follow-up of the other I posted...&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, release 3.0.0-alpha has happened, and with it, we are nearing our final release: release 3.0.0.&lt;/p&gt;

&lt;p&gt;I can't help but feel a little bit nostalgic when looking at the first release I worked on, release 2.5. Even though it has been only four months, it feels like much more has happened.&lt;/p&gt;

&lt;p&gt;Either way, what did I manage to contribute for this release? Even though I was not as active as in previous releases, I managed to finish the tasks that I described in my previous blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Doing backups: not as difficult as it sounds
&lt;/h2&gt;

&lt;p&gt;I'm glad we chose Postgres as a database. Not only because it's open-source, or that it's free, or because it is well-integrated with Supabase, but also because it has easy-to-use client programs for backup creation and restoration.&lt;/p&gt;

&lt;p&gt;I am not an expert on backups, so I don't know much about making backups of databases, but from the experience I have making backups of my personal data, I always found it somewhat unreliable to have to use a third-party program to create them, since the Windows built-ins are not good enough. I am glad I was proven wrong when it came to the difficulty of it.&lt;/p&gt;

&lt;p&gt;However, there was an important thing I had to take into account. All of our important services are deployed as Docker containers, so using &lt;code&gt;localhost:5432&lt;/code&gt; to refer to the database is not going to work. The original idea was to create a script and run it on the host computer running the containers. However, &lt;a href="https://github.com/humphd"&gt;&lt;code&gt;@humphd&lt;/code&gt;&lt;/a&gt; pointed out that that was not going to work, and that we had to move the script into its own container that accessed the database container through the Docker network.&lt;/p&gt;

&lt;p&gt;So, after reviewing how to write Dockerfiles, the next step was to figure out how to run the script. The main idea is that the script is run as a cron job at a specific time inside the container. I was lucky enough to find a &lt;a href="https://devopsheaven.com/cron/docker/alpine/linux/2017/10/30/run-cron-docker-alpine.html"&gt;blog post&lt;/a&gt; that explained just what I needed. I had to place the script into a folder of the container's file system so that it can be run at 2 o'clock in the morning.&lt;/p&gt;
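&lt;p&gt;For reference, the core of that setup is small. This is a hypothetical sketch (the paths and names are invented, not the actual Telescope files): an Alpine-style crontab entry that fires at 2 a.m., and the kind of &lt;code&gt;pg_dump&lt;/code&gt; call it would run against the database container over the Docker network:&lt;/p&gt;

```shell
# Hypothetical crontab entry (e.g. in /etc/crontabs/root): run the backup daily at 02:00
0 2 * * * /usr/local/bin/backup.sh

# Inside backup.sh, the backup itself can be a single pg_dump call that addresses
# the database container by its service name instead of localhost
pg_dump --host db --username postgres --format custom --file /backups/telescope.dump telescope
```

&lt;p&gt;Addressing the container by service name is what makes this work from inside the Docker network, which is exactly why the script could not stay on the host.&lt;/p&gt;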

&lt;p&gt;That was only for creating backups, however. I also had to write a utility script that would restore the database from the backup generated by my script. Again, thanks to the wonderful client programs offered by the Postgres team, this was a cakewalk.&lt;/p&gt;

&lt;p&gt;The major difference between the restoration script and the backup script is that the restoration script does not have to run periodically, so I just included it inside the container. That way, a system administrator can connect to the container and run the script inside it. With Portainer available, this task becomes fairly straightforward and accessible. If you want to check the PR, &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3405"&gt;here&lt;/a&gt; it is, for your delight.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;dependency-discovery&lt;/code&gt;, are you tested?
&lt;/h2&gt;

&lt;p&gt;So, after having a crash course on unit tests for the nth time, and some reading on the &lt;code&gt;jest&lt;/code&gt; documentation, I wrote the tests for the &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3431"&gt;&lt;code&gt;/projects&lt;/code&gt;&lt;/a&gt; and the &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3412"&gt;&lt;code&gt;/github&lt;/code&gt;&lt;/a&gt; routes.&lt;/p&gt;

&lt;p&gt;The main problem I have when writing unit tests is that I don't know how much counts as a "unit". Some websites say that a unit can be a function, while others say that a unit is a class, and others say that an entire module is a unit! With so many different definitions, it is hard to choose a source of truth.&lt;/p&gt;

&lt;p&gt;Instead of worrying about the actual definition of a unit test, I had to understand the reasoning behind it. What makes a unit test different from an integration test or an end-to-end test? Unit tests tend to be small, so that many of them can run quickly at once. They tend to have few points of failure. They also tend not to depend directly on anything that could influence the result of the test, among other things.&lt;/p&gt;

&lt;p&gt;So, in this case, I had to understand something about these tests. Here, we are testing just the routes and their responses, which means we don't care about how the modules the routes depend on do their work; we only care about what they give us in return. We will assume that they work (although in some cases, they might not), so that we can focus on the defects in our specific code, instead of the whole system at a time. This brings up the important concept of mocks.&lt;/p&gt;

&lt;p&gt;When I started reading the &lt;code&gt;jest&lt;/code&gt; documentation, it mentioned how to use mocks and the like, but I failed to understand why you would want to mock your own code. Well, the lesson was: it does not matter if the dependencies your code has are also part of the project; they should be treated as a third-party library that will always work when the tests start running. This helped me figure out how to write the mocks I needed for the unit tests, and thus helped me write the tests themselves, too.&lt;/p&gt;
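&lt;p&gt;That lesson can be sketched without &lt;code&gt;jest&lt;/code&gt; at all. In this hypothetical example (none of these names are the real service code), the route handler receives its GitHub client as a parameter, and the test swaps in a hand-written stand-in that always works, which is the same role &lt;code&gt;jest.mock()&lt;/code&gt; plays for real modules:&lt;/p&gt;

```javascript
// Sketch of the idea behind mocking (plain Node, hypothetical names):
// the handler depends on something that talks to GitHub; the test replaces
// that dependency with a stub that is assumed to always work, so only the
// handler's own logic is exercised.
function makeIssuesHandler(githubClient) {
  return async function handler(projectName) {
    const issues = await githubClient.getIssues(projectName);
    return { project: projectName, count: issues.length, issues: issues };
  };
}

// A hand-written mock playing the role of the "third-party that always works"
const mockClient = {
  getIssues: async function () {
    return [{ title: 'fake issue' }];
  },
};

makeIssuesHandler(mockClient)('telescope').then(function (res) {
  console.log(res.count); // 1
});
```

&lt;p&gt;The test never touches the network, so a failure can only come from the handler itself, which is the whole point of isolating the "unit".&lt;/p&gt;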

&lt;p&gt;And with that, the work for release 3.0.0-alpha has been finalized. Now, onto the final step, release 3.0.0!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>telescope</category>
      <category>osd700</category>
    </item>
    <item>
      <title>Reaching the final stage</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Fri, 22 Apr 2022 19:20:36 +0000</pubDate>
      <link>https://dev.to/jerryhue/reaching-the-final-stage-13hc</link>
      <guid>https://dev.to/jerryhue/reaching-the-final-stage-13hc</guid>
      <description>&lt;p&gt;&lt;em&gt;This post should have been posted three weeks ago, but I didn't write it, so let's imagine that I wrote it three weeks ago :)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We are entering the final set of releases: &lt;code&gt;3.0&lt;/code&gt;, this time starting with its &lt;code&gt;alpha&lt;/code&gt; version. Telescope has come a long way, and there are some major changes still pending before Telescope 3.0 actually arrives.&lt;/p&gt;

&lt;p&gt;A few things I have to start working on are related to the Postgres backups that I promised a long time ago. Since we are now going to use a &lt;strong&gt;real&lt;/strong&gt; database ("real", as in persistent, sorry &lt;code&gt;redis&lt;/code&gt;), we have to worry about the data we are storing, and the way we take care of that data is by making backups and storing those backups somewhere else.&lt;/p&gt;

&lt;p&gt;Well, we decided to break that issue into two steps: figure out how to make the backups at all, and then figure out how to store them. When we finish the first one, we will at least have some backups (although they are not going to be stored in a separate location yet).&lt;/p&gt;

&lt;p&gt;Also, I started preparing to write the tests for the &lt;code&gt;dependency-discovery&lt;/code&gt; service; more specifically, the tests for the &lt;code&gt;/project&lt;/code&gt; and &lt;code&gt;/github&lt;/code&gt; routes. To prepare, I went over the &lt;code&gt;jest&lt;/code&gt; documentation, since my experience writing tests is almost zero...&lt;/p&gt;

&lt;p&gt;Either way, look forward to the new release!&lt;/p&gt;

</description>
      <category>osd700</category>
      <category>opensource</category>
      <category>telescope</category>
    </item>
    <item>
      <title>Telescope 2.9</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Mon, 28 Mar 2022 00:32:10 +0000</pubDate>
      <link>https://dev.to/jerryhue/telescope-29-24fj</link>
      <guid>https://dev.to/jerryhue/telescope-29-24fj</guid>
      <description>&lt;p&gt;Welp, Telescope 2.9 release a few days ago.&lt;/p&gt;

&lt;p&gt;Amasia and I were in charge of making the release a reality, since we were sheriffs for that week (a sheriff is like a supervisor of sorts). I think we both did a pretty good job this week, although she and I think we could have done better (since several tasks were pushed to the next release...).&lt;/p&gt;

&lt;p&gt;A lot of things were merged for release 2.9!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The parser service is almost done! Well, it is technically done, but we need to remove the legacy back-end that Telescope currently uses. For now, we had to turn it off, and we are expecting to file a PR that removes this back-end once and for all.&lt;/li&gt;
&lt;li&gt;Setting up our linter with the current monorepo structure is done! Now, we have to figure out how to migrate things like tests and build steps (if it is even possible...).&lt;/li&gt;
&lt;li&gt;Although not yet in active use in production, the client authentication with supabase has been merged, and hopefully, after release 3.0, we can use Supabase for real.&lt;/li&gt;
&lt;li&gt;The search service got a nice upgrade in the back-end: it now supports autocompletion, so if you give it a half-written name, it will return several suggestions, just like Google or YouTube do.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;dependency-discovery&lt;/code&gt; currently features searching for issues of repositories linked to the dependencies of Telescope, with some minor implementation caveats.&lt;/li&gt;
&lt;li&gt;There are other features that got merged, such as YouTube banner information support for YouTube posts, and a cool star field that shows GitHub contributor profiles.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Experiences as a sheriff
&lt;/h2&gt;

&lt;p&gt;It was the first time I did something like this, and as always, I found a lot of things I could improve on. Some funny anecdotes:&lt;/p&gt;

&lt;h3&gt;
  
  
  Telescope 2.9 released early?
&lt;/h3&gt;

&lt;p&gt;We tried to do a patch release (2.8.3), and it went somewhat wrong. We asked for Duc's guidance, so that we could prepare for release 2.9, when he accidentally released Telescope 2.9! The command we use to do the release is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm version version-tag &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Message..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, &lt;code&gt;pnpm&lt;/code&gt; (and &lt;code&gt;npm&lt;/code&gt;) have a shorthand if you don't want to specify the &lt;code&gt;version-tag&lt;/code&gt; value by hand. So, if you do regular patch updates (like &lt;code&gt;2.8.73&lt;/code&gt;!), you can write&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm version patch &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Message..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, Duc accidentally used the shorthand with &lt;code&gt;minor&lt;/code&gt;, which bumps the second number in the version tag, so &lt;code&gt;pnpm&lt;/code&gt; took Telescope straight to version &lt;code&gt;2.9&lt;/code&gt;!&lt;/p&gt;

&lt;p&gt;We ran some commands to fix this. We had to reset the &lt;code&gt;master&lt;/code&gt; branch on the &lt;code&gt;upstream&lt;/code&gt; repository, as well as delete the tags associated with the commits. We also deleted the release changelogs that get generated every release. After learning from our mistake, Duc will probably never use the shorthand again...&lt;/p&gt;

&lt;h3&gt;
  
  
  Huge backlog for release 2.9
&lt;/h3&gt;

&lt;p&gt;During our first meeting of the week, we were going through the open PRs to figure out the current progress on them, and see what issues would need to be figured out for successfully merging them. After going through the PRs, we decided to go through the issues.&lt;/p&gt;

&lt;p&gt;We discovered there were 3 full pages of issues scheduled for 2.9. Amasia was very surprised, and she was also impressed that I was able to hold back my reaction when I pointed out the number of pages.&lt;/p&gt;

&lt;h2&gt;
  
  
  What happened with the dependency-discovery service?
&lt;/h2&gt;

&lt;p&gt;So, the &lt;code&gt;dependency-discovery&lt;/code&gt; service is &lt;em&gt;finished&lt;/em&gt;. We can now build a front-end that does the thing we were planning all along, but there are some huge caveats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is not efficient,&lt;/li&gt;
&lt;li&gt;It does not have tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second caveat can be taken care of, but the first one is a little more cumbersome. I'm not saying it is impossible to fix, but the way I currently wrote it does not allow for a lot of optimization.&lt;/p&gt;

&lt;p&gt;The first major problem is that I am using a poor implementation of what a cache is supposed to be. Since we are making calls to a GitHub API service, there are costs when we call it; one of those is the rate limit. Since we do not want to hit the GitHub API every time &lt;strong&gt;we&lt;/strong&gt; receive a request, we need to cache these responses, serve them when another user accesses the same resource, and only ask the GitHub API service again when the cached entry has reached a specific lifetime.&lt;/p&gt;
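&lt;p&gt;The lifetime idea can be sketched in a few lines. This is a hypothetical in-memory version, not the actual service code; the &lt;code&gt;now&lt;/code&gt; parameter is only there to make the expiry easy to demonstrate:&lt;/p&gt;

```javascript
// Minimal sketch of a TTL cache (hypothetical): serve a cached response
// until it reaches a given lifetime, then evict it to force a fresh fetch.
function makeCache(ttlMs, now) {
  const entries = new Map();
  return {
    get: function (key) {
      const entry = entries.get(key);
      if (entry === undefined) return undefined;
      if (now() - entry.storedAt >= ttlMs) {
        entries.delete(key); // expired: the caller must ask GitHub again
        return undefined;
      }
      return entry.value;
    },
    set: function (key, value) {
      entries.set(key, { value: value, storedAt: now() });
    },
  };
}

// Fake clock so the expiry is visible without waiting
let t = 0;
const cache = makeCache(1000, function () { return t; });
cache.set('repo-issues', ['issue-1']);
console.log(cache.get('repo-issues')); // still fresh: ['issue-1']
t = 2000;
console.log(cache.get('repo-issues')); // expired: undefined
```

&lt;p&gt;A &lt;code&gt;redis&lt;/code&gt; key with an expiry would play the same role, with the added benefit of surviving a service restart.&lt;/p&gt;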

&lt;p&gt;Ideally, I should have used &lt;code&gt;redis&lt;/code&gt; in the first place (since we already have it). Why did I not use it? Well, the way I handle requests sparks some annoying edge cases due to the way &lt;code&gt;nodejs&lt;/code&gt; works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nodejs and the event loop
&lt;/h3&gt;

&lt;p&gt;Quick crash course on Nodejs: Nodejs runs JavaScript in a single thread. This is a major oversimplification, but it describes reality fairly closely.&lt;/p&gt;

&lt;p&gt;How can Nodejs be fast, then? Well, although it does things single-threaded, it allows for a lot of concurrency due to the asynchronous nature of certain tasks in JavaScript.&lt;/p&gt;

&lt;p&gt;Imagine you make a request to another service, which could be across a network, or even on your own filesystem. Since the request might take a long time to answer, you wouldn't want to waste CPU cycles just waiting for the result, so instead you put this task at the bottom of a TODO list and jump to the next task you need to do. So, if someone makes a request to your server, you can handle their request while your server waits for the other service's response. When you finish handling this request (or you put it on the TODO list because you need to ask another service), you go and check on the task you put on the list before. You notice that you received a response, so you continue processing the first request and hopefully bring it to completion. After you are done with that, you check on the second task you put on the list. You still haven't received a response for it, so you put it back at the end of the TODO list and handle the first item of the list instead.&lt;/p&gt;

&lt;p&gt;What I just described is how Nodejs handles asynchronous tasks, and it is akin to how an event loop works: the program receives an event to handle through a queue and handles it; during handling, it might receive or issue more events, which get appended to the event queue; when it finishes handling the event, it goes back to the queue for the next one.&lt;/p&gt;
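&lt;p&gt;This interleaving can be reproduced in a few lines of Nodejs. The following is a minimal sketch (the &lt;code&gt;fakeFetch&lt;/code&gt; helper is a made-up stand-in for any awaited call, such as a cache lookup or a network request):&lt;/p&gt;

```javascript
// Each `await` hands control back to the event loop, so two
// "requests" that start back to back interleave their steps.
const log = [];

// stand-in for any awaited I/O call (cache, network, filesystem)
function fakeFetch(label) {
  return new Promise((resolve) => setImmediate(() => resolve(label)));
}

async function handleRequest(name) {
  log.push(`${name} start`);
  await fakeFetch(name); // yields to the event loop here
  log.push(`${name} resume`);
}

// A arrives first, B right after
const done = Promise.all([handleRequest('A'), handleRequest('B')])
  .then(() => console.log(log.join(' -> ')));
// prints: A start -> B start -> A resume -> B resume
```

&lt;p&gt;Note how neither request blocks the other: each one runs until its next &lt;code&gt;await&lt;/code&gt; and then yields, exactly like the TODO list described above.&lt;/p&gt;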

&lt;p&gt;Of course, this is all managed by Nodejs, and the programmer has to write their programs in a way that fits this model. While some applications can be easily written like this, when you have something like a central resource that can be updated by everybody, it starts to become a little bit of a timing issue.&lt;/p&gt;

&lt;p&gt;Let me explain a little how the &lt;code&gt;dependency-discovery&lt;/code&gt; service handles a request to get GitHub issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The service receives the request, which contains the name of the project,&lt;/li&gt;
&lt;li&gt;the service finds the &lt;code&gt;npm&lt;/code&gt; package information matching that project name, and gets the GitHub repo link,&lt;/li&gt;
&lt;li&gt;using this information, it checks the cache for the issue data; if it already exists, it serves a copy as the response,&lt;/li&gt;
&lt;li&gt;if it doesn't exist, it requests the issues from the GitHub API,&lt;/li&gt;
&lt;li&gt;when it receives the issues, it writes them to the cache and then returns them as the response.&lt;/li&gt;
&lt;/ol&gt;
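&lt;p&gt;The five steps above can be sketched as a single handler. This is illustrative only: the helper names (&lt;code&gt;findNpmInfo&lt;/code&gt;, &lt;code&gt;fetchGitHubIssues&lt;/code&gt;) and the in-memory cache are made-up stand-ins, not the actual service code:&lt;/p&gt;

```javascript
const cache = new Map();
let githubCalls = 0;

// stand-in: would query the npm registry for the repo link
async function findNpmInfo(projectName) {
  return `https://github.com/example/${projectName}`;
}

// stand-in: would call the GitHub API (counting against the rate limit)
async function fetchGitHubIssues(repoUrl) {
  githubCalls += 1;
  return [{ title: `open issue in ${repoUrl}` }];
}

async function getIssues(projectName) {
  // steps 1-2: resolve the npm package name to its GitHub repo
  const repo = await findNpmInfo(projectName);

  // step 3: serve from the cache when possible
  if (cache.has(repo)) return cache.get(repo);

  // step 4: cache miss, so ask the GitHub API
  const issues = await fetchGitHubIssues(repo);

  // step 5: write to the cache, then respond
  cache.set(repo, issues);
  return issues;
}
```

&lt;p&gt;The &lt;code&gt;await&lt;/code&gt; between steps 3 and 4 is exactly the window where a second, concurrent request can slip through, which is the problem explored next.&lt;/p&gt;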

&lt;p&gt;Sounds straightforward, right? However, if you remember how Nodejs handles the scheduling, you might notice something, something really bad.&lt;/p&gt;

&lt;p&gt;Let's say two requests reach the server for the same resource (that is, they ask for the same project name). Since one of them has to have arrived first, Nodejs organizes them in the queue like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request A Start -&amp;gt; Request B Start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, we start with A first. We will go through steps 1 to 2, assuming that we do not need to &lt;code&gt;await&lt;/code&gt; anything. When we reach 3, we have to &lt;code&gt;await&lt;/code&gt; since we are accessing the cache (that being &lt;code&gt;redis&lt;/code&gt;), so now Nodejs makes the queue like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request B Start -&amp;gt; Request A step 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we assume our access to the database will take a while, it is better for throughput to leave it and jump to working on B.&lt;/p&gt;

&lt;p&gt;So, after working with B, we will reach step 3, which gets &lt;code&gt;await&lt;/code&gt;ed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request A step 3 -&amp;gt; Request B step 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We continue with request A and assume the cache answered. Since the cache is empty, we continue to step 4. Oops, step 4 calls an API over the network, and this definitely has to be &lt;code&gt;await&lt;/code&gt;ed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request B step 3 -&amp;gt; Request A step 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, time to handle Request B! Well, since we haven't written anything to the cache yet, we receive nothing, so we have to ask GitHub for the information... Oh no.&lt;/p&gt;

&lt;p&gt;Request A is already doing this, so if we request the same information from GitHub through B, we would be wasting a request, right? Indeed, we would. So, what can we do?&lt;/p&gt;

&lt;p&gt;Well, instead of every request making its own call to GitHub, the first request creates a Promise that every other request accessing the same resource can &lt;code&gt;await&lt;/code&gt;, including the very request that created the Promise.&lt;/p&gt;

&lt;p&gt;This is the way the service currently does it: it keeps a cache of these Promises, shared by every request that accesses the same resource.&lt;/p&gt;
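&lt;p&gt;A minimal sketch of that Promise-sharing pattern (illustrative names, not the actual Telescope code):&lt;/p&gt;

```javascript
const inFlight = new Map();
let apiCalls = 0;

// stand-in for the real GitHub API request
async function fetchIssuesFromGitHub(key) {
  apiCalls += 1;
  return `issues for ${key}`;
}

function getShared(key) {
  // no `await` happens between the check and the set, so under
  // Nodejs's single-threaded event loop this cannot race
  if (!inFlight.has(key)) {
    inFlight.set(key, fetchIssuesFromGitHub(key));
  }
  // everyone, including the first caller, awaits the same Promise
  return inFlight.get(key);
}
```

&lt;p&gt;Two concurrent calls to &lt;code&gt;getShared('repo')&lt;/code&gt; resolve to the same value while triggering a single API call.&lt;/p&gt;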

&lt;p&gt;Now, what is the biggest problem with this? Well, the cache is not optimized for this at all. Actually, calling it a "cache" is giving it too much credit: it is just a global JavaScript object holding onto those Promises.&lt;/p&gt;

&lt;p&gt;How can we fix this, then? We cannot simply drop in &lt;code&gt;redis&lt;/code&gt; and expect it to store Promises for us.&lt;/p&gt;

&lt;p&gt;One way is to &lt;em&gt;maybe&lt;/em&gt; use &lt;code&gt;redis&lt;/code&gt; to store the GitHub data after the Promise has resolved, which is &lt;em&gt;fine&lt;/em&gt;, but now you have two sources of information whose expiration times both have to be taken care of. It is also somewhat redundant, since future requests could simply refer to the already resolved Promise.&lt;/p&gt;

&lt;p&gt;Another way is to slightly change how we currently handle the requests. Since this is an implementation detail, it should be fine to change it as long as we keep the same behaviour. I have yet to think of a different way to handle the requests that avoids this problem.&lt;/p&gt;

&lt;p&gt;At the end of the day, while the &lt;code&gt;dependency-discovery&lt;/code&gt; implements the feature and &lt;em&gt;it works&lt;/em&gt;, there are a lot of wrinkles that we need to iron out.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>osd700</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Owning in Telescope</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sun, 27 Mar 2022 16:16:04 +0000</pubDate>
      <link>https://dev.to/jerryhue/owning-in-telescope-236d</link>
      <guid>https://dev.to/jerryhue/owning-in-telescope-236d</guid>
      <description>&lt;p&gt;I would like to write about my current experience in Telescope, and since we are nearing a 3.0 release, I would also like to talk about the area that I have, without a doubt, focused more on the entire term: the &lt;code&gt;dependency-discovery&lt;/code&gt; service.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;Dependency-discovery&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The area that I have worked on the most is the &lt;code&gt;dependency-discovery&lt;/code&gt; service, and thus, it is my duty to see it being used in the 3.0 release. Although I have worked on some of these issues during this week, I would like to write about them as if I hadn't (because this blog was supposed to be posted a week ago...).&lt;/p&gt;

&lt;p&gt;So, if this were the week of March 20th, what should I focus on to guarantee the shipping of the &lt;code&gt;dependency-discovery&lt;/code&gt; service?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finish the collection of issues from the GitHub repositories associated with the packages.&lt;/li&gt;
&lt;li&gt;Design the first front-end for the service.&lt;/li&gt;
&lt;li&gt;Add tests for the service.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I am oversimplifying the points above, as I don't want to delve deeply into &lt;em&gt;implementation&lt;/em&gt; details.&lt;/p&gt;

&lt;p&gt;An important point that I have to discuss is what risks I will be facing that could prevent me from shipping the &lt;code&gt;dependency-discovery&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A major risk I need to worry about is that nobody would be able to review and approve my code. Since my PRs tend to be small (less than 300 lines of changes), I think this is unlikely. Either way, I need to keep my PRs small so that anybody can review them (even somebody new to the service).&lt;/p&gt;

&lt;p&gt;Another risk is the front-end. We are nearing the 3.0 release, yet there are no signs of a front-end. I am not really sure how it is going to turn out, but I might want a really simple design that does what it needs to do, which is: facilitate finding issues of several repositories that are dependencies of Telescope.&lt;/p&gt;

&lt;p&gt;I will do my best to bring this issue to fruition and see other students using it.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>osd700</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Telescope 2.8</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Mon, 14 Mar 2022 23:07:26 +0000</pubDate>
      <link>https://dev.to/jerryhue/telescope-28-130p</link>
      <guid>https://dev.to/jerryhue/telescope-28-130p</guid>
      <description>&lt;p&gt;And so we end another chapter in the development of Telescope. Telescope version 2.8 arrived 4 days ago, and with it, a couple of interesting features arrived.&lt;/p&gt;

&lt;p&gt;Probably the most important one is the setup of Telescope's private Docker registry to manage Telescope's Docker images. I am still not sure of all the consequences of this, beyond being able to upload and version our Docker images in the "cloud".&lt;/p&gt;

&lt;p&gt;As for my end, there were 4 specific PRs that I worked on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Redis corruption
&lt;/h2&gt;

&lt;p&gt;One of the PRs that I worked on was related to fixing a &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/1365"&gt;Redis file corruption issue during a power outage&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3122"&gt;first stab&lt;/a&gt; at the issue was running a command that would repair the corrupted file before starting up redis. Of course, this approach met with varying degrees of success.&lt;/p&gt;

&lt;p&gt;So, what exact file was being corrupted? You see, redis, although it is an in-memory database, offers a few persistence mechanisms to prevent loss of data during a power outage. One of these mechanisms is writing the stored data to an AOF (append-only file) on disk.&lt;/p&gt;

&lt;p&gt;The main problem was properly replicating this issue. When a power outage occurs, the AOF file ends abruptly. Because of this malformed file, if you were to start up redis again, redis would report the file as corrupted and terminate immediately. One way to roughly replicate the issue is to truncate the file to a certain length.&lt;/p&gt;

&lt;p&gt;The problem with my first approach is that, depending on how much you truncate, the program that fixes the AOF (&lt;code&gt;redis-check-aof&lt;/code&gt;) fails in certain situations. For example, if you removed the last couple of kilobytes, &lt;code&gt;redis-check-aof&lt;/code&gt; would fix the file with no issue. However, remove over 8 megabytes, and now &lt;code&gt;redis-check-aof&lt;/code&gt; cannot fix anything. This may be due to the format of the AOF and certain limitations of &lt;code&gt;redis-check-aof&lt;/code&gt;: if the program finds that the file was truncated at an offset that it does not know how to resolve, it simply fails to fix it.&lt;/p&gt;

&lt;p&gt;So, after thinking about it for a while and considering other factors, I went with &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3129"&gt;another approach&lt;/a&gt;: use the &lt;code&gt;rdb&lt;/code&gt; file that redis generates as a backup. The reason for the AOF corruption was the constant writing to the file. The operating system can buffer all of these writes so it can efficiently write everything at once (instead of hitting the disk for every small string). However, if the OS hasn't flushed the buffer when the power is suddenly cut, even the OS cannot do much.&lt;/p&gt;

&lt;p&gt;This &lt;code&gt;rdb&lt;/code&gt; file plays a similar role to the AOF, but it is written only after a configured interval of time has passed (for example, every 30 minutes). There is less chance of corruption, for the simple fact that the file is written less often. Of course, this means that the redis database can lose the data written in the last 30 minutes, which is not a big deal for us: our main persistent store is PostgreSQL, and redis is mostly used for caching common responses.&lt;/p&gt;
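&lt;p&gt;For illustration, snapshotting is controlled by &lt;code&gt;save&lt;/code&gt; rules in &lt;code&gt;redis.conf&lt;/code&gt;; the intervals below are examples, not Telescope's actual configuration:&lt;/p&gt;

```
# write an RDB snapshot if at least 1 key changed in the last 1800 seconds
save 1800 1
# snapshot sooner when there is more write activity
save 300 100
```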

&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;The next &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3146"&gt;PR&lt;/a&gt; is related to documentation. It is a very small PR, where I document the API of the &lt;code&gt;feed-discovery&lt;/code&gt; service. Not much to point out, except that the style I chose for writing the documentation is very similar to the GitHub API.&lt;/p&gt;

&lt;h2&gt;
  
  
  ESLint configuration
&lt;/h2&gt;

&lt;p&gt;Another small &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3168"&gt;PR&lt;/a&gt; was configuring the &lt;code&gt;dependency-discovery&lt;/code&gt; to prepare the linting process for the service. There's nothing much to describe here.&lt;/p&gt;

&lt;h2&gt;
  
  
  NPM package information in &lt;code&gt;dependency-discovery&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Finally, we have the &lt;code&gt;dependency-discovery&lt;/code&gt;-related &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/3126"&gt;PR&lt;/a&gt;. In it, we added a way to get the NPM package information by providing the package name. A small highlight is that the service caches everything, since the NPM information does not change that often. So, the next time you request the same project, the response should be quite fast! This is the first step toward actually providing the GitHub issue information, which hopefully will see the light in release 2.9.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>opensource</category>
      <category>osd700</category>
    </item>
    <item>
      <title>A little bit of an update...</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Mon, 14 Mar 2022 12:08:48 +0000</pubDate>
      <link>https://dev.to/jerryhue/a-little-bit-of-an-update-41hf</link>
      <guid>https://dev.to/jerryhue/a-little-bit-of-an-update-41hf</guid>
      <description>&lt;p&gt;Welcome back to my blog. This post was supposed to be done two weeks ago, yet I have to admit that I postponed it because I didn't have much idea on what to write.&lt;/p&gt;

&lt;p&gt;However, now I have a certain topic I would like to talk about.&lt;/p&gt;

&lt;p&gt;The topic is related to Telescope, the new &lt;code&gt;dependency-discovery&lt;/code&gt; service, and the &lt;code&gt;search&lt;/code&gt; service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making a well-integrated feature
&lt;/h2&gt;

&lt;p&gt;Back then, Telescope's primary reason for existing was to replace &lt;a href="https://en.wikipedia.org/wiki/Planet_(software)"&gt;&lt;code&gt;planet&lt;/code&gt;&lt;/a&gt;. It has slowly grown into a more full-featured feed aggregator, including videos, posts, text search, and other features.&lt;/p&gt;

&lt;p&gt;An interesting feature idea that has arisen is searching for posts by GitHub-related information. For example, to search all posts that mention the &lt;a href="https://github.com/microsoft/"&gt;Microsoft organization&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At the same time, the development of the &lt;code&gt;dependency-discovery&lt;/code&gt; service was taking place. After implementing a really basic feature, it was time to expand it to the original idea, which was linking GitHub repositories to the dependencies that Telescope uses, for the purpose of listing open GitHub issues.&lt;/p&gt;

&lt;p&gt;If you can see where this is going, my current plan is to make those two features blend together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why not separate?
&lt;/h3&gt;

&lt;p&gt;This is a valid question. Why not build the features in isolation? After all, you have to coordinate with another service to make this possible, which will make development much slower.&lt;/p&gt;

&lt;p&gt;The reason I want to integrate them in the first place is that making big changes to an existing codebase is an excruciatingly long and painful task, riddled with possible bugs and mistakes.&lt;/p&gt;

&lt;p&gt;Imagine that we develop the features separately. This means that I would implement the GitHub issue search for the &lt;code&gt;dependency-discovery&lt;/code&gt; service in a different way so that it covers my use case only, while the &lt;code&gt;search&lt;/code&gt; service would implement the GitHub data indexing in another way for the sake of implementing their features for their specific use cases. You can start to see the trend: we develop software so that it covers our immediate needs and use cases, and planning ahead is less common than you'd think.&lt;/p&gt;

&lt;p&gt;Any good abstraction that you might have used in your work as a developer is an abstraction that has been constantly challenged by the minds of several other developers, and that has been redesigned to allow extensibility for current and future use cases.&lt;/p&gt;

&lt;p&gt;In this case, then, we implement the GitHub-related features with our different approaches. Now, a few months later, the future maintainers would want to integrate these two features, with the possibility of adding a third one into the mix. How would they know how they will integrate it? That's the thing, they won't! The future maintainers would need to figure out how the old code works to figure out a way to integrate the features, because "rewriting it" would be discouraged (although it might be a sensible choice with the proper planning).&lt;/p&gt;

&lt;p&gt;So, instead of putting this burden on the future generation, I would prefer to do the "hard" work now, and let them improve upon this system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Figuring out a common set of requirements
&lt;/h2&gt;

&lt;p&gt;When you start to think about these GitHub-related features, there is a common denominator: the GitHub information itself.&lt;/p&gt;

&lt;p&gt;The main plan we have is to cache/store some GitHub data on our end, so that we don't have to depend on the GitHub API to get the information we care about. After all, the GitHub API is rate-limited, and since Telescope is used by several students at a time, we can reach this limit in no time.&lt;/p&gt;

&lt;p&gt;The next question is: where do we store it? Telescope currently uses three technologies that could serve as storage: redis, PostgreSQL (through Supabase), and Elasticsearch.&lt;/p&gt;

&lt;p&gt;Out of the three, redis is the worst option, as it is not meant for permanent storage. The idea is to store the data from GitHub permanently on our side. If we save it in redis, it will eventually be lost and have to be restored by asking the GitHub API again, defeating our main objective of staying under the rate limit. Also, redis is an in-memory database, so we cannot store that much data and expect not to outgrow the available memory and eventually crash.&lt;/p&gt;

&lt;p&gt;Elasticsearch is in a weird in-between. It is not a bad option, but it is not a good one either. Elasticsearch does store data permanently, so if Elasticsearch were to restart, all the data would still be there. The main problem, however, is the way Elasticsearch stores the data. You see, Elasticsearch is specialized in indexing documents to power search engines. This means that the main data structures Elasticsearch uses are catered to that specific use case, and so are less flexible for other use cases.&lt;/p&gt;

&lt;p&gt;So, bring forward PostgreSQL! This might be the best option of the three, as it ticks all the boxes that we care about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;designed for permanent storage in mind,&lt;/li&gt;
&lt;li&gt;general relational model that allows for flexibility when interpreting and analyzing the data,&lt;/li&gt;
&lt;li&gt;well-known, well-documented, and well-supported technology.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Great, we have a place to store the data, so how should we structure the common set of information that our features share? Well, since we are using a relational database, we have to think of the entities we are going to store and the relationships between those entities.&lt;/p&gt;

&lt;p&gt;For starters, we already have several entities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Users (these include organizations, too!),&lt;/li&gt;
&lt;li&gt;GitHub Repositories,&lt;/li&gt;
&lt;li&gt;GitHub Pull Requests &amp;amp; Issues (GitHub treats these the same way with some minor differences),&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the relationships between the entities are somewhat like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A user can have zero or more repositories.&lt;/li&gt;
&lt;li&gt;A repository can have zero or more issues (which include Pull Requests).&lt;/li&gt;
&lt;/ul&gt;
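&lt;p&gt;In SQL, those entities and relationships could be sketched roughly like this (table and column names are illustrative, not an actual schema):&lt;/p&gt;

```sql
-- GitHub users, including organizations
CREATE TABLE github_user (
  id    BIGINT PRIMARY KEY,  -- GitHub's own user id
  login TEXT NOT NULL
);

-- a user has zero or more repositories
CREATE TABLE github_repository (
  id       BIGINT PRIMARY KEY,
  owner_id BIGINT NOT NULL REFERENCES github_user (id),
  name     TEXT NOT NULL
);

-- a repository has zero or more issues; pull requests are flagged
-- rather than given their own table, mirroring how GitHub treats
-- them as issues with minor differences
CREATE TABLE github_issue (
  id              BIGINT PRIMARY KEY,
  repository_id   BIGINT NOT NULL REFERENCES github_repository (id),
  number          INTEGER NOT NULL,
  title           TEXT NOT NULL,
  is_pull_request BOOLEAN NOT NULL DEFAULT FALSE
);
```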

&lt;p&gt;As you can see, the database design is not overly complicated, which is the main idea. We don't want an overly complicated design, since we are actually going to use it later!&lt;/p&gt;

&lt;h2&gt;
  
  
  How to obtain the GitHub data
&lt;/h2&gt;

&lt;p&gt;This is an interesting question. The main way to collect all of this GitHub data is to analyse the posts that Telescope has aggregated, looking for links to GitHub users, pull requests, repositories, and issues.&lt;/p&gt;

&lt;p&gt;Another part that would have to collect GitHub data, although on a smaller and more focused scale, is the &lt;code&gt;dependency-discovery&lt;/code&gt; service. The idea is that the &lt;code&gt;dependency-discovery&lt;/code&gt; service should provide the open issues belonging to the repositories of the dependencies that Telescope has registered.&lt;/p&gt;

&lt;p&gt;Although this part has to be developed further, I think there is a nice starting point with all of this.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>opensource</category>
      <category>osd700</category>
    </item>
    <item>
      <title>Biggest release on Telescope yet</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sun, 20 Feb 2022 06:06:28 +0000</pubDate>
      <link>https://dev.to/jerryhue/biggest-release-on-telescope-yet-2he</link>
      <guid>https://dev.to/jerryhue/biggest-release-on-telescope-yet-2he</guid>
      <description>&lt;p&gt;Last Thursday, Telescope 2.7 landed, and I think it is one of the biggest releases yet on Telescope, with over 83 addressed items! This includes issues and PRs in a single category.&lt;/p&gt;

&lt;h2&gt;
  
  
  What was included?
&lt;/h2&gt;

&lt;p&gt;This release included a big step forward in two areas: documentation and Supabase.&lt;/p&gt;

&lt;p&gt;Recently, the Telescope team was planning on moving most of the architecture to use Supabase. If you don't know what Supabase is, it is very similar to what Firebase does but with a more open-source approach.&lt;/p&gt;

&lt;p&gt;Really hard work has been done by &lt;a class="mentioned-user" href="https://dev.to/dukemanh"&gt;@dukemanh&lt;/a&gt; to ensure that we can start using the Supabase infrastructure in the &lt;code&gt;sso&lt;/code&gt; (single-sign-on) service of Telescope, and while there are still some wrinkles to iron out, we are almost there!&lt;/p&gt;

&lt;p&gt;Regarding documentation, there has been a lot of work from several people in Telescope, as well as from new contributors. We chose Docusaurus as our main documentation generator, mainly for the experience that several people in the team have with the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  What did you do for the release?
&lt;/h2&gt;

&lt;p&gt;Most of the contributions that I landed were small contributions that fixed a few issues, nothing related to a big project.&lt;/p&gt;

&lt;p&gt;Well, there were a couple of PRs related to the &lt;code&gt;dependency-discovery&lt;/code&gt;, focused on converting it into a &lt;code&gt;docker&lt;/code&gt; container. The &lt;code&gt;dependency-discovery&lt;/code&gt; service is now &lt;a href="https://api.telescope.cdot.systems/v1/dependency-discovery/projects"&gt;live&lt;/a&gt;! It is very small, I know, but we all have to start somewhere!&lt;/p&gt;

&lt;h2&gt;
  
  
  Any other interesting things going with Telescope?
&lt;/h2&gt;

&lt;p&gt;So, lately, I've been trying to slowly migrate the development environment I have for Telescope from my laptop to my main desktop computer. I preferred my desktop computer for developing, as the screen is much bigger and I can fit several things at once.&lt;/p&gt;

&lt;p&gt;The thing is, it is somewhat of a challenge, since my laptop is a dual-boot machine (Linux and Windows), while my desktop computer is a single-boot machine (Windows only), and most of my development environment is in the Linux part. Of course, I installed WSL and stuff, but I am planning for something a little bit more.&lt;/p&gt;

&lt;p&gt;You see, when I have a clean environment, I would like to keep it organized and not pollute the global environment when installing things. This is somewhat difficult, since the regular way of installing software in Linux is through a package manager that handles the dependencies for you. This is useful and all, but most package managers do not have a concept of keeping environments isolated from each other.&lt;/p&gt;

&lt;p&gt;Why is this a problem? Let's say that you depend on a specific program, &lt;code&gt;docker&lt;/code&gt;, and you have a specific version that you know works. Then, you start to work on a new project, and you would like to install a new version of &lt;code&gt;docker&lt;/code&gt; that has certain new features. The moment you install the new version of &lt;code&gt;docker&lt;/code&gt;, the old version gets overridden, so now you have to keep track of all the versions of &lt;code&gt;docker&lt;/code&gt; that your programs may depend on.&lt;/p&gt;

&lt;p&gt;Well, instead of keeping track of things manually when we already have computers, which automate so much for us, there should be a way to automate this, right? Well, there is! Enter &lt;a href="https://nixos.org/"&gt;&lt;code&gt;nix&lt;/code&gt;&lt;/a&gt;, a package manager whose main purpose is to keep your builds reproducible by isolating the environments themselves.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait, what? Reproducible builds by isolating environments?
&lt;/h3&gt;

&lt;p&gt;If you are not really sure what I mean by a reproducible build, it means that if I run a process to build a program, I should be able to take the exact same codebase, run the exact same process on another computer, and get the exact same build as a result. This can be difficult to get right, especially when there is global state that could influence the build process.&lt;/p&gt;

&lt;p&gt;This is where the second word comes in, 'isolating'. Instead of trying to come up with a complicated scheme to separate the environments, &lt;code&gt;nix&lt;/code&gt; manages the global environment so that it fits the environment necessary for the build process (by global environment, I mean things like the environment variables of the shell).&lt;/p&gt;

&lt;p&gt;I won't explain deeply how &lt;code&gt;nix&lt;/code&gt; accomplishes this, since that would be the topic of another blog post. However, I would like to mention an interesting application of &lt;code&gt;nix&lt;/code&gt;: since we can isolate a build environment, we can also isolate the development environment of a project.&lt;/p&gt;

&lt;p&gt;Indeed, one of my main objectives in migrating the development environment from my laptop to my desktop is building a &lt;code&gt;nix&lt;/code&gt; template that describes the dependencies necessary for an isolated development environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sounds cool, but how are you going to do it?
&lt;/h3&gt;

&lt;p&gt;Well, the main challenge is to discover exactly all the dependencies that Telescope uses. With dependencies, I include not only the JavaScript dependencies used by the project, but also tools that we use, like &lt;code&gt;docker&lt;/code&gt;, &lt;code&gt;node&lt;/code&gt;, &lt;code&gt;npm&lt;/code&gt;, &lt;code&gt;pnpm&lt;/code&gt;, etc.&lt;/p&gt;

&lt;p&gt;I also need to figure out how &lt;code&gt;nix&lt;/code&gt; would allow something like this, since the main purpose of &lt;code&gt;nix&lt;/code&gt; is creating a build environment that can be isolated, and Telescope does not get built in the traditional sense of the word (that being something like compiling).&lt;/p&gt;

&lt;p&gt;It is going to be an interesting challenge. I will try to document the steps I take.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>osd700</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Some bug fixes along the way...</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sun, 13 Feb 2022 06:16:04 +0000</pubDate>
      <link>https://dev.to/jerryhue/some-bug-fixes-along-the-way-4gce</link>
      <guid>https://dev.to/jerryhue/some-bug-fixes-along-the-way-4gce</guid>
      <description>&lt;p&gt;Here we go, after a week from &lt;a href="https://dev.to/jerryhue/reflections-on-thinking-an6"&gt;this post&lt;/a&gt;, I got a nice report to give.&lt;/p&gt;

&lt;p&gt;After wrapping up with the &lt;code&gt;dependency-discovery&lt;/code&gt; service, I started to work on the follow-up issues for it. One of them was related to &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2830"&gt;creating a Dockerfile&lt;/a&gt; so that we can spawn a container from it and put the service live. The precursor to this was rewriting the script that generated the list of dependencies that the service reads. This script updates the dependency list on every version bump so it never goes stale. The PR that addresses this is &lt;a href="https://github.com/Seneca-CDOT/telescope/pull/2838"&gt;#2838&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another issue I worked on was related to a &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2605"&gt;CSP violation on the status dashboard&lt;/a&gt;. There were actually two CSP problems that were separate from each other. Figuring this out was somewhat challenging, as I did not know about CSP before.&lt;/p&gt;

&lt;p&gt;Finally, another status-related issue was that the terminal showing the build log was not resizing to cover the whole vertical space available. While I solved it with a more JS-centric approach, &lt;a class="mentioned-user" href="https://dev.to/dukemanh"&gt;@dukemanh&lt;/a&gt; guided me toward a more declarative CSS approach throughout the review.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Birth of the &lt;code&gt;dependency-discovery&lt;/code&gt; service
&lt;/h2&gt;

&lt;p&gt;Right after the release of Telescope 2.6, I was somewhat frustrated with my progress. I managed to finish the starting point of the &lt;code&gt;dependency-discovery&lt;/code&gt;, although it did not offer much. Some issues were created as a follow-up.&lt;/p&gt;

&lt;p&gt;One of these logical steps was dockerizing the service. A couple of points were brought out, and a conclusion was made: we would need to rewrite the script that generates the dependency list into a more portable version. This meant rewriting the bash script into JavaScript.&lt;/p&gt;

&lt;p&gt;It was not difficult, although the JavaScript version turned out longer due to the lack of tools like &lt;code&gt;sed&lt;/code&gt;, &lt;code&gt;sort&lt;/code&gt;, and other Unix utilities. Either way, the script had the same performance as the bash script, so we did not lose much and gained portability.&lt;/p&gt;

&lt;p&gt;The next step, which I am going to work on either Sunday or Monday, is the &lt;code&gt;Dockerfile&lt;/code&gt; for this service. It shouldn't be hard, as I have used &lt;code&gt;docker&lt;/code&gt; before, but I have never written a &lt;code&gt;Dockerfile&lt;/code&gt; on my own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Content Security Policy is a pain and a blessing
&lt;/h2&gt;

&lt;p&gt;For &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2605"&gt;#2605&lt;/a&gt; and &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2511"&gt;#2511&lt;/a&gt;, I initially thought these issues were related, but once I started to understand their nuances, I realized they were two separate problems that were only thematically related (both involved CSP).&lt;/p&gt;

&lt;p&gt;Most of my explanation is found in the issues, so I won't repeat myself here, but I would like to say that CSP is both a pain and a blessing. The concept itself is very simple to understand, but some of the error messages don't provide much context, so I had to form a hypothesis and prove that it was the case. I ended up being right, thankfully. This way of debugging really reminds me of programming in C (when there is a segmentation fault...).&lt;/p&gt;
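&lt;p&gt;To give a sense of why the concept is simple: a CSP header is just a semicolon-separated list of directives, each naming the sources the browser may load from. A few lines of JavaScript can take one apart (a sketch of my own for illustration, not Telescope code):&lt;/p&gt;

```javascript
// Hypothetical sketch: parse a Content-Security-Policy header string
// into a map from directive name to its list of allowed sources.
function parseCsp(header) {
  const directives = {};
  for (const part of header.split(';')) {
    const tokens = part.trim().split(/\s+/).filter(Boolean);
    if (tokens.length === 0) continue;
    const [name, ...sources] = tokens;
    directives[name] = sources;
  }
  return directives;
}

const policy = parseCsp("default-src 'self'; img-src 'self' https://example.com");
// policy['img-src'] is [ "'self'", 'https://example.com' ]
```

&lt;p&gt;The pain is that a violation report only tells you which directive blocked a load, not which part of your app triggered it, hence the hypothesis-driven debugging.&lt;/p&gt;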

&lt;h2&gt;
  
  
  Terminal Resizing: Two Approaches
&lt;/h2&gt;

&lt;p&gt;For the last issue, I would like to emphasize two approaches to a problem.&lt;/p&gt;

&lt;p&gt;Imagine you have a special UI element that is not resized by normal means. That is, the UI element cannot be resized with a simple CSS declaration. How would you approach this?&lt;/p&gt;

&lt;p&gt;For a more concrete example, this special UI element is a terminal-like canvas that shows text in a monospaced font.&lt;/p&gt;

&lt;p&gt;When I faced this problem, I started to research whether the terminal offered an API for something like this. Indeed, since the element came from the &lt;a href="https://github.com/xtermjs/xterm.js"&gt;xterm.js&lt;/a&gt; library, it did offer something for my use case. Now that I knew it existed, I thought of a way to approach it, which I elaborated further in the thread of the &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2842"&gt;issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, another approach surfaced during the review of the PR. &lt;a class="mentioned-user" href="https://dev.to/dukemanh"&gt;@dukemanh&lt;/a&gt; mentioned that if we could resize the container of the terminal with CSS, the &lt;code&gt;terminal-fit-addon&lt;/code&gt; would figure out the size of the terminal on its own.&lt;/p&gt;
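&lt;p&gt;The declarative idea can be sketched in a few lines of CSS (a simplified version of my own, with made-up class names, not the actual Telescope stylesheet): make the page a flex column and let the terminal's container grow to fill the leftover vertical space, which the fit addon can then measure.&lt;/p&gt;

```css
/* Hypothetical sketch: let the terminal's container fill the
   remaining vertical space, so the fit addon can size the terminal. */
.dashboard {
  display: flex;
  flex-direction: column;
  height: 100vh;
}

.terminal-container {
  flex: 1;       /* grow to take all leftover vertical space */
  min-height: 0; /* allow the flex item to shrink below its content size */
}
```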

&lt;p&gt;He was right, and I agreed with him. The only problem is that I do not enjoy writing CSS I am not familiar with, which is most of it! Either way, I decided to embark on this, mostly with Duc's help. In the end, I was surprised that Duc could solve the issue, to the point that I asked him how he came up with the solution. His response was, "I didn't, I learned a lot of CSS while struggling to work on the Telescope frontend and building the layout of one of my projects." This left me thinking about the two approaches shown.&lt;/p&gt;

&lt;p&gt;The two approaches show two styles of programming, one of which some people might prefer over the other: an imperative solution and a declarative solution. I went with the imperative solution because it was easier for me not only to figure out but also to write. However, the declarative solution is more natural for a person unfamiliar with the code to read and understand. Because of Telescope's inherent open-source objective, the declarative approach was preferred, and I am glad it was.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>osd700</category>
    </item>
    <item>
      <title>Reflections on Thinking</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sat, 05 Feb 2022 06:22:10 +0000</pubDate>
      <link>https://dev.to/jerryhue/reflections-on-thinking-an6</link>
      <guid>https://dev.to/jerryhue/reflections-on-thinking-an6</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/Seneca-CDOT/telescope"&gt;Telescope&lt;/a&gt; version 2.6 just released!&lt;/p&gt;

&lt;p&gt;Unfortunately, I couldn't manage to land a single PR.&lt;/p&gt;

&lt;p&gt;This week was rough. Last week, I wrote about the dependency graph program I was thinking about for Telescope. The key word here is &lt;em&gt;thinking&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overthinking is Terrible
&lt;/h2&gt;

&lt;p&gt;Some people say that thinking before you speak is something you should do in your day-to-day life. Well, what happens if you take that to the extreme? Let's say you end up in an endless spiral of thought, to the detriment of your productivity.&lt;/p&gt;

&lt;p&gt;I was preparing the dependency graph service for Telescope during the week, trying my best to land it for release 2.6, but I noticed it was already too late. While in the Triage meeting on Thursday, Dave mentioned something along the lines of "drive and communication is key on an open-source project." These words stuck with me during the day.&lt;/p&gt;

&lt;p&gt;I thought about them for a while. I felt disappointed that I couldn't land any code for the release, despite everybody else landing a ton of bug fixes, features, documentation, etc. I noticed everybody moving forward and I just stood there, thinking. I was falling behind.&lt;/p&gt;

&lt;p&gt;I was feeling miserable. It was a feeling worse than impostor syndrome. I didn't know what to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Facing the fear
&lt;/h2&gt;

&lt;p&gt;While I told myself that I didn't know what to do, it was really more of an "I don't want to face the truth."&lt;/p&gt;

&lt;p&gt;The question I was trying to answer was, "why did it take me so long to write a single PR that I couldn't even submit?" It wasn't because I couldn't program it, and it wasn't because of the review process. It wasn't because the requirements were unclear, nothing of that sort. The &lt;strong&gt;real&lt;/strong&gt; reason was right in front of me, but I didn't want to see it.&lt;/p&gt;

&lt;p&gt;I lacked communication.&lt;/p&gt;

&lt;p&gt;Ever since I can remember, I have struggled to share my thoughts. I don't want to seem annoying or dumb or ignorant or anything of that sort. I want to provide something useful to the conversation, so I &lt;em&gt;think&lt;/em&gt; about my words, to the detriment of communicating with others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication is key
&lt;/h2&gt;

&lt;p&gt;I realised the meaning of Dave's words: "drive and communication is key on an open-source project." In reality, I feel that communication is key to a lot more things in life, but to keep this on topic, I will leave it at that.&lt;/p&gt;

&lt;p&gt;If I had communicated more, I would have finished the PR much sooner. I would have received review feedback earlier, and thus could have improved it earlier, but I didn't. It's not that I wanted to impress everyone; it was more out of fear of not being able to show something of worth.&lt;/p&gt;

&lt;p&gt;However, after that terrible experience, I did learn something. While it was not necessarily related to programming skills or anything of the sort, I consider this lesson more valuable than learning another programming language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;While the starting point of the dependency graph, or as I should call it, the &lt;code&gt;dependency-discovery&lt;/code&gt; service, is here, there is a &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2827"&gt;lot more to go&lt;/a&gt;. &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2828"&gt;A lot&lt;/a&gt;. I am not kidding, &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2830"&gt;A LOT&lt;/a&gt;. Also, there are other &lt;a href="https://github.com/Seneca-CDOT/telescope/issues/2837"&gt;areas sprouting&lt;/a&gt; as well that I got interested in.&lt;/p&gt;

&lt;p&gt;I am really excited for this. However, this kind of excitement is somewhat different from the one I had last week. I am not really sure what it is exactly, but thanks to the lessons learned I have gained a whole lot of motivation to make release 2.7 even more amazing!&lt;/p&gt;

&lt;p&gt;I don't want to say exactly what I am going to do, as I want my actions to speak for themselves. I will keep you updated next week. Thank you.&lt;/p&gt;

</description>
      <category>telescope</category>
      <category>opensource</category>
      <category>osd700</category>
    </item>
    <item>
      <title>Collecting dependency information for Telescope 2.6</title>
      <dc:creator>Gerardo Enrique Arriaga Rendon</dc:creator>
      <pubDate>Sat, 29 Jan 2022 17:28:56 +0000</pubDate>
      <link>https://dev.to/jerryhue/collecting-dependency-information-for-telescope-26-1nd</link>
      <guid>https://dev.to/jerryhue/collecting-dependency-information-for-telescope-26-1nd</guid>
      <description>&lt;p&gt;A few days ago, I started working on a new feature for &lt;a href="https://github.com/Seneca-CDOT/telescope"&gt;Telescope&lt;/a&gt; developers: a dependency graph database for Telescope.&lt;/p&gt;

&lt;p&gt;The main question is: why? Why collect dependency information of Telescope? There are three related reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To visualize the dependencies that Telescope uses, as well as provide other information such as GitHub repositories.&lt;/li&gt;
&lt;li&gt;To aggregate GitHub repo issues, so that Telescope maintainers can help other projects' communities by contributing, and thus help Telescope in the long term.&lt;/li&gt;
&lt;li&gt;To promote a healthy open-source community, where we not only use other people's projects but also give back by contributing to them. Only that way can we actually foster a healthier open-source community.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Current Progress
&lt;/h2&gt;

&lt;p&gt;While I have started working on it, I haven't finished writing the MVP.&lt;/p&gt;

&lt;p&gt;My plan is to write a microservice that provides the dependency information to any client. An interesting property of this project is that most of the data the microservice will serve is static. When the microservice starts, it collects the dependency information from a file generated by &lt;code&gt;pnpm&lt;/code&gt;: &lt;code&gt;pnpm-lock.yaml&lt;/code&gt;. The &lt;code&gt;pnpm-lock&lt;/code&gt; file contains all of the dependencies that &lt;code&gt;pnpm&lt;/code&gt; managed to find across all &lt;code&gt;package.json&lt;/code&gt; files in the workspaces.&lt;/p&gt;

&lt;p&gt;While the &lt;code&gt;pnpm-lock&lt;/code&gt; file records which dependencies are being used, it does not include any metadata beyond the version. So, for example, the GitHub URL has to be extracted from somewhere else.&lt;/p&gt;

&lt;p&gt;Another caveat is that this method only includes &lt;code&gt;npm&lt;/code&gt; packages listed in the local &lt;code&gt;package.json&lt;/code&gt; files in Telescope. That covers most of Telescope's dependencies, since Telescope is mostly a JavaScript project, but it leaves out other dependencies like &lt;code&gt;pnpm&lt;/code&gt; itself, &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;docker&lt;/code&gt; images, &lt;code&gt;nodejs&lt;/code&gt;, &lt;code&gt;git&lt;/code&gt;, and others. Scraping this information automatically may be a more difficult task, so we would have to support manually written files that provide it.&lt;/p&gt;

&lt;p&gt;After collecting the dependencies from the &lt;code&gt;pnpm-lock&lt;/code&gt; file, I would extract more information by querying the &lt;code&gt;npm&lt;/code&gt; registry. Some information I am interested in collecting is the GitHub repository link, as well as the description of each package.&lt;/p&gt;
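&lt;p&gt;A rough sketch of what that extraction might look like (the function name and the lock-file key shape are my assumptions for illustration, not the actual service code): pull name/version pairs out of a parsed lock file, then build the &lt;code&gt;npm&lt;/code&gt; registry URL each package's metadata would be fetched from.&lt;/p&gt;

```javascript
// Hypothetical sketch: extract dependencies from a parsed pnpm-lock file,
// assuming lockfile package keys shaped like '/name/1.2.3', and build the
// npm registry URL where each package's metadata (description, repository
// link, etc.) could later be fetched.
function extractDependencies(lock) {
  return Object.keys(lock.packages).map((key) => {
    const at = key.lastIndexOf('/');        // version starts after last '/'
    const name = key.slice(1, at);          // drop the leading '/'
    const version = key.slice(at + 1);
    return {
      name,
      version,
      registryUrl: `https://registry.npmjs.org/${name}`,
    };
  });
}

const deps = extractDependencies({
  packages: { '/node-fetch/2.6.7': {}, '/@babel/core/7.17.0': {} },
});
// deps[1].name is '@babel/core', deps[1].version is '7.17.0'
```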

&lt;p&gt;When all of this data collection is done, it is time to transform it into an object and store it in an in-memory database, which is then used to serve responses. The reason I don't use a persistent data store is that I want this information to be regenerated whenever the service initializes. The idea is that we only want to show this dependency graph when Telescope is released, and on every release some dependency might change, making a persistent store somewhat useless. If the microservice shuts down due to an error, it can easily collect the information again.&lt;/p&gt;
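&lt;p&gt;The regenerate-on-startup idea can be sketched like this (the &lt;code&gt;Map&lt;/code&gt;-based store and the function names are my illustration, not the service's actual code):&lt;/p&gt;

```javascript
// Hypothetical sketch: an in-memory store rebuilt on every startup.
// There is no persistence on purpose: if the service restarts,
// init() simply runs again with freshly collected data.
const store = new Map();

function init(dependencies) {
  store.clear();
  for (const dep of dependencies) {
    store.set(dep.name, dep);
  }
}

function getDependency(name) {
  return store.get(name); // undefined if the package is unknown
}

init([{ name: 'node-fetch', version: '2.6.7' }]);
// getDependency('node-fetch').version is '2.6.7'
```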

&lt;p&gt;Why not collect the information and cache it? Since this is an MVP, I am not interested in making it extremely efficient. When we discuss the API of the microservice, for example, we will want to think about improving the data collection.&lt;/p&gt;

&lt;p&gt;However, that's an issue for the future! For now, we want to focus on the feature itself and leave the nice-to-haves for later.&lt;/p&gt;

</description>
      <category>osd700</category>
      <category>telescope</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
