<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dharam Ghevariya</title>
    <description>The latest articles on DEV Community by Dharam Ghevariya (@dharam_ghevariya_0d946c37).</description>
    <link>https://dev.to/dharam_ghevariya_0d946c37</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3479993%2Fefc96b80-a2dd-44e0-ac42-71599f427fb3.jpg</url>
      <title>DEV Community: Dharam Ghevariya</title>
      <link>https://dev.to/dharam_ghevariya_0d946c37</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dharam_ghevariya_0d946c37"/>
    <language>en</language>
    <item>
      <title>My Experience Creating CI Workflows and Contributing Tests to an Existing Project</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 14 Nov 2025 18:41:57 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/my-experience-creating-ci-workflows-and-contributing-tests-to-an-existing-project-3hpc</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/my-experience-creating-ci-workflows-and-contributing-tests-to-an-existing-project-3hpc</guid>
      <description>&lt;p&gt;This week, I focused on setting up a complete GitHub Actions CI workflow and writing tests for a codebase I did not originally create. Both tasks helped me understand code quality, consistency, and the importance of automated testing in a project.&lt;/p&gt;

&lt;p&gt;To begin with, I created a &lt;a href="https://github.com/dharamghevariya/repo-contextr/blob/main/.github/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;code&gt;ci.yml&lt;/code&gt;&lt;/a&gt; file that runs on pushes and pull requests to the main branch. The workflow has two major jobs. The first is the test job, which runs on a matrix of Ubuntu, Windows, and macOS to ensure cross-platform reliability. It checks out the repository, sets up Python 3.12 with pip caching, installs all necessary dependencies, runs linting and formatting checks using &lt;a href="https://docs.astral.sh/ruff/" rel="noopener noreferrer"&gt;ruff&lt;/a&gt;, performs type checking with &lt;a href="https://mypy.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;mypy&lt;/a&gt; (a static type checker for Python), and finally executes the full test suite with coverage reporting. The second job is a build job that runs only if the tests pass successfully. It installs the build tools, creates distribution packages using &lt;code&gt;python -m build&lt;/code&gt;, and uploads the generated artifacts so they can be reviewed.&lt;/p&gt;
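&lt;p&gt;A minimal sketch of such a workflow might look like this (step names and exact commands here are illustrative, not copied from the actual &lt;code&gt;ci.yml&lt;/code&gt;):&lt;/p&gt;

```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip
      # Install the dev tools, then lint, format-check, type-check, and test
      - run: pip install ruff mypy pytest pytest-cov
      - run: ruff check .
      - run: ruff format --check .
      - run: mypy src
      - run: pytest --cov=src

  build:
    needs: test  # only runs if every matrix job passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install build
      - run: python -m build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
```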

&lt;p&gt;To test this workflow, I created a new &lt;a href="https://github.com/dharamghevariya/repo-contextr/pull/20" rel="noopener noreferrer"&gt;PR&lt;/a&gt; that adds more tests to my project, similar to what I discussed in my &lt;a href="https://dev.to/dharam_ghevariya_0d946c37/building-test-suite-for-repo-contextr-using-pytest-4757"&gt;previous blog&lt;/a&gt;. Creating the PR triggered the jobs defined in the workflow, and they ran successfully, as you can see by visiting &lt;a href="https://github.com/dharamghevariya/repo-contextr/actions/runs/19350660195" rel="noopener noreferrer"&gt;GitHub-Action&lt;/a&gt;. I also ran the workflow once more before merging the PR into the main branch: &lt;a href="https://github.com/dharamghevariya/repo-contextr/actions/runs/19350702949" rel="noopener noreferrer"&gt;GitHub-Action&lt;/a&gt;. To test the failure path, I introduced a bug into a test, making the job fail (&lt;a href="https://github.com/dharamghevariya/repo-contextr/actions/runs/19350775218" rel="noopener noreferrer"&gt;GitHub-Action&lt;/a&gt;), and when I committed this change to the main branch, a red cross (❌) appeared beside the commit message, denoting the failure.&lt;/p&gt;

&lt;p&gt;One challenge I ran into involved dependency installation. The project uses &lt;a href="https://peps.python.org/pep-0735/" rel="noopener noreferrer"&gt;PEP 735 dependency groups&lt;/a&gt; rather than the older &lt;code&gt;optional-dependencies&lt;/code&gt; format. Because of this, the usual &lt;code&gt;pip install -e ".[dev]"&lt;/code&gt; command did not work as expected. I had to explicitly install individual development dependencies such as ruff, mypy, &lt;a href="https://docs.pytest.org/en/stable/" rel="noopener noreferrer"&gt;pytest&lt;/a&gt;, and &lt;a href="https://pytest-cov.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;pytest-cov&lt;/a&gt;. This gave me a much better understanding of how dependency standards are evolving and how CI workflows need to adapt.&lt;/p&gt;
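&lt;p&gt;For illustration, here is roughly what the two styles look like in &lt;code&gt;pyproject.toml&lt;/code&gt; (the group contents are assumed, based on the tools named above). Recent pip versions (25.1 and later) can install a PEP 735 group directly with &lt;code&gt;pip install --group dev&lt;/code&gt;; older versions cannot, which is why installing the tools individually was the workaround in CI:&lt;/p&gt;

```toml
# Older style: extras, installable with  pip install -e ".[dev]"
[project.optional-dependencies]
dev = ["ruff", "mypy", "pytest", "pytest-cov"]

# PEP 735 style: dependency groups, not visible to extras-based installs
[dependency-groups]
dev = ["ruff", "mypy", "pytest", "pytest-cov"]
```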

&lt;p&gt;Writing tests for someone else’s &lt;a href="https://github.com/Abhinavintech/Repo-Contextor" rel="noopener noreferrer"&gt;project&lt;/a&gt; turned out to be a very insightful experience. Since I was not familiar with the codebase, I started by studying the existing tests. The maintainers followed a simple, consistent style where tests were small, function-based, and relied on minimal setup. They used the built-in &lt;a href="https://github.com/Abhinavintech/Repo-Contextor/blob/81e7ff542ed457e5ff69d92b0dd1741c42e4d317/tests/test_io_utils.py#L5" rel="noopener noreferrer"&gt;tmp_path&lt;/a&gt; fixture for file handling and avoided heavy mocking. This made it easier for me to understand their approach and write tests that fit naturally into the project. I focused on two isolated utility functions, &lt;code&gt;log_verbose&lt;/code&gt; and &lt;code&gt;human_readable_age&lt;/code&gt;, which were good candidates for unit testing. Understanding how verbose logging and time formatting worked required some investigation, and GitHub Copilot was helpful in guiding me through the patterns and suggesting test structures that aligned with the existing style. Here is the PR I raised for it: &lt;a href="https://github.com/Abhinavintech/Repo-Contextor/pull/11" rel="noopener noreferrer"&gt;Test/cli functions&lt;/a&gt;.&lt;/p&gt;
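&lt;p&gt;To give a flavour of that style, here is a small sketch of function-based tests in the same spirit. The &lt;code&gt;human_readable_age&lt;/code&gt; implementation below is a hypothetical stand-in, not the project's actual code:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def human_readable_age(dt):
    """Hypothetical stand-in: format a datetime's age as a short string."""
    seconds = int((datetime.now(timezone.utc) - dt).total_seconds())
    if seconds >= 86400:
        return f"{seconds // 86400} days ago"
    if seconds >= 3600:
        return f"{seconds // 3600} hours ago"
    if seconds >= 60:
        return f"{seconds // 60} minutes ago"
    return "just now"

# Small, function-based tests with minimal setup, matching the repo's style
def test_recent_timestamp_reads_just_now():
    assert human_readable_age(datetime.now(timezone.utc)) == "just now"

def test_old_timestamp_reports_days():
    three_days_ago = datetime.now(timezone.utc) - timedelta(days=3)
    assert human_readable_age(three_days_ago) == "3 days ago"
```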

&lt;p&gt;Working through this process also changed how I think about continuous integration. With CI running tests, linting, formatting checks, and type checking automatically on different operating systems, I felt much more confident that my code worked properly everywhere. I no longer had to remember to run all the commands myself because the CI pipeline handled everything for me. It acted like a safety net that quickly showed me if something was wrong. Overall, CI made the whole testing process easier, more reliable, and helped me trust the changes I was making.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Test Suite for "repo-contextr" using "pytest"</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 07 Nov 2025 20:03:34 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/building-test-suite-for-repo-contextr-using-pytest-4757</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/building-test-suite-for-repo-contextr-using-pytest-4757</guid>
      <description>&lt;p&gt;When building software, testing often gets pushed to the end (or forgotten completely!). But for my project &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;repo-contextr&lt;/a&gt;, I decided to do testing properly from the start. This post shares my experience writing tests and usage of the tool throughout the development process. If you are reading my blog for the first time, please visit my previous posts for a better understanding of the repo-contextr CLI tool.&lt;/p&gt;

&lt;p&gt;As this tool was built using Python, I looked for industry standard frameworks for testing within the Python ecosystem. After spending some time comparing different libraries, I selected &lt;a href="https://pytest.org/" rel="noopener noreferrer"&gt;pytest&lt;/a&gt;. This testing framework is straightforward to use, offers a rich feature set, and is widely adopted by the Python community.&lt;/p&gt;

&lt;p&gt;Along with the testing framework, I also integrated &lt;a href="https://pytest-cov.readthedocs.io/" rel="noopener noreferrer"&gt;pytest-cov&lt;/a&gt; to measure how much of my code is actually executed by tests. Running pytest alone only tells you whether tests passed or failed, but it does not reveal how much of the logic those tests are exercising. pytest-cov provides a coverage report with detailed visibility into which modules, functions and even specific lines are executed during testing. The coverage insights were especially useful, and additional command examples for usage can be found in the project documentation.&lt;/p&gt;

&lt;p&gt;Later, I explored the concept of &lt;a href="https://docs.pytest.org/en/stable/fixture.html" rel="noopener noreferrer"&gt;pytest Fixtures&lt;/a&gt;. These allow the reusable creation of setup environments and test data in a clean manner. Instead of repeating the same setup code inside every test, a fixture defines setup once and then makes it available across multiple tests. This improves readability and consistency. If you want to see how fixtures are used in this project, you can refer to the &lt;code&gt;conftest.py&lt;/code&gt; file in the test directory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Tests
&lt;/h2&gt;

&lt;p&gt;For test organization, I created a dedicated &lt;code&gt;tests&lt;/code&gt; folder separate from the main &lt;code&gt;src&lt;/code&gt; code, with &lt;code&gt;conftest.py&lt;/code&gt; containing shared fixtures and unit tests stored inside a &lt;code&gt;unit&lt;/code&gt; subfolder. This structure keeps test logic isolated from production code and is a pattern many production Python projects follow.&lt;/p&gt;

&lt;p&gt;To configure pytest to work the way I expected, I added configuration settings inside &lt;code&gt;pyproject.toml&lt;/code&gt;. These settings define where tests are located, how pytest should discover them, and enable strict configuration behaviour. For coverage, I configured coverage to scan only the &lt;code&gt;src&lt;/code&gt; directory, ignore test directories and cache folders, and also enabled branch coverage. I generated HTML coverage output through the &lt;code&gt;htmlcov&lt;/code&gt; folder, which visually highlighted which lines and branches were tested.&lt;/p&gt;
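&lt;p&gt;As a rough sketch, the relevant &lt;code&gt;pyproject.toml&lt;/code&gt; sections could look something like this (values are illustrative, not copied from the repository):&lt;/p&gt;

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]                      # where pytest discovers tests
addopts = "--strict-config --strict-markers"

[tool.coverage.run]
source = ["src"]                           # measure only production code
branch = true                              # enable branch coverage
omit = ["tests/*", "*/__pycache__/*"]

[tool.coverage.html]
directory = "htmlcov"                      # HTML report output folder
```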

&lt;p&gt;Fixtures in &lt;code&gt;conftest.py&lt;/code&gt; helped significantly in avoiding duplicate setup steps. For example, I created a &lt;code&gt;temp_dir&lt;/code&gt; fixture to produce a temporary folder for each test. Another fixture, &lt;code&gt;mock_files_dir&lt;/code&gt;, automatically placed a small collection of different file types inside the temporary directory. These reusable fixtures improved the clarity of tests, reduced duplication and made the suite easier to maintain over time. The testing documentation inside the repo explains this structure in further detail.&lt;/p&gt;
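&lt;p&gt;A sketch of what such fixtures can look like in &lt;code&gt;conftest.py&lt;/code&gt; (the file names and contents here are invented for illustration):&lt;/p&gt;

```python
import pytest

def create_mock_files(base):
    """Populate a directory (a pathlib.Path) with a small mix of file types."""
    (base / "readme.md").write_text("# sample\n", encoding="utf-8")
    (base / "script.py").write_text("print('hi')\n", encoding="utf-8")
    (base / "data.bin").write_bytes(b"\x00\x01\x02")
    return base

@pytest.fixture
def mock_files_dir(tmp_path):
    """A temporary directory pre-populated with mixed file types.

    tmp_path is pytest's built-in per-test temporary directory fixture,
    so every test receives a fresh, isolated copy of these files.
    """
    return create_mock_files(tmp_path)
```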

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;While writing tests, I realized it is not enough to only test normal cases. Empty strings, Unicode content, unexpected edge cases and unusual inputs can reveal subtle bugs, so I made sure to test those early. Coverage reports showed that many untested lines were inside exception handling blocks. After adding tests that simulated permission errors and I/O failures (using &lt;a href="https://github.com/dharamghevariya/repo-contextr/blob/6ac98c88085cb4b646f7d5f1848ca6c6bb87fdcb/tests/unit/test_config.py#L14" rel="noopener noreferrer"&gt;&lt;code&gt;unittest.mock&lt;/code&gt;&lt;/a&gt;), overall coverage increased and the tool became more reliable. This experience reinforced the idea that testing is not only about verifying successful behaviour but also ensuring that failure paths behave safely.&lt;/p&gt;
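&lt;p&gt;As an example of the pattern (the &lt;code&gt;read_config&lt;/code&gt; helper here is hypothetical, not the project's real function), &lt;code&gt;unittest.mock&lt;/code&gt; can force a &lt;code&gt;PermissionError&lt;/code&gt; without touching real file permissions:&lt;/p&gt;

```python
from pathlib import Path
from unittest import mock

def read_config(path):
    """Hypothetical helper with a guarded failure path: None on I/O errors."""
    try:
        return Path(path).read_text(encoding="utf-8")
    except OSError:  # PermissionError is a subclass of OSError
        return None

def test_permission_error_is_handled():
    # Simulate a permission failure instead of changing real file modes
    with mock.patch.object(Path, "read_text", side_effect=PermissionError):
        assert read_config("config.toml") is None
```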

&lt;h2&gt;
  
  
  Useful Testing Features
&lt;/h2&gt;

&lt;p&gt;During the process, I discovered several practical pytest options that made the workflow faster. Being able to run just a single test file, a single test class, or even a specific test function helped a lot while iterating and debugging. Coverage visualisation was another valuable tool. Seeing which parts of the code were marked in red encouraged me to target those sections with improved test coverage. The HTML report was especially helpful because it presented code with colour indicators highlighting tested, partially tested and untested lines.&lt;/p&gt;

&lt;p&gt;For debugging, pytest allowed me to display variable values at the moment of failure, drop into an interactive debugger and reveal printed output during execution. These features helped make the debugging loop shorter and more interactive.&lt;/p&gt;
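&lt;p&gt;The options I leaned on most look like this (the file, class, and test names below are placeholders):&lt;/p&gt;

```shell
# Run one file, one class, or one specific test function
pytest tests/unit/test_config.py
pytest tests/unit/test_config.py::TestConfig
pytest tests/unit/test_config.py::TestConfig::test_loads_defaults

# Debugging helpers: show local variables on failure, drop into the
# interactive debugger, and reveal print output during execution
pytest -l
pytest --pdb
pytest -s
```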

&lt;p&gt;To keep everything structured, I grouped related test functions into classes. This kept similar tests together, provided a natural grouping for running specific sections of the test suite, and helped improve discoverability. After completing my tests, the total coverage of the project was around ninety-five percent, which I find ideal for this stage. The remaining areas that are not covered mostly relate to small display-oriented functions, print messages and type checking logic. These do not directly affect core behaviour and are validated in other ways.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important Lessons
&lt;/h2&gt;

&lt;p&gt;One of the biggest takeaways from this experience is that writing good tests is not about achieving one hundred percent coverage. The goal should be to achieve the right coverage. I started with basic tests, then gradually expanded into unusual input cases such as empty strings, Unicode content, extremely long inputs and invalid file scenarios. I made sure to test error paths and exceptional behaviour, since those were the most frequently missed lines in coverage reports. Reusing fixtures greatly reduced duplicated setup code, resulting in cleaner and more maintainable test files. Clear and descriptive test names also helped a lot because they communicate exactly what behaviour is being verified, which benefits both future contributors and my future self.&lt;/p&gt;

</description>
      <category>tooling</category>
      <category>testing</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>Tackling Bigger Challenges and Exploring New Repositories in Hacktoberfest</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 31 Oct 2025 17:20:09 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/tackling-bigger-challenges-and-exploring-new-repositories-in-hacktoberfest-2g37</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/tackling-bigger-challenges-and-exploring-new-repositories-in-hacktoberfest-2g37</guid>
      <description>&lt;p&gt;I’m glad to continue sharing my open-source journey through this third update of &lt;a href="https://hacktoberfest.com/" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt;! In my &lt;a href="https://dev.to/dharam_ghevariya_0d946c37/second-week-of-hacktoberfest-4apg"&gt;previous blog&lt;/a&gt;, I talked about my earlier pull requests in the &lt;a href="https://cloudinary.com/" rel="noopener noreferrer"&gt;Cloudinary&lt;/a&gt; community projects, how my fixes got merged, and what I learned from the feedback process. This week brought a few more interesting developments, deeper technical challenges, review discussions, and contributing to different repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuing the Work with Cloudinary
&lt;/h3&gt;

&lt;p&gt;As discussed in my last post, I had worked on &lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/232" rel="noopener noreferrer"&gt;Issue #232&lt;/a&gt; and created a corresponding &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/241" rel="noopener noreferrer"&gt;Pull Request #241&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
This issue was about the &lt;strong&gt;generative fill feature&lt;/strong&gt; failing for images larger than 25 megapixels. I fixed the problem by first scaling such large images down to the required pixel limit before applying any other transformations. This approach solved the immediate bug (the plugin no longer failed), but as I later discovered from the &lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/232#issuecomment-3434456913" rel="noopener noreferrer"&gt;review comment&lt;/a&gt;, it also had an unintended side effect.&lt;/p&gt;

&lt;p&gt;The reviewer explained that while my fix worked for the &lt;code&gt;fill-background&lt;/code&gt; plugin, the same issue could appear in other plugins that use Cloudinary’s AI-based features such as &lt;code&gt;gen_restore&lt;/code&gt;, &lt;code&gt;gen_remove&lt;/code&gt;, &lt;code&gt;gen_recolor&lt;/code&gt;, and &lt;code&gt;gen_replace&lt;/code&gt;. In short, there was a deeper architectural issue, and my fix needed to be more generic.&lt;br&gt;&lt;br&gt;
Since I’m still new to this project, I didn’t want to rush into implementing a wide-ranging solution without proper guidance. I replied to the comment asking for direction on how to handle this in a scalable way. For now, I’m waiting for feedback from the maintainers before moving forward. It’s a great reminder that sometimes, fixing one bug can uncover a much bigger design consideration and that collaboration is key to solving it right.&lt;/p&gt;

&lt;h3&gt;
  
  
  Follow-Up Issue and Easy PR
&lt;/h3&gt;

&lt;p&gt;The next related task came from the &lt;a href="https://github.com/cloudinary-community/next-cloudinary" rel="noopener noreferrer"&gt;Next-Cloudinary&lt;/a&gt; repository — &lt;a href="https://github.com/cloudinary-community/next-cloudinary/issues/592" rel="noopener noreferrer"&gt;Issue #592&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
As you can see in &lt;a href="https://github.com/cloudinary-community/next-cloudinary/issues/592#issuecomment-3434484434" rel="noopener noreferrer"&gt;this comment&lt;/a&gt;, the maintainer asked me to upgrade the dependencies to reflect the fixes I had worked on in &lt;code&gt;cloudinary-util&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
So, I created &lt;a href="https://github.com/cloudinary-community/next-cloudinary/pull/636" rel="noopener noreferrer"&gt;PR #636&lt;/a&gt;, which simply upgraded the &lt;code&gt;@cloudinary-util/url-loader&lt;/code&gt; and &lt;code&gt;@cloudinary-util/util&lt;/code&gt; packages in the project’s &lt;code&gt;package.json&lt;/code&gt; and reinstalled them using &lt;code&gt;pnpm install&lt;/code&gt;. It was a straightforward change, but it ensured the latest bug fixes were included in the Next-Cloudinary SDK. Sometimes, even small updates like this help keep the ecosystem consistent and reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exploring a New Repository – OpsiMate 💡
&lt;/h3&gt;

&lt;p&gt;After finishing my work on the Cloudinary repositories, I decided to explore another open-source project to broaden my learning experience. That’s when I came across &lt;a href="https://github.com/OpsiMate/OpsiMate" rel="noopener noreferrer"&gt;OpsiMate&lt;/a&gt;, an open-source platform that helps organizations manage operations, providers, and facilities efficiently. It’s built using a modern stack of TypeScript, Express, and React, which gave me a chance to understand how backend validation and database logic work in real-world projects.&lt;/p&gt;

&lt;p&gt;While going through the repository, I found &lt;a href="https://github.com/OpsiMate/OpsiMate/issues/251" rel="noopener noreferrer"&gt;Issue #251&lt;/a&gt;, which addressed a bug allowing users to create multiple providers with the same name and type—for example, two “Test” VM providers. This caused confusion and potential data issues. The expected behavior was that the application should prevent duplicates and show a clear error message.&lt;/p&gt;

&lt;p&gt;To fix this, I created &lt;a href="https://github.com/OpsiMate/OpsiMate/pull/535" rel="noopener noreferrer"&gt;Pull Request #535&lt;/a&gt;, titled &lt;strong&gt;“[FIX]: Added UNIQUE constraint to database and error handling to the server for duplicate providers.”&lt;/strong&gt; In this fix, I enforced the validation directly at the database level by adding a &lt;code&gt;UNIQUE(provider_name, provider_type)&lt;/code&gt; constraint to the providers table schema. I also introduced a custom &lt;code&gt;DuplicateProviderError&lt;/code&gt; class, updated the business logic to catch this specific error, and modified the controller to return an HTTP 409 Conflict response whenever a duplicate provider is detected.&lt;/p&gt;
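&lt;p&gt;The shape of the fix can be demonstrated in a few lines. The real project is TypeScript with SQLite; what follows is a sketch in Python (with invented column names) of how a &lt;code&gt;UNIQUE&lt;/code&gt; constraint turns a race-prone application-level check into a database guarantee that maps cleanly to an HTTP 409 response:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE providers (
           id INTEGER PRIMARY KEY,
           provider_name TEXT NOT NULL,
           provider_type TEXT NOT NULL,
           UNIQUE (provider_name, provider_type)
       )"""
)

def create_provider(name, ptype):
    """Insert a provider, translating a constraint violation into a 409."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO providers (provider_name, provider_type) VALUES (?, ?)",
                (name, ptype),
            )
        return 201, {"name": name, "type": ptype}
    except sqlite3.IntegrityError:
        # The database itself rejected the duplicate row
        return 409, {"error": f"Provider with name '{name}' and type '{ptype}' already exists."}
```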

&lt;p&gt;This change ensures that even if multiple users try to create a provider with the same name and type at the same time, the database automatically blocks the duplication. It’s faster and more reliable than manually checking in the application code. Now, when a user tries to add a duplicate, the system shows a clear message saying &lt;em&gt;“Provider with name ‘test’ and type ‘VM’ already exists.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I verified this locally by first creating a provider named “test” under the Server type and then trying to create another one with the same name. The system immediately returned the conflict message, confirming that the fix worked as intended. Since this update involved a schema change, I deleted the existing &lt;code&gt;opsimate.db&lt;/code&gt; file and recreated the database to apply the new constraint. I’ve also asked the maintainers for guidance on how this should be handled in development and production environments.&lt;/p&gt;

&lt;p&gt;This contribution taught me the importance of enforcing business rules at the database level for consistency and reliability. It also gave me hands-on experience with structured error handling and how collaborative projects maintain best practices for readable, tested, and maintainable code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reflections
&lt;/h3&gt;

&lt;p&gt;This week was all about learning in depth rather than speed. I realized that not every pull request is about adding new features—sometimes it’s about understanding how different parts of a system connect and making sure a fix doesn’t cause new issues elsewhere. Community feedback also played a huge role; it’s not just about getting code merged but about improving through discussions and learning from experienced developers. As I wait for feedback from both Cloudinary and OpsiMate maintainers and finish my Hacktoberfest journey, I'm exploring other projects that excite me. Hacktoberfest has been a truly rewarding experience, helping me grow both technically and collaboratively.&lt;/p&gt;

&lt;p&gt;Thank you for following my journey!&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>learning</category>
      <category>devjournal</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Continuing The Hacktoberfest with Cloudinary</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Tue, 28 Oct 2025 13:35:14 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/continuing-the-hacktoberfest-with-cloudinary-4aip</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/continuing-the-hacktoberfest-with-cloudinary-4aip</guid>
      <description>&lt;p&gt;I am very happy to continue sharing my open-source journey through this second update of &lt;a href="https://hacktoberfest.com/" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt;. In my &lt;a href="https://dev.to/dharam_ghevariya_0d946c37/first-week-of-hacktoberfest-7of"&gt;previous blog&lt;/a&gt;, I discussed how I began contributing to &lt;a href="https://cloudinary.com/" rel="noopener noreferrer"&gt;Cloudinary&lt;/a&gt;'s open-source projects, the process of setting up monorepos, and how I made my first two pull requests. This time, I want to talk about what happened next — the waiting period, the reviews, the feedback, and the new challenges that came my way.&lt;/p&gt;

&lt;p&gt;Like many open-source contributors experience, getting pull requests reviewed can sometimes take a while. Maintainers often receive hundreds of submissions during Hacktoberfest, so patience becomes an essential part of the process.&lt;/p&gt;

&lt;p&gt;The first issue I had worked on was &lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/237" rel="noopener noreferrer"&gt;#237&lt;/a&gt;, and you can view my pull request here: &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/238" rel="noopener noreferrer"&gt;PR #238&lt;/a&gt;. The change I made was quite simple — just three lines of code — along with the addition of proper test cases to ensure functionality. You can read the full explanation of the change in the &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/238#issue-3465015689" rel="noopener noreferrer"&gt;PR discussion&lt;/a&gt;. Despite the simplicity, the reviewer appreciated the clarity and attention to testing. That feedback made me realize something important: a “feature” doesn’t always mean a large addition of code. Sometimes, even a few well-written lines can enhance the functionality and stability of a project significantly. This experience taught me that quality and understanding are more valuable than quantity when contributing to open-source.&lt;/p&gt;

&lt;p&gt;The next issue I had worked on was &lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/233" rel="noopener noreferrer"&gt;#233&lt;/a&gt;, which was a small bug related to escaping special characters in overlay text transformations. I had already explained the fix for this bug in my previous blog, but I later received a review confirming that the fix worked perfectly in end-to-end testing. Since I had already added a unit test for it, there were no further changes needed. You can read the review comment &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/240#pullrequestreview-3362511225" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Shortly after, both my pull requests were merged, and the maintainer mentioned me to the &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/240#issuecomment-3429368582" rel="noopener noreferrer"&gt;all-contributors bot&lt;/a&gt;, officially adding my name to the &lt;a href="https://github.com/cloudinary-community/cloudinary-util?tab=readme-ov-file#-contributors" rel="noopener noreferrer"&gt;contributors list&lt;/a&gt; of the repository. This was one of the most exciting moments for me because it made my contribution visible to everyone. On top of that, I also received an invitation link for the Cloudinary Hacktoberfest swag pack — a small but meaningful token of appreciation from the team. It reminded me that in open-source, patience and effort are always rewarded.&lt;/p&gt;

&lt;p&gt;After those successful contributions, I decided to take on another issue from the same repository — &lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/232" rel="noopener noreferrer"&gt;#232&lt;/a&gt;. This one was quite different and more complex compared to the previous ones. The bug was about users being unable to use image transformations on images larger than 25 megapixels. I tried fixing this issue through &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/241" rel="noopener noreferrer"&gt;PR #241&lt;/a&gt;, but it turned out that the changes I made were affecting other image transformation plugins in the library. The reviewer suggested rethinking the approach to make sure the fix worked consistently across all transformation modules. I am still exploring a better and cleaner solution for this bug. It is definitely a challenging one, but it has also been a great learning experience in understanding how multiple components interact within a complex codebase.&lt;/p&gt;

&lt;p&gt;Looking back, this phase of Hacktoberfest has been extremely rewarding — not just because of the merged pull requests, but because of what I learned along the way. I learned that even small changes can have a big impact if done right. I understood that communication and patience are just as important as technical skills. Most importantly, I realized that open-source development is about teamwork, consistency, and the willingness to improve, not just about quick results.&lt;/p&gt;

&lt;p&gt;I am still working on the last bug mentioned above and am quite excited to see how it evolves. Regardless of how it turns out, I know I will come out of it having learned something valuable. Once again, I am thankful to the Cloudinary team for their guidance and support, and to the Hacktoberfest community for giving me the opportunity to be part of something truly collaborative and meaningful.&lt;/p&gt;

&lt;p&gt;Thank you for following my blog series, and I look forward to sharing more updates soon!&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>hacktoberfest</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Implementing Token Count Optimization in repo-contextr</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Sun, 26 Oct 2025 18:55:27 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/implementing-token-count-optimization-in-repo-contextr-1bkg</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/implementing-token-count-optimization-in-repo-contextr-1bkg</guid>
      <description>&lt;p&gt;Inspired by &lt;a href="https://repomix.com/" rel="noopener noreferrer"&gt;Repomix's&lt;/a&gt; Token Count Optimization feature, which I had explored in my &lt;a href="https://dev.to/dharam_ghevariya_0d946c37/token-count-optimization-feature-on-repomix-528h"&gt;previous blog&lt;/a&gt;, I decided to add a similar feature to my own project, &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;repo-contextr&lt;/a&gt;. The idea was to help developers quickly find out how many tokens their repository would take when used with large language models. This helps plan for context limits and estimate API costs more easily.&lt;/p&gt;

&lt;p&gt;Before starting the development, I created a feature request issue: &lt;a href="https://github.com/dharamghevariya/repo-contextr/issues/18" rel="noopener noreferrer"&gt;Issue #18&lt;/a&gt;. The goal was to use the &lt;a href="https://github.com/openai/tiktoken" rel="noopener noreferrer"&gt;Tiktoken library by OpenAI&lt;/a&gt; for accurate token counting.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;About Tiktoken:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tiktoken is OpenAI’s fast tokenizer that can count tokens exactly as OpenAI models like GPT-3.5 and GPT-4 do. It’s widely used by tools like LangChain and LlamaIndex to calculate how much text fits into a model’s context window. Instead of guessing based on character length, it uses the same algorithm as real LLMs, giving developers a more accurate way to measure cost and context.&lt;/p&gt;

&lt;p&gt;For the first version, I decided to start simple. Instead of integrating Tiktoken right away, I used an easier method that assumes one token for every four characters. This made it possible to test the idea quickly and get early feedback without adding a heavy dependency. Later, I planned to replace this logic with the real Tiktoken library in the next iteration, tracked under &lt;a href="https://github.com/dharamghevariya/repo-contextr/issues/19" rel="noopener noreferrer"&gt;Issue #19&lt;/a&gt;.&lt;/p&gt;
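&lt;p&gt;The first-version heuristic is easy to sketch (function names here are illustrative rather than the project's exact code):&lt;/p&gt;

```python
from pathlib import Path

def estimate_tokens(text):
    # First-cut heuristic from this version: roughly one token per 4 characters
    return len(text) // 4

def estimate_repo_tokens(root):
    """Estimated token count per readable text file under root."""
    totals = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        totals[str(path)] = estimate_tokens(text)
    return totals
```

&lt;p&gt;Swapping in Tiktoken later would mean replacing the division with the real tokenizer, along the lines of taking the length of &lt;code&gt;tiktoken.get_encoding("cl100k_base").encode(text)&lt;/code&gt;.&lt;/p&gt;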

&lt;h2&gt;
  
  
  Implementation Details
&lt;/h2&gt;

&lt;p&gt;To build this feature, I started by creating a new branch just for this work. The idea was to keep my main branch stable and make development easier to manage. I wrote new modules for three main tasks — token counting, formatting, and CLI integration. The &lt;code&gt;token_counter.py&lt;/code&gt; module handles the token count logic. It scans each file, skips binary files, and counts tokens using the four-character approximation. The results are also combined at the folder level to show the total token count per directory.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;token_tree_formatter.py&lt;/code&gt; module formats the results into a simple tree structure. It uses characters like &lt;code&gt;├──&lt;/code&gt; and &lt;code&gt;└──&lt;/code&gt; to show folders and files clearly. This layout looks consistent with repo-contextr’s current output and helps developers easily see which parts of the repository take up more tokens. Files are sorted by size, and directory totals make it easier to find large sections quickly.&lt;/p&gt;
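&lt;p&gt;The tree layout can be sketched like this; a simplified, flat version of the idea (the real formatter nests directories recursively, and these file names are made up for the example):&lt;/p&gt;

```python
def format_token_tree(entries: dict) -> str:
    """Render {name: token_count} using the branch characters
    mentioned above, with the largest entries first."""
    items = sorted(entries.items(), key=lambda kv: kv[1], reverse=True)
    lines = []
    for i, (name, count) in enumerate(items):
        # The last entry gets the closing branch character.
        branch = "└── " if i == len(items) - 1 else "├── "
        lines.append(f"{branch}{name} ({count:,} tokens)")
    return "\n".join(lines)
```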

&lt;p&gt;I also added new CLI options for users. These include &lt;code&gt;--token-count-tree&lt;/code&gt; to show the full token tree, &lt;code&gt;--token-threshold N&lt;/code&gt; to filter smaller files, and &lt;code&gt;--tokens&lt;/code&gt; to show only the total token estimate. The feature blends with the existing CLI commands, so users can see token data directly in the usual output. Along with this, I updated &lt;code&gt;cli.py&lt;/code&gt;, &lt;code&gt;package.py&lt;/code&gt;, and &lt;code&gt;report_formatter.py&lt;/code&gt; to support token-related data. Everything was tested to ensure it worked smoothly with the rest of the app.&lt;/p&gt;
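&lt;p&gt;With &lt;code&gt;argparse&lt;/code&gt;, those three options could be declared roughly as follows (the flag names come from the post; the defaults and help text are my assumptions):&lt;/p&gt;

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the token-related CLI flags described above."""
    parser = argparse.ArgumentParser(prog="repo-contextr")
    parser.add_argument("--token-count-tree", action="store_true",
                        help="show a per-file and per-directory token tree")
    parser.add_argument("--token-threshold", type=int, default=0, metavar="N",
                        help="hide files below N estimated tokens")
    parser.add_argument("--tokens", action="store_true",
                        help="print only the total token estimate")
    return parser
```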

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This new feature adds more value to repo-contextr by letting developers estimate how big their repository is in terms of tokens. It helps identify which files or folders are the most token-heavy, making it easier to plan for LLM context limits and costs.  &lt;/p&gt;

&lt;p&gt;Even though this first version uses a simple character-based estimate, it sets a strong foundation for future improvements. The next step will be to integrate OpenAI’s Tiktoken library for accurate token counts. This project also reminded me how useful it is to keep work organized — using feature branches, writing clean and simple code, maintaining documentation, and keeping the Git history neat by squashing commits.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>tooling</category>
      <category>opensource</category>
      <category>llm</category>
    </item>
    <item>
      <title>Token Count Optimization feature on Repomix</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 24 Oct 2025 01:39:56 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/token-count-optimization-feature-on-repomix-528h</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/token-count-optimization-feature-on-repomix-528h</guid>
      <description>&lt;p&gt;For this week, I had to extend my &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;repo-contextr&lt;/a&gt; project with some additional features. However, this time the catch was that we didn’t have a feature requirement beforehand. Our professor gave us a link to a CLI tool called &lt;a href="https://repomix.com/" rel="noopener noreferrer"&gt;Repomix&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Repomix&lt;/strong&gt; is a command-line tool that helps developers analyze and visualize their codebase for AI processing. It measures metrics like token usage, file composition, and repository structure, allowing users to optimize how their code is represented when interacting with large language models (LLMs).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While going through the &lt;a href="https://repomix.com/guide/usage" rel="noopener noreferrer"&gt;user guide&lt;/a&gt;, I got interested in the &lt;a href="https://repomix.com/guide/usage#token-count-optimization" rel="noopener noreferrer"&gt;Token Count Optimization&lt;/a&gt; feature. This caught my attention because I already had a feature for counting tokens in my project — though it was a rough estimate, treating each word as a single token. However, working on this project taught me that tokenization doesn’t work that way, especially when dealing with LLMs.&lt;/p&gt;

&lt;p&gt;To elaborate on the Token Count Optimization feature — it’s used to understand how much of your codebase would “cost” in terms of LLM context tokens when processing your query. Running&lt;br&gt;&lt;br&gt;
&lt;code&gt;repomix --token-count-tree&lt;/code&gt; produces a &lt;strong&gt;hierarchical visualization&lt;/strong&gt; showing token counts across your project structure. You can also apply thresholds, such as&lt;br&gt;&lt;br&gt;
&lt;code&gt;repomix --token-count-tree 1000&lt;/code&gt;, to focus on larger files. This helps identify token-heavy files, optimize file selection patterns, and plan compression strategies when preparing code for AI analysis.&lt;/p&gt;




&lt;h2&gt;
  
  
  Diving into the Implementation
&lt;/h2&gt;

&lt;p&gt;First, I used &lt;strong&gt;GitHub’s code search&lt;/strong&gt; to look for &lt;code&gt;"token count tree"&lt;/code&gt;, which led me to the configuration schema and the main orchestration in &lt;a href="https://github.com/yamadashy/repomix/blob/271b09e1030146fcee5719843a9b93e7f7af83fb/src/core/metrics/calculateMetrics.ts" rel="noopener noreferrer"&gt;&lt;code&gt;calculateMetrics.ts&lt;/code&gt;&lt;/a&gt;. GitHub’s search gave me an overview of the file structure, implementation details, and references to where the feature was used in other files.&lt;/p&gt;

&lt;p&gt;However, I quickly realized that the feature was quite complex, and navigating through multiple parts of the program in the browser-based GitHub interface was becoming difficult. That’s when I decided to set up the project locally in my code editor.&lt;/p&gt;

&lt;p&gt;After setting it up, I used&lt;br&gt;&lt;br&gt;
&lt;code&gt;git grep "tokenCountTree"&lt;/code&gt;&lt;br&gt;&lt;br&gt;
from the terminal, which showed me everywhere this configuration option appeared. This revealed the &lt;strong&gt;data flow&lt;/strong&gt; from CLI parsing → configuration → metrics calculation → output formatting.&lt;/p&gt;

&lt;p&gt;For each major component, I opened the file in VS Code and used &lt;strong&gt;“Go to Definition”&lt;/strong&gt; on every import and function call. This helped me build a mental map of how different modules connected. Whenever I encountered unfamiliar patterns, I used &lt;strong&gt;GitHub Copilot Chat&lt;/strong&gt; to clarify things instead of getting stuck. This made the learning process much smoother, since jumping from one unfamiliar concept to another across scattered online resources can quickly become overwhelming.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding the Architecture
&lt;/h2&gt;

&lt;p&gt;To give you an overview, Repomix uses a &lt;strong&gt;vertical slice architecture&lt;/strong&gt;, where each type of metric calculation has its own self-contained module while sharing common infrastructure.&lt;/p&gt;

&lt;p&gt;However, token calculation was handled differently — it used the &lt;a href="https://github.com/openai/tiktoken" rel="noopener noreferrer"&gt;&lt;code&gt;tiktoken&lt;/code&gt;&lt;/a&gt; library from OpenAI.&lt;br&gt;&lt;br&gt;
You can see its implementation &lt;a href="https://github.com/yamadashy/repomix/blob/271b09e1030146fcee5719843a9b93e7f7af83fb/src/core/metrics/TokenCounter.ts#L4" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In my &lt;code&gt;repo-contextr&lt;/code&gt; project, I implemented a similar feature that calculates the &lt;strong&gt;total token count&lt;/strong&gt; of the entire codebase — although in my case, it’s equivalent to the &lt;strong&gt;total number of words&lt;/strong&gt;.&lt;/p&gt;
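&lt;p&gt;That word-based estimate amounts to a one-liner (the function name is illustrative):&lt;/p&gt;

```python
def word_token_estimate(text: str) -> int:
    """Treat each whitespace-separated word as one token,
    mirroring the rough estimate described above."""
    return len(text.split())
```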




&lt;h2&gt;
  
  
  Exploring Parallelism and Task Execution
&lt;/h2&gt;

&lt;p&gt;One thing I’m still figuring out is &lt;strong&gt;how workers run tasks in parallel&lt;/strong&gt; and the &lt;strong&gt;role of the &lt;code&gt;TaskRunner&lt;/code&gt; abstraction&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After tracing through &lt;a href="https://github.com/yamadashy/repomix/blob/271b09e1030146fcee5719843a9b93e7f7af83fb/src/shared/processConcurrency.ts#L4" rel="noopener noreferrer"&gt;&lt;code&gt;processConcurrency.ts&lt;/code&gt;&lt;/a&gt;, I discovered that Repomix uses &lt;strong&gt;Tinypool&lt;/strong&gt; to manage a pool of reusable worker threads.&lt;br&gt;&lt;br&gt;
The &lt;code&gt;TaskRunner&lt;/code&gt; wraps Tinypool with a simple &lt;code&gt;run(task)&lt;/code&gt; API — when you call it, Tinypool queues the task and assigns it to an available worker.&lt;/p&gt;

&lt;p&gt;The clever part is &lt;code&gt;TASKS_PER_THREAD = 100&lt;/code&gt;:&lt;br&gt;&lt;br&gt;
With 500 files on 8 cores, it creates only 5 workers instead of 8, avoiding unnecessary thread startup overhead.&lt;/p&gt;
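&lt;p&gt;The sizing rule can be expressed in a few lines; this is a sketch of the behaviour as I understand it, not the actual Tinypool configuration code:&lt;/p&gt;

```python
import math

TASKS_PER_THREAD = 100  # constant observed in the Repomix source

def worker_count(num_tasks: int, cpu_cores: int) -> int:
    """One worker per 100 queued tasks, capped at the CPU core count."""
    if num_tasks == 0:
        return 0
    return min(cpu_cores, math.ceil(num_tasks / TASKS_PER_THREAD))
```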

&lt;p&gt;What still confuses me is Tinypool’s internal scheduling — when &lt;code&gt;Promise.all()&lt;/code&gt; submits 500 tasks simultaneously, how does it decide which worker gets which task? I also don’t fully understand when to use &lt;code&gt;runtime: 'worker_threads'&lt;/code&gt; vs &lt;code&gt;runtime: 'child_process'&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Plan for Repo-Contextr
&lt;/h2&gt;

&lt;p&gt;I liked the idea of &lt;strong&gt;separating the token count for each file&lt;/strong&gt;, so I plan to implement this feature in my &lt;code&gt;repo-contextr&lt;/code&gt; project as well.&lt;br&gt;&lt;br&gt;
However, I won’t be implementing actual tokenization in the context of LLMs or using parallelism at this stage.&lt;/p&gt;

&lt;p&gt;Still, exploring Repomix has given me a clearer understanding of how large-scale tools are structured — from &lt;strong&gt;CLI parsing to concurrent task orchestration&lt;/strong&gt; — and it has definitely influenced how I’ll approach the next iteration of my project.&lt;/p&gt;

</description>
      <category>cli</category>
      <category>tooling</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>First Week of Hacktoberfest</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Sat, 11 Oct 2025 14:19:22 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/first-week-of-hacktoberfest-7of</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/first-week-of-hacktoberfest-7of</guid>
      <description>&lt;p&gt;I am very happy to share that I am participating in the month-long open-source event called &lt;a href="https://hacktoberfest.com/" rel="noopener noreferrer"&gt;Hacktoberfest&lt;/a&gt;! This global event encourages developers from all over the world to contribute to open-source projects and learn from real-world codebases. During this period, open-source projects welcome contributions, and developers like me get a great opportunity to work on live projects, understand how large systems work, and collaborate with other contributors. Events like this help both contributors and maintainers: contributors get hands-on experience, while maintainers get help improving their projects through community pull requests.&lt;/p&gt;

&lt;p&gt;After spending some time searching for good projects, I decided to contribute to &lt;a href="https://cloudinary.com/" rel="noopener noreferrer"&gt;Cloudinary&lt;/a&gt;'s community project, &lt;a href="https://github.com/cloudinary-community/next-cloudinary" rel="noopener noreferrer"&gt;Next Cloudinary SDK&lt;/a&gt;. I learned about this organization through the DigitalOcean &lt;a href="https://discord.gg/digitalocean" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; server. Cloudinary is a platform that helps developers manage, optimize, and deliver images and videos efficiently across the web. In simple words, it helps websites handle media faster and smarter without worrying about manual optimization or delivery. Within this larger platform, the &lt;strong&gt;Next Cloudinary SDK&lt;/strong&gt; makes it very easy to use Cloudinary inside a Next.js app. It provides React components, hooks, and utilities that allow developers to display, transform, and optimize images with just a few lines of code. Basically, it acts like a bridge that connects Cloudinary’s services with Next.js so developers can build fast, image-friendly websites easily.&lt;/p&gt;

&lt;p&gt;The first step in my contribution journey was setting up the project and tools on my local system, which I would say was the most challenging part. I started by cloning my forked repository, and very soon, I faced my first challenge, which was understanding how a &lt;strong&gt;monorepo&lt;/strong&gt; works. In simple words, a monorepo (short for “monolithic repository”) is a single repository that contains multiple small projects or packages instead of maintaining them separately. This setup makes it easier to manage related codebases and their dependencies. In this particular project, they were using &lt;code&gt;pnpm-workspace.yaml&lt;/code&gt; to define and link all the packages. Once I understood how the monorepo worked, everything started to make more sense. Honestly, &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt; helped me a lot during this part. It guided me like a mentor while setting up the project, explaining what each part of the configuration was doing and helping me debug setup errors quickly.&lt;/p&gt;

&lt;p&gt;Once the setup was done successfully, I started exploring the &lt;a href="https://github.com/cloudinary-community/next-cloudinary/issues" rel="noopener noreferrer"&gt;issues&lt;/a&gt; of the repository to find something I could work on. After going through a few of them, I found &lt;a href="https://github.com/cloudinary-community/next-cloudinary/issues/592" rel="noopener noreferrer"&gt;Issue #592&lt;/a&gt; that I felt confident to work on. Interestingly, this issue was not in the main &lt;code&gt;next-cloudinary&lt;/code&gt; repository but in another related one — &lt;a href="https://github.com/cloudinary-community/cloudinary-util" rel="noopener noreferrer"&gt;cloudinary-util&lt;/a&gt; (&lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/237" rel="noopener noreferrer"&gt;Issue #237&lt;/a&gt;). So, I had to set up one more project locally, but this time it was very easy since I was already familiar with the project structure. This issue was about adding new functionality to the &lt;code&gt;cropMode&lt;/code&gt;, so that users could use it as a cropping plugin while using this library. Once I understood the logic, I made the required code changes and created a &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/238" rel="noopener noreferrer"&gt;pull request&lt;/a&gt;. The project also had proper test cases, so I added three new tests that covered my code changes to ensure everything worked as expected.&lt;/p&gt;

&lt;p&gt;By this time, I was much more comfortable with the codebase and started understanding how things were structured. So, I decided to take up another issue from the same repository — &lt;a href="https://github.com/cloudinary-community/cloudinary-util/issues/233" rel="noopener noreferrer"&gt;Issue #233&lt;/a&gt;. This one was a small bug related to the overlay text functionality. The problem was that when users entered consecutive special characters such as &lt;code&gt;"."&lt;/code&gt;, &lt;code&gt;","&lt;/code&gt;, or &lt;code&gt;"/"&lt;/code&gt;, the code was only escaping the first character but not the rest. This caused issues in the generated URL because certain characters like commas have special meanings in Cloudinary’s syntax. To fix it, I simply replaced the &lt;code&gt;replace()&lt;/code&gt; function with &lt;code&gt;replaceAll()&lt;/code&gt; to make sure all the occurrences were properly escaped. You can see the fix here: &lt;a href="https://github.com/cloudinary-community/cloudinary-util/pull/240/files" rel="noopener noreferrer"&gt;Pull Request #240&lt;/a&gt;. Even though it was just a few lines of code, my professor always says that smaller pull requests are easier to review and merge — and they are equally important in open-source projects because they improve reliability and maintain consistency.&lt;/p&gt;

&lt;p&gt;So far, these are the two contributions I have made during the first week of Hacktoberfest. I am still waiting for feedback and reviews from the maintainers, which is quite understandable because this event brings a huge number of pull requests for them to review. I think waiting patiently is also part of the learning process. I am now exploring another issue in the same project and also looking for new repositories to contribute to. In my next blog, I will share another interesting issue that I am currently working on and how this whole open-source journey is helping me learn and grow as a developer.&lt;/p&gt;

&lt;p&gt;Thank you...!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>learning</category>
      <category>hacktoberfest</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Rewriting the Codebase: repo-contextr’s Week 6 Refactor Journey</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 10 Oct 2025 22:24:27 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/rewriting-the-codebase-repo-contextrs-week-6-refactor-journey-mce</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/rewriting-the-codebase-repo-contextrs-week-6-refactor-journey-mce</guid>
      <description>&lt;p&gt;This week was the cleanup week for &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;repo-contextr&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;After devoting the first five weeks solely to feature development, I realized we had reached the point where &lt;strong&gt;code quality and maintainability&lt;/strong&gt; needed attention. Week 6 was therefore dedicated entirely to &lt;strong&gt;refactoring and restructuring&lt;/strong&gt; the project.&lt;/p&gt;




&lt;h2&gt;
  
  
  Background: The Early Design
&lt;/h2&gt;

&lt;p&gt;At the beginning of the project, I followed a straightforward design pattern, separating the functionality into two main modules: &lt;code&gt;commands&lt;/code&gt; and &lt;code&gt;utils&lt;/code&gt;. The &lt;code&gt;commands&lt;/code&gt; module was meant to contain the main features and logic of the tool, while the &lt;code&gt;utils&lt;/code&gt; module would host supporting functions to help those features run efficiently. However, as development progressed, &lt;code&gt;utils&lt;/code&gt; started to grow beyond its intended purpose. It became a large collection of loosely related functions — many of which were actually part of the tool’s core logic. Over time, this blurred the boundary between modules, and the design pattern I had initially set out to follow began to fade away. This was not only making the code difficult to navigate but also making it harder to onboard new contributors. It became clear that before adding any new features, the internal structure had to be cleaned up.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Previous Code Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;src/contextr/               &lt;span class="c"&gt;# Main package&lt;/span&gt;
├── __init__.py
├── cli.py                  &lt;span class="c"&gt;# CLI argument parsing&lt;/span&gt;
├── main.py                 &lt;span class="c"&gt;# Application entry point&lt;/span&gt;
│
├── commands/               &lt;span class="c"&gt;# Command implementations&lt;/span&gt;
│   ├── __init__.py
│   └── package.py          &lt;span class="c"&gt;# Main command (328 lines - MONOLITHIC)&lt;/span&gt;
│
└── utils/                  &lt;span class="c"&gt;# "Utils" anti-pattern package&lt;/span&gt;
    ├── __init__.py
    └── helpers.py          &lt;span class="c"&gt;# ALL functionality (376 lines)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Refactor Plan
&lt;/h2&gt;

&lt;p&gt;To improve the maintainability and clarity of the codebase, I spent some time exploring well-structured open-source Python projects. A common theme I noticed was that each core functionality was isolated in its own dedicated module, with clear boundaries between responsibilities. Inspired by this, I decided to completely remove the &lt;code&gt;utils&lt;/code&gt; module and distribute its contents into purpose-specific packages. Before beginning, I created a new Git branch named &lt;code&gt;refactor/improve-codebase&lt;/code&gt; to ensure all the refactor work remained isolated from the main branch until it was stable. This allowed me to make incremental changes, test them thoroughly, and later merge the work in a clean, single commit.&lt;/p&gt;




&lt;h2&gt;
  
  
  Implementation Details
&lt;/h2&gt;

&lt;p&gt;The professor had also pointed out during evaluation that having a &lt;code&gt;utils&lt;/code&gt; module at the forefront was a design weakness in any serious project. After reviewing it, I realized that everything inside &lt;code&gt;utils&lt;/code&gt; could be reorganized into focused modules such as &lt;code&gt;discovery&lt;/code&gt;, &lt;code&gt;processing&lt;/code&gt;, &lt;code&gt;git&lt;/code&gt;, &lt;code&gt;output&lt;/code&gt;, and &lt;code&gt;config&lt;/code&gt;. Additionally, the logic responsible for generating reports could be encapsulated into a dedicated class, &lt;code&gt;RepositoryReportFormatter&lt;/code&gt;, improving testability and readability. This new modular approach helped separate concerns and made the code easier to extend and maintain.&lt;/p&gt;

&lt;p&gt;Throughout the process, I maintained a clean and disciplined Git workflow. I committed the changes in three logical stages and later used &lt;strong&gt;interactive rebase&lt;/strong&gt; to squash them into a single, well-documented commit. This ensured that the main branch retained a clean and readable history. Once everything was reviewed and verified, I merged it into the main branch. You can see the commit here: &lt;a href="https://github.com/dharamghevariya/repo-contextr/commit/1f9aff686186edb9ce3ae61baa481d6e4bd37b9d" rel="noopener noreferrer"&gt;1f9aff6&lt;/a&gt;. This workflow made the refactoring process organized, reversible, and transparent.&lt;/p&gt;




&lt;h2&gt;
  
  
  The New Structure After Refactor
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;src/contextr/
├── cli.py                    &lt;span class="c"&gt;# CLI interface (argparse)&lt;/span&gt;
├── main.py                   &lt;span class="c"&gt;# Entry point&lt;/span&gt;
│
├── commands/                 &lt;span class="c"&gt;# Command implementations&lt;/span&gt;
│   ├── __init__.py
│   └── package.py            &lt;span class="c"&gt;# Main orchestration (83 lines)&lt;/span&gt;
│
├── config/                   &lt;span class="c"&gt;# Configuration management&lt;/span&gt;
│   ├── __init__.py
│   ├── settings.py           &lt;span class="c"&gt;# Application constants&lt;/span&gt;
│   ├── toml_loader.py        &lt;span class="c"&gt;# TOML configuration loading&lt;/span&gt;
│   └── languages.py          &lt;span class="c"&gt;# Language/syntax mappings&lt;/span&gt;
│
├── discovery/                &lt;span class="c"&gt;# File &amp;amp; directory discovery&lt;/span&gt;
│   ├── __init__.py
│   └── file_discovery.py     &lt;span class="c"&gt;# File finding, filtering, path validation&lt;/span&gt;
│
├── processing/               &lt;span class="c"&gt;# File content processing&lt;/span&gt;
│   ├── __init__.py
│   └── file_reader.py        &lt;span class="c"&gt;# Content reading, binary detection&lt;/span&gt;
│
├── git/                      &lt;span class="c"&gt;# Git repository operations&lt;/span&gt;
│   ├── __init__.py
│   └── git_operations.py     &lt;span class="c"&gt;# Git info, recent files, root detection&lt;/span&gt;
│
├── formatters/               &lt;span class="c"&gt;# Output formatting&lt;/span&gt;
│   ├── __init__.py
│   └── report_formatter.py   &lt;span class="c"&gt;# Report generation (230 lines)&lt;/span&gt;
│
├── statistics/               &lt;span class="c"&gt;# File analysis &amp;amp; metrics&lt;/span&gt;
│   ├── __init__.py
│   └── file_stats.py         &lt;span class="c"&gt;# Statistics calculation (115 lines)&lt;/span&gt;
│
└── output/                   &lt;span class="c"&gt;# Display formatting&lt;/span&gt;
    ├── __init__.py
    └── tree_formatter.py     &lt;span class="c"&gt;# Tree structure generation&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;This refactor week taught me the importance of writing code for humans first, and machines second. Design patterns that seem fine during early prototyping may not scale as a project matures. Keeping code modular, organized, and easy to understand is what makes a project sustainable in the long term. I also learned that avoiding catch-all directories like &lt;code&gt;utils&lt;/code&gt; encourages meaningful boundaries and accountability within the codebase.&lt;/p&gt;

&lt;p&gt;Refactoring also made me appreciate Git’s advanced capabilities. Using branches for isolation, rebasing for history cleanup, and well-scoped commits for traceability all contribute to a cleaner development lifecycle. Most importantly, I learned that restructuring a codebase is not just about rearranging files; it’s about improving readability, maintainability, and paving the way for future contributors.&lt;/p&gt;




&lt;p&gt;You can check out the project on GitHub here:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;repo-contextr on GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>opensource</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Adding TOML Config File Support to an Open Source CLI Project</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 03 Oct 2025 13:48:44 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/adding-toml-config-file-support-to-an-open-source-cli-project-2c9c</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/adding-toml-config-file-support-to-an-open-source-cli-project-2c9c</guid>
      <description>&lt;p&gt;In Week 5 of my Open Source Development course, the task was to add a new feature to another student’s repository. I decided to work on the project &lt;a href="https://github.com/CynthiaFotso/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository-Context-Packager&lt;/a&gt;. This tool packages useful information about a repository into a single output so it can be shared and reused later.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Right now, to run the tool you need to type a long command every time. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;repo-packager &lt;span class="s2"&gt;"PATH/TO/REPO"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; &lt;span class="s2"&gt;"output.txt"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--include&lt;/span&gt; &lt;span class="s2"&gt;"*.js"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--exclude&lt;/span&gt; &lt;span class="s2"&gt;"*test*"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-file-size&lt;/span&gt; 1024 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s2"&gt;"json"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you always use the same options, typing this again and again is not very user-friendly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: TOML Config File
&lt;/h2&gt;

&lt;p&gt;To fix this, I added support for a TOML config file. This means you can create a simple file called &lt;code&gt;.repo-packager-config.toml&lt;/code&gt; in your project folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;output&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"output.txt"&lt;/span&gt;
&lt;span class="py"&gt;include&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"*.js"&lt;/span&gt;
&lt;span class="py"&gt;exclude&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"*test*"&lt;/span&gt;
&lt;span class="py"&gt;max_file_size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
&lt;span class="py"&gt;format&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"json"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the tool will read these values automatically. You don’t need to type them in the command line each time.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I Built It
&lt;/h3&gt;

&lt;p&gt;To handle TOML files, I chose &lt;a href="https://github.com/squirrelchat/smol-toml" rel="noopener noreferrer"&gt;smol-toml&lt;/a&gt;, a lightweight library that is simple and straightforward to work with.&lt;/p&gt;

&lt;p&gt;Here’s the main function I wrote:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;loadConfig&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;configFileName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.repo-packager-config.toml&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;configPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nx"&gt;configFileName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// If config file doesn't exist, return empty config&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;existsSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;configPath&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;configContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;configPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;utf-8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseToml&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;configContent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;`Error parsing TOML config file '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;configFileName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;': &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function first checks if the config file exists in the current directory. If it does, it opens the file, reads the contents, and parses it into an object so the program can use the values as options. If the file does not exist, it simply returns an empty object.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working With the Maintainer
&lt;/h2&gt;

&lt;p&gt;Before I began coding, I opened an &lt;a href="https://github.com/CynthiaFotso/Repository-Context-Packager/issues/15" rel="noopener noreferrer"&gt;issue&lt;/a&gt; in the repository to explain my feature idea and to get the maintainer’s feedback. This step was important because it made sure the idea was useful and aligned with the project. After the maintainer approved the idea, I created a draft pull request. A draft PR is useful because it lets others see that you are actively working on the feature, even if it’s not finished yet. Once I completed the feature, tested it, and updated the README with instructions, I changed the pull request from draft to ready for review. This signaled to the maintainer that the work was complete and ready to be checked. The full changes can be seen in &lt;a href="https://github.com/CynthiaFotso/Repository-Context-Packager/pull/16" rel="noopener noreferrer"&gt;Pull Request #16&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reviewing the Same Feature in My Repo
&lt;/h2&gt;

&lt;p&gt;Now the roles were reversed: I was the maintainer receiving a feature contribution. One of my classmates suggested the same feature in my own project, &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;repo-contextr&lt;/a&gt;, and opened &lt;a href="https://github.com/dharamghevariya/repo-contextr/pull/16" rel="noopener noreferrer"&gt;PR #16&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To properly test their changes, I first added their fork of my repository as a new remote:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git remote add bhchen24 https://github.com/BHChen24/repo-contextr.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gave me access to their version of the repository. Next, I checked out the branch they had created for the feature:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; issue-15-feat-support-using-a-TOML-dotfile-config-file bhchen24/issue-15-feat-support-using-a-TOML-dotfile-config-file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By doing this, I created a local branch that pointed directly to their work. This allowed me to pull down their changes and run the code on my own machine. Running it locally was important so I could confirm that the new TOML config feature worked as expected.&lt;/p&gt;

&lt;p&gt;During my review, I noticed a few issues where the implementation didn’t fully meet the requirements. I started a review on GitHub, added comments, and opened a discussion thread with my feedback in &lt;a href="https://github.com/dharamghevariya/repo-contextr/pull/16#pullrequestreview-3291624639" rel="noopener noreferrer"&gt;this review&lt;/a&gt;. The contributor then made the necessary fixes and pushed the updates to their branch.&lt;/p&gt;

&lt;p&gt;Once I confirmed the fixes worked, I merged the changes into the main branch of my project with the following steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout main
git merge issue-15-feat-support-using-a-TOML-dotfile-config-file
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After pushing to GitHub, the pull request was automatically marked as merged. This completed the process and added the new feature into my project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learnings
&lt;/h2&gt;

&lt;p&gt;This week’s task helped me learn both sides of open source work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a contributor: suggest a feature, open a PR, and finish it with tests and docs.&lt;/li&gt;
&lt;li&gt;As a maintainer: review someone else’s PR, give feedback, and merge changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It showed me the value of good communication, small steps (using draft PRs), and always testing code locally before merging.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>cli</category>
      <category>tooling</category>
      <category>learning</category>
    </item>
    <item>
      <title>Working with the two parallel branches in Git</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Fri, 26 Sep 2025 21:10:47 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/working-with-the-two-parallel-branches-in-git-17m8</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/working-with-the-two-parallel-branches-in-git-17m8</guid>
      <description>&lt;p&gt;This week was truly eye-opening as I dove into the concepts of &lt;strong&gt;branching and merging in Git&lt;/strong&gt;. At first, the workflow felt a bit tricky to grasp, but after carefully going through the readings from the Git book:&lt;br&gt;
&lt;a href="https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging" rel="noopener noreferrer"&gt;3.2: Basic Branching and Merging&lt;/a&gt;, &lt;a href="https://git-scm.com/book/en/v2/Git-Branching-Branch-Management" rel="noopener noreferrer"&gt;3.3: Branch Management&lt;/a&gt;, &lt;a href="https://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows" rel="noopener noreferrer"&gt;3.4: Branching Workflows&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;I started to understand the power behind these features. Once I practiced them in &lt;strong&gt;Lab-03&lt;/strong&gt;, it honestly felt like working with a Git-powered &lt;em&gt;time machine&lt;/em&gt; where I am managing multiple timelines of a project and then bringing them back together.&lt;/p&gt;


&lt;h2&gt;
  
  
  Parallel Feature Development
&lt;/h2&gt;

&lt;p&gt;For the lab, we had to build &lt;strong&gt;two features in parallel branches&lt;/strong&gt; and later merge them into the main branch using different merging strategies. Here’s what I worked on:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Last Modified Timestamps
&lt;/h3&gt;

&lt;p&gt;The goal was to display file modification timestamps in each file header of the output.  &lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;File&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;js &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Modified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2024&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt; &lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;helper&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./utils/helper&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To implement this, I used Python’s &lt;strong&gt;subprocess&lt;/strong&gt; library to run &lt;code&gt;git log&lt;/code&gt; commands for individual files and extract the last modified information.  &lt;/p&gt;

&lt;p&gt;I first created the issue on GitHub (&lt;a href="https://github.com/dharamghevariya/repo-contextr/issues/9" rel="noopener noreferrer"&gt;Issue #9&lt;/a&gt;), and then worked on it in a dedicated branch.&lt;/p&gt;
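&lt;p&gt;The idea can be sketched in a few lines. This is a simplified illustration of the approach, not the project's actual code; the function name and the exact &lt;code&gt;git log&lt;/code&gt; format flags are my own choices:&lt;/p&gt;

```python
import subprocess


def last_modified(path: str):
    """Return `path`'s most recent commit date from git history, or None."""
    try:
        # -1 limits output to the newest commit; `--` separates paths from revisions.
        result = subprocess.run(
            ["git", "log", "-1", "--format=%ad",
             "--date=format:%Y-%m-%d %H:%M:%S", "--", path],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        return None  # git itself is not installed
    stamp = result.stdout.strip()
    return stamp if stamp else None
```

&lt;p&gt;Files with no commit history (for example, untracked files) simply yield no timestamp.&lt;/p&gt;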




&lt;h3&gt;
  
  
  2. Statistics Enhancement
&lt;/h3&gt;

&lt;p&gt;This feature expanded the &lt;strong&gt;summary section&lt;/strong&gt; of the tool by adding more detailed statistics such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total files and total lines&lt;/li&gt;
&lt;li&gt;Breakdown of file types
&lt;/li&gt;
&lt;li&gt;Largest file details
&lt;/li&gt;
&lt;li&gt;Average file size
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Summary
- Total files: 15
- Total lines: 342
- File types: .js (8), .md (3), .json (2), .css (2)
- Largest file: src/main.js (89 lines)
- Average file size: 22 lines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I implemented this by iterating over the files, keeping track of type counts, finding the largest file by line count, and returning the structured result to the caller. The helper functions included logic to group file types, identify the largest file, and compute averages.&lt;/p&gt;

&lt;p&gt;Before starting work on the implementation, I created &lt;a href="https://github.com/dharamghevariya/repo-contextr/issues/10" rel="noopener noreferrer"&gt;Issue #10&lt;/a&gt; to describe the expected functionality.&lt;/p&gt;
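&lt;p&gt;The gist of those helpers can be sketched as follows. This is my own simplified illustration; the function name and the mapping of path to line count are assumptions, not the project's real interface:&lt;/p&gt;

```python
from collections import Counter
from pathlib import PurePosixPath


def summarize(files: dict) -> dict:
    """Build summary stats from a mapping of file path to line count."""
    total_lines = sum(files.values())
    # Group files by extension, e.g. {".js": 8, ".md": 3}.
    type_counts = Counter(PurePosixPath(p).suffix or "(no extension)" for p in files)
    # The largest file is the key with the highest line count.
    largest = max(files, key=files.get)
    return {
        "total_files": len(files),
        "total_lines": total_lines,
        "file_types": dict(type_counts),
        "largest_file": (largest, files[largest]),
        "average_lines": total_lines // len(files),  # integer average, as in the output above
    }
```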




&lt;h2&gt;
  
  
  Merging the Timelines
&lt;/h2&gt;

&lt;p&gt;Once both features were implemented, it was time to bring them back to the main branch.&lt;/p&gt;

&lt;p&gt;I merged the branch for &lt;strong&gt;Issue #9&lt;/strong&gt; first. Because main had received no new commits since the branch point, Git performed a &lt;strong&gt;fast-forward&lt;/strong&gt; merge, simply moving main up to the branch tip in a single &lt;a href="https://github.com/dharamghevariya/repo-contextr/commit/afcc16197af0923cfd064fbb96a5ddb92c234df6" rel="noopener noreferrer"&gt;commit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Merging the second branch for &lt;strong&gt;Issue #10&lt;/strong&gt;, however, produced a &lt;strong&gt;merge conflict&lt;/strong&gt;. It was in the README.md file, where the &lt;strong&gt;Issue #10&lt;/strong&gt; branch had changed the same lines that &lt;strong&gt;Issue #9&lt;/strong&gt; had already changed, so Git could not combine the two versions automatically. To fix this I used VS Code's conflict resolver, which provides a very clean UI for comparing the current and incoming changes. Resolving the conflict produced a merge &lt;a href="https://github.com/dharamghevariya/repo-contextr/commit/1cde6c1974043e68843cf13c1e591e300037b4a1" rel="noopener noreferrer"&gt;commit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Overall, this lab taught me how powerful Git branching and merging really are. At first it was overwhelming to handle multiple branches and merge strategies, but with practice it felt natural and even fun.&lt;/p&gt;

</description>
      <category>git</category>
      <category>beginners</category>
      <category>learning</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Raising PR to Another Repository</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Sat, 20 Sep 2025 21:41:30 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/raising-pr-to-another-repository-583a</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/raising-pr-to-another-repository-583a</guid>
      <description>&lt;p&gt;For lab-02, we were asked to implement a feature for another classmate's repository. I decided to contribute to the &lt;a href="https://github.com/CynthiaFotso/Repository-Context-Packager" rel="noopener noreferrer"&gt;Repository-Context-Packager&lt;/a&gt;, which is a command-line tool that analyzes repositories and generates a single file containing repository context. This project was written in JavaScript using Node.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: Fork, Setup, and Initial Issues
&lt;/h2&gt;

&lt;p&gt;To contribute to this repository, I had to fork it to my GitHub account since I wasn't authorized to push directly to the original repo. During the initial setup, something immediately caught my attention: the &lt;code&gt;node_modules&lt;/code&gt; directory was being tracked in Git, even though it should have been ignored through the &lt;code&gt;.gitignore&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;This led me to file my first issue - &lt;a href="https://github.com/CynthiaFotso/Repository-Context-Packager/issues/9" rel="noopener noreferrer"&gt;Issue #9&lt;/a&gt; regarding this caching problem. The author had &lt;code&gt;node_modules&lt;/code&gt; listed in &lt;code&gt;.gitignore&lt;/code&gt;, but it wasn't working because the files were already being tracked by Git before being added to the ignore file. I created a local branch for this issue, solved it by using &lt;code&gt;git rm --cached&lt;/code&gt; to untrack the files, and raised &lt;a href="https://github.com/CynthiaFotso/Repository-Context-Packager/pull/10" rel="noopener noreferrer"&gt;PR #10&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Development: Implementing the Recent Changes Filter
&lt;/h2&gt;

&lt;p&gt;After resolving the initial issue, I moved on to implementing the main feature: a &lt;code&gt;--recent&lt;/code&gt; flag that filters and packages only files modified within a specified number of days based on Git commit history.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation Details
&lt;/h3&gt;

&lt;p&gt;The feature development involved several technical challenges and decisions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git Integration:&lt;/strong&gt;&lt;br&gt;
Initially, I considered using filesystem timestamps (&lt;code&gt;fs.stat().mtime&lt;/code&gt;) to determine file modification dates. However, I quickly realized this approach had significant limitations: filesystem timestamps behave differently across operating systems and do not survive operations like cloning. Moreover, the tool is designed specifically for Git repositories, which led me to opt for a Git-based approach using the &lt;code&gt;simple-git&lt;/code&gt; library.&lt;/p&gt;
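&lt;p&gt;The actual implementation is JavaScript built on &lt;code&gt;simple-git&lt;/code&gt;, but the underlying idea is easy to show with a short, hypothetical Python sketch that shells out to &lt;code&gt;git log&lt;/code&gt; the same way:&lt;/p&gt;

```python
import subprocess
from datetime import datetime, timedelta, timezone


def modified_within(path: str, days: int) -> bool:
    """True if git shows a commit touching `path` in the last `days` days."""
    try:
        result = subprocess.run(
            ["git", "log", "-1", "--format=%at", "--", path],  # %at: unix timestamp
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        return False  # git not available
    stamp = result.stdout.strip()
    if not stamp:
        return False  # no commit history for this path (or not a git repo)
    committed = datetime.fromtimestamp(int(stamp), tz=timezone.utc)
    return committed >= datetime.now(timezone.utc) - timedelta(days=days)
```

&lt;p&gt;Using commit timestamps instead of &lt;code&gt;mtime&lt;/code&gt; means the result is the same on every machine that has the same history.&lt;/p&gt;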

&lt;h2&gt;
  
  
  Code Review Experience
&lt;/h2&gt;

&lt;p&gt;I also received a similar PR on &lt;a href="https://github.com/dharamghevariya/repo-contextr/pull/2" rel="noopener noreferrer"&gt;my repository&lt;/a&gt; - it was the same feature I had implemented for the original author's repository, the difference was just in the language used.&lt;/p&gt;

&lt;p&gt;For the review process, I added the contributor's Git repository to my local remotes, then created a local branch pointing to their PR branch. This way I was able to test the feature locally. I found two issues related to code structure and created review threads for them. Once the contributor updated the code with the desired changes, I closed the threads and merged the PR.&lt;/p&gt;

&lt;p&gt;During the review, it was important to me to keep the code style consistent with the conventions I had been following, so I opened threads specifically to maintain that consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Throughout this whole process, I learned that when contributing to someone else's repository, you have to understand their style and follow it to contribute effectively. While working on a feature, you may also discover other bugs worth filing and, where possible, fixing. In the open source world, this exchange is the best thing to give and to receive.&lt;/p&gt;

&lt;p&gt;This experience taught me that open source development is not just about writing code - it's about understanding project conventions, maintaining quality standards, and building relationships within the community.&lt;/p&gt;

&lt;p&gt;The collaborative nature of open source means every contribution, whether fixing bugs or implementing features, helps improve the project for everyone. This cycle of giving and receiving makes the entire ecosystem stronger and more reliable.&lt;/p&gt;

</description>
      <category>github</category>
      <category>javascript</category>
      <category>opensource</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Introducing repo-contextr v0.1</title>
      <dc:creator>Dharam Ghevariya</dc:creator>
      <pubDate>Sat, 20 Sep 2025 18:32:43 +0000</pubDate>
      <link>https://dev.to/dharam_ghevariya_0d946c37/introducing-repo-contextr-v01-27aj</link>
      <guid>https://dev.to/dharam_ghevariya_0d946c37/introducing-repo-contextr-v01-27aj</guid>
      <description>&lt;h2&gt;
  
  
  Release 0.1 of repo-contextr
&lt;/h2&gt;

&lt;p&gt;Finally, at the end of week 3, &lt;strong&gt;version 0.1 of repo-contextr is out!&lt;/strong&gt; 🎉 This marks my first real open source project release, and honestly, I'm still amazed by how it all came together.&lt;/p&gt;

&lt;p&gt;This release represents the foundation of what the tool will look like at full release. When I first set up this project, I had no idea it would evolve the way it has. It's incredible to see how small pieces of work can create something genuinely useful. Although this isn't the end of the project, I can already see a clearer picture of what the final product will become.&lt;/p&gt;

&lt;p&gt;The tool solves a simple but annoying problem: sharing your entire codebase with AI assistants like ChatGPT or Claude without having to copy-paste files one by one.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Glimpse of repo-contextr
&lt;/h2&gt;

&lt;p&gt;It scans your entire project, gathers all the important information, and packages everything into one clean, organized text file that's perfect for sharing with AI tools.&lt;/p&gt;

&lt;p&gt;The tool is pretty straightforward in what it does. It looks through your project files, grabs your git information like commit details and branch name, and puts everything together in a nice structured format. You can also filter which files to include - for example, if you only want Python files, just add &lt;code&gt;--include "*.py"&lt;/code&gt; to the command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Basic usage - analyze current directory&lt;/span&gt;
contextr &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Filter for specific file types&lt;/span&gt;
contextr &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt; &lt;span class="s2"&gt;"*.py"&lt;/span&gt;

&lt;span class="c"&gt;# Save output to file for sharing&lt;/span&gt;
contextr &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt; &lt;span class="s2"&gt;"*.py"&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; context.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The best part is that the output is really well organized. It creates sections for your git information, shows your project structure like a file tree, and includes all your code with proper formatting. This makes it super easy for AI assistants to understand your entire codebase at once, instead of you having to explain everything piece by piece.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Learnings!
&lt;/h2&gt;

&lt;p&gt;During this release 0.1 development, I got to explore Git in ways I never had before. This project has taught me how to manage code changes, work with others, and understand how open source projects actually work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Git Skills:
&lt;/h3&gt;

&lt;p&gt;At the start, it is hard to wrap your mind around Git and the workflows built on it. But once you get the hang of branches, commit histories, and pushing to remote repos, it becomes second nature, and you no longer have to think twice before running a Git command.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Skills:
&lt;/h3&gt;

&lt;p&gt;On the other side of version control sits the collaboration platform, GitHub, which is just as much a part of Git-based workflows. During this phase I learned to create issues and pull requests and to work with project authors and contributors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems:
&lt;/h2&gt;

&lt;p&gt;Working asynchronously means you cannot always get the result you want on your own schedule. You have to understand the other person and help contributors through the issues they are facing. I found myself in exactly this situation when another contributor and I both missed the deadline for this release, but we calmly worked through each other's issues and completed it together.&lt;/p&gt;

&lt;p&gt;Another problem I ran into, and am still dealing with, is the tool's name. I initially wanted to publish the package to PyPI so it could be installed with pip, but midway through the project I discovered that the "contextr" CLI name was already taken. I therefore renamed the tool to "repo-contextr", which confused the contributor and caused some unnecessary issues. I am still working on this and will probably be able to publish it in the next release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try repo-contextr Yourself
&lt;/h2&gt;

&lt;p&gt;Want to see what all the excitement is about? Here's how you can try repo-contextr:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔗 GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/dharamghevariya/repo-contextr" rel="noopener noreferrer"&gt;github.com/dharamghevariya/repo-contextr&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install directly using pipx (recommended)&lt;/span&gt;
pipx &lt;span class="nb"&gt;install &lt;/span&gt;git+https://github.com/dharamghevariya/repo-contextr.git

&lt;span class="c"&gt;# Try it on any project&lt;/span&gt;
contextr &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--include&lt;/span&gt; &lt;span class="s2"&gt;"*.py"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>chatgpt</category>
      <category>tooling</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
