<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bryce Seefieldt</title>
    <description>The latest articles on DEV Community by Bryce Seefieldt (@bseefieldt).</description>
    <link>https://dev.to/bseefieldt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1155907%2F17b6a051-9afb-48f0-9f6b-2162571fa3b0.jpg</url>
      <title>DEV Community: Bryce Seefieldt</title>
      <link>https://dev.to/bseefieldt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bseefieldt"/>
    <language>en</language>
    <item>
      <title>NeoGPT PR merge achieved</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Fri, 12 Jan 2024 12:58:29 +0000</pubDate>
      <link>https://dev.to/bseefieldt/neogpt-pr-merge-achieved-4dfb</link>
      <guid>https://dev.to/bseefieldt/neogpt-pr-merge-achieved-4dfb</guid>
      <description>&lt;p&gt;I have worked my way through the scope of contributions I set out to make with the NeoGPT open source project.  It has been a positive experience and went a great deal smoother than some past contribution work.  This is primarily due to the authors engaging in the conversations around the issues posted, giving good feedback and also thoroughly reviewing changes and adding their preferences to them.  This has been a surprisingly uncommon experience in my short time as an open source contributor, but I am hoping this is what I encounter more often than not in working as a community.&lt;/p&gt;

&lt;p&gt;The two functions I originally set out to add to this project have both been merged into the main program and can be found in the &lt;a href="https://github.com/neokd/NeoGPT/blob/main/neogpt/config.py"&gt;config.py module&lt;/a&gt;.  These functions allow users to export and import their program configuration settings to/from .yaml files.&lt;/p&gt;

&lt;p&gt;As these settings exist in a series of global variables in the config.py file, exporting them was as simple as mapping the variables into a “config” object, formatted so that it could be passed to the yaml.dump() function, which writes the config object to a YAML file.  I have found that Python has a great number of libraries and modules available that provide functionality I may previously have implemented manually.  In this case, importing the yaml and toml modules made light work of writing to and reading from files.  The argparse module was also used in this project, and it is one of the most helpful library tools I’ve run across: it makes light work of processing command line arguments at runtime.&lt;/p&gt;
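
&lt;p&gt;A minimal sketch of that export step might look like the following (the variable and function names here are illustrative stand-ins, not the project’s actual identifiers):&lt;/p&gt;

```python
import yaml

# Hypothetical stand-ins for the globals defined in config.py.
MODEL_NAME = "example-model"
DEVICE_TYPE = "cpu"

def export_config(path="settings.yaml"):
    # Map the globals into a single dict that yaml.dump can serialize.
    config = {"model_name": MODEL_NAME, "device_type": DEVICE_TYPE}
    with open(path, "w") as f:
        yaml.dump(config, f, default_flow_style=False)
    return config
```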

&lt;p&gt;The import function also used the yaml module, this time the yaml.safe_load() function.  This was used to read an existing YAML configuration file and load its values into the current settings used to run the program.  Similar to the export process, the object created by yaml.safe_load() was mapped back to populate the global variables that represent the program’s configuration settings.  This turned out to be a much easier process than I expected at the beginning.  &lt;/p&gt;
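
&lt;p&gt;A similar sketch of the import step, again with hypothetical names (the real function maps values back onto module-level globals in config.py):&lt;/p&gt;

```python
import yaml

# Hypothetical stand-in for the program's current configuration state.
current_settings = {"model_name": "default", "device_type": "cpu"}

def load_config(path):
    with open(path) as f:
        saved = yaml.safe_load(f)
    # Only copy keys the program actually knows about.
    for key, value in (saved or {}).items():
        if key in current_settings:
            current_settings[key] = value
    return current_settings
```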

&lt;p&gt;As an addition to the scope of work I originally planned to complete, I decided that providing some unit tests for my work would help develop a more complete contribution process. No public-facing testing code had yet been added to this project’s repo, so perhaps unit tests for these functions would be a good jumping-off point toward making unit tests an ongoing step in the integration of new features and contributions.  Time will tell if that is the case, but I’m very happy with how my work has added to this project to date, and it is very rewarding to see some substantial code contributions merged into the main project.&lt;/p&gt;
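
&lt;p&gt;A round-trip unit test of the kind described could look something like this (the helper names here are hypothetical; the project’s real functions operate on globals rather than taking a dict):&lt;/p&gt;

```python
import os
import tempfile
import yaml

def export_config(config, path):
    # Write the given settings dict out as YAML.
    with open(path, "w") as f:
        yaml.dump(config, f)

def load_config(path):
    # Read the YAML file back into a dict.
    with open(path) as f:
        return yaml.safe_load(f)

def test_roundtrip():
    # Exporting then importing should lose nothing.
    config = {"model_name": "demo", "temperature": 0.7}
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "settings.yaml")
        export_config(config, path)
        assert load_config(path) == config
```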

</description>
      <category>python</category>
      <category>opensource</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>NeoGPT contributions continued...</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Tue, 09 Jan 2024 18:43:55 +0000</pubDate>
      <link>https://dev.to/bseefieldt/neogpt-contributions-continued-2h61</link>
      <guid>https://dev.to/bseefieldt/neogpt-contributions-continued-2h61</guid>
      <description>&lt;p&gt;I have made good progress in my contribution pans for the NeoGPT  project as discussed in my previous post &lt;a href="https://dev.to/bseefieldt/escalating-neogpt-contribution-o14"&gt;“Escalating NeoGPT contribution”&lt;/a&gt;. I was able to work through my first goal of adding command line switches for the export functionality in my most recent &lt;a href="https://github.com/neokd/NeoGPT/pull/122"&gt;Pull Request&lt;/a&gt; which was successfully merged into the main branch of the project.  &lt;/p&gt;

&lt;p&gt;This PR also included some enhancements to the original export_config() function I created, providing some sanitization of the optional filename parameter the user can supply along with the &lt;code&gt;--export&lt;/code&gt; argument.  Essentially, if the user provides a string (representing the config filename), overriding the default config filename to be written, then the export function forces a .yaml file format. In addition, it checks whether the provided filename would overwrite an existing config file of the same name and, if so, generates a new filename. These changes were all wrapped into the now-merged PR.&lt;/p&gt;
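
&lt;p&gt;A rough sketch of that sanitization logic (the function name and defaults here are hypothetical, not the merged implementation):&lt;/p&gt;

```python
import os

def sanitize_config_filename(name, default="settings.yaml"):
    # Fall back to the default filename when none was supplied.
    name = name or default
    # Force a YAML extension on whatever the user provided.
    root, ext = os.path.splitext(name)
    if ext.lower() not in (".yaml", ".yml"):
        name = root + ".yaml"
    # If the file already exists, pick a numbered variant rather than
    # overwriting the existing config.
    candidate, counter = name, 1
    while os.path.exists(candidate):
        root, ext = os.path.splitext(name)
        candidate = f"{root}_{counter}{ext}"
        counter += 1
    return candidate
```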

&lt;p&gt;In reviewing my export configuration process, the project author was able to extrapolate much of the functionality and transpose it to the load configuration process, which was my planned next step.  As they have provided the initial framework for the load_config() function I set out to create, I am now going through it to add validation enhancements similar to those discussed above, as well as to test the functionality and provide any necessary debugging.&lt;/p&gt;

&lt;p&gt;With much of the initially planned scope near completion, I have discussed with the project author implementing some unit test suites for the functionality I have introduced, to be included in the ongoing contribution QA process and, in the future, in the introduction of CI/CD workflows into the project.  I will continue to post updates as I introduce unit tests into my previous work.&lt;/p&gt;

</description>
      <category>gpt3</category>
      <category>opensource</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>Escalating NeoGPT contribution</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Mon, 01 Jan 2024 19:18:55 +0000</pubDate>
      <link>https://dev.to/bseefieldt/escalating-neogpt-contribution-o14</link>
      <guid>https://dev.to/bseefieldt/escalating-neogpt-contribution-o14</guid>
      <description>&lt;p&gt;I am continuing to build on my work on the &lt;a href="https://github.com/neokd/NeoGPT"&gt;NeoGPT&lt;/a&gt; customizable GPT chatbot project.  &lt;a href="https://github.com/neokd/NeoGPT/pull/122"&gt;My last efforts were focused on creating a function that can read the current program configurations and export them to a yaml file&lt;/a&gt;.  I created this function based on the parameters defined in the existing config.py module which was called in the main.py program at runtime in order to establish the current configuration details to define how the current session will be run.&lt;/p&gt;

&lt;p&gt;The implementation of the export_config function required reading settings defined in both config.py and the pyproject.toml file in order to capture a snapshot of a variety of configuration parameters at the time export_config is called.&lt;/p&gt;

&lt;p&gt;To expand my contribution to this project, I have conferred with the project author to determine the next steps required to make this function usable, as well as to add functionality that reads and parses the configuration YAML files so that the saved settings can be loaded and the program build configured accordingly.&lt;br&gt;
The requirements to implement this expanded functionality include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Adding a &lt;code&gt;--export config_x.yml&lt;/code&gt; command line switch that executes the export_config function from the CLI and allows the user to define the YAML filename in which to save the current config settings.&lt;/li&gt;
&lt;li&gt;  Defining a load_config function that can be called to read a specified YAML configuration file and load the saved settings, overwriting the current build configuration.&lt;/li&gt;
&lt;li&gt;  Adding a &lt;code&gt;--load config_x.yml&lt;/code&gt; command line switch that executes the load_config function from the CLI and allows the user to define the file to be read in order to update/overwrite the current config settings.&lt;/li&gt;
&lt;/ul&gt;
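
&lt;p&gt;Using the argparse module already in the project, the switches described above could be sketched roughly like this (the double-dash flag spellings follow the merged work; everything else is illustrative):&lt;/p&gt;

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="neogpt")
    # --export may be given with or without a filename; when the filename
    # is omitted, fall back to a default config filename.
    parser.add_argument("--export", nargs="?", const="settings.yaml",
                        metavar="FILE",
                        help="export current config settings to FILE")
    parser.add_argument("--load", metavar="FILE",
                        help="load config settings from FILE")
    return parser
```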

&lt;p&gt;This will require a deeper analysis of the project’s source code.  Creating the export_config function only required scraping the values of various parameters defined in config.py and pyproject.toml and writing them to file in the required format.  These next steps, however, will require looking at each step of the program’s build and run process, assessing the methods used to set each individual configuration parameter, and passing the saved parameters to each individual method so that the current settings are overwritten with the stored values.&lt;br&gt;
This should be an interesting deep dive into the program that will give me a much better understanding of a significant portion of this program’s executable codebase. &lt;/p&gt;

</description>
      <category>python</category>
      <category>gpt3</category>
      <category>opensource</category>
      <category>yaml</category>
    </item>
    <item>
      <title>Digging deeper into NeoGPT</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Wed, 13 Dec 2023 15:18:32 +0000</pubDate>
      <link>https://dev.to/bseefieldt/digging-deeper-into-neogpt-3fn4</link>
      <guid>https://dev.to/bseefieldt/digging-deeper-into-neogpt-3fn4</guid>
      <description>&lt;p&gt;I had a chance to revisit and contribute some more functionality to &lt;a href="https://github.com/neokd/NeoGPT"&gt;NeoGPT&lt;/a&gt;, a customizable GPT chatbot program written in Python.  It was nice to be able to do some more work on this project as the initial time invested in setting up the source code locally and create my build was significant compared to the actual contribution I made.  Whereas my first contribution for Hacktoberfest, was a simple bash script to streamline the build process, this time around I was able to dive deeper into the actual program and develop some functionality to allow the user to export the current program configuration settings. &lt;/p&gt;

&lt;p&gt;I am sure the experience of contributing to open source projects can vary dramatically depending on the project, how many owners and full-time contributors are involved, and a number of other variables.  My experience so far, in terms of the process of responding to issues posted by project owners, has been more frustrating than rewarding.  I am finding that the posted issues often lack a great deal of the detail that would seem important for an outside developer attempting to solve them.  Even when I have assessed the issue, explored the code, and made requests for clarification on the desired approach to the solution, or simply asked specific questions to better understand the code and/or the task at hand, I feel like the responses are brief and often lack the insight and clarity that my initial inquiries were attempting to elicit.  &lt;/p&gt;

&lt;p&gt;Perhaps my level of comprehension is lower than that of the project developers, but my feeling is that if someone were to reach out to me with questions on how to contribute to my project, I would want to be as detailed as possible in my responses to avoid any confusion and to facilitate a functional contribution that aligns with my intentions and expectations.  I’m sure this won’t always be the case, but that’s just my observational rant as I continue to work towards an effective contribution on the most complex codebase I have engaged with yet.  This gives me some insight into the standard I would like to hold myself to in the future, if and when I begin sharing projects and accepting contributions from previously unfamiliar developers.&lt;/p&gt;

&lt;p&gt;In my specific efforts this week, as I mentioned, I am &lt;a href="https://github.com/neokd/NeoGPT/pull/122"&gt;developing a function to read and store the current configuration settings for the NeoGPT chatbot&lt;/a&gt;.  The biggest challenge in doing so is that the settings can vary dramatically depending on the command line arguments provided at runtime, among a number of other variables.  Additionally, those settings are not necessarily stored or logged in one central location. Because of this, a great deal of code analysis was required to determine when the settings parameters are established and how to access them from a helper function.  &lt;/p&gt;

&lt;p&gt;One of the biggest questions this raised was when and how the code should be calling the export function, as the state of the program at the time it calls the function has a significant impact on how I should write it.  While I have tried to ask the right questions to gain clarity on this, the answer is still not clear to me.  I’ve been able to partially address the scope of the issue and generate a draft pull request that presents my progress while attempting to elicit feedback toward the desired solution.&lt;/p&gt;

</description>
      <category>gpt3</category>
      <category>llm</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>PyPI Packaging</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Wed, 13 Dec 2023 15:15:03 +0000</pubDate>
      <link>https://dev.to/bseefieldt/pypi-packaging-1nn8</link>
      <guid>https://dev.to/bseefieldt/pypi-packaging-1nn8</guid>
      <description>&lt;p&gt;Hey guys, I hate to brag but I’m officially a published author!  That’s right I published my Python &lt;a href="https://pypi.org/project/ez-txt2html-bryce-seefieldt/"&gt;ez-txt2html converter to PyPi&lt;/a&gt;, and it was easy! Here’s how I did it.&lt;br&gt;
First off, for simplicity sake, I did away with my src directory and instead moved all my py code into a directory named for my project (/ez-txt2html).  Creating a directory named for your project is important when creating a build to upload to PyPi.&lt;br&gt;&lt;br&gt;
After some research into the process I decided to use hatch as the build backend, so I needed the package installed,which was as simple as &lt;code&gt;pip install hatch&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next up I added some build-related details to my &lt;a href="https://github.com/bryce-seefieldt/ez-txt2html/blob/packaging/pyproject.toml"&gt;pyproject.toml&lt;/a&gt;, including version info, general project details, and instructions on how to create a build.  Most important was adding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; [build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This directs the build process to use hatchling (hatch’s build backend) to create a build of your package. In addition, I was receiving an error when trying to build that required me to define which package should be targeted.  This was fixed by adding the following to the pyproject.toml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[tool.hatch.build.targets.wheel]
packages = ["ez-txt2html"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is essentially just pointing to the newly created package directory.  The result was the creation of the /dist directory, which contained the .whl and .tar.gz distribution files that eventually get uploaded to PyPI as your package.&lt;/p&gt;

&lt;p&gt;From there, I needed to learn a bit about &lt;a href="https://pypi.org/"&gt;PyPI&lt;/a&gt;, the Python Package Index, which is the home for all the wonderful packages you know if you have ever run the handy &lt;code&gt;pip install&lt;/code&gt; command.  PyPI has a pretty quick and easy onboarding, which requires creating a secured account and, for the purposes of submitting packages from the CLI, generating an API token.  This can be done in your PyPI profile. Once logged in, just navigate to &lt;a href="https://pypi.org/manage/account/"&gt;https://pypi.org/manage/account/&lt;/a&gt; and scroll down to the API tokens section. Click “Add Token” and follow the few steps to generate an API token, which is your access point for uploading packages. &lt;br&gt;
With all this in place, I was able to use twine to handle the package upload. First I needed to install twine, again as simple as &lt;code&gt;pip install twine&lt;/code&gt;. In order for twine to access my API token during the package upload process, it needed to read it from a .pypirc file that contains the token info. For some, that file may already exist; I was required to create it.  Working in Windows, I simply used a text editor to create it in my home user directory ($HOME/.pypirc).  The file contents have an INI-like format and looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[pypi]
username = __token__
password = your-pypi-api-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, the package could be uploaded using the command &lt;code&gt;python -m twine upload --repository pypi dist/*&lt;/code&gt;&lt;br&gt;
This uploads the contents of the project’s /dist directory to PyPI and, if all goes well, provides you with a link to your &lt;a href="https://pypi.org/project/ez-txt2html-bryce-seefieldt/"&gt;package page&lt;/a&gt;. Here, front and center, is a handy command that can be run on the CLI. For those playing along at home, run &lt;code&gt;pip install ez-txt2html-bryce-seefieldt&lt;/code&gt;.&lt;br&gt;
 If I did my job right, this will download and install ez-txt2html as an executable and importable module in your Python environment.  From there you can run &lt;code&gt;python -m ez-txt2html.ez_txt2html -h&lt;/code&gt; to see the options for running the package from the CLI.  Like magic. What an accomplishment, if I do say so myself.  I’ve passed this on to a couple of my devBuds and it seems to work for them no problem – let me know if it works for you, or if you have any tips on how I could make this process smoother or smarter.&lt;/p&gt;

</description>
      <category>python</category>
      <category>pypi</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Frontend testing JSDOM web apps</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Sat, 09 Dec 2023 23:40:34 +0000</pubDate>
      <link>https://dev.to/bseefieldt/frontend-testing-jsdom-web-apps-pee</link>
      <guid>https://dev.to/bseefieldt/frontend-testing-jsdom-web-apps-pee</guid>
      <description>&lt;p&gt;I have been working on expanding my understanding of creating test environemnts and developing tests for various languages.  It’s a fascinating and challenging element of the development process that I have been wanting to explore for a long time now.&lt;/p&gt;

&lt;p&gt;Having successfully created a test environment and written tests for my Python static site generator project, I thought it would be a good learning experience to implement testing for a different project that I have contributed to in the past.  This &lt;a href="https://github.com/hammadsaedi/regex-pro"&gt;RegEx Pro&lt;/a&gt; project is the front-end for a JavaScript web app that defines a number of tools to test and validate regular expressions against user input.&lt;/p&gt;

&lt;p&gt;I decided to use the &lt;a href="https://jestjs.io/docs/getting-started"&gt;Jest&lt;/a&gt; testing framework to create a testing environment for the application. Going into this I had minimal familiarity with Jest, and I immediately realized that this task was venturing into unfamiliar territory in terms of my understanding of testing front-end code.  Specifically, the project has a fairly straightforward &lt;a href="https://github.com/hammadsaedi/regex-pro/blob/main/script.js"&gt;structure which utilizes JSDOM&lt;/a&gt; to render the web app.  While the source code was simple and straightforward in its method of rendering a UI, it presented a unique challenge in developing an appropriate test environment, which needs to interact with and render the DOM elements. &lt;/p&gt;

&lt;p&gt;Investigating options for testing JSDOM with Jest was a surprisingly difficult task.  I found that there is limited documentation on the basics of writing tests for JSDOM, and what information I did find described many varied approaches to the problem.  I came to realize there are quite a few frameworks and tools addressing front-end JS testing, with widely varying methods. The documentation I found addressing this challenge was often beyond my level of comprehension and often covered much more complex cases than I was attempting.&lt;/p&gt;

&lt;p&gt;In the end, I found this article, &lt;a href="https://oliverjam.es/articles/frontend-testing-node-jsdom"&gt;Frontend testing in Node with jsdom&lt;/a&gt; by &lt;a href="https://oliverjam.es/"&gt;Oliver Jam&lt;/a&gt;, which provided a very simple entry point to understanding front-end testing for JSDOM.  In addition, it provided an example &lt;a href="https://github.com/oliverjam/frontend-testing-jsdom"&gt;repo&lt;/a&gt; which creates a simple but functional JSDOM test environment. Being able to work through the environment configuration on a fundamental level was extremely helpful in understanding the challenge I was facing and allowed me to make some progress towards my goal.&lt;/p&gt;

&lt;p&gt;What I was able to implement is undoubtedly the first iteration in configuring an appropriate test environment. It can be &lt;a href="https://github.com/bryce-seefieldt/regex-pro/tree/issue-33"&gt;reviewed here&lt;/a&gt;. &lt;br&gt;
I would really appreciate any feedback on what I have implemented and suggestions on streamlining the configuration and the tests I’ve written so far.  I still intend to work further to address code coverage, including more comprehensive integration testing and expanding the unit tests. This will require some further research on my part into testing user input cases. &lt;/p&gt;

&lt;p&gt;Some of my key takeaways from the process so far:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The RegEx Pro application has an &lt;a href="https://github.com/bryce-seefieldt/regex-pro/blob/issue-33/index.html"&gt;index.html&lt;/a&gt; entry point, which loads &lt;a href="https://github.com/bryce-seefieldt/regex-pro/blob/issue-33/script.js"&gt;script.js&lt;/a&gt; to render the landing page. The most challenging step in understanding the use of Jest for testing JSDOM elements was learning how to generate mock document objects which can be used to test the various JS modules independently.
&lt;/li&gt;
&lt;li&gt;  The approach I took to address the creation of JSDOM objects for independent unit tests required a number of dependencies. In the end the following dev-dependencies were required in &lt;a href="https://github.com/bryce-seefieldt/regex-pro/blob/issue-33/package.json"&gt;package.json&lt;/a&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "devDependencies": {
    "@babel/core": "^7.23.5",
    "@babel/preset-env": "^7.23.5",
    "babel-jest": "^29.7.0",
    "jest": "^29.7.0",
    "jest-environment-jsdom": "^29.7.0",
    "jsdom": "^23.0.1",
    "jsdom-global": "^3.0.2",
    "text-encoding": "^0.7.0"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
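With &lt;code&gt;jest-environment-jsdom&lt;/code&gt; installed, Jest still has to be told to use it for these tests. A minimal configuration sketch, assuming the common approach of configuring Jest directly in &lt;code&gt;package.json&lt;/code&gt; (the actual project may organize this differently):

```json
{
  "scripts": {
    "test": "jest"
  },
  "jest": {
    "testEnvironment": "jsdom"
  }
}
```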



&lt;ul&gt;
&lt;li&gt;  A major hurdle that took quite a bit of trial and error to solve was Jest producing the following error at runtime:
&lt;code&gt;SyntaxError: Cannot use import statement outside a module&lt;/code&gt;, 
which was related to importing modules into .test.js files. It was solved by installing Babel and including the following &lt;code&gt;.babelrc&lt;/code&gt; file in the project:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; {
    "presets": [
        "@babel/preset-env"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  As well, the &lt;a href="https://github.com/bryce-seefieldt/regex-pro/tree/issue-33/tests"&gt;test files&lt;/a&gt; utilize the following code to initialize a document object that can be called within the test functions to render the application's various JSDOM components:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { JSDOM } from "jsdom";
const dom = new JSDOM();
global.document = dom.window.document;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As my research into front-end testing continues, I have read about a number of other approaches and frameworks that may facilitate JSDOM testing, including the &lt;a href="https://www.selenium.dev/"&gt;Selenium&lt;/a&gt; framework, which may offer some more comprehensive tools for integration testing and CI workflows.  I will continue to post updates as I refine and expand my initial testing for this project.&lt;/p&gt;

</description>
      <category>frontend</category>
      <category>jest</category>
      <category>jsdom</category>
      <category>opensource</category>
    </item>
    <item>
      <title>CI using GitHub Actions</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Sat, 09 Dec 2023 20:40:58 +0000</pubDate>
      <link>https://dev.to/bseefieldt/ci-using-github-actions-5oj</link>
      <guid>https://dev.to/bseefieldt/ci-using-github-actions-5oj</guid>
      <description>&lt;p&gt;I set out to add further development and contribution tools to my &lt;a href="https://github.com/bryce-seefieldt/ez-txt2html"&gt;ez-txt2html&lt;/a&gt; converter.  Specifically, I wanted to introduce an initial Continuous Integration workflow.  For this, I used the “Python Application” workflow template in &lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; to create a CI testing workflow that builds a Python 3.12 environment, installs the project dependencies, runs Flake8 linting and Black formatting, and then runs PyTest using the test suites I’ve written previously.&lt;/p&gt;

&lt;p&gt;What was generated after confirming the details by which to build the workflow was a &lt;a href="https://github.com/bryce-seefieldt/ez-txt2html/blob/main/.github/workflows/python-app.yml"&gt;YAML file&lt;/a&gt; that defined:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  what actions should trigger the workflow to run.  In this case any push or PR to the main branch.&lt;/li&gt;
&lt;li&gt;  what OS the build environment should run on, defined as Ubuntu for this build.&lt;/li&gt;
&lt;li&gt;  what steps to execute in the build and workflow, including:

&lt;ul&gt;
&lt;li&gt;run the checkout v3 action.&lt;/li&gt;
&lt;li&gt;set the Python version to be installed.&lt;/li&gt;
&lt;li&gt;process to install project dependencies.&lt;/li&gt;
&lt;li&gt;run linting.&lt;/li&gt;
&lt;li&gt;and run tests, in this case using PyTest.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
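For reference, those pieces map onto the workflow YAML roughly as follows. This is a trimmed sketch based on the “Python Application” template, not my exact file:

```yaml
name: Python application

# Trigger on any push or PR targeting the main branch
on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 black pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: flake8 .
      - name: Check formatting with black
        run: black --check .
      - name: Test with pytest
        run: pytest
```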

&lt;p&gt;Setting up this basic CI workflow was a breeze, and a process I feel I could have used many times in past projects to streamline additions.   It is fair to say my workflow used strictly the defaults provided by the template, so there remains a great deal of exploration to be done in terms of utilizing the full capabilities of GitHub Actions. I am very excited to expand my understanding of how to harness the potential of CI and CD for greater efficiency and productivity.  Rest assured I will post some updates as I explore further.&lt;/p&gt;

&lt;p&gt;Over the last few weeks I have spent some time contributing testing functionality to a number of projects, which was met with drastically different results.  In an upcoming post, I will take a deep dive into setting up a Jest test environment and writing front-end tests for a JavaScript web application that I have contributed to in the past.  I faced many challenges in this process, which I look forward to sharing soon.&lt;/p&gt;

&lt;p&gt;On the other end of the frustration spectrum, I was able to look at what one of my peers has been doing with a similar static site generator (text to html converter) application. This is a &lt;a href="https://github.com/paulkim26/til-to-html"&gt;TypeScript project&lt;/a&gt; I have also contributed some code to in the past, &lt;a href="https://dev.to/bseefieldt/typescript-in-wsl-1hh3"&gt;where I discussed&lt;/a&gt; some of the initial challenges of configuring the project environment, which required me to set up Windows Subsystem for Linux.  This project utilizes Bun as an alternative to Node.js to build and run the application. While there was an initial implementation curve that took a lot of work to get through, my current task of contributing some additional tests to the existing test suites was seamless using the tools provided by Bun.  The simplicity and ease of Bun’s built-in test environment really made the process effortless.  Granted, I was just adding to an existing test setup, so I can’t speak to the process of getting Bun configured, but writing and running tests with the provided tools was very easy, particularly in comparison to configuring the PyTest environment for my own project.&lt;/p&gt;

&lt;p&gt;As I mentioned, I have a great deal of exploration to do in terms of code coverage for my projects and fully utilizing GitHub Actions, but what I have learned so far really feels like a major step towards understanding the full spectrum of developing and supporting open-source projects. &lt;/p&gt;

</description>
      <category>opensource</category>
      <category>githubactions</category>
      <category>pytest</category>
      <category>github</category>
    </item>
    <item>
      <title>PyTest unit testing now underway</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Sat, 11 Nov 2023 04:56:25 +0000</pubDate>
      <link>https://dev.to/bseefieldt/pytest-unit-testing-now-underway-3ccm</link>
      <guid>https://dev.to/bseefieldt/pytest-unit-testing-now-underway-3ccm</guid>
      <description>&lt;p&gt;I have now added some testing to my &lt;a href="https://github.com/bryce-seefieldt/ez-txt2html"&gt;ez-txt2html converter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I decided to use &lt;a href="https://pypi.org/project/pytest/"&gt;pytest&lt;/a&gt; as framework for running tests on this project.  It seems like a popular choice for many Python developers. As I built tests I also took advantage of some of the tools available in the built-in Python module unittest.  It’s &lt;code&gt;mock()&lt;/code&gt; function provides some helpful ways of providing mock data in order to run tests on certain functions independently, hard coding variables from calling functions.&lt;/p&gt;
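For illustration, here is a minimal sketch of that pattern. The function and file contents are hypothetical stand-ins, not the actual ez-txt2html code; the point is that mocking the built-in &lt;code&gt;open&lt;/code&gt; lets a file-reading function be tested without touching the disk:

```python
from unittest.mock import mock_open, patch

# Hypothetical helper resembling the converter's file-reading step
def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return f.read().splitlines()

# mock_open supplies the "file" content, so no real file is needed
def test_read_lines():
    fake = mock_open(read_data="title\n\nbody text\n")
    with patch("builtins.open", fake):
        assert read_lines("any.txt") == ["title", "", "body text"]
```

Running pytest on a file containing this test exercises the function in complete isolation from the filesystem.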

&lt;p&gt;Pytest was an easy install, as simple as &lt;code&gt;pip install pytest&lt;/code&gt;, and running it had a reasonable learning curve. I wrote test (.py) files in my src directory alongside my main module files, running them simply with the CLI command &lt;code&gt;pytest test_file.py&lt;/code&gt; and responding to any errors or warnings at runtime. I have some work to do to further refine my directory structure, but for now, for simplicity on a small program, this was my cleanest approach.&lt;/p&gt;

&lt;p&gt;The real work came from thinking through the types of tests needed for various processes and functions. There are a huge number of variables and outcomes to think through. I found that searching for testing examples that involve similar input/output types, a common intention, or a similar file and code structure is a good way to identify the priority types of tests. Ultimately it really does take a great deal of prioritization before jumping too heavily into the code.&lt;br&gt;
In writing tests, I learned a lot about how to isolate individual functions, and some of the more in-depth Python theory around modularity.  Thinking about a function being called and receiving explicit parameters when isolated from the main program causes one to look at how certain functions could be designed differently.  Also, thinking about how you can assert outcomes that fulfill a test's purpose can be very difficult, especially when a function interacts with global variables or doesn’t return an object or variable. &lt;/p&gt;

&lt;p&gt;While creating tests for functions involved in reading from a file and converting text content to html, I was able to take a much more detailed look at the related algorithms and identify some output formatting inconsistencies that would have been difficult to spot were I not going through the individual functions and trying to define what the expected output format should be.  Probably the biggest realization about writing tests was how important it is to understand, and be able to exactly define, the expected outcome/output of a program or function.&lt;/p&gt;
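As a concrete illustration of pinning down the expected output, here is a sketch. The function is a simplified, hypothetical stand-in for the kind of text-to-html unit in the converter, not the project's actual code:

```python
# Simplified stand-in for a text-to-html conversion function
def paragraphs_to_html(text):
    # Blank lines separate paragraphs; each paragraph becomes one <p> element
    blocks = [b.strip() for b in text.split("\n\n") if b.strip()]
    return "\n".join(f"<p>{b}</p>" for b in blocks)

def test_paragraphs_to_html():
    # The assertion forces an exact definition of the output format
    assert paragraphs_to_html("one\n\ntwo") == "<p>one</p>\n<p>two</p>"
    # Stray blank lines must not produce empty paragraphs
    assert paragraphs_to_html("\n\nonly\n\n") == "<p>only</p>"
```

Writing the assertion first is what surfaces the formatting inconsistencies: the test cannot pass until the expected output is defined exactly.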

&lt;p&gt;Working with test suites, as with many additions relating to accommodating future development and contributions, really causes you to reassess and tidy things like file structure, naming conventions, modularity and so on. I've struggled to a degree with various environment configuration and path-related challenges while trying to integrate more new tools, techniques, and workflows. It's a challenge, but it's rewarding when you work through it.&lt;/p&gt;

</description>
      <category>python</category>
      <category>opensource</category>
      <category>pytest</category>
      <category>todayilearned</category>
    </item>
    <item>
      <title>Format and Lint for Python Open-Source</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Mon, 06 Nov 2023 19:46:01 +0000</pubDate>
      <link>https://dev.to/bseefieldt/format-and-lint-for-python-open-source-moo</link>
      <guid>https://dev.to/bseefieldt/format-and-lint-for-python-open-source-moo</guid>
      <description>&lt;p&gt;This week I set out to make some changes to my source code environment to support contributors to the project.  The main goal was to add a formatter and a linter to the environment to support consistent code formatting and layout as other developers contribute code changes.&lt;/p&gt;

&lt;p&gt;To do this I worked with &lt;a href="https://pypi.org/project/black/"&gt;Black Formatter&lt;/a&gt; and &lt;a href="https://flake8.pycqa.org/en/latest/"&gt;Flake8 Linter&lt;/a&gt; as they were both tools I saw used frequently and with good feedback from Python developers. Both were very simple to install and implement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Black
&lt;/h2&gt;

&lt;p&gt;Black was a straightforward install, &lt;code&gt;pip install black&lt;/code&gt; from the CLI. &lt;br&gt;
Once installed, it can be run on a .py file or a set of directories containing files with the command &lt;code&gt;black .\src\.&lt;/code&gt; from my project base directory. This formats every file in the src directory and every child directory.  Including a filename instead of “.” works on just the targeted file.   The changes to the files are implemented and saved automatically.&lt;/p&gt;
&lt;h2&gt;
  
  
  Flake8
&lt;/h2&gt;

&lt;p&gt;Flake8 required the exact same process to install and a slightly different process to run. From the CLI, &lt;code&gt;python -m pip install flake8&lt;/code&gt; installs the tool into your local environment, or into any virtual environment active when the install command was executed (this applies to the Black installation above as well).&lt;br&gt;
Once installed, I ran &lt;code&gt;flake8 .\src\.&lt;/code&gt; to lint all .py files in my src directory and its child folders. The same path changes can be made to apply flake8 to just a single file. &lt;/p&gt;

&lt;p&gt;Rather than automatically editing and saving your files, flake8 run from the CLI provides a list of suggested changes for each file targeted, including the line and column number of the issue and a message describing the suggested edit.  These can be addressed in your editor, and flake8 run repeatedly until you have satisfied all the suggested edits.  If there is a particular issue you would like ignored when running the application, you can run &lt;code&gt;flake8 --extend-ignore &amp;lt;&amp;lt;Error Code&amp;gt;&amp;gt; &amp;lt;&amp;lt;path/to/files&amp;gt;&amp;gt;&lt;/code&gt; instead.&lt;/p&gt;
&lt;h2&gt;
  
  
  VSCode Integration
&lt;/h2&gt;

&lt;p&gt;Both of these tools offer VSCode extensions to simplify the process.  They can be found by opening the extensions sidebar with &lt;code&gt;Ctrl + Shift + X&lt;/code&gt; and searching “Black Formatter” and “Flake8”. Just install and/or enable the extension.  For Black, I chose to apply formatting automatically on save. This can be set up in VSCode Settings (&lt;code&gt;Ctrl + ,&lt;/code&gt;) by searching “format on save” and selecting the format on save checkbox, after which Black automatically makes format changes to your .py files each time they are saved.&lt;/p&gt;
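For anyone who prefers editing settings.json directly, the equivalent entries look something like this (assuming the ms-python “Black Formatter” extension; verify the formatter id against your installed extension):

```json
{
  "editor.formatOnSave": true,
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter"
  }
}
```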
&lt;h2&gt;
  
  
  Conflicts
&lt;/h2&gt;

&lt;p&gt;In addition to command line arguments that can be used to ignore or customize the format applied by both programs, you can also customize each application with settings in a pyproject.toml file.&lt;/p&gt;

&lt;p&gt;I was actually running into a conflict between the default settings of Black and Flake8 that when used together actually had a disagreement as to what style to adhere to.  Specifically the 2 tools enforced different defaults for maximum line length.  So as Flake8 made a recommendation to shorten a line based on a smaller maximum line length setting, Black would immediately change the edits made upon saving the file.  &lt;/p&gt;

&lt;p&gt;To solve this I chose to reduce the line-length default in Black to match that of Flake8.  I did this by simply adding the following code to my “pyproject.toml” file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[tool.black] 
line-length = 79
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;The format changes that Black made to my code were minor, although they did implement the consistency desired from its use. &lt;/p&gt;

&lt;p&gt;The majority of the changes made by Black were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Changing all single quotes to double quotes.&lt;/li&gt;
&lt;li&gt;  Adding an extra blank line between functions.&lt;/li&gt;
&lt;li&gt;  Correcting a few instances of incorrect indentation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Flake8 suggested a number of edits as well; some of those included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;missing whitespace around operator&lt;/li&gt;
&lt;li&gt;blank line contains whitespace&lt;/li&gt;
&lt;li&gt;line too long (91 &amp;gt; 79 characters)&lt;/li&gt;
&lt;li&gt;continuation line under-indented for visual indent&lt;/li&gt;
&lt;li&gt;missing whitespace after ',' or '('&lt;/li&gt;
&lt;li&gt;expected 2 blank lines, found 1&lt;/li&gt;
&lt;li&gt;too many blank lines (2)&lt;/li&gt;
&lt;li&gt;f-string is missing placeholders&lt;/li&gt;
&lt;li&gt;no newline at end of file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While not consequential to the functionality of the code, the changes enforced by these 2 applications add a great deal of conformity to all code, and can really ease the burden and concern one has when seeking outside contributions to a project.  It takes a great deal of the guesswork out of assessing and accepting edits made by others and certainly reduces the number of potential conflicts presented by git when attempting to merge the contributions.  This way everyone can focus more on the quality of code rather than style guide adherence.&lt;/p&gt;

&lt;p&gt;While all of this was very straightforward to implement, I also wanted to simplify and streamline the setup of these tools for contributors setting up the project for the first time, as well as a single command option for running the tools against edited files.  &lt;/p&gt;

&lt;p&gt;At the end of the day this was an easy task that I tackled by creating standalone Python scripts that run a series of executables.  For example, &lt;a href="https://github.com/bryce-seefieldt/ez-txt2html/blob/main/format.py"&gt;format.py&lt;/a&gt; was a simple way to run both Black and Flake8 on the entire source code.&lt;/p&gt;
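The idea behind such a script can be sketched as follows. This is a hypothetical simplification, not the actual format.py from the repo:

```python
import subprocess
import sys

# Tools to run over the source tree, in order: format first, then lint
COMMANDS = [
    [sys.executable, "-m", "black", "src"],
    [sys.executable, "-m", "flake8", "src"],
]

def run_all(commands):
    """Run each command in turn; True only if every tool exits cleanly."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in commands)
```

A small &lt;code&gt;__main__&lt;/code&gt; block calling &lt;code&gt;run_all(COMMANDS)&lt;/code&gt; then gives contributors a single &lt;code&gt;python format.py&lt;/code&gt; command to format and lint everything.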

&lt;p&gt;While this ended up being the easy option, the truth is that it took me days diving down a best-practices rabbit hole. Something about the prospect of preparing my project to facilitate ease of contribution drove me to get into some deep weeds about how best to set up Python projects for open source from the initial stages.  This fueled an exhausting dive into researching virtual environments, setuptools, packaging Python projects, and various other approaches to building open-source Python projects and providing intuitive build processes that simplify the setup, execution, and contribution process. &lt;/p&gt;

&lt;p&gt;Having just pulled myself out of that rabbit hole and settled on providing some simple scripts to install dependencies and run formatters, I think the various takeaways from this exhausting deep dive will have to wait for a future post, once I’ve had a little time to process and try out the many, varied methods out there. &lt;/p&gt;

</description>
      <category>python</category>
      <category>flake8</category>
      <category>black</category>
      <category>opensource</category>
    </item>
    <item>
      <title>HacktoberRest</title>
      <dc:creator>Bryce Seefieldt</dc:creator>
      <pubDate>Wed, 01 Nov 2023 23:14:04 +0000</pubDate>
      <link>https://dev.to/bseefieldt/hacktoberrest-4oi5</link>
      <guid>https://dev.to/bseefieldt/hacktoberrest-4oi5</guid>
      <description>&lt;p&gt;As a wrap up of my first go round with Hacktoberfest, I want to share a bit about my final contribution of the month as well as some take aways from the experience.&lt;/p&gt;

&lt;p&gt;One of the most interesting projects I came across this month was &lt;a href="https://github.com/neokd/NeoGPT"&gt;NeoGPT&lt;/a&gt;. It's a GPT-based application that is being built to converse with documents and videos.  While still in its infancy, the project has outlined a cool roadmap and has a very active base of contributors continuously expanding on its functionality.  The project appeals to my desire to learn how to work with AI and neural networks.  It is also at a development stage where it is not outside the reach of my comprehension. Icing on the cake being that it's Python-based, which is my sharpest tool at the moment. I see it as a decent project to stay tapped into and grow my skills as the application develops.&lt;/p&gt;

&lt;p&gt;The work associated with contributing to NeoGPT was a great progression from some of the first tasks I took on this month. Throughout Hacktober, I was able to contribute a minor change to a very big project (NodeJS), contribute some more substantial working code to a couple of small JS based apps and finally make a larger, more noteworthy contribution to a medium sized, but very active project.  All of which provided valuable experience. &lt;/p&gt;

&lt;p&gt;Having identified a few issues that felt within reach to tackle, I requested and was approved to &lt;a href="https://github.com/neokd/NeoGPT/issues/67"&gt;contribute a logging function&lt;/a&gt; to be written as part of the program's database builder.  While the issue was straightforward, there was a great deal of upfront investment in building the environment to run locally.  This introduced me to an in-depth exploration of various frameworks and toolkits, including Anaconda, PyTorch, and NVIDIA CUDA, just to name a few.  In all honesty, it was daunting at times. I was back and forth on how and where to set up the environment, toggling between the Windows and Linux setup options.  In the end, the functionality of the various project dependencies and add-ons was much more conducive to the Linux environment, which I learned the hard way at times, through significant trial and error.  Additionally, this project relies on building a local database as part of its engine, and is storage intensive as a result. Almost immediately I began reevaluating my hardware needs as I set my sights on more demanding projects in the future.&lt;/p&gt;

&lt;p&gt;The setup curve just to be able to contribute as well as the greater contribution process, reinforced some of the observations I’ve been making through exploration of open-source development over the last couple of months. Some of those being:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The importance of being able to find and research well-suited projects before getting too involved.&lt;/li&gt;
&lt;li&gt;The value of the community behind a given project and how the level of activity can really make a difference in how much you get from your involvement.&lt;/li&gt;
&lt;li&gt;There are several factors that need to be considered when looking at a potential project for the first time. An "easy" issue may prove more difficult than expected, depending on the project and its contributor community. &lt;/li&gt;
&lt;li&gt;On a personal level, I have gained insight into some technical requirements of efficiently contributing to open source.  For instance, I am really gravitating towards a robust and organized Linux environment as my preferred workspace.  My experience has been that many dependencies and procedures related to building projects from source just seem to be more functional and stable in the Linux OS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, some of the main take aways I have from my Hacktoberfest experience are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The Hacktoberfest label doesn’t necessarily make finding worthwhile projects any easier. If anything, I felt that it was overrepresented by projects created strictly for the fest that didn’t really offer the experience to work on tangible products with long-term potential. &lt;/li&gt;
&lt;li&gt;  “Good First Issue” was a great entry point into some manageable projects, although also heavily saturated with very small projects. I got the impression some people were just looking for help on their schoolwork.&lt;/li&gt;
&lt;li&gt;  Whether the project is actively posting new issues and merging PRs is an important factor, as is the variety and progression in the types of issues being posted (open and closed).&lt;/li&gt;
&lt;li&gt;  On steadily growing or larger scale projects, one should try to jump on issues that fit within their general skill set right away.   Opportunities to contribute to relevant and interesting projects at your perfect level of difficulty aren’t guaranteed.  Seizing the opportunities that do arise can be a great entry point to much more substantial contribution in the future. &lt;/li&gt;
&lt;li&gt;  Timing is crucial when seeking contribution opportunities.  To find great issues you need to engage in the search process regularly and move with some expedience when you find a decent prospect. There are many eager contributors out there, hungry to tackle the juiciest issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So just to wrap up my diatribe, this final project contribution was an experiential peak for me in terms of the type of project and how I was able to contribute.  While disheartened at times by challenges faced in the environment setup and code examination process, the investment had a payoff as I was able to tackle a challenging and rewarding contribution. &lt;/p&gt;

&lt;p&gt;The initial &lt;a href="https://github.com/neokd/NeoGPT/issues/67"&gt;logging issue&lt;/a&gt;  that I set out to address was hijacked by another PR (despite my being assigned the issue by the owner). That just reinforced the need to act swiftly and maintain strong communications.&lt;br&gt;&lt;br&gt;
Nonetheless, I was able to move onto &lt;a href="https://github.com/neokd/NeoGPT/issues/43"&gt;a new issue&lt;/a&gt; that challenged me to explore how to use shell scripts to simplify the very setup process that had tripped me up over the previous days.  I was able to develop a &lt;a href="https://github.com/neokd/NeoGPT/pull/88"&gt;bash script that streamlines the local application build&lt;/a&gt; for fellow users and contributors. While I'm increasingly confident with the CLI, and the bash shell specifically, I had yet to delve into the potential of automating and streamlining processes using scripts.  Tackling this opened a lot of avenues of interest for me and really helped grow my confidence.&lt;br&gt;&lt;br&gt;
As project contributors were also actively making updates, the process proved a great test of how to use GitHub to effectively manage my changes while keeping up with the project as it developed.  Another bit of positive reinforcement came as I was able to identify and rectify a small but impactful error introduced as other changes were merged.  As the error immediately impacted the current issue I was addressing, I was able to spot it in real time and offer a quick fix via a &lt;a href="https://github.com/neokd/NeoGPT/pull/86"&gt;separate PR&lt;/a&gt;, which was quickly accepted by the owner, who expressed appreciation for the catch. &lt;/p&gt;

&lt;p&gt;In the end the process helped me develop a good rapport with the project owner and led to a direct request to expand on my &lt;a href="https://github.com/neokd/NeoGPT/pull/88"&gt;shell script PR&lt;/a&gt;.  I accepted this request to add support for Python virtual environments. While I haven't yet been able to tackle it, I am excited by the invitation to contribute further.  This looks like a fun project that I can work on for my upcoming Open-Source Release 0.3 project, and beyond.&lt;/p&gt;

</description>
      <category>python</category>
      <category>pytorch</category>
      <category>gpt</category>
      <category>hacktoberfest</category>
    </item>
  </channel>
</rss>
