<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alik Khilazhev</title>
    <description>The latest articles on DEV Community by Alik Khilazhev (@alikhil).</description>
    <link>https://dev.to/alikhil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F247811%2F1c34568a-dc9f-422f-ac4e-c99e620b56fa.jpeg</url>
      <title>DEV Community: Alik Khilazhev</title>
      <link>https://dev.to/alikhil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alikhil"/>
    <language>en</language>
    <item>
      <title>Kubernetes In-Place Pod Resize</title>
      <dc:creator>Alik Khilazhev</dc:creator>
      <pubDate>Mon, 29 Dec 2025 09:00:00 +0000</pubDate>
      <link>https://dev.to/alikhil/kubernetes-in-place-pod-resize-454i</link>
      <guid>https://dev.to/alikhil/kubernetes-in-place-pod-resize-454i</guid>
      <description>&lt;p&gt;About six years ago, while operating a large Java-based platform in Kubernetes, I noticed a recurring problem: our services required significantly higher CPU and memory during application startup. Heavy use of Spring Beans and AutoConfiguration forced us to set inflated resource requests and limits just to survive bootstrap, even though those resources were mostly unused afterwards.&lt;/p&gt;

&lt;p&gt;This workaround never felt right. As an engineer, I wanted a solution that reflected the actual lifecycle of an application rather than its worst moment.&lt;/p&gt;

&lt;p&gt;I opened an &lt;a href="https://github.com/kubernetes/kubernetes/issues/83111" rel="noopener noreferrer"&gt;issue&lt;/a&gt; in the Kubernetes repository describing the problem and proposing an approach to adjust pod resources dynamically without restarts. The issue received little discussion but quietly accumulated interest over time (13 👍 reactions). Every few months, an automation bot attempted to mark it as stale, and every time, I removed the label. This went on for nearly six years...&lt;/p&gt;

&lt;p&gt;Until the release of Kubernetes 1.35, in which the In-Place Pod Resize feature was &lt;a href="https://kubernetes.io/blog/2025/12/19/kubernetes-v1-35-in-place-pod-resize-ga/" rel="noopener noreferrer"&gt;marked as stable&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What In-Place Pod Resize Brings
&lt;/h2&gt;

&lt;p&gt;In-Place Pod Resize allows Kubernetes to update CPU and memory requests and limits without restarting pods, whenever it is safe to do so. This significantly reduces unnecessary restarts caused by resource changes, leading to fewer disruptions and more reliable workloads.&lt;/p&gt;

&lt;p&gt;For applications whose resource needs evolve over time, especially after startup, this feature provides a long-missing building block.&lt;/p&gt;
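
&lt;p&gt;As a minimal sketch, a resize can be triggered manually through the pod's &lt;code&gt;resize&lt;/code&gt; subresource. The pod and container names below are hypothetical, and this assumes a recent kubectl against a cluster where the feature is enabled:&lt;/p&gt;

```shell
# Bump the CPU request of container "app" in pod "example-pod"
# without restarting it, via the "resize" subresource.
kubectl patch pod example-pod --subresource resize --type merge \
  -p '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"1500m"}}}]}}'
```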

&lt;h2&gt;
  
  
  Impact on VerticalPodAutoscaler
&lt;/h2&gt;

&lt;p&gt;The new &lt;code&gt;resizePolicy&lt;/code&gt; field is configured per container in the pod spec. While it is technically possible to change pod resources manually, doing so does not scale. In practice, this feature should be driven by a workload controller.&lt;/p&gt;
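
&lt;p&gt;A sketch of what a per-container &lt;code&gt;resizePolicy&lt;/code&gt; might look like (the container name and image are placeholders):&lt;/p&gt;

```yaml
containers:
  - name: app
    image: my-heavy-java-app:stable
    resizePolicy:
      # CPU can be resized in place without restarting the container.
      - resourceName: cpu
        restartPolicy: NotRequired
      # Memory changes restart this container (useful for runtimes that
      # cannot grow or shrink their heap limit at runtime).
      - resourceName: memory
        restartPolicy: RestartContainer
```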

&lt;p&gt;At the moment, the only controller that supports in-place pod resize is the Vertical Pod Autoscaler (VPA).&lt;/p&gt;

&lt;p&gt;Two enhancement proposals enable this behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/kubernetes/autoscaler/tree/455d29039bf6b1eb9f784f498f28769a8698bc21/vertical-pod-autoscaler/enhancements/4016-in-place-updates-support" rel="noopener noreferrer"&gt;AEP-4016: Support for in place updates in VPA&lt;/a&gt;, which introduces the &lt;code&gt;InPlaceOrRecreate&lt;/code&gt; update mode&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/enhancements/7862-cpu-startup-boost" rel="noopener noreferrer"&gt;AEP-7862: CPU Startup Boost&lt;/a&gt;, which temporarily boosts a pod by giving it more CPU during startup. This is conceptually similar to the approach proposed in my original issue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an example of a Deployment and a VPA using both AEP features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-heavy-java-app:stable&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1000m&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1024Mi&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2000m&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2048Mi&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;autoscaling.k8s.io/v1"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VerticalPodAutoscaler&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-vpa&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;targetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;apps/v1"&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
  &lt;span class="na"&gt;updatePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;updateMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;InPlaceOrRecreate"&lt;/span&gt;
  &lt;span class="na"&gt;resourcePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;containerPolicies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app"&lt;/span&gt;
        &lt;span class="na"&gt;minAllowed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;512Mi"&lt;/span&gt;
        &lt;span class="na"&gt;maxAllowed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000m"&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8192Mi"&lt;/span&gt;
        &lt;span class="c1"&gt;# The CPU boosted resources can go beyond maxAllowed.&lt;/span&gt;
        &lt;span class="na"&gt;startupBoost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Factor"&lt;/span&gt;
            &lt;span class="na"&gt;quantity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration, the pod's CPU requests and limits are doubled during startup. No resizing happens during the boost period.&lt;/p&gt;

&lt;p&gt;Once the pod reaches the &lt;code&gt;Ready&lt;/code&gt; state, the VPA controller scales CPU down to the currently recommended value.&lt;/p&gt;

&lt;p&gt;After that, VPA continues operating normally, with the key difference that resource updates are applied in place whenever possible.&lt;/p&gt;
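
&lt;p&gt;The progress of a resize is reflected in pod status. A sketch of how to inspect it (the pod name is hypothetical; this assumes a cluster recent enough to report the actuated resources and resize conditions):&lt;/p&gt;

```shell
# Resources currently actuated for the first container:
kubectl get pod example-abc123 \
  -o jsonpath='{.status.containerStatuses[0].resources}'

# Pending or in-progress resizes are surfaced as pod conditions
# (PodResizePending / PodResizeInProgress):
kubectl get pod example-abc123 -o jsonpath='{.status.conditions}'
```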

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;Does this feature fully solve the problem described above? Only partially.&lt;/p&gt;

&lt;p&gt;First, most application runtimes still impose fundamental constraints. Java and Python runtimes do not currently support resizing memory limits without a restart. This limitation exists outside of Kubernetes itself and is tracked in the OpenJDK project via &lt;a href="https://bugs.openjdk.org/browse/JDK-8359211" rel="noopener noreferrer"&gt;an open ticket&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz0qdilgn4tk1vvt6uno.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz0qdilgn4tk1vvt6uno.png" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Second, Kubernetes does not yet support decreasing memory limits, even with in-place Pod Resize enabled. This is a known limitation documented in the enhancement proposal for &lt;a href="https://github.com/kubernetes/enhancements/tree/758ea034908515a934af09d03a927b24186af04c/keps/sig-node/1287-in-place-update-pod-resources#memory-limit-decreases" rel="noopener noreferrer"&gt;memory limit decreases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As a result, while in-place Pod Resize effectively addresses CPU-related startup spikes, memory resizing remains an open problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;In-Place Pod Resize lays the foundation for new features like CPU Startup Boost and makes the VPA more reliable to use. While important gaps remain, such as &lt;a href="https://github.com/kubernetes/kubernetes/issues/135670" rel="noopener noreferrer"&gt;memory decrease support&lt;/a&gt; and a &lt;a href="https://github.com/kubernetes/kubernetes/issues/126891" rel="noopener noreferrer"&gt;scheduling race condition&lt;/a&gt;, this change represents a meaningful step forward.&lt;/p&gt;

&lt;p&gt;For workloads with distinct startup and steady-state phases, Kubernetes is finally beginning to model reality more closely.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>sre</category>
    </item>
    <item>
      <title>Contributing to Open Source: Why It Matters and How to Start</title>
      <dc:creator>Alik Khilazhev</dc:creator>
      <pubDate>Thu, 18 Dec 2025 09:53:48 +0000</pubDate>
      <link>https://dev.to/alikhil/contributing-to-open-source-why-it-matters-and-how-to-start-4m4i</link>
      <guid>https://dev.to/alikhil/contributing-to-open-source-why-it-matters-and-how-to-start-4m4i</guid>
      <description>&lt;p&gt;Whether you’re curious about open source or wondering how to make a meaningful impact, this post guides you through the process. You’ll learn why contributing is important, discover the different ways to get involved, and find practical steps to take your first contribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;First of all, why should you contribute to open source?&lt;/p&gt;

&lt;h3&gt;
  
  
  Giving back
&lt;/h3&gt;

&lt;p&gt;Everyone, from freelance engineers to Big Tech companies and even governments, uses Open Source Software (OSS). Some use it less, others more, but almost everyone depends on it in one way or another. Most of us are &lt;em&gt;consumers&lt;/em&gt; of open source.&lt;/p&gt;

&lt;p&gt;Contributing to OSS means giving something back. This is especially important given the many cases of projects (e.g., &lt;a href="https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/#history-and-challenges" rel="noopener noreferrer"&gt;nginx-ingress&lt;/a&gt;, &lt;a href="https://github.com/external-secrets/external-secrets/issues/5084" rel="noopener noreferrer"&gt;external-secrets&lt;/a&gt;) being deprecated due to maintainer burnout, lack of community support, or overwhelming workloads.&lt;/p&gt;

&lt;p&gt;It is true that some OSS projects are backed by large companies and maintained by engineers who are paid to work on them. However, roughly half of open source projects are still maintained by individuals, unpaid and in their spare time (&lt;a href="https://www.linuxfoundation.org/blog/open-source-maintainers-what-they-need-and-how-to-support-them" rel="noopener noreferrer"&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In this sense, contributing to OSS can be seen as a form of digital volunteering. Some companies (&lt;a href="https://www.linkedin.com/posts/matthieublumberg_when-running-a-platform-using-open-source-activity-7382301443698892800-3St-?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAABsF7nEBwwglPayi0aSCflAk2mD2nv-HVCA" rel="noopener noreferrer"&gt;Criteo&lt;/a&gt;, &lt;a href="https://www.futurice.com/blog/year-2015-in-company-sponsored-open-source" rel="noopener noreferrer"&gt;Futurice&lt;/a&gt;) even offer volunteer paid time off (VPTO) specifically for open-source contributions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Learning
&lt;/h3&gt;

&lt;p&gt;Another strong reason to contribute to open source is personal growth.&lt;/p&gt;

&lt;p&gt;Every contribution becomes a learning opportunity because it places you inside a real, production-grade codebase rather than a controlled tutorial environment. You learn how projects are structured, how architectural decisions are made, how backward compatibility is maintained, and how trade-offs are handled in practice. Often this means working with unfamiliar tools, languages, or ecosystems, which naturally expands your technical range.&lt;/p&gt;

&lt;p&gt;At the same time, open source strongly develops communication and collaboration skills. Issues and pull requests force you to articulate problems clearly, propose solutions in a way others can evaluate, and explain &lt;em&gt;why&lt;/em&gt; a particular approach makes sense. Feedback from maintainers and contributors exposes you to different perspectives and constraints, requiring you to adapt, clarify, and sometimes rethink your ideas.&lt;/p&gt;

&lt;p&gt;Because most collaboration happens asynchronously and in writing, you also improve your ability to communicate precisely and concisely. Over time, this structured, public collaboration sharpens how you discuss technical topics, handle reviews, and work effectively with distributed teams. These are the same skills required in modern engineering organizations, making open source a highly practical training ground.&lt;/p&gt;

&lt;h3&gt;
  
  
  Networking and professional reputation
&lt;/h3&gt;

&lt;p&gt;Open source contribution naturally leads to networking, even if you are not actively trying to “network.” By participating in issues, code reviews, and pull requests, you start interacting with maintainers and contributors from different companies, countries, and levels of seniority. Over time, these repeated interactions build familiarity and trust. People begin to recognize your name, your areas of expertise, and the quality of your work.&lt;/p&gt;

&lt;p&gt;Regular contributions can turn these lightweight interactions into professional relationships. Maintainers may invite you to collaborate more closely, grant you additional responsibilities, or even recommend you for roles on their teams. In many cases, job opportunities arise not from formal applications, but from someone already knowing how you work.&lt;/p&gt;

&lt;p&gt;Another important, and often underestimated, benefit of open source contribution is visibility. Most professional work is hidden behind NDAs and internal repositories, making it difficult to demonstrate your real impact. Open source work, on the other hand, is public by default. Your commits, pull requests, discussions, and design decisions are all visible and attributable to you.&lt;/p&gt;

&lt;p&gt;This public track record allows you to clearly show not only &lt;em&gt;what&lt;/em&gt; you built or improved, but also &lt;em&gt;how&lt;/em&gt; you collaborate, communicate, and respond to feedback. For recruiters and hiring managers, this is far more convincing than a list of skills on a résumé. In practice, open source contributions often function as a living portfolio and a long-term investment in your professional reputation.&lt;/p&gt;

&lt;h2&gt;
  
  
  How?
&lt;/h2&gt;

&lt;p&gt;Now that you understand why contributing matters, let’s look at how to get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Natural way
&lt;/h3&gt;

&lt;p&gt;You might think that contributing to open source requires skills you do not have, a brilliant idea for a new library, or deep expertise in a specific domain. These beliefs often make OSS contribution feel unreachable.&lt;/p&gt;

&lt;p&gt;That is not the case.&lt;/p&gt;

&lt;p&gt;You do not need a revolutionary idea or special credentials. If you are an engineer who already writes code, you are capable of contributing.&lt;/p&gt;

&lt;p&gt;Here is a simple approach: the next time you are solving a problem using an open source tool and notice that it does not work as expected (a bug) or lacks a feature you need, do not immediately abandon the tool. Use your skills, and an LLM if needed, to investigate and try to fix the issue.&lt;/p&gt;

&lt;p&gt;If you succeed, open a pull request to the upstream repository. If you do not, create an issue and share your findings. That is still a contribution, and you will have learned something in the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Good first issue
&lt;/h3&gt;

&lt;p&gt;If everything you use works perfectly and you do not notice any gaps, you can take a more deliberate approach.&lt;/p&gt;

&lt;p&gt;Make a list of projects you like, use, or want to learn more about. Browse their open issues and look for ones labeled “good first issue” or similar. Pick something that matches your current skill level and try to tackle it.&lt;/p&gt;

&lt;p&gt;If your list is short or you cannot find suitable issues, there are also curated lists of projects actively looking for contributors.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://goodfirstissue.dev/" rel="noopener noreferrer"&gt;https://goodfirstissue.dev/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://forgoodfirstissue.github.com/" rel="noopener noreferrer"&gt;https://forgoodfirstissue.github.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://goodfirstissues.com/" rel="noopener noreferrer"&gt;https://goodfirstissues.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
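
&lt;p&gt;If you prefer the command line, GitHub's own CLI can run a similar search. A sketch, assuming &lt;code&gt;gh&lt;/code&gt; is installed and authenticated (the language filter is just an example):&lt;/p&gt;

```shell
# Find open issues labeled "good first issue" in Go projects.
gh search issues --label "good first issue" --state open --language go --limit 10
```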

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Open source is great, but keeping it that way requires people to contribute.&lt;br&gt;
It is not hard or unreachable. Anyone can do it, and the community needs more people like you.&lt;br&gt;
Start small, pick a project you love, and take your first step.&lt;/p&gt;

</description>
      <category>opensource</category>
    </item>
  </channel>
</rss>
