DEV Community

ElshadHu

I Built a Tool to Stop Wasting Time on Toxic Open Source Projects

The Motivation

After contributing to several open source projects, I realized some of them have serious issues. Many maintainers don't provide help when you submit pull requests, and you end up wrestling with automated code reviews just to show one commit on your GitHub profile. So this time, instead of building more personal projects (which I've written about in my blogs like Building My Own HTTP Server in TypeScript and Building a CLI Tool That Made My Life Easier), contributing to random open source repositories, or grinding LeetCode problems, I wanted to create something impactful for the open source community. I decided to build repo-health (live demo) to help contributors choose projects they can successfully contribute to, and learn what works and what doesn't in open source collaboration.

My Tech Stack

  • Frontend: Next.js 16, React 19, Chakra UI
  • Backend: tRPC, Octokit, Zod
  • Data: MySQL (Prisma), Redis

I chose this stack to get familiar with current industry-standard tools and see how they work together in a real application (at the end of the day, they are just tools to build something cool).

First Challenge - What am I building?

When I started, I was just displaying some data from GitHub and thinking it was cool, until I showed it to my friends and college students. I realized that no matter how much time you spend building one feature or fixing one big bug, it doesn't matter if it doesn't solve a real-world problem. Before writing anything, it's better to sit down and think about why it needs to be written. So I challenged myself to pour all my effort into this product within two weeks.

Narrowing the Project Scope

I decided to focus on helping open source contributors with issues I've personally experienced and seen my classmates face in college: toxic communication environments on GitHub and wrestling with automated code reviews without proper guidance from maintainers.


First Feature: Overall Score

My system uses a Hybrid Approach that combines a deterministic formula grounded in industry standards with a qualitative Language Model Judge to account for real-world context.

The Algorithm (0-100 Score)

The base health score is calculated using a custom weighted average that I designed, inspired by standardized CHAOSS Metrics. I tuned the weights myself based on what I believe indicates a healthy modern project:

Score = (0.3 × Activity) + (0.25 × Maintenance) + (0.2 × Community) + (0.25 × Docs)

  • Activity (30%): Frequency of commits + Recency of updates + Unique authors.
  • Maintenance (25%): Issue response time + Open issue ratio + Repository age.
  • Community (20%): Logarithmic scale of Stars & Forks.
  • Documentation (25%): Existence of README, LICENSE, and CONTRIBUTING files.
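As a minimal sketch (the component names and the assumption that each sub-score is already normalized to 0-100 are mine; only the weights come from the formula above), the weighted average could look like:

```typescript
// Minimal sketch of the weighted health score. Component names and the
// 0-100 normalization of each sub-score are illustrative assumptions;
// the weights match the formula in the post.
type ComponentScores = {
  activity: number;    // 0-100
  maintenance: number; // 0-100
  community: number;   // 0-100
  docs: number;        // 0-100
};

function baseHealthScore(s: ComponentScores): number {
  const score =
    0.3 * s.activity + 0.25 * s.maintenance + 0.2 * s.community + 0.25 * s.docs;
  return Math.round(score);
}
```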

The Language Model Adjustment

Standard formulas often misjudge "Feature-Complete" projects as "Dead." To solve this, I added a Judge Layer using a language model.

My Contribution:

I implemented a secondary logic layer where a language model analyzes the repository's purpose (through README content and file structure). I explicitly allow the model to override the algorithmic score by ±20 points if it detects that the metrics are misleading. This was my addition to bridge the gap between raw numbers and real-world context. For the MVP (minimum viable product), I'm currently using GPT-4 Mini due to its cost-effectiveness and fast response times. This allows me to validate the approach before potentially scaling to other models like Claude Sonnet or coming up with stronger ideas (I expect your ideas :)).

  • Example: A stable utility library with 0 commits in 6 months.
  • Algorithm: Penalizes it as "Old/Abandoned."
  • Language Model Judge: Recognizes it as "Completed/Stable" and awards a +20 Stability Bonus.
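Applying that override can be sketched as a simple clamp (the function name is mine, not from the codebase; only the ±20 range and 0-100 bounds come from the post):

```typescript
// Sketch: apply the judge's adjustment, clamped to the ±20 range the
// system allows, and keep the final score within 0-100.
function applyJudgeAdjustment(baseScore: number, adjustment: number): number {
  const clamped = Math.max(-20, Math.min(20, adjustment));
  return Math.max(0, Math.min(100, baseScore + clamped));
}
```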

Implementation Snippet:

// I feed the calculated score into the AI prompt and ask for an adjustment:

prompt += `
  "scoreInsights": {
    "adjustment": {
       "shouldAdjust": true, 
       "amount": 20, // Range: -20 to +20
       "reason": "This is a stable utility library in maintenance mode. Low activity is expected and healthy.", 
       "confidence": "high"
    }
  }
`;

Checking PR Metrics

I built the PR Metrics Analysis to solve the lack of communication in open source. Before contributing, you need to know:

  1. Speed: Is the average merge time hours or months?
  2. Humanity: Are you dealing with real people or just fighting bot reviews?
  3. Growth: Do new contributors actually stick around?

1. Handling Data Efficiently (Backend Concurrency)

To get these stats fast, I couldn't fetch everything one by one. I used Promise.all to fetch Open PRs, Closed PRs, and Template checks in parallel, cutting load time significantly. I dove into this topic in my HTTP server blog post about the event loop: here, the longest operation defines the total execution time.

// Efficiently fetching Open PRs, Closed PRs, and Template checks simultaneously
const [openPRs, closedPRs, template] = await Promise.all([
  fetchPRs(octokit, { owner, repo, state: "open" }),
  fetchPRs(octokit, { owner, repo, state: "closed" }),
  checkPRTemplate(octokit, { owner, repo }),
]);

Keeping contributors around matters more than total count. I used a Sankey Diagram to visualize the flow from "First-time" to "Core Team," making it easy to see if contributors stay or leave immediately.

// Visualizing the contributor flow
const data = {
  nodes: [
    { id: "First PR", color: "#58a6ff" },
    { id: "2nd Contribution", color: "#3fb950" },
    { id: "Regular (3-9)", color: "#a371f7" },
    { id: "Core Team (10+)", color: "#f0883e" },
  ],
  links: [
    {
      source: "First PR",
      target: "2nd Contribution",
      value: funnel.secondContribution + funnel.regular + funnel.coreTeam,
    },
    // ... logic to map flows for Regular and Core contributors
  ].filter((link) => link.value > 0),
};
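A rough sketch of how the `funnel` buckets used above could be derived (the thresholds mirror the Sankey node labels; deriving the counts from a per-author PR map is my assumption):

```typescript
// Sketch: bucket contributors by how many PRs they have made. Thresholds
// mirror the Sankey node labels (2nd, 3-9 Regular, 10+ Core Team); the
// per-author counting approach is an assumption.
function buildFunnel(prCountByAuthor: Map<string, number>) {
  const funnel = { firstTimers: 0, secondContribution: 0, regular: 0, coreTeam: 0 };
  for (const count of prCountByAuthor.values()) {
    if (count >= 10) funnel.coreTeam++;
    else if (count >= 3) funnel.regular++;
    else if (count === 2) funnel.secondContribution++;
    else if (count === 1) funnel.firstTimers++;
  }
  return funnel;
}
```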

Where I Made the Wrong Choice: The Security Scanner

I initially built a full Secrets Detection feature to catch exposed API keys, inspired by Gitleaks and TruffleHog.

How it worked:

  • Regex Pattern Matching: I used ~22 industry-standard patterns to catch known secrets (AWS keys, GitHub tokens, Stripe keys).
  • Randomness Detection: I implemented a mathematical check to detect highly random strings that "look" like secrets even if they don't match a pattern.
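The post doesn't name the exact formula, but the classic way to score randomness, used by tools like TruffleHog, is Shannon entropy. A sketch, with illustrative thresholds:

```typescript
// Sketch of an entropy-based randomness check. The length and 4.0-bit
// thresholds are illustrative values, not the project's actual ones.
function shannonEntropy(str: string): number {
  const freq = new Map<string, number>();
  for (const ch of str) freq.set(ch, (freq.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const count of freq.values()) {
    const p = count / str.length;
    entropy -= p * Math.log2(p); // bits of information per character
  }
  return entropy;
}

function looksLikeSecret(token: string): boolean {
  return token.length >= 20 && shannonEntropy(token) > 4.0;
}
```

A string of repeated characters scores 0 bits, while a random-looking token approaches log2 of its alphabet size, which is what makes leaked keys stand out.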

Why I removed it:

While building a security scanner was a great engineering challenge, I realized it had drifted from my core mission. I cut the feature to keep the project focused on community metrics rather than security auditing. Believe me, deleting it was painful, but that's just the way it is. Software engineering is not about solving hard problems; it is about solving real, existing problems. Thanks to my friends' wake-up calls, I opened my eyes and saw where I was wasting my time.


Intelligent Issue Analysis

Analyzing Issues is the most effective way to understand a project's activity. I implemented several specific metrics to provide real insight to contributors:

  • Average Close Time: This measures the project's true speed. A repository with many open issues can still be healthy if the average close time is short (e.g., 2 days vs. 6 months). I track both the Average and the Median to filter out outliers (like issues that took 2 years to close).

  • Hot Issues: To help contributors find active discussions, I use a custom algorithm that prioritizes recent updates (last 48h), high engagement (comments/reactions), and security-related keywords.

  • Hidden Gems: This highlights "Old" but "High Impact" issues (like ignored feature requests). These are often ideal first contributions because they provide value without the conflict of highly active discussions.

  • Crackability Score: A calculated difficulty rating (0-100) based on documentation quality, file scope, and testing requirements. This filters complex issue lists into tasks that are feasible for new contributors to complete.
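For the average-vs-median point above, a sketch (close times in days; the helper names are mine):

```typescript
// Sketch: average vs. median close time in days. The median ignores
// outliers such as an issue that sat open for two years.
function averageCloseDays(days: number[]): number {
  return days.reduce((a, b) => a + b, 0) / days.length;
}

function medianCloseDays(days: number[]): number {
  const sorted = [...days].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}
```

With close times of [1, 2, 3, 730] days, the average (184) screams "slow" while the median (2.5) tells the real story.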


Project Overview & File-Issue Mapping

I have never been a frontend guy, but this journey pushed me to dive in. One problem I wanted to solve was reducing the complexity of exploring a new project. When you land on a repository with 500 files, where do you even start?

My Solution: Language Model-Powered Structure Analysis

I built a system that recursively fetches the entire file tree and feeds the structure to an LLM. The LLM then tells you the entry points, key files, and which folders are responsible for which features. This way, the LLM gives you a proper project tour.

// Recursively fetch the entire file tree from GitHub
const { data } = await octokit.git.getTree({
  owner,
  repo,
  tree_sha: "HEAD",
  recursive: "true",  // Fetches entire tree structure at once
});

File-Issue Mapping: Connecting Problems to Code

On top of that, I added File-Issue Mapping. Before reaching the LLM, I use regex to scan all issue descriptions for file paths. If an issue mentions src/components/Button.tsx, I link that issue directly to that file in the overview. This way, a user can click on a file and immediately see if this file has any open issues and whether this is a single-file fix or if the issue affects multiple files.

// Extract file paths mentioned in issue text using regex
const FILE_PATTERN = /[\w\-\/\.]+\.(ts|tsx|js|jsx|py|go|rs|java|cpp|c)/gi;
function extractFilePaths(text: string): string[] {
  const matches = text.match(FILE_PATTERN) || [];
  return [...new Set(matches)]; // Deduplicate
}

Project Tree Visualization (Frontend)

For drawing the project structure, I got inspiration from the repo-visualizer project. On the frontend, I implemented a recursive function that builds a hierarchy from the flat file list. This function traverses each file path, splits it by /, and creates nested parent/child relationships to form a tree.

// Recursively build hierarchy from flat file paths
function buildHierarchy(files: FileNode[], repoName: string, maxDepth = 3): HierarchyNode {
  const root: HierarchyNode = { name: repoName, path: "", children: [] };
  files.forEach((file) => {
    const parts = file.path.split("/");
    let current = root;
    // Traverse and build tree structure
    parts.slice(0, maxDepth + 1).forEach((part, index) => {
      let child = current.children?.find((c) => c.name === part);
      if (!child) {
        // Track the cumulative path so each node knows where it lives
        child = { name: part, path: parts.slice(0, index + 1).join("/"), children: [] };
        current.children!.push(child);
      }
      }
      current = child;
    });
  });
  return root;
}

I also added collision reduction by limiting depth and file count. This keeps the visualization readable even for large repositories.

Known Limitation: I tried to add zoom and pan functionality, but the sensitivity was off and it broke normal scrolling (PRs are welcome). I decided to keep the visualization simple and stable rather than ship a broken interactive version. This feature needs more polish in future iterations.


Activity Pattern Detection

As you know, especially during Hacktoberfest, some people make commits and PRs just for the sake of doing it. Spam contributions, bulk deletions, and suspicious patterns are everywhere. To detect these activities, I built a Pattern Detection system based on commit metrics from the GitHub API.

What is a Suspicious Pattern?

A suspicious pattern is any commit activity that looks very different from normal development work. I track several types:

Mass Deletion Detection

"Deletion Rate" measures the ratio of code deleted vs total changes. A commit that deletes 90% of what it touches with 100+ lines removed is suspicious. It could be a cleanup, or it could be vandalism.

// Detect unusual deletion patterns
function detectChurnAnomalies(commits: CommitWithStats[]): PatternAnomaly[] {
  const anomalies: PatternAnomaly[] = [];
  for (const commit of commits) {
    const total = commit.additions + commit.deletions;
    if (total === 0) continue; // skip empty commits to avoid dividing by zero
    const churnRatio = commit.deletions / total;

    // Flag commits that delete >80% of touched code
    if (churnRatio > 0.8 && commit.deletions > 100) {
      anomalies.push({
        type: "churn",
        severity: churnRatio > 0.9 ? "critical" : "warning",
        description: `Deleted ${Math.round(churnRatio * 100)}% of code (${commit.deletions} lines)`,
      });
    }
  }
  return anomalies;
}

Rapid-Fire Commits Detection

This catches contribution bombarding: someone making five or more commits in under ten minutes. Real development doesn't work that way.

// Detect rapid-fire commits (likely spam or farming)
function detectBurstActivity(commits: CommitWithStats[]): PatternAnomaly[] {
  const anomalies: PatternAnomaly[] = [];
  const sorted = [...commits].sort(
    (a, b) => new Date(a.date).getTime() - new Date(b.date).getTime()
  );
  for (let i = 0; i < sorted.length - 4; i++) {
    const windowStart = new Date(sorted[i].date).getTime();
    const windowEnd = new Date(sorted[i + 4].date).getTime();
    const diffMinutes = (windowEnd - windowStart) / (1000 * 60);

    // 5+ commits in under 10 minutes = suspicious
    if (diffMinutes <= 10) {
      // Count every commit that falls inside this 10-minute window
      let count = 5;
      while (
        i + count < sorted.length &&
        new Date(sorted[i + count].date).getTime() - windowStart <= 10 * 60 * 1000
      ) {
        count++;
      }
      anomalies.push({
        type: "velocity",
        severity: count > 10 ? "critical" : "warning",
        description: `Burst: ${count} commits in ${Math.round(diffMinutes)} minutes`,
      });
    }
  }
  return anomalies;
}

Risk Grades

Grade | Score  | Meaning
A     | 0-10   | Normal activity
B     | 11-30  | Minor anomalies
C     | 31-50  | Review recommended
D     | 51-70  | Suspicious
F     | 71-100 | Critical review
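The grade mapping is straightforward thresholding; a sketch (the function name is mine):

```typescript
// Sketch: map a 0-100 anomaly score to the risk grades in the table above.
function riskGrade(score: number): "A" | "B" | "C" | "D" | "F" {
  if (score <= 10) return "A";
  if (score <= 30) return "B";
  if (score <= 50) return "C";
  if (score <= 70) return "D";
  return "F";
}
```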

Pivot: Checking from Maintainer's Perspective

When I started this project, I was only thinking about the contributor's perspective, but reality hit hard once I read blog posts and articles written by maintainers.

From these blogs I understood how wild open source can be. Users and companies ask for features that don't suit the project's goals, and big companies use open source projects without sponsoring them. There's also toxicity from users: people who just add their name to the README and feel proud, even though that's not a real contribution. So I decided to look at the bigger picture and examine projects from the maintainer's perspective as well.

Contribution Insights

I built a feature that analyzes rejected PRs and shows why they failed, helping future contributors avoid the same mistakes.

Spam Detection

First, I filter out obvious spam PRs that just add a name to the README:

const SPAM_TITLE_PATTERNS = [
  /add(ed|ing)?\s+(my\s+)?name/i,
  /update(d)?\s+readme/i,
  /hacktoberfest/i,
];

function detectSpam(
  pr: { title: string },
  files: { filename: string; additions: number }[]
): { isSpam: boolean; reason: string } {
  if (SPAM_TITLE_PATTERNS.some((p) => p.test(pr.title))) {
    return { isSpam: true, reason: "Spammy PR title" };
  }

  const isReadmeOnly = files.length === 1 &&
    files[0].filename.toLowerCase().includes("readme");

  if (isReadmeOnly && files[0].additions < 5) {
    return { isSpam: true, reason: "Trivial README change" };
  }
  return { isSpam: false, reason: "" };
}

Automated Failure Analysis

For legitimate rejected PRs, I send the code diff and reviewer comments to a language model. It categorizes each failure:

type PitfallAnalysis = {
  prNumber: number;
  mistake: string;
  reviewFeedback: string;
  advice: string;
  category: "tests" | "style" | "scope" | "setup" | "breaking" | "docs";
};
Category | Meaning
tests    | Missing or broken tests
style    | Code formatting violations
scope    | Change too large or out of scope
setup    | Build/environment issues
breaking | Introduced breaking changes
docs     | Missing documentation

This turns rejected PRs into a learning resource for the community.


Weird Bugs

Cache Security Vulnerability

I wasn't familiar with the stack when I started, and I made a mistake by using the same cache key for both public and private repositories. The cache was making the user experience much faster and smoother, so I was feeling proud of myself until I saw my friend's private project showing up in my account.

Out of curiosity, I asked my friend for their private repo's name, and there it was in the fuzzy search and analysis. Boom! Of course, GitHub doesn't let you visit someone else's private repo, but this was still a security vulnerability: I wasn't creating different cache keys for each user.

The Fix: Token-Based Cache Isolation

I implemented a function that creates a unique hash from the user's access token:

import crypto from "crypto";

export function getTokenHash(token?: string | null): string {
  if (!token) return "public";
  return crypto.createHash("sha256").update(token).digest("hex").slice(0, 8);
}

How It's Used

Every cache key now includes the token hash to isolate private repo data per user:

const tokenHash = getTokenHash(accessToken);
const cacheKey = `repo:info:${owner}:${repo}:${tokenHash}`;
Scenario       | Token Hash | Cache Key Example
Public repo    | public     | repo:info:facebook:react:public
User A private | k3m7p2q9   | repo:info:userA:secret:k3m7p2q9
User B private | x4y9z2a5   | repo:info:userA:secret:x4y9z2a5

This ensures User B can never see User A's cached private repo data.

Future Improvement: Cache Manager Class

Right now I'm calling getTokenHash() in every service file. I'm planning to create a centralized CacheManager class to handle this consistently.
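A sketch of what that could look like (the class shape and method names are my assumptions; getTokenHash is the function from the fix above, repeated so the sketch is self-contained):

```typescript
import crypto from "crypto";

// From the fix above: hash the token so it never appears in a key.
function getTokenHash(token?: string | null): string {
  if (!token) return "public";
  return crypto.createHash("sha256").update(token).digest("hex").slice(0, 8);
}

// Sketch of the planned CacheManager: one place that owns key
// construction so token isolation can't be forgotten in a service file.
class CacheManager {
  key(parts: string[], token?: string | null): string {
    return [...parts, getTokenHash(token)].join(":");
  }
}
```

Services would then build keys only through `cacheManager.key(["repo", "info", owner, repo], accessToken)` instead of concatenating strings by hand.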

React Hydration Mismatch

Another issue I faced was the infamous React hydration error. The sign-in button was causing a mismatch between what the server rendered and what the client expected.

The Problem

I was using useSession() but not checking the status properly:

// Before: Not checking status
const { data: session } = useSession();

// This caused hydration mismatch because:
// - Server: session is undefined → renders "Sign In" button
// - Client: session loads → renders user avatar
// React panics because the HTML doesn't match

The Fix

I added a loading state that renders the same placeholder on both server and client:

// After: Checking status properly
const { data: session, status } = useSession();

// In the JSX:
{status === "loading" ? (
  // Loading state - same on server and client
  <Box
    w="100px"
    h="32px"
    bg="#21262d"
    borderRadius="md"
    opacity={0.5}
  />
) : session?.user ? (
  // User is logged in - show avatar
  <UserMenu />
) : (
  // Not logged in - show sign in button
  <SignInButton />
)}

Why This Works

State      | Server Renders | Client First Render        | Match?
Loading    | Placeholder    | Placeholder                | Yes
Logged In  | Placeholder    | Placeholder (then Avatar)  | Yes
Logged Out | Placeholder    | Placeholder (then Sign In) | Yes

The important point here: during initial hydration, both server and client render the same placeholder. Only after hydration completes does the client update to show the actual state.

Recommended Resource

I highly recommend reading The Perils of Rehydration. This article helped me understand Next.js server-side rendering and React hydration issues properly.


What I'm Planning Next

I realized that in this project I need to think about both the contributor's and the maintainer's perspectives to create a proper product. So, my future ideas are:

  • Replace Recent Commits with Significant Commits: Instead of showing all recent commits, I want to highlight commits that made meaningful changes to the project. This means filtering out trivial updates (like typo fixes or formatting changes) and showing commits that added features, fixed bugs, or made architectural improvements.
  • Check if feature requests are appropriate for project scope: Help maintainers identify feature requests that don't align with the project's goals.
  • Promote funding platforms: Highlight GitHub Sponsors, Open Collective, and Buy Me a Coffee for open source projects (maintainers deserve it, especially if we use their projects extensively).
  • Show project lifecycle status: Display whether a project is Active, in Maintenance Mode, Archived, Company-backed, or maintained by a Solo Developer.
  • Show common setup mistakes: Extract patterns from CONTRIBUTING.md and failed CI builds to help contributors avoid common errors. I'm also planning to remove my dependency section because it's outside this scope, even though I spent a lot of time building it and relating PRs to dependency vulnerabilities.
  • Dropdown with 4 levels: First Contributor, Beginner, Expert at Stack, I am Cooked
  • Add comprehensive testing: Implement unit tests, integration tests, and end-to-end tests to ensure the platform's reliability and make it easier for contributors to add features confidently.

Conclusion

I started this project two weeks ago, and if you visit my GitHub you can see it is being built with pain :). So, again, this project does not contain the cleanest code I have ever written. I wanted to build it and ship quickly to show you my idea and where I'm heading. At the end of the day, code speaks louder than words. By the way, I used an LLM to help with repetitive tasks that I already knew how to write or could find on StackOverflow; I wouldn't have produced this much code in such a short time otherwise. But ideas can never be created by an LLM, and that is where I need your help and your vision. Ideas never come fully formed; most of the time they are ill-formed. Let's make open source safer and better by improving this project together. Right now, the project has many areas that need improvement, but with the help of the open source community we can build something that saves time for both maintainers and contributors. Lastly, you can follow me on GitHub, and if you star repo-health, that would be amazing. You can contact me on Discord, my username is elshad_02838.

Top comments (20)

gesslar

This was a very long read and I admit I did not read it all. I read a lot of it though, the parts about patterns were a delight.

I make tonnes of shit, but nobody knows about it, so I don't have to deal with issues/PRs/community and I typically disable those features on my projects.

But, I release under the Unlicense and am consequently very unbothered. I suppose that might make me and my projects toxic, idk. Maybe. I am a little territorial. I'm not sure how I would feel if someone were to PR one of my things. But, being licensed (as it were) as they are, anybody is free to take it and do what they want; my level of fucks could not be lower. 😇

That said, I'm not generally into self-promotion, but I see that you're doing pipeline and concurrency things, and thought maybe you might want to peek at one of my projects. I'm right now in the process of evaluating how I'm going to make it work in browsers, but server-side works great (for me). Take a peek or not at Actioneer.

Great article on solving friction in the community. 🤗

ElshadHu

Thanks for the thorough comment! I really appreciate you taking the time. Your Unlicense approach is a valid way to handle it, just a different philosophy. I took a quick glance at Actioneer, and it looks like some cool stuff! 🔥 I will definitely dive deeper into the pipeline/concurrency implementation. Thanks for sharing your project 🙌.

gesslar

Ha! At least you get reads. I have two articles and a total of (checks again) 23 combined. 😆

ElshadHu

I will take a look at your articles, and I’m sure they’re great! Sometimes it just takes time for content to get discovered. Keep writing😅

Peter Kovacs

Thank you for the write-up. Interesting thoughts. The project I am working on is not your kind of fish. That's fine. I just hope I don't now have to argue with people who think my project is not only dead but toxic too... 😂 that would be very annoying.
I would like to add some thoughts, I hope they add some value.

Is a high-activity project good or bad?
I am not sure a project with around 400 concurrent discussions and close times under 2 days is something you can easily contribute to. It might pressure you into a higher commitment to the project. Or am I wrong?

I wonder how you want to measure project complexity. Some projects are complex by nature, through historical aspects or other reasons, and not simply accessible. I miss a category for that in your assessment.
Maybe a difficulty rating is also advisable?

What do you assume happens if new contributors do not stick around? That's not clear to me.

Have you thought about multi repository projects?

How do you measure project structures? I mean, you have some points there, like bots, static, or human projects. Some have mentoring programs or other gatekeepers. Others might require you to bring the motivation, and they provide help if you ask the "right" question.
(By right I mean a question that someone can answer.)

What I really like is the plan to go for significant commits. I once had this (stupid, toxic) discussion about why I commit to a project when someone else had many more commits and I had almost none. The number of commits just says nothing. And I like the other roadmap points, keep it flowing. 😊

I think the approach really could help benchmark how attractive projects are, and it could be a tool that gives insights into where to take measurements.

Thanks for the article, and I will keep an eye on your work. Maybe I find the time to play around with your idea. At least I like it.

ElshadHu

Thanks for the thoughtful comment 🙏. On high activity, you're right: 400 discussions with 2-day closes might actually feel overwhelming rather than healthy. I didn't think about that pressure side. That is something I need to reconsider. On complexity, I don't have a good way to measure it yet. File count and folder depth don't capture history or domain knowledge. A difficulty rating makes sense though, something like beginner-friendly or requires domain expertise.

On contributors leaving, I was assuming it's bad, but it's fair that this wasn't clear; sometimes it's just the nature of the project. People can contribute once and go. Overall, though, it shows whether people keep contributing or leave after their first PR. On multi-repo and mentoring structures, I haven't tackled these yet. Some projects have mentorship programs; others expect you to figure things out. It might be hard to detect, but it's worth surfacing. This is a gap I need to fill. On significant commits, I'm glad you liked the idea 🙂. I've had similar discussions before. Commit count means nothing; quality matters more than quantity.

These are exactly the gaps I need help seeing. My idea started messy, and feedback like this helps me shape it into something fully formed.

Viney Rawat

I'm gonna test your tool with your tool

ElshadHu

😂😂😂

Fidan Mammadova

👏🏻👏🏻👏🏻

ElshadHu

Thanks 🙌.

Osman

I am going to pıttık UI contribution for that project 😎

Sloan the DEV Moderator

We loved your post so we shared it on social.

Keep up the great work!

ElshadHu

Thank you so much for sharing! Means a lot to have the DEV Community's support. The feedback and discussions from the community have been incredibly valuable in shaping the direction of this project.🙏

Andrea Bocci

If your idea of contributing to a project is

just to show one commit on your GitHub profile

I'd say the problem is not the maintainers who don't "help you"; you are the problem.

ElshadHu

Leaving this up for transparency, but I need to address this.

Andrea, you've completely misinterpreted my post. Check my GitHub contributions and see how I actually engage with open source projects before making assumptions. The issue isn't about collecting commits for show - it's about new contributors wasting time on unresponsive or hostile projects. That's a real problem many developers face. Constructive criticism? Always welcome. Nitpicking and twisting my words? That's not helpful and goes against what open source is about. Feel free to read the actual blog post or check out the project if you want to understand what I built and why.

same lard

I thought the exact same thing as Andrea when I read what you wrote. They're not making assumptions, and no one is nitpicking or "twisting your words". If English isn't your first language, perhaps it just isn't obvious how it comes across.

Glancing at your other posts, I see you're just a student, which is great. I'm glad you're enthusiastic about open source. But maybe don't go around referring to other projects as "Toxic" when you're just getting started yourself, or lecturing others about "what open source is about".

It does seem like the whole point of your current project is to maximize the chances of getting your PR accepted in a project. That is most definitely not what open source is about. You should contribute because you genuinely find the software interesting and useful yourself, and plan to stick around and maintain your contributions. Eventually, you might even get to assist with the boring chores like triaging bug reports and updating dependencies! Good luck, and keep contributing to open source!

 
ElshadHu

This one does not even require transparency when the pattern is this obvious. Accounts created Dec 29 and Dec 30, both commenting? Pattern recognized. Good luck to you guys 🫡

Shazin

I didn't expect to see Toxic and Open Source in the same sentence. Is this a ragebait? Didn't read the article. Title seems ragebait.

Collapse
 
elsad_humbetli_0971c995ce profile image
ElshadHu

Thanks for asking. I get why the title might seem strong, but the article is actually pretty balanced. I cover both contributor and maintainer perspectives, especially in the "Pivot" section, where I talk about maintainer burnout and challenges. Give it a read if you're interested; I think you'll see it's not ragebait but addresses a real problem many people face. Hope you enjoy it.
