Earlier this year, I took over maintenance duties on Cloudinary's Community Libraries. Fast forward a few months, and my team was gearing up to participate in DigitalOcean's annual Hacktoberfest.
Reader, I was worried.
I was still very much familiarizing myself with how the libraries worked, and trying to wrap my head around their issue backlogs. And while I'd never participated in a Hacktoberfest in any capacity before, I had read some scary things from other maintainers, who complained of an annual avalanche of spam.
What’s worse than an avalanche? A turbo-avalanche powered by the recent explosion of AI-powered coding tools. As September drew to a close, I imagined hundreds of AI-written PRs coming in, each hundreds of lines long, all based on prompts that poorly understood the underlying issues, containing code that the submitters themselves didn't understand, but which we would be expected to review.
The good news: It wasn't that bad. While the underlying incentive structures of Hacktoberfest did generate some spam, and while almost all of that spam smelled like AI, our team was able to get ahead of many problems with strong policies, and would have been able to prevent many more with a bit of additional preparation and planning.
The worst PRs were about as bad as I expected, but there weren't as many of them as I'd feared, and the best PRs were actually, dare I say, really good? Hackathon participants tackled some of the issues I'd been putting off addressing for months, helping me push the libraries forward.
All in all, while participation as a maintainer did take a lot of time (and we're in active discussions about whether or not we want to participate next year), it also provided some tangible value, without overwhelming us. And hopefully it helped some folks get more comfortable with the mechanics of open source along the way.
If you're thinking about participating as a maintainer in years to come, read on to learn about what worked for us — and what didn't.
Narrowing Scope
The best decision we made was to limit participation to fixes for a specific set of existing issues, which we identified with a Hacktoberfest tag. This had a few benefits:
- We had a clear mandate to quickly close any drive-by, low-value PRs (e.g. "improve docs").
- We could spend our time during Hacktoberfest reviewing PRs rather than trying to reproduce new issues or define new features.
- We could further limit contributions to issues whose fixes would be reasonably straightforward, allowing for quick reviews.
It was a good theory! In practice, we applied the Hacktoberfest tag fairly liberally, and did spend some time reproducing issues, specifying new features, and reviewing complex PRs. This led to a lot of work and some very long review delays.
We weren't stingy enough with the tag because we set a goal, up front, for how many contributions we wanted to accept by the end of the month and tagged that many issues, rather than first seeing how many issues would be a good fit for Hacktoberfest and then using that number to set the goal.
We had good reasons to set a number ahead of time. For one, it's always good to define a success metric up front so that you can quantify (and communicate) success or failure; we set the quota based on the number of successful submissions that we received in 2024. For two, we were offering our own swag (a plushie Cloudinary unicorn) to contributors of accepted PRs, and it's best to know how many of those you're going to need well ahead of time.
The end result, though, was a Hacktoberfest tag on a number of issues that I understood only vaguely, which came back to bite us later.
Knowing that we had a quota to hit did motivate me to do one good thing: I did a full docs review and identified and filed a handful of minor issues, which would be easy to fix and review. It would have been much (much!) faster to fix these myself, as soon as I identified them, but:
- I wouldn't have done the docs review without Hacktoberfest.
- One of the best things Hacktoberfest does is invite people who are new to open source to learn the mechanics of it. In some ways, it doesn't matter what the contributions are, as long as new contributors are "getting their feet wet" forking, committing, PRing, and responding to feedback.
So, the stage was set. As October came around, the PRs started to come in.
Hacktoberfest Begins
Most of the PRs we received for the month came in during the first week. People were also very good at identifying which issues would be easiest to fix. All of my "easy" issues were taken off the board immediately. We received a number of the predicted nonsensical drive-bys that were easy to close, but after that, it was time to chew through dozens of substantive PRs.
It wasn't long before I started to regret that we'd applied the tag to some poorly defined and understood issues. The most embarrassing of these: someone submitted a "fix" for an issue, but their changes seemed unrelated to the problem at hand. It turns out the issue had already been fixed a year earlier, as part of a much broader update, but was never closed. My guess at what happened is that the submitter fed the issue into Cursor, which, being asked to solve an already-solved problem, did its best.
It was clear that many submissions were created with the help of AI. It was also clear — both in the initial PRs that came in and in the reviews and discussions that followed — that the humans behind these submissions had varying levels of understanding of what they were trying to accomplish, and how their PRs were accomplishing it. The more understanding they had, the better things went. A number of follow-up conversations — where we tried to clarify the issue or suggest possible alternate paths forward — went nowhere, leading to closed PRs and wasted time on both sides.
However, the best submissions were well beyond my expectations. A handful of contributors clearly understood the underlying frameworks better than I do, and took the time to suggest thoughtful solutions to nuanced problems that we hadn't yet been able to tackle. I'm extremely grateful for their time and effort, which was worth far more than a unicorn plushie and a DigitalOcean T-shirt.
Overwhelmed, but Just a Little
Because a number of the issues turned out to be rather thorny (and because I and the other technical reviewers had more to do in October than review Hacktoberfest PRs), review times stretched to weeks, which I know was frustrating for contributors. In a couple of cases, we were not able to either accept or reject PRs before the November 15 deadline. We sent those contributors a unicorn anyway, but if the real purpose of Hacktoberfest is to familiarize folks with the mechanics of open source, learning that things are surprisingly complicated and your code might never land isn't a great lesson. Again, I wish we'd been a little more careful about which issues we invited people to solve.
Worth It?
From Cloudinary's side, it's hard to measure the value of a thing like Hacktoberfest. Our community library docs are in better shape now, and dozens of small fixes that would have been easy to put off have now been done. In addition, we made real progress on a handful of meaty issues that I'd been putting off tackling for months. And, hopefully, the participants are more familiar with Cloudinary and our SDKs, and are more likely to use us as they continue to build and grow their careers.
At the same time, the slow (and in a couple of cases, unresolved) reviews may have had the opposite effect, driving participants away. And the cost in time to Cloudinary was real, as we prioritized Hacktoberfest work over other projects in October and November.
Personally, I'm definitely glad I did it, once. Whether we as a company decide to do it again next year remains to be seen.
Happy hacking!