Recently, I started issue hunting again and improving my knowledge in different areas of programming, such as networking, concurrency, and LLM engineering. What I've realized about open source is that it might die because of AI slop and a lack of communication between maintainers and contributors. This blog post is about how communication beats AI slop. I'll present 2-3 PRs I made recently and walk through the process. Lately I've contributed to next-ai-draw-io and streamplace, and I'll also mention my experience maintaining an open source project that got popular thanks to you guys :). That is repo-health, which I talked about in my previous blog post.
Communication Matters More Than You Think
Even if I know every trick in a language and write idiomatic syntax, it means nothing if I can't work with other developers. When I contributed to next-ai-draw-io, I commented on an issue saying I could tackle it, and I got assigned.
Nowadays, I'm sticking to 1-2 projects instead of jumping between projects. Why? Because spending little time on issues without understanding the technical depth and architecture means you just submit shallow PRs that create more issues. I want to actually understand the codebase and architecture by staying with the same projects.
First Attempt at Adding Vertex AI Support
I was trying to activate Vertex AI on the server side via environment variables instead of handling it through the API.
const vertexProject = process.env.GOOGLE_VERTEX_PROJECT
const vertexLocation =
  process.env.GOOGLE_VERTEX_LOCATION || "us-central1"
I was handling it in the router without API authentication, like some of the other providers:
else if (
  provider !== "ollama" &&
  provider !== "edgeone" &&
  provider !== "vertexai" &&
  !apiKey
)
I documented everything and made a PR. The first review came from GitHub Copilot, which was fair because it caught naming conventions and small mistakes. An initial AI pass makes sense before maintainers spend their time, and I might use this pattern in my own projects, but only as a first review. However, I was missing one crucial thing: I was only handling it server-side. Even though my changes were well documented, they weren't suitable for this feature.
Maintainer Told Me to Switch Approach
I got helpful feedback from one of the maintainers: make the project ID and location configurable. So I switched to that approach and added extra fields to the ValidateRequest interface in route.ts:
interface ValidateRequest {
  provider: string
  apiKey: string
  baseUrl?: string
  modelId: string
  // AWS Bedrock specific
  awsAccessKeyId?: string
  awsSecretAccessKey?: string
  awsRegion?: string
  // Vertex AI specific
  vertexProject?: string
  vertexLocation?: string
}
I also changed the Vertex AI case to allow client-provided project/location OR fallback to environment variables:
case "vertexai": {
  // Client-provided project/location OR environment fallback
  const project =
    vertexProject || process.env.GOOGLE_VERTEX_PROJECT
  const location =
    vertexLocation ||
    process.env.GOOGLE_VERTEX_LOCATION ||
    "us-central1"

  if (!project) {
    return NextResponse.json(
      {
        valid: false,
        error:
          "Project ID is required (set in Settings or via GOOGLE_VERTEX_PROJECT)",
      },
      { status: 400 },
    )
  }

  const vertex = createVertex({
    project: project,
    location: location,
  })
  model = vertex(modelId)
  break
}
The maintainer who asked for this change said the PR was solid and ready to merge as is, and that there was another issue unrelated to this PR. He pinged the project's creator, who gave me strong feedback with instructions to add Vertex AI via API key instead.
Got Stuck on Authentication
After getting that last feedback, I didn't lose interest in the PR, because the instructions were clear and the maintainers stayed in contact and tried to help. For the record, I highly suggest reading this post about the "ego olympics", how some developers turn code reviews into something brutal. I started reading about Express Mode and the thinking model. At first I had no idea how to do this in that codebase, exactly no idea. I got stuck and wrote under the issue that I wasn't sure how to handle the authentication error, because roughly one in three requests failed when I tested the API.
The maintainer explained how to wire it up and fix the issue. I needed to pass the API key through headers in the frontend, so I added this to chat-panel.tsx:
// Vertex AI credentials (Express Mode)
...(config.vertexApiKey && {
"x-vertex-api-key": config.vertexApiKey,
}),
Then I had to update the model creation logic in route.ts to use Vertex AI properly:
export async function POST(req: Request) {
  const body = await req.json()
  const {
    // ...other provider fields
    awsAccessKeyId,
    awsSecretAccessKey,
    awsRegion,
    // Note: Express Mode only needs vertexApiKey
    vertexApiKey,
  } = body
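Since the frontend sends the key in an `x-vertex-api-key` header, the server can also read it straight off the request. This is a minimal sketch with my own helper name, not the project's actual code; only the header name comes from the chat-panel.tsx snippet above.

```typescript
// Sketch: pull the Express Mode key back out of the request headers.
// The header name matches the one set in chat-panel.tsx; the helper
// name is my own illustration.
function getVertexApiKey(req: Request): string | undefined {
  return req.headers.get("x-vertex-api-key") ?? undefined;
}
```

Reading it from headers keeps the key out of the JSON body, which is handy if the body is ever logged.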
After all the changes, my PR got accepted, and I'm willing to contribute more to this project and keep learning. While researching the repo, I ran it through my own tool, repo-health, and saw how the maintainers make fair decisions and pay attention to PRs. I also used the tool to find PRs that were never merged (both open and closed) and visited them to understand the patterns that made them unsuitable for merging.
Contributing to streamplace
Software development is not easy and has never been. Recently, I explored streamplace again, a project I talked about in one of my previous blog posts. In a PR for an issue, I added an animated scroll-to-bottom button to the chat messages. Exactly one scroll button.
We have to remember that maintainers don't want code that is undocumented and solves the problem in a vague way because the implementation details weren't thought through. Believe me, they could hack together the vague solution themselves within 5 minutes, but they respect their software, which means we have to as well. I read a blog post about how the GitHub contribution graph doesn't mean anything, and I absolutely agree. Making one PR to critical software and solving one weird bug matters more to me than anything.
Anyway, on to the contribution itself: here is the animation I implemented.
import { useEffect, useRef } from "react";
import { FlatList } from "react-native";
import {
  useAnimatedStyle,
  useSharedValue,
  withTiming,
} from "react-native-reanimated";

const flatListRef = useRef<FlatList>(null);

// Animation for scroll-to-bottom button
const buttonOpacity = useSharedValue(0);
const buttonTranslateY = useSharedValue(20);

useEffect(() => {
  buttonOpacity.value = withTiming(isScrolledUp ? 1 : 0, { duration: 200 });
  buttonTranslateY.value = withTiming(isScrolledUp ? 0 : 20, {
    duration: 200,
  });
}, [isScrolledUp]);

const buttonAnimatedStyle = useAnimatedStyle(() => ({
  opacity: buttonOpacity.value,
  transform: [{ translateY: buttonTranslateY.value }],
}));

const scrollToBottom = () => {
  flatListRef.current?.scrollToOffset({ offset: 0, animated: true });
};
Why did I choose useRef? Because I needed a reference to the FlatList component that persists across re-renders. If I stored it in state, every update would cause unnecessary re-renders. useRef gives me a mutable reference that doesn't trigger re-renders when it changes.
Second fix in streamplace
This one might seem boring and easy, but I thought about it and analyzed the project :). The fix was a PR that added a single line of code, but sometimes that's just how it is, and I documented it to be clear, even for one line :).
{ overflow: "hidden" }
What I Changed in repo-health
I got nice feedback for repo-health and stars I wasn't expecting :). Since my last blog post, I've been fixing bugs in the repo and creating issues and a discussion to make it more engaging.
Because of the traffic after the blog post, I fixed the rate limiting in a temporary way: unsigned users are allowed one repository analysis per hour, while signed-in users can search as much as they want. I know it's not ideal, but it was required at the time, and I couldn't think of a better option.
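For illustration, the temporary limit can be sketched like this. All names here are my own, and a real deployment would back this with a shared store instead of an in-memory map, but it shows the shape of the rule: signed-in users always pass, unsigned users get one analysis per hour per IP.

```typescript
// Sketch of a one-analysis-per-hour limit for unsigned visitors.
// In-memory only (resets on restart); names are illustrative, not
// repo-health's actual implementation.
const lastAnalysisAt = new Map<string, number>();
const HOUR_MS = 60 * 60 * 1000;

function canAnalyze(ip: string, signedIn: boolean, now = Date.now()): boolean {
  if (signedIn) return true; // signed-in users are unlimited
  const last = lastAnalysisAt.get(ip);
  if (last !== undefined && now - last < HOUR_MS) return false;
  lastAnalysisAt.set(ip, now); // record this analysis
  return true;
}
```

The obvious upgrade path is swapping the `Map` for Redis or a database row so the limit survives restarts and works across multiple instances.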
I recently integrated ungh, which helps reduce GitHub API requests through intelligent caching and API call optimization. You can read their README to see how it works, but basically it makes fewer requests to GitHub API, which helps with rate limits.
Also, I applied to the GitHub Developer Program and I am now a GitHub Developer Program Member. This gives me access to additional tools and GitHub Actions workflows, which makes development easier.
I removed the dependency section and added setup insights to keep the project's purpose clear. Along the way, I used the OpenAI SDK:
const context = buildContext(files, issues);
const prompt = buildPrompt(context);

try {
  const response = await getOpenAI().chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
    response_format: { type: "json_object" },
  });
  const content = response.choices[0].message.content || "{}";
  const result = JSON.parse(content) as AISetupResult;
  // ...use result
} catch (error) {
  // fall back gracefully if the model call or parsing fails
}
I added it to reduce runtime errors and handle data with proper structure instead of passing raw interpolated strings around. The response_format: { type: "json_object" } setting makes the model return valid JSON, which greatly reduces parsing errors.
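Even with `response_format` set, I'd still treat the model output defensively; a cheap safe-parse wrapper keeps one malformed response from crashing the analysis. The `AISetupResult` shape below is illustrative, not repo-health's real type.

```typescript
// Illustrative result shape; repo-health's actual AISetupResult differs.
interface AISetupResult {
  steps: string[];
}

// Defensive parse: never throws, always returns a usable object.
function parseSetupResult(content: string | null): AISetupResult {
  try {
    const parsed = JSON.parse(content ?? "{}");
    return { steps: Array.isArray(parsed.steps) ? parsed.steps : [] };
  } catch {
    return { steps: [] }; // malformed JSON degrades to an empty result
  }
}
```

This way the `catch` block in the route only has to handle network-level failures, and shape problems degrade to an empty result instead of an exception.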
I also added a funding section by analyzing FUNDING.yml files, because I wanted to surface it in search results. And I created a docker-compose setup to ease the installation process.
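For a sense of what that funding analysis involves, here is a deliberately naive sketch that handles only the flat "platform: handle" lines common in `.github/FUNDING.yml`. This is not repo-health's parser; a real implementation would use a YAML library to cover lists and nested values.

```typescript
// Naive FUNDING.yml sketch: handles only flat "key: value" lines
// (e.g. "github: octocat"). Illustrative only; real code should use
// a proper YAML parser for lists and comments.
function parseFunding(yaml: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of yaml.split("\n")) {
    const match = line.match(/^(\w+):\s*(\S.*)$/);
    if (match) out[match[1]] = match[2].trim();
  }
  return out;
}
```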
Burnout
After several days, I realized I wasn't getting anywhere with repo-health. I immediately stopped adding new features and thinking about its problems, because I was writing code with anger :(. I realized it was time to switch to contributing to open source projects and to start a new project: verdis. I know I still have a long way to go to improve repo-health. While researching, I also saw tldraw's announcement that they closed their PR section to the community due to untested PRs and a lack of technical depth. I have a lot of ideas, but I need a clear mind to convert them into code in that repo.
Also, due to job hunting, life has become hectic for me, but that is the process I have to go through. I need to consume more knowledge, build in the open, contribute, and improve my skills. By the way, my friend and I recorded our first podcast on YouTube. I know it looks like we filmed it with a fridge camera, but we will improve.
Intro to Verdis
My friends and I are planning to create a versioned Redis: a key-value store with multi-version concurrency control (MVCC) and historical data access. In the next blog, I will talk about the books I am reading related to database internals, network programming in Go, concurrency in Go, and package-oriented design and why we are building Verdis.
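To give a flavor of the MVCC idea (a toy illustration in TypeScript for this post, nothing like verdis's real Go design): every write appends a new version of the value instead of overwriting it, and reads can ask for the value "as of" an earlier version.

```typescript
// Toy MVCC sketch: writes append (version, value) pairs; reads can
// time-travel to any past version. Illustrative only.
class VersionedStore {
  private data = new Map<string, { version: number; value: string }[]>();
  private version = 0;

  // Appends a new version of the value and returns its version number.
  set(key: string, value: string): number {
    this.version += 1;
    const versions = this.data.get(key) ?? [];
    versions.push({ version: this.version, value });
    this.data.set(key, versions);
    return this.version;
  }

  // Latest value at or before `asOf` (defaults to the newest version).
  get(key: string, asOf = Number.MAX_SAFE_INTEGER): string | undefined {
    const versions = this.data.get(key) ?? [];
    for (let i = versions.length - 1; i >= 0; i--) {
      if (versions[i].version <= asOf) return versions[i].value;
    }
    return undefined;
  }
}
```

The interesting engineering problems start where this toy stops: concurrent writers, garbage-collecting old versions, and keeping reads lock-free, which is exactly what the reading list above is for.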
We discuss things while writing code and even argue about variable names and comments :). I also read another post from the same author who wrote about the ego olympics in code review; this one talks about not over-engineering like we're building Netflix. I am trying to keep it simple and harmonize the business logic with my ideas.
Lastly
Communication with maintainers beats submitting AI-generated, untested code that doesn't understand the project architecture. Stick to 1-2 projects so you actually learn the technical depth instead of fixing random issues everywhere. Document your changes, even if it's one line. And when you burn out, take a break instead of writing angry code :).
Top comments (2)
THIS is what most underestimate. Communication. I can never stress enough how important it is above anything else.
You handled all three of your PRs with care, properly addressing the changes and the decisions made. When I was diving deeper into open-source and reviewing existing projects, I noticed many flawed PRs where either the issue isn't described properly or the contributor doesn't respect the rules and maintainers' feedback.
My own latest contributions only confirm your point. After fixing a bug in a semi-popular C++ project and simply taking the time to write a description for my fix, I was surprised at the feedback I received: "Most well-documented PR for this repo I have ever come across."
I thought to myself, "No way, what?", "Isn't that supposed to be the general consensus?", "Could developers be this careless?"
In another PR I fumbled, and this time I had to do a good amount of refactoring. Instead of lashing out at the maintainer or dismissing the conversations, I resolved them and left questions where I was confused or needed guidance. That's how it should be!
It's been thanks to you that I actively started contributing to various projects. I've learned a lot about the ins and outs of open-source from your experience alone.
And about burnout, I can 100% relate. It's important that we stay fresh and don't overwork ourselves. Which is EXACTLY why we're building verdis together! I'm excited to continue the grind by building yet another project in the open.
We need to keep going and make our podcasts! Of course, we also need to build in the open :)