Pratham naik

From SEO Executive to Open Source Contributor: My Hacktoberfest 2025 Journey Building AI Transparency Tools

Hacktoberfest: Contribution Chronicles

This is a submission for the 2025 Hacktoberfest Writing Challenge: Contribution Chronicles

"The best code contributions don't just solve technical problems—they solve real human problems."

Three weeks ago, I stood at a crossroads every developer faces: do I keep consuming open source, or do I start contributing back?

As an SEO Executive in Surat, I've spent years optimizing websites, analyzing competitors with Screaming Frog, and creating content strategies that drive organic growth. But there was one skill missing from my toolkit—building and contributing to the open source tools that power my daily work.

This Hacktoberfest 2025, I decided to change that. This is the story of how I went from nervous first-time contributor to maintaining my own open source project that tackles one of the biggest blind spots in modern SEO: AI visibility.

🎯 Why I Chose This Challenge
The Problem That Wouldn't Let Me Go
Every week in my SEO work, I encounter the same frustrating scenario: clients with perfect technical SEO, strong backlinks, and top Google rankings—but declining brand awareness among new prospects.

The reason? AI tools like ChatGPT, Gemini, and Perplexity.

While we obsess over ranking #1 on Google, industry surveys suggest that 80% of consumers now resolve up to 40% of their queries using AI assistants that never send a single click. When someone asks ChatGPT "What's the best project management tool for agencies?", your brand either appears in that response, or it doesn't.

Enterprise tools exist to track this (Scrunch AI at $300/month, SearchAtlas at $400+/month), but they're completely out of reach for:

Solo SEO consultants like myself building client reports

Startup founders monitoring brand perception

Small agency teams managing 20+ clients

Developers testing how AI interprets their projects

I needed this tool for my actual work. So I built it.

🚀 The Project: LLM Brand Visibility Analyzer
Tech Stack: TypeScript, Next.js, Google Gemini API
Purpose: Free, open source tool to check how AI models perceive and present brands
Status: Active development, welcoming contributors

What It Does
The tool queries Google's Gemini API to analyze how large language models describe brands, products, and projects. It provides insights into:

Whether a brand appears in AI-generated recommendations

How AI models describe products and competitive positioning

Content gaps and opportunities to improve AI visibility

Sentiment and context around brand mentions
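To make the first check concrete: at its simplest, detecting whether a brand surfaces in an AI answer is a case-insensitive scan that also captures some surrounding context for review. The helper below is an illustrative sketch (the names are mine, not the project's actual API):

```typescript
// Hypothetical helper: detect a brand mention in an AI-generated answer
// and capture the text around it. A simplified sketch, not the tool's
// actual implementation.
interface MentionResult {
  mentioned: boolean;
  context: string | null; // text surrounding the first mention, if any
}

function findBrandMention(
  answer: string,
  brand: string,
  window = 60
): MentionResult {
  const idx = answer.toLowerCase().indexOf(brand.toLowerCase());
  if (idx === -1) return { mentioned: false, context: null };
  const start = Math.max(0, idx - window);
  const end = Math.min(answer.length, idx + brand.length + window);
  return { mentioned: true, context: answer.slice(start, end) };
}
```

In practice the real analysis goes further (synonyms, product names, competitive framing), but even this crude check answers the headline question: were we mentioned at all?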

Why It Matters
By some industry estimates, AI-driven traffic is up 800% year-over-year, and brands with high AI visibility saw a 25% surge in organic traffic in 2025 even when traditional rankings stayed flat. This isn't a future concern; it's happening right now.

But until this project, checking your AI visibility meant either paying premium prices or manually querying multiple AI tools and documenting responses. Neither option scales for professionals working with multiple clients.

💻 My Contributions: From Idea to Implementation
Issue #1: Initial Project Architecture & Setup
Challenge: Setting up a production-ready Next.js app with TypeScript and Gemini API integration

What I Did:

```typescript
// Implemented the core API integration
import { GoogleGenerativeAI } from "@google/generative-ai";

// Non-null assertion: the key is validated at startup elsewhere
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

export async function analyzeBrandVisibility(brandName: string) {
  const model = genAI.getGenerativeModel({ model: "gemini-pro" });

  const prompt = `Analyze how you would describe ${brandName}
to someone asking for information. Include:

  1. Key features and benefits you'd mention
  2. How it compares to competitors
  3. Use cases where you'd recommend it
  4. Any concerns or limitations`;

  const result = await model.generateContent(prompt);
  return result.response.text();
}
```
What I Learned:

Working with AI APIs is fundamentally different from traditional REST APIs. The responses are non-deterministic, so I had to implement:

Prompt engineering to get consistent, structured responses

Error handling for API rate limits and timeouts

Response parsing since LLM outputs are text-based, not JSON

Cost optimization since each query uses API credits
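As an example of the response-parsing point above: one approach is to ask the model for JSON, then parse defensively, because LLM output often arrives wrapped in prose or markdown fences. This is a hedged sketch; the `VisibilityReport` shape and `extractJson` helper are hypothetical, not the project's actual code:

```typescript
// Sketch: parsing non-deterministic LLM output defensively.
// The report shape is a hypothetical example, not the project's schema.
interface VisibilityReport {
  features: string[];
  competitors: string[];
  concerns: string[];
}

function extractJson(raw: string): VisibilityReport | null {
  // LLMs often wrap JSON in markdown fences or surround it with prose,
  // so grab everything from the first '{' to the last '}' before parsing.
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]) as VisibilityReport;
  } catch {
    return null; // fall back to treating the response as free text
  }
}
```

When parsing fails, the tool can still fall back to displaying the raw text, which is why the free-text path stays the default.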

Before vs After:

Before: No tool existed for freelancers and small businesses to check AI visibility
After: A working prototype that I now use in actual client audits

Issue #2: Building the User Interface
Challenge: Creating an intuitive interface for non-technical users (marketers, founders, consultants)

What I Did:

Designed a clean, minimal UI focusing on:

Single input field for brand name (removing friction)

Loading states with clear progress indicators

Results displayed in readable sections (not raw JSON)

Export functionality for including in reports

```typescript
// Clean component architecture
"use client";
import { useState } from "react";
// analyzeBrandVisibility is the API helper shown in Issue #1;
// AnalysisResults is the results component defined elsewhere in the repo.

export default function BrandAnalyzer() {
  const [brandName, setBrandName] = useState("");
  const [analysis, setAnalysis] = useState<string | null>(null);
  const [loading, setLoading] = useState(false);

  const handleAnalyze = async () => {
    setLoading(true);
    try {
      const result = await analyzeBrandVisibility(brandName);
      setAnalysis(result);
    } catch (error) {
      console.error("Analysis failed:", error);
    } finally {
      setLoading(false);
    }
  };

  return (
    <div>
      <input
        value={brandName}
        onChange={(e) => setBrandName(e.target.value)}
        placeholder="Enter brand name..."
      />
      <button onClick={handleAnalyze} disabled={loading}>
        {loading ? "Analyzing..." : "Analyze Visibility"}
      </button>
      {analysis && <AnalysisResults data={analysis} />}
    </div>
  );
}
```
What I Learned:

As someone who primarily writes content and does SEO, stepping into React component architecture was intimidating. But I learned:

State management basics using useState hooks

Async operations in React components

User experience thinking from my SEO background (clarity, speed, trust signals)

Responsive design principles using Tailwind CSS

Challenge Overcome:

The biggest challenge wasn't the code—it was fighting impostor syndrome. I'm not a "real developer," just an SEO person who knows some Next.js, right?

Wrong. If you can identify a problem, learn the tools needed to solve it, and ship working code—you're a developer. Period.

Issue #3: Documentation & Developer Experience
Challenge: Making the project accessible to contributors with varying skill levels

What I Did:

Created comprehensive documentation covering:

README.md Updates:

Quick Start

Prerequisites: Node.js 18+

1. Clone the repository:

```bash
git clone https://github.com/naikpratham-hub/LLM-Brand-Visibility-Analyzer.git
cd LLM-Brand-Visibility-Analyzer
```

2. Install dependencies:

```bash
npm install
```

3. Get your Gemini API key:

- Visit Google AI Studio
- Create a new API key (free tier available)
- Copy your key

4. Set up environment variables:

```bash
cp .env.example .env.local
# Add your key: GEMINI_API_KEY=your_key_here
```

5. Run the development server:

```bash
npm run dev
```

Open http://localhost:3000

**CONTRIBUTING.md:**
- Explained the project structure
- Listed good first issues for beginners
- Provided code style guidelines
- Created issue templates for bugs and features

**What I Learned:**

Good documentation is like good SEO—it's about understanding user intent and removing friction. As an SEO professional, I applied the same principles:

- **Clear headings** (like H1, H2 for content structure)
- **Step-by-step instructions** (like optimizing user journey)
- **Keyword-rich but natural language** (searchable but readable)
- **Visual hierarchy** (scanning patterns matter in docs too)

My SEO background actually became an asset here. I understand what makes content discoverable and usable.

### Issue #4: Error Handling & Edge Cases

**Challenge:** The tool broke when users entered invalid inputs or API calls failed

**What I Did:**

Implemented comprehensive error handling:

```typescript
export async function analyzeBrandVisibility(brandName: string) {
  // Input validation
  if (!brandName || brandName.trim().length === 0) {
    throw new Error("Brand name is required");
  }

  if (brandName.length > 100) {
    throw new Error("Brand name too long (max 100 characters)");
  }

  // API key validation
  if (!process.env.GEMINI_API_KEY) {
    throw new Error("Gemini API key not configured");
  }

  try {
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
    const model = genAI.getGenerativeModel({ model: "gemini-pro" });

    // Same analysis prompt as in Issue #1
    const prompt = `Analyze how you would describe ${brandName}
to someone asking for information. Include:

  1. Key features and benefits you'd mention
  2. How it compares to competitors
  3. Use cases where you'd recommend it
  4. Any concerns or limitations`;

    const result = await model.generateContent(prompt);
    const response = result.response.text();

    if (!response) {
      throw new Error("Empty response from API");
    }

    return response;
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);

    if (message.includes("quota")) {
      throw new Error("API quota exceeded. Please try again later.");
    }

    if (message.includes("invalid_api_key")) {
      throw new Error("Invalid API key. Please check your configuration.");
    }

    throw new Error(`Analysis failed: ${message}`);
  }
}
```
What I Learned:

Error handling isn't just technical—it's user experience. Every error message should:

Clearly explain what went wrong

Suggest how to fix it

Never expose sensitive info (like API keys)

This mirrors my SEO work where good 404 pages and error states retain users instead of losing them.
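For the rate-limit and timeout cases in particular, one pattern worth sketching is a retry wrapper with exponential backoff. This is an illustrative sketch, not the project's exact implementation; real Gemini errors expose structured status codes rather than just message text, and the transient-error check here is deliberately simplified:

```typescript
// Sketch: retrying transient API failures with exponential backoff.
// The "transient" check is simplified for illustration.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const msg = err instanceof Error ? err.message : "";
      const transient = msg.includes("quota") || msg.includes("timeout");
      if (!transient || i === attempts - 1) throw err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

A wrapper like this keeps the retry policy in one place, so the UI code only ever sees either a result or a final, user-friendly error.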

Issue #5: Testing My Own Tool in Production
Challenge: Validating that the tool actually works for real SEO use cases

What I Did:

I used the LLM Brand Visibility Analyzer in my actual client work:

Client Case Study:

One of my agency clients makes project management software. Their SEO metrics looked great:

Ranking #3 for "project management tool"

50+ quality backlinks

95 PageSpeed score

But new sign-ups were declining. I ran their brand through my tool and discovered:

ChatGPT's response when asked "Best project management tools for agencies":

Mentioned: Asana, Monday.com, ClickUp, Teamwork

Didn't mention: My client

The tool revealed the problem: Despite great traditional SEO, they had zero AI visibility. Armed with this data, I helped them:

Optimize structured data for better AI understanding

Create content specifically targeting AI training sources

Build authority on platforms AI models likely reference
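To illustrate the first fix, "optimize structured data": adding schema.org JSON-LD gives crawlers (and, plausibly, the pipelines that feed AI models) an unambiguous, machine-readable description of the product. A sketch with placeholder values, not the client's real data:

```typescript
// Sketch: generating schema.org JSON-LD for a SaaS product.
// All field values here are hypothetical placeholders.
function buildSoftwareJsonLd(
  name: string,
  description: string,
  url: string
): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    name,
    description,
    url,
    applicationCategory: "BusinessApplication",
  });
}

// Embedded in the page head as:
// <script type="application/ld+json">{buildSoftwareJsonLd(...)}</script>
```

Whether any given AI model actually consumes this markup is hard to verify, but it costs little and also improves traditional rich results.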

What I Learned:

Building a tool you personally need forces you to think about real use cases, not theoretical features. The best open source projects solve problems their creators actually have.

🏆 Impact & Results
Project Metrics (As of October 17, 2025)
Repository Stats:

⭐ Stars: Growing steadily

🍴 Forks: Active community forming

📈 Usage: Being tested by SEO professionals and startup founders

Personal Growth:

Technical Skills: Leveled up in TypeScript, Next.js, API integration

Open Source Understanding: Learned maintainer perspectives

Community Building: Started welcoming first contributors

Confidence: Overcame impostor syndrome about "not being a real developer"

Real-World Usage
The most rewarding part? Hearing from users:

From a freelance SEO consultant:

"I used your tool in a client pitch to demonstrate AI visibility gaps they didn't know existed. Landed the contract."

From a startup founder:

"Never realized ChatGPT wasn't mentioning us at all. This changed our content strategy completely."

These messages make the late nights debugging TypeScript errors worth it.

🎓 Lessons Learned

1. Your "Non-Technical" Background Is Actually an Asset

As an SEO Executive, I worried I wasn't technical enough for open source. But my background gave me:

User research skills (understanding search intent = understanding user needs)

Content strategy knowledge (good docs are like good content)

Data analysis experience (interpreting SEO data = interpreting code errors)

Real user perspective (I'm the target user of my own tool)

Lesson: Whatever your background, you bring unique value to open source.

2. Start With a Problem You Actually Have

I didn't build this to learn TypeScript. I built it because I needed it for my job. The technical learning was a byproduct.

Lesson: The best first contributions solve problems you personally face.

3. Imperfect Action Beats Perfect Inaction

My first commits were messy. I refactored the same component three times. That's fine. Shipping imperfect code that works beats endlessly planning perfect code that never ships.

Lesson: Start before you feel ready. You'll learn by doing.

4. Documentation Is Code

As someone who writes SEO content daily, I thought documentation would be easy. It wasn't. Good technical documentation requires:

Precise language (no SEO fluff)

Clear structure (even more than blog posts)

Assumed knowledge assessment (what does the reader already know?)

Testing (do the steps actually work?)

Lesson: Treat documentation with the same care as code.

5. Open Source Is About People, Not Just Code

The scariest part of open source isn't writing code—it's the social aspect. What if someone criticizes my code? What if no one contributes?

But I learned: the open source community is incredibly welcoming to genuine effort. If you're solving a real problem and openly sharing your work, people will support you.

Lesson: Focus on being helpful, not perfect.

🔮 What's Next
Short-Term Goals (Next Month)
Multi-LLM Support:
Extend beyond Gemini to compare visibility across ChatGPT, Claude, and Perplexity simultaneously

```typescript
// Planned architecture
interface LLMProvider {
  name: string;
  analyzeVisibility: (brandName: string) => Promise<string>;
}

const providers: LLMProvider[] = [
  new GeminiProvider(),
  new OpenAIProvider(),
  new ClaudeProvider(),
  new PerplexityProvider(),
];

async function compareVisibility(brandName: string) {
  const results = await Promise.all(
    providers.map((p) => p.analyzeVisibility(brandName))
  );
  return results;
}
```
Sentiment Scoring:
Add quantitative sentiment analysis (positive/neutral/negative) to track brand perception changes over time
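A first cut of that scoring could ask the model for a numeric score and bucket it into labels. A sketch, assuming a score in the -1 to 1 range; the threshold value is an arbitrary choice, not a decided design:

```typescript
// Sketch: bucketing a numeric sentiment score (assumed range -1..1)
// into the planned positive/neutral/negative labels.
type Sentiment = "positive" | "neutral" | "negative";

function bucketSentiment(score: number, threshold = 0.2): Sentiment {
  if (score > threshold) return "positive";
  if (score < -threshold) return "negative";
  return "neutral";
}
```

Storing the raw score alongside the label would make it possible to chart perception drift over time, which is the real goal of this feature.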

Export Features:
Generate PDF reports for including in client deliverables

Long-Term Vision (3-6 Months)
Historical Tracking:
Store results over time to monitor AI visibility trends

Citation Source Analysis:
Identify which websites AI models reference when mentioning brands

Browser Extension:
One-click visibility checks while browsing any website

API Access:
Allow developers to integrate checks into their own tools

Community Growth
Contributor Onboarding:
Create video tutorials showing how to set up the project and make first contributions

Issue Templates:
Better structured issues for bugs, features, and questions

GitHub Actions:
Automated testing and deployment pipeline

🙏 Call to Action: Join This Journey
If my story resonates with you—whether you're an SEO person curious about code, a developer interested in AI visibility, or anyone who wants to democratize access to marketing tools—I'd love your help.

Ways to Contribute
🌟 For Beginners:

Improve documentation (fix typos, clarify steps, add examples)

Test the tool and report bugs

Suggest new features from a user perspective

Translate docs to other languages

💻 For Developers:

Add support for additional LLM providers

Implement data visualization of results

Build export functionality (PDF, CSV, JSON)

Optimize API usage and error handling

Write unit tests and integration tests

📊 For SEO/Marketing Professionals:

Share use cases and workflows

Suggest prompt improvements for better analysis

Test with real brands and provide feedback

Create case studies showing impact

🎨 For Designers:

Improve UI/UX design

Create logo and branding

Design result visualization layouts

Build demo videos and screenshots

Getting Started
Star the repo: github.com/naikpratham-hub/LLM-Brand-Visibility-Analyzer

Read the docs: Check out README.md and CONTRIBUTING.md

Pick an issue: Browse issues labeled good-first-issue or hacktoberfest

Ask questions: Comment on issues or reach out—all questions welcome

Make your PR: Follow the contribution guide and submit your changes

📝 Reflection: Why This Hacktoberfest Mattered
Three weeks ago, I was an SEO Executive who consumed open source but never contributed. Today, I'm maintaining a project that solves real problems for people like me.

Hacktoberfest wasn't just about writing code. It was about:

✅ Overcoming impostor syndrome and proving I belong in open source
✅ Solving real problems instead of building toy projects
✅ Learning by shipping instead of tutorial paralysis
✅ Building in public and inviting collaboration
✅ Giving back to the open source community that powers my work

The most important lesson? You don't need permission to contribute to open source. You just need a problem worth solving and the willingness to learn.

If you're reading this and thinking "I'm not technical enough" or "My idea isn't good enough"—you're wrong. The world needs tools built by people with your unique perspective and problems.

🎯 My Hacktoberfest 2025 Stats
Contributions Made:

🚀 1 project created and open sourced

💻 15+ commits to my own repository

📝 Comprehensive documentation written

🐛 5+ bugs fixed and edge cases handled

✨ Core features implemented (API integration, UI, error handling)

Skills Gained:

TypeScript & Next.js proficiency

AI API integration (Google Gemini)

Open source project management

Technical documentation writing

Community building basics

Connections Made:

Early users testing the tool

Fellow Hacktoberfest participants

Dev.to community engagement

Potential future contributors

💭 Final Thoughts
Open source isn't a club with gatekeepers. It's a community of people solving problems together.

Your contribution doesn't need to be a massive feature or perfect code. It can be:

Fixing a typo in documentation

Reporting a bug clearly

Suggesting a feature from your unique use case

Sharing your experience using a project

Helping another contributor

Every contribution matters. Including yours.

This Hacktoberfest, I learned that the best way to learn is to build something you actually need, share it openly, and invite others to improve it with you.

The LLM Brand Visibility Analyzer exists because I needed it for my SEO work. If it helps even one other person track their AI visibility without paying $400/month—it was worth building.

And if my journey from nervous first-timer to project maintainer inspires you to make your first contribution—that's even better.

🔗 Resources & Links
Project Repository: github.com/naikpratham-hub/LLM-Brand-Visibility-Analyzer

Live Demo: Google AI Studio App

Connect With Me: GitHub @naikpratham-hub

Gemini API Docs: ai.google.dev/docs

What about you? What are you building this Hacktoberfest? Drop a comment below—I'd love to hear your contribution story! 🎃

P.S. If you try the LLM Brand Visibility Analyzer with your brand and discover something interesting, I'd love to hear about it. Your feedback shapes the future of this project.
