weiwei yang

How I Built an AI Research Review Site with Next.js — and What I Learned About Choosing the Right AI Tool for Each Research Stage

I’m a frontend developer, and over the past few weeks I’ve been building a small content site around AI research tools.

The project started from a simple question: how do researchers, PhD students, and knowledge workers decide which AI tool to use at each stage of their research workflow?

There are now many AI tools that claim to help with research: literature discovery, paper reading, note synthesis, outlining, drafting, summarization, citation workflows, and more. But from what I observed, the hard part is not finding “an AI tool.” The hard part is knowing which tool is appropriate for which research task.

That became the motivation behind my side project: AI Research Reviews.

It is not a SaaS product, at least not for now. It is a developer-built content site where I review and compare AI tools for research workflows, with a focus on practical use cases rather than hype.

The Tech Stack

The site is built with Next.js 16 App Router and React 19.

I chose the App Router because the site is content-heavy but still benefits from a modern routing and rendering model. Most pages are static or semi-static, and the structure maps well to topic-based content sections like comparisons, reviews, best tools, and guides.

The content layer is based on MDX. This was important because I wanted the writing experience to stay close to Markdown, while still having the option to use custom React components inside articles later.

The styling is done with Tailwind CSS v4. I kept the design intentionally simple: readable typography, clear page hierarchy, and comparison sections that are easy to scan. For this type of site, visual clarity matters more than complex UI.

The project uses TypeScript in strict mode, which is probably more discipline than a small content site strictly needs, but I prefer it. Content sites can become messy quickly when frontmatter, routes, slugs, metadata, and article lists start to diverge.
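As a sketch of what that discipline looks like in practice, here is a minimal frontmatter type plus a runtime guard that catches drift at build time. The field names are illustrative, not the site's actual schema:

```typescript
// Hypothetical frontmatter shape for the site's MDX articles.
interface ArticleFrontmatter {
  title: string;
  description: string;
  cluster: "comparisons" | "reviews" | "best-tools" | "guides";
  slug: string;
  publishedAt: string; // ISO date string
  related?: string[];  // slugs of related articles
}

// Narrowing validator: run against parsed frontmatter at build time
// so a malformed article fails the build instead of shipping broken.
function isArticleFrontmatter(value: unknown): value is ArticleFrontmatter {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  const clusters = ["comparisons", "reviews", "best-tools", "guides"];
  return (
    typeof v.title === "string" &&
    typeof v.description === "string" &&
    typeof v.slug === "string" &&
    typeof v.publishedAt === "string" &&
    clusters.includes(v.cluster as string)
  );
}
```

With strict mode on, every page that consumes this type is forced to handle the optional `related` field explicitly, which is exactly the kind of divergence a growing content site suffers from.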

A simplified version of the stack looks like this:

  • Framework: Next.js 16 App Router
  • UI: React 19
  • Content: MDX
  • Styling: Tailwind CSS v4
  • Language: TypeScript strict mode
  • Deployment: Vercel
  • SEO: JSON-LD, dynamic sitemap, metadata from frontmatter

Deployment is on Vercel, which makes sense for a Next.js project. The workflow is straightforward: push to GitHub, deploy, validate, and iterate.

I also added some SEO-specific engineering work early: structured data with JSON-LD, a dynamic sitemap, and topic-based routing driven by frontmatter. These are not glamorous features, but they make the site easier to maintain as the article count grows.
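Both the sitemap and the JSON-LD can be plain functions over the article list, which keeps them in sync with the content automatically. This is a hedged sketch with a placeholder domain and illustrative field names; in a Next.js app the sitemap function would live in `app/sitemap.ts` and the JSON-LD object would be serialized into a `<script type="application/ld+json">` tag:

```typescript
// Hypothetical article record, as read from MDX frontmatter.
type Article = { slug: string; cluster: string; updatedAt: string; title: string };

const BASE_URL = "https://example.com"; // placeholder domain

// Sitemap entries in the { url, lastModified } shape that
// Next.js's app/sitemap.ts convention expects.
function buildSitemap(articles: Article[]) {
  return articles.map((a) => ({
    url: `${BASE_URL}/${a.cluster}/${a.slug}`,
    lastModified: a.updatedAt,
  }));
}

// Minimal schema.org Article object for JSON-LD structured data.
function buildJsonLd(a: Article) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: a.title,
    dateModified: a.updatedAt,
    url: `${BASE_URL}/${a.cluster}/${a.slug}`,
  };
}
```

Because both functions derive everything from frontmatter, adding an article never requires touching the SEO layer by hand.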

The Content Architecture Lesson

The biggest lesson from this project was that a content site should not be treated as a random blog.

At first, it is tempting to just write articles whenever an idea appears. But that creates a weak information architecture. The site becomes a pile of posts instead of a system.

So I organized the content into topic clusters:

  • comparisons
  • reviews
  • best-tools
  • guides

Each type of article has a different job.

A comparison article helps users choose between two tools. A review article explains what one tool is good at and where it fails. A best-tools article groups options by use case. A guide explains the overall workflow and links to more specific pages.
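That division of labor can be written down directly, which keeps routes and page roles from drifting apart as the site grows. The mapping below is illustrative:

```typescript
// Each cluster's "job", encoded so the type system knows the full set.
const clusterJobs = {
  comparisons: "help the reader choose between two tools",
  reviews: "explain what one tool does well and where it fails",
  "best-tools": "group options by use case",
  guides: "map the overall workflow and link to specific pages",
} as const;

type Cluster = keyof typeof clusterJobs; // "comparisons" | "reviews" | ...

// One route builder keeps every internal URL consistent with the clusters.
const articlePath = (cluster: Cluster, slug: string): string =>
  `/${cluster}/${slug}`;
```

A typo in a cluster name now becomes a compile error rather than a dead route.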

That changed how I thought about writing.

Instead of asking, “What article should I write today?”, I started asking:

  • What job does this page perform in the site architecture?
  • What search intent does it answer?
  • Which related pages should it connect to?
  • What should the reader do or understand after reading it?

This is where my developer mindset helped. I started treating content like code.

Each article became a module. Each module has a responsibility. Internal links are not just SEO tricks; they are dependency edges between related pieces of knowledge.

For example, an article about AI literature search should naturally connect to tool comparisons like Elicit vs Consensus. A guide about the full AI research workflow should connect to discovery tools, reading tools, note-taking tools, and drafting tools.

That internal structure is what makes the site feel less like a blog and more like a small knowledge system.
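The dependency-edge framing also enables simple automated checks. For example, a small graph pass over frontmatter can flag "orphan" articles that no other page links to; the field names here are hypothetical:

```typescript
// An article node with its outgoing internal-link edges (related slugs).
type Page = { slug: string; related: string[] };

// Treat internal links as directed edges and return pages
// that nothing else links to.
function findOrphans(pages: Page[]): string[] {
  const linked = new Set(pages.flatMap((p) => p.related));
  return pages.filter((p) => !linked.has(p.slug)).map((p) => p.slug);
}
```

Run at build time, a check like this turns "did I remember to interlink the new article?" from a manual review step into a failing build.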

What I Learned About AI Research Tools

While building the site, I also learned something important about the AI research tools themselves.

The biggest mistake researchers make is using one AI tool for everything.

This is understandable. If a tool feels powerful, the natural instinct is to use it for search, reading, summarization, note-taking, drafting, and editing. But research is not one activity. It is a chain of different activities.

Discovery tools solve a different problem from reading tools.

For example, tools like Elicit and Consensus are useful when the goal is to search across research papers, identify relevant studies, and get a structured view of evidence. They are closer to discovery and literature search tools.

Reading tools solve a different problem. NotebookLM, for example, becomes useful when you already have sources and want to understand, compare, and synthesize them. Its strength is not that it searches the whole web better than everything else. Its strength is grounded reading over the materials you provide.

Drafting tools solve yet another problem. ChatGPT is excellent for outlining, rewriting, explaining concepts, generating alternative framings, and helping turn notes into structured prose. But it should not always be treated as the source of truth for research discovery.

That distinction became one of the main ideas behind the site.

I wrote a detailed comparison of Elicit vs Consensus for AI research search that breaks down when to use each.

The full AI research workflow guide maps out which tool fits which stage.

The more I worked on the site, the more I felt that “best AI tool” is usually the wrong question.

A better question is:

At this stage of my research workflow, what kind of cognitive task am I trying to complete?

  • If the task is discovering papers, use a discovery-oriented tool.
  • If the task is understanding a known set of documents, use a reading and synthesis tool.
  • If the task is drafting or restructuring your own ideas, use a writing assistant.

This sounds obvious, but it is easy to forget when every AI product page claims to be an all-in-one research assistant.
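One way to keep the distinction in view is to write the heuristic down as a lookup from task to tool category. The categories and example tools below come straight from the discussion above; the task names are my own labels:

```typescript
// Hypothetical task labels for the three research stages.
type ResearchTask = "discover-papers" | "understand-sources" | "draft-ideas";

// The stage-to-tool heuristic as a lookup. The tools in parentheses
// are the examples discussed in the article, not a ranking.
const toolCategoryFor: Record<ResearchTask, string> = {
  "discover-papers": "discovery / literature search (e.g. Elicit, Consensus)",
  "understand-sources": "grounded reading and synthesis (e.g. NotebookLM)",
  "draft-ideas": "writing assistant (e.g. ChatGPT)",
};
```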

SEO as an Engineering Problem

Building this site also changed how I think about SEO.

I used to think of SEO mostly as marketing. After building a content site from scratch, I now see it as partly an engineering problem.

You have inputs, constraints, architecture, feedback loops, and performance metrics.

The inputs are user questions and search intent. The architecture is the way pages, routes, metadata, and internal links are organized. The feedback loop comes from impressions, clicks, rankings, indexing, and user behavior.

That does not mean SEO is purely technical. The content still has to be useful. But the system around the content matters a lot.

For a developer, this is actually encouraging. Many parts of a good content site are things we already understand: structure, maintainability, routing, schema, performance, iteration, and observability.

Final Thoughts

AI Research Reviews is still a small side project, but building it has been useful for me as a developer.

It helped me think more clearly about content architecture, SEO, and how AI research tools fit into real workflows instead of abstract product categories.

If you are a researcher, PhD student, or knowledge worker trying to figure out your AI tool stack, the site might help you think through the workflow more clearly.

And if you are a developer building a content site, my main takeaway is this:

Don’t treat content as random posts. Treat it like a system.
