Why Build a Pipeline?
I wanted to grow my online presence: a dedicated corner of the internet to showcase my projects and skills, point people to my GitHub, LinkedIn, and socials, and share my thoughts on things I was building or learning. Simple enough goal. The execution, though, turned out to be a learning process.
My first attempt at blogging was manual and messy. Write a post, publish it somewhere, copy it somewhere else, and hope people find it. I had set up the Dev.to RSS feed fetching early on, but that was about it. There was no real visibility into whether anyone was reading, and no reason to keep going. Unsurprisingly, I took a year-long gap between posts, not because I stopped having things to say, but because the friction was too high and life got in the way. When I started a new job, I focused on learning my new responsibilities and let the blog quietly go dormant.
When I came back to it, I wanted to do things differently. The goal was simple: write once, publish everywhere, no copy-pasting across platforms, no manual distribution, and some way to know if the work was actually reaching anyone. The RSS feed was already there, but the imports were messy. The frontmatter in my blog posts didn't match the RSS schema, so every import came in inconsistent, and making changes or adding new frontmatter fields meant manually fixing every post. What I did this time was get my Astro frontmatter to match the RSS schema properly, so imports were clean and uniform from the start. Now it was about connecting the remaining pieces into a proper pipeline.
The Stack
Astro: Before launching my personal site, I had tried to build it with Next.js a year earlier. I didn't fully understand it at the time and kept running into hydration issues. At a local Code and Coffee meetup, I asked one of the attendees for advice and he pointed me to Astro. Within days of going through a tutorial, I had my site live for the first time. I paired it with Tailwind CSS to make theming easy to maintain and update down the road.
Pexels: Every post needs a cover image. Pexels is my go-to for finding images: royalty-free stock photos and videos shared by creators. High quality, free to use, and no licensing headaches.
Dev.to: This came much later in the process. After some digging I discovered that Astro can generate an rss.xml feed if set up correctly, and Dev.to can periodically pull from that feed to automatically import new posts. All I need to do is keep my frontmatter in sync with what Dev.to expects from the feed; no manual copying.
Umami: I wanted visibility into who was actually reading. After some research I landed on Umami: privacy-friendly, no cookie banners, and a free hobby tier that covers up to 3 websites.
Mastodon: After Twitter was acquired I decided to leave and delete my profile. Mastodon turned out to be a great fit, especially for the tech community, and the fediverse has a different energy that I appreciate.
LinkedIn: Where I manually share each post to take advantage of how LinkedIn's algorithm circulates content to your network and extended network for a few days after publishing.
Setting Up Astro
Each blog post is written in Markdown, which Astro handles exceptionally well out of the box. One of the keys to making everything work smoothly is Astro's content collections configuration: this is how you tell Astro the shape of your content so it can validate and query it consistently. For this project I have Markdown files for blog entries and JSON files to define my projects. The Astro content collections documentation is worth reading carefully to understand how to define your schema.
Before looking at the frontmatter, it helps to see the schema that enforces it. This lives in src/content.config.ts and uses Astro's content collections with Zod validation. If a frontmatter field is missing or the wrong type, Astro will throw an error at build time rather than silently generating a broken RSS feed:
```typescript
import { defineCollection, z } from 'astro:content';
import { glob } from 'astro/loaders';

const blog = defineCollection({
  loader: glob({ pattern: '**/*.md', base: './src/content/blog' }),
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.date(),
    author: z.string(),
    image: z.string(),
    categories: z.array(z.string()),
  }),
});

export const collections = { blog };
```
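For the JSON-backed projects mentioned above, a second collection can be defined alongside the blog one. This is a sketch, not my actual config: the path, field names, and use of Astro 5's file() loader are assumptions, and the file loader expects each entry in the JSON array to carry a unique id that becomes the entry key:

```typescript
import { defineCollection, z } from 'astro:content';
import { file } from 'astro/loaders';

// Hypothetical projects collection; adjust the path and fields to your data.
// Each object in the JSON array needs a unique `id` for the file loader.
const projects = defineCollection({
  loader: file('./src/content/projects.json'),
  schema: z.object({
    name: z.string(),
    description: z.string(),
    repo: z.string().url(),
  }),
});

export const collections = { blog, projects };
```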
Each blog post starts with a frontmatter block that drives everything downstream, from the RSS feed to Dev.to imports:
```yaml
---
title: "Building a Terminal Text Editor: The View (Part 3)"
description: "In Part 3 of building wordNebula, I cover the View layer, why I chose FTXUI over ncurses..."
pubDate: 2026-04-05
author: "Ilean Monterrubio Jr"
image: '/src/content/images/pexels-markus-winkler-1430818-4065400.jpg'
categories:
  - 'cpp'
  - 'terminal'
  - 'architecture'
  - 'programming'
---
```
Keeping this frontmatter consistent and aligned with the RSS schema is what makes the rest of the pipeline work without any manual cleanup. It also makes rss.xml.ts straightforward to set up: all it needs to do is read the frontmatter, and Astro will generate the rss.xml file at build time.
Here is what a basic rss.xml.ts file looks like:
```typescript
import rss from '@astrojs/rss';
import { getCollection } from 'astro:content';

export async function GET(context) {
  const posts = await getCollection('blog');
  return rss({
    title: 'Your Blog Name',
    description: 'Your blog description',
    site: context.site,
    items: posts.map((post) => ({
      title: post.data.title,
      pubDate: post.data.pubDate,
      description: post.data.description,
      author: post.data.author,
      categories: post.data.categories,
      link: `/blog/${post.id}/`,
    })),
  });
}
```
The fields in the items map directly to the frontmatter fields you define. As long as the frontmatter is consistent, the RSS output will be clean and Dev.to will be able to import it without any issues. For additional information visit the Astro RSS documentation.
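One thing worth knowing: getCollection does not guarantee any particular ordering, so if you want the feed newest-first it is worth sorting before mapping. A minimal sketch; the PostLike interface here is a stand-in for Astro's CollectionEntry type, not the real thing:

```typescript
// Stand-in for the slice of Astro's CollectionEntry that the feed needs.
interface PostLike {
  data: { title: string; pubDate: Date };
}

// Return a new array sorted newest-first; the input array is left untouched.
function sortByNewest<T extends PostLike>(posts: T[]): T[] {
  return [...posts].sort(
    (a, b) => b.data.pubDate.getTime() - a.data.pubDate.getTime(),
  );
}
```

In rss.xml.ts this would slot in as `items: sortByNewest(posts).map(...)`.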
Auto-Importing to Dev.to via RSS
For this part you will need a Dev.to account. Once your website is live and the RSS feed URL is active, navigate to your Dev.to dashboard. On the left-hand sidebar you will find RSS Import Feeds.
From there, click + Add a Feed Source to expand the form. Enter your rss.xml URL in the RSS Feed URL field. A couple of settings worth paying attention to:
- Mark the RSS source as canonical URL by default: keep this checked. This is one of the most important settings in the whole pipeline. It tells Dev.to that your personal site is the original source of the content, which means search engines will credit your site and not the Dev.to version. Get this wrong and Dev.to can end up outranking your own site for your own writing. Always leave this on.
- Replace self-referential links with DEV Community-specific links: leave this unchecked unless you are migrating your entire blog to Dev.to permanently.
Once you click Add Feed Source, Dev.to will periodically check your feed and pull in any new posts. Imported posts land as drafts, so before publishing you will want to review each one: add a series name if it is part of a multi-part post, pick tags that match popular Dev.to tags for discoverability (for example: cpp, terminal, architecture, programming), and add a cover image sourced from Pexels. When you are ready to go live, open the post editor and flip published to true in the Dev.to frontmatter at the top of the post.
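For reference, a reviewed draft's Dev.to frontmatter ends up looking roughly like this. The values below are illustrative, not copied from a real import, and the slug in canonical_url is a placeholder:

```yaml
---
title: "Building a Terminal Text Editor: The View (Part 3)"
published: true
series: wordNebula
tags: cpp, terminal, architecture, programming
cover_image: https://images.pexels.com/your-chosen-cover.jpg
canonical_url: https://ilean.me/blog/your-post-slug/
---
```

Because the canonical setting was left on during import, canonical_url points back at the original post, which is what keeps search engines crediting your own site.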
Mastodon Verification and Author Attribution
I wanted to establish a presence on Mastodon, and it turns out Astro makes it easy to connect your blog to the fediverse with just two small additions.
Verification with rel="me"
Mastodon uses the rel="me" link standard to verify that you own a website. Adding it to your Astro layout head tag gives you the green checkmark on your Mastodon profile:
```html
<link rel="me" href="https://mastodon.social/@yourusername" />
```
Author Attribution with fediverse:creator
The fediverse:creator meta tag is a newer addition that connects your blog posts to your Mastodon identity. When someone shares one of your posts on Mastodon, it automatically credits your account:
```html
<meta name="fediverse:creator" content="@yourusername@mastodon.social" />
```
Add this to your blog post layout so it only appears on post pages rather than every page on the site. Together these two tags make your blog a proper fediverse citizen: verified, attributable, and discoverable by the tech community that has made Mastodon their home.
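One way to keep the handle defined in a single place is a tiny Astro component that the post layout includes in its head. The component name, path, and prop below are hypothetical, not part of my actual site:

```astro
---
// src/components/FediverseMeta.astro (hypothetical): centralizes the handle
// so the post layout and any future pages stay in sync.
interface Props {
  handle?: string;
}
const { handle = '@yourusername@mastodon.social' } = Astro.props;
---
<meta name="fediverse:creator" content={handle} />
```

The blog post layout would then render `<FediverseMeta />` inside its head, while the site-wide layout keeps the rel="me" link.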
Adding Privacy-Friendly Analytics with Umami
When I started getting posts out I wanted to know if anyone was actually reading them. Google Analytics was the obvious choice, but it felt like overkill: heavy, cookie-dependent, and requiring a consent banner just to get started. After some digging I found Umami, a lightweight privacy-friendly alternative that ticked every box. The free hobby tier covers up to 3 websites, 100K events per month, and 6 months of data retention. No cookies, no consent banners, no tracking across sites.
Setup is straightforward. Once you create your Umami account and add your site, you get a script tag to drop into your Astro layout file:
```html
<script
  defer
  src="https://cloud.umami.is/script.js"
  data-website-id="your-website-id"
></script>
```
Add this to your main Layout.astro file and it will be included on every page automatically. I also added a small notice in the footer to let visitors know the site uses privacy-friendly analytics with no cookies or personal data collected, a small but honest touch that sets the right expectations.
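One refinement worth considering: only emit the script in production builds, so local development does not inflate the numbers. A sketch using Astro's built-in import.meta.env.PROD flag; the layout structure around it is assumed:

```astro
---
// Layout.astro: render the Umami snippet only in production builds,
// so `astro dev` sessions never register as visits.
const isProd = import.meta.env.PROD;
---
{isProd && (
  <script
    defer
    src="https://cloud.umami.is/script.js"
    data-website-id="your-website-id"
  ></script>
)}
```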
The Umami dashboard covers all the basics: visitors, page views, referrers, locations, devices, and bounce rate. It is everything you need to understand how your content is performing without compromising your readers' privacy.
The Publishing Workflow
I typically write about something I worked on professionally, a personal project, or a topic I consider myself an expert in. Here is the end-to-end process I follow from idea to published post:
1. Write the draft in Notion. I started out using Google Docs but since returning to the blog I migrated to Notion. It handles code blocks and tables properly and exports to Markdown in a way that actually translates correctly, which makes the next step much smoother.
2. Generate the Astro-ready Markdown file with frontmatter. Once the draft is ready I convert it into a Markdown file with the correct frontmatter fields — title, description, pubDate, author, image, and categories — making sure everything aligns with the RSS schema.
3. Deploy to ilean.me on Sunday morning. Sunday is my publishing day. I push the new post and deploy the site.
4. Dev.to picks up via RSS. Dev.to periodically polls the RSS feed URL, so the post will appear as a draft in my Dev.to dashboard within a few hours depending on when it last checked.
5. Post on Mastodon the same day. I share the post on Mastodon with hashtags, typically the same ones used in the post frontmatter for consistency.
6. Schedule LinkedIn for Monday morning. LinkedIn has a post scheduling feature which has been a game changer. I schedule it to go live Monday morning at 8-9 AM, when recruiters and professionals are most active, letting LinkedIn's algorithm circulate it to my network and extended network over the following days.
7. Review and publish the Dev.to draft. Once it lands in Dev.to I review it, add a series name if it is part of a multi-part post, set the tags, add a cover image from Pexels, and flip published to true.
Results So Far
I revived the blog in February and March of 2026. On Dev.to, the most views I received in a single day in February was 16. Since then that number has climbed to 36 views in a single day. As of April 25, 2026, Dev.to shows 435 total views, though the most recent week was down 24% from the week before, a good reminder that consistency matters.
On ilean.me, Umami shows 48 unique visitors so far. What surprised me was the geographic spread. A sizeable share of visitors come from Asia, which makes sense given that embedded systems and C++ are a common stack there. The majority, though, are from the US, with California showing up frequently, which tracks given the tech industry concentration there.
The numbers are small but the trend is encouraging. Search engine traffic compounds over time; every post indexed is another entry point for someone to find your work. The goal was never overnight virality, it was to slowly and consistently build an online presence, and the data shows that is exactly what is happening.
What I'd Do Differently
A few honest lessons from getting this pipeline up and running.
Double check your publication dates. One post did not show up on Dev.to on time and after some digging I realized the date in the frontmatter was off by a day. Dev.to pulls from the RSS feed based on the pubDate field, so if that is wrong the post gets skipped or delayed. Always verify the date before deploying.
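A small pre-deploy check can catch this class of mistake. Below is a sketch of a validator for the pubDate string; the function name and its strictness are my own choices for illustration, not part of the original pipeline:

```typescript
// Returns true only for a real calendar date in strict YYYY-MM-DD form.
function isValidPubDate(value: string): boolean {
  const match = /^(\d{4})-(\d{2})-(\d{2})$/.exec(value);
  if (!match) return false;
  const [y, m, d] = match.slice(1).map(Number);
  // Round-trip through Date to reject impossible dates like 2026-02-30,
  // which JavaScript's Date constructor would silently roll over to March.
  const date = new Date(y, m - 1, d);
  return (
    date.getFullYear() === y &&
    date.getMonth() === m - 1 &&
    date.getDate() === d
  );
}
```

Running something like this over every frontmatter block before a Sunday deploy turns a malformed date into a build failure instead of a silently skipped Dev.to import.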
Be patient with the RSS fetch timing. Dev.to does not poll your feed instantly. Depending on when it last checked, it could take several hours for a new post to appear as a draft. Do not panic and assume something is broken; just give it time.
Filter out your own visits. Early on I got excited seeing a visitor from Houston in Umami before realizing it was just me clicking my own links. The free Umami tier does not have a built-in way to exclude your own IP, so just keep that in mind when reading your early numbers and take them with a grain of salt.
Find your community. One thing I want to do going forward is find Houston based tech Discord servers with software engineers and developers to share my content in. Posting in relevant communities can accelerate growth in a way that passive RSS and social posts alone cannot.
Do not check analytics too often. It is tempting at the start but the numbers move slowly and checking constantly just creates unnecessary anxiety. Set a cadence, maybe once a week, and focus on writing the next post instead.
The whole point of building this pipeline was to remove friction, and it worked. There is no year-long gap waiting to happen again; there is just the next post.
