Someone visiting your GitHub profile wants to get to know you. They want to see who you are, what you build, what you write. But most profile READMEs are written once and then left untouched for years. A "latest posts" list from 2021, abandoned projects, a tech stack still waiting to be updated.
What if your README updated itself every morning?
In this article we will build a system that automatically pulls your Medium and dev.to articles, runs daily via GitHub Actions, and keeps your README alive. We will walk through every step: the Python script, the workflow file, placeholder markers, and manual triggering.
## 1. How Does It Work? 🔄
The system we are building has four parts:

- **Placeholder markers.** We place special HTML comment lines inside `README.md` (like `<!-- MEDIUM-ARTICLES:START -->`). The script finds the content between these markers and replaces it; it does not touch anything outside the markers.
- **The Python script.** It calls Medium's RSS feed and dev.to's REST API, collects article titles and URLs, then updates the README.
- **The GitHub Actions workflow.** It runs the script on a schedule, triggering every day at 06:00 UTC; if the script makes any changes, it automatically commits and pushes them.
- **Manual triggering.** You do not have to wait for the schedule. The moment you publish a new article, you can run the workflow with a single click from the Actions tab.
💡 The entire system runs on GitHub's own infrastructure. No external server, no paid service, and your machine does not need to be on. GitHub Actions is free for public repos.
## 2. What Is a GitHub Profile README? 👤
When you create a repository with the same name as your username, GitHub automatically displays the README.md inside it on your profile page. This is a special GitHub feature.
### How to Create the Profile README Repo

- Click the **New repository** button on GitHub
- Enter your own username as the repository name (e.g. `yasinatesim`)
- Make sure **Public** is selected
- Check the **Add a README file** checkbox
- Click **Create repository**
When the repo is created, GitHub greets you with "✨ yasinatesim/yasinatesim is a special repository." Everything you write in README.md starts appearing on your profile.
⚠️ The repository name must be exactly the same as your username, including case. `Yasinatesim` and `yasinatesim` are different repos, and the profile feature will not work.
## 3. Adding Placeholder Markers to the README 📍
How does the script know what to update in the README? Through special markers we place as HTML comment lines. These markers are invisible in the browser and on GitHub, but they act as coordinates for the script.
Add the following block wherever you want articles to appear in your README:
```markdown
## ✍️ Latest Medium Articles

<!-- MEDIUM-ARTICLES:START -->
<!-- MEDIUM-ARTICLES:END -->

## 📝 Latest dev.to Articles

<!-- DEVTO-ARTICLES:START -->
<!-- DEVTO-ARTICLES:END -->
```
When the script runs, it deletes everything between the two markers and writes the current article list in its place. It does not touch any line outside the markers. After the first run the output will look like this:
```markdown
## ✍️ Latest Medium Articles

<!-- MEDIUM-ARTICLES:START -->
- [Micro Frontend Architecture (with React Examples)](https://medium.com/...)
- [How We Integrated Google Lighthouse into Our Dev Process](https://medium.com/...) *(Hepsiburadatech)*
- [TypeScript from A to Z](https://medium.com/...)
<!-- MEDIUM-ARTICLES:END -->
```
💡 The example above is from my own GitHub Profile README. Publication info (the channel name in parentheses) is extracted automatically from the article URL by the script. When it sees a URL like `medium.com/publication/...`, it grabs the slug and adds it in parentheses next to the article title (shown on the second line above). For personal profile articles (`medium.com/@yasinatesim/...`) it shows no parentheses.
## 4. Python Script: Fetching Article Data 🐍

The entire script uses two external libraries: `feedparser` (for the Medium RSS feed) and `requests` (for the dev.to API). No other dependencies.

```
scripts/
└── fetch_articles.py
```

### 4.1 Basic Structure
"""
fetch_articles.py
-----------------
Fetches all articles from Medium RSS and the dev.to API,
then automatically updates the placeholders in README.md.
"""
import re
import feedparser
import requests
MEDIUM_USERNAME = "your-username" # without the @ sign
DEVTO_USERNAME = "your-username"
README_PATH = "README.md"
MEDIUM_START = "<!-- MEDIUM-ARTICLES:START -->"
MEDIUM_END = "<!-- MEDIUM-ARTICLES:END -->"
DEVTO_START = "<!-- DEVTO-ARTICLES:START -->"
DEVTO_END = "<!-- DEVTO-ARTICLES:END -->"
### 4.2 Fetching Medium Articles

Medium provides a public RSS feed for every user. `feedparser` converts this XML feed into Python objects. We do not set a limit; we process the entire `feed.entries` list.
```python
def fetch_medium_articles():
    url = f"https://medium.com/feed/@{MEDIUM_USERNAME}"
    feed = feedparser.parse(url)

    articles = []
    for entry in feed.entries:
        publication = None
        link = entry.link

        # Extract publication info from the URL:
        #   publication.medium.com/...       → publication on a subdomain
        #   medium.com/publication-slug/...  → publication in the path
        #   medium.com/@username/...         → personal profile, no publication
        subdomain_match = re.match(r"https://([^.]+)\.medium\.com/", link)
        path_match = re.match(r"https://medium\.com/([^@/][^/]*)/", link)

        if subdomain_match:
            slug = subdomain_match.group(1)
            publication = slug.replace("-", " ").title()
        elif path_match:
            slug = path_match.group(1)
            # Filter out system paths (tag, search, topic, etc.)
            system_paths = {"tag", "tags", "search", "topic", "topics",
                            "m", "about", "membership"}
            if slug not in system_paths:
                publication = slug.replace("-", " ").title()

        articles.append({
            "title": entry.title,
            "url": link,
            "publication": publication,
        })
    return articles
```
⚠️ Medium's RSS feed returns only the last 10 articles by default. This is a limit set by Medium; no matter what your script does, it cannot pull more than 10 via RSS. If you want to show all your articles, you will need to look into scraping approaches.
I do not yet have 10 published articles on Medium, so I have not integrated this into my own GitHub Profile README yet. 😁 I will update the repo once I cross 10 articles. The repo link is in the Demo section at the end.
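The URL-parsing rules are easy to sanity-check without hitting the network. Below is a standalone sketch of the same publication-extraction logic, run against hypothetical example URLs (the URLs and the helper name are illustrative, not part of the original script):

```python
import re

def extract_publication(link):
    """Return a display name for the Medium publication, or None."""
    subdomain_match = re.match(r"https://([^.]+)\.medium\.com/", link)
    path_match = re.match(r"https://medium\.com/([^@/][^/]*)/", link)
    system_paths = {"tag", "tags", "search", "topic", "topics",
                    "m", "about", "membership"}
    if subdomain_match:
        return subdomain_match.group(1).replace("-", " ").title()
    if path_match and path_match.group(1) not in system_paths:
        return path_match.group(1).replace("-", " ").title()
    return None

# Hypothetical URLs illustrating the three cases:
print(extract_publication("https://medium.com/hepsiburadatech/some-post"))   # Hepsiburadatech
print(extract_publication("https://better-programming.medium.com/a-post"))   # Better Programming
print(extract_publication("https://medium.com/@yasinatesim/a-post"))         # None
print(extract_publication("https://medium.com/tag/python/"))                 # None (system path)
```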
### 4.3 Fetching dev.to Articles

dev.to has an open REST API that requires no API key. Since it supports pagination, we can pull all articles with a `while` loop:
```python
def fetch_devto_articles():
    page, articles = 1, []
    while True:
        url = (
            f"https://dev.to/api/articles"
            f"?username={DEVTO_USERNAME}&per_page=100&page={page}"
        )
        response = requests.get(url, timeout=10)
        if response.status_code != 200:
            break
        batch = response.json()
        if not batch:  # Empty page → no more data
            break
        for item in batch:
            articles.append({
                "title": item["title"],
                "url": item["url"],
            })
        page += 1
    return articles
```
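The stop conditions (a non-200 status or an empty page) are the part of this loop worth testing. The same pattern can be exercised against a stubbed page fetcher instead of the live API (a self-contained sketch with hypothetical page data, no network needed):

```python
def paginate(fetch_page):
    """Collect items page by page until an error status or an empty page."""
    page, items = 1, []
    while True:
        status, batch = fetch_page(page)
        if status != 200 or not batch:
            break
        items.extend(batch)
        page += 1
    return items

# Stub: two full "pages", then an empty one, mimicking the dev.to API shape.
fake_pages = {1: (200, ["a", "b"]), 2: (200, ["c"]), 3: (200, [])}
result = paginate(lambda p: fake_pages[p])
print(result)  # ['a', 'b', 'c']
```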
### 4.4 Building Markdown Rows
```python
def build_medium_rows(articles):
    if not articles:
        return "_No articles found._"
    rows = []
    for a in articles:
        # Show publication in italic parentheses if present
        pub = f" *({a['publication']})*" if a.get("publication") else ""
        rows.append(f'- [{a["title"]}]({a["url"]}){pub}')
    return "\n".join(rows)


def build_devto_rows(articles):
    if not articles:
        return "_No articles found._"
    return "\n".join(
        f'- [{a["title"]}]({a["url"]})' for a in articles
    )
```
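Feeding a couple of hypothetical articles through the builder shows the exact Markdown that ends up between the markers (a standalone sketch; the function is repeated here with sample data so the snippet runs on its own):

```python
def build_medium_rows(articles):
    # Same function as above, repeated so this snippet runs standalone.
    if not articles:
        return "_No articles found._"
    rows = []
    for a in articles:
        pub = f" *({a['publication']})*" if a.get("publication") else ""
        rows.append(f'- [{a["title"]}]({a["url"]}){pub}')
    return "\n".join(rows)

sample = [
    {"title": "TypeScript from A to Z",
     "url": "https://medium.com/@someone/ts", "publication": None},
    {"title": "Lighthouse in Our Dev Process",
     "url": "https://medium.com/example/lh", "publication": "Hepsiburadatech"},
]
print(build_medium_rows(sample))
# - [TypeScript from A to Z](https://medium.com/@someone/ts)
# - [Lighthouse in Our Dev Process](https://medium.com/example/lh) *(Hepsiburadatech)*
print(build_medium_rows([]))
# _No articles found._
```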
### 4.5 Updating the README

We use regex to find the block between the markers and replace it with the new content. The `re.DOTALL` flag makes `.` match newlines as well; without it, multi-line blocks would not be captured.
```python
def replace_section(content, start_marker, end_marker, new_body):
    pattern = re.compile(
        rf"{re.escape(start_marker)}.*?{re.escape(end_marker)}",
        re.DOTALL,
    )
    replacement = f"{start_marker}\n{new_body}\n{end_marker}"
    return pattern.sub(replacement, content)


def main():
    print("📡 Fetching Medium articles...")
    medium_articles = fetch_medium_articles()
    print(f"   ✅ {len(medium_articles)} articles fetched.")

    print("📡 Fetching dev.to articles...")
    devto_articles = fetch_devto_articles()
    print(f"   ✅ {len(devto_articles)} articles fetched.")

    with open(README_PATH, "r", encoding="utf-8") as f:
        content = f.read()

    content = replace_section(
        content, MEDIUM_START, MEDIUM_END, build_medium_rows(medium_articles)
    )
    content = replace_section(
        content, DEVTO_START, DEVTO_END, build_devto_rows(devto_articles)
    )

    with open(README_PATH, "w", encoding="utf-8") as f:
        f.write(content)
    print("✅ README.md updated successfully!")


if __name__ == "__main__":
    main()
```
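A quick way to see the marker replacement in action is to run it on a small in-memory string instead of a real file (a self-contained sketch; the helper is repeated here and the README content is made up):

```python
import re

def replace_section(content, start_marker, end_marker, new_body):
    # Same helper as above, repeated so this snippet runs standalone.
    pattern = re.compile(
        rf"{re.escape(start_marker)}.*?{re.escape(end_marker)}",
        re.DOTALL,
    )
    replacement = f"{start_marker}\n{new_body}\n{end_marker}"
    return pattern.sub(replacement, content)

readme = (
    "# Hi!\n"
    "<!-- MEDIUM-ARTICLES:START -->\n"
    "- [Old article](https://medium.com/old)\n"
    "<!-- MEDIUM-ARTICLES:END -->\n"
    "Footer stays untouched.\n"
)
updated = replace_section(
    readme,
    "<!-- MEDIUM-ARTICLES:START -->",
    "<!-- MEDIUM-ARTICLES:END -->",
    "- [New article](https://medium.com/new)",
)
print(updated)
```

Everything between the two markers is swapped out; the heading above and the footer below survive untouched.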
## 5. GitHub Actions Workflow File ⚙️

The workflow file goes inside the `.github/workflows/` folder. GitHub automatically scans this folder and treats the YAML files in it as workflows.

```
.github/
└── workflows/
    └── update-articles.yml
```

File contents:
```yaml
name: 📝 Update Blog Articles

on:
  schedule:
    - cron: "0 6 * * *" # Every day at 06:00 UTC
  workflow_dispatch:    # For manual triggering from the Actions tab

jobs:
  update-readme:
    name: Fetch & Update Articles
    runs-on: ubuntu-latest
    steps:
      - name: 🔄 Checkout Repository
        uses: actions/checkout@v4

      - name: 🐍 Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: 📦 Install Dependencies
        run: pip install requests feedparser

      - name: 🚀 Run Article Fetcher
        run: python scripts/fetch_articles.py

      - name: 💾 Commit & Push Changes
        run: |
          git config --local user.email "github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add README.md
          git diff --staged --quiet || git commit -m "📝 Auto-update: Latest blog articles [$(date +'%Y-%m-%d')]"
          git push
```
- `actions/checkout@v4` -- Copies your repo to the virtual machine (`ubuntu-latest`) where the workflow runs. Without it the script cannot find `README.md`.
- `actions/setup-python@v5` -- Sets up the Python 3.11 environment. Even though GitHub Actions runners come with Python pre-installed, pinning the version is safer.
- `pip install requests feedparser` -- Installs the two dependencies.
- `python scripts/fetch_articles.py` -- Runs the script. This step updates `README.md` but does not commit yet.
- **Commit & Push step** -- The critical line is `git diff --staged --quiet || git commit -m "..."`. `git diff --staged --quiet` exits with a non-zero code when there are staged changes in the README; thanks to the `||` operator, a commit is made only when there are changes, and nothing happens otherwise. Without this guard, `git commit` would fail with "nothing to commit" on days with no changes and mark the workflow run as failed.
💡 Cron syntax: `0 6 * * *` means minute=0, hour=6, every day of the month, every month, every day of the week. Keep in mind it uses UTC. For a different time, use crontab.guru.
## 6. Manual Triggering 🖱️
The cron job runs every morning, but you might not want to wait after publishing a new article. The workflow_dispatch trigger lets you run it whenever you want:
- Go to your repository on GitHub
- Click the **Actions** tab at the top
- Click **📝 Update Blog Articles** in the left menu
- Click the **Run workflow** button that appears on the right
- Click **Run workflow** again in the dropdown that opens
The workflow starts within a few seconds. A yellow spinning icon appears in the left menu; once it finishes it turns green ✅. When you look at your README, you will see the updated article list.
## 7. Demo 👀

You can see a working example of the system described in this article on my own GitHub profile. Medium and dev.to articles update automatically every morning.

The full source code, including `fetch_articles.py` and `update-articles.yml`, is available in the repo below:
👉 github.com/yasinatesim/yasinatesim
## Feedback 📬
While writing this article, I used Claude Opus 4.6 Thinking for proofreading and research, and Gemini 3.1 Flash Image Preview (Nano Banana 2) for generating the diagrams.
Feedback, suggestions, and corrections are always welcome. You can reach me through the social media links on my website or on LinkedIn.
Best, Yasin 🤗
## Resources 📚

**Docs**
- GitHub Actions -- Workflow syntax
- GitHub Actions -- Events (schedule, workflow_dispatch)
- dev.to API Docs
- feedparser Docs
**Tools**
- crontab.guru -- Visually test cron expressions
- GitHub Secrets Docs -- Store API keys securely