Keep Your GitHub Profile in Sync with Your YouTube Playlist (Updates Automatically)

Eleftheria Batsou

Introduction

Developers often have content scattered across platforms—code on GitHub, videos on YouTube, posts on blogs. Keeping a GitHub profile up to date with your latest work can be tedious. In this article, we’ll walk through a practical automation that updates a GitHub profile README with the latest videos from a YouTube playlist every 6 hours.

We’ll cover:

  • What we built and why it’s useful
  • How the automation works end-to-end
  • The exact code (workflow + Python script)
  • Limitations and gotchas
  • Ideas for future improvements

This was implemented using GitHub Actions and a small Python script—no external APIs or tokens required. We’ll also show how Cosine can help you build and maintain automations like this faster.

What we built

We added a section in the GitHub profile README (in my case, placed above “Recent Blog Posts”) that displays the latest 4 videos from a specific YouTube playlist in a 2×2 grid. Each item shows:

  • Video thumbnail
  • Title
  • Publication date
  • A link to the YouTube video (opens in a new tab)

The grid is regenerated every 6 hours by a GitHub Action. If there are no changes, nothing is committed; if there are new videos, the README is updated and pushed. (You can, of course, adjust how often the action runs and how the videos are laid out.)

Why this is useful

  • Visibility: Visitors to your GitHub profile immediately see fresh content without you lifting a finger.
  • Single source of truth: Your YouTube playlist becomes the driver; update it and your profile follows.
  • No tokens required: We use YouTube’s public RSS feed, so there’s no need to set up API credentials (a quick check of the feed follows this list).
  • Low maintenance: Scheduling + idempotent commit logic means this runs quietly in the background.
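
As that quick check, here is a minimal sketch that uses only the Python standard library to fetch the public playlist feed and print the entry titles, with no token involved. The playlist ID is a placeholder; swap in your own.

import urllib.request
import xml.etree.ElementTree as ET

# Placeholder playlist ID; replace with your own.
FEED_URL = "https://www.youtube.com/feeds/videos.xml?playlist_id=YOUR_PLAYLIST_ID"

with urllib.request.urlopen(FEED_URL) as resp:
    root = ET.fromstring(resp.read())

# Atom namespace used by YouTube feeds.
ATOM = "{http://www.w3.org/2005/Atom}"
for entry in root.findall(f"{ATOM}entry"):
    print(entry.find(f"{ATOM}title").text)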

Architecture overview

1) Scheduled workflow (GitHub Actions)

  • Runs every 6 hours and optionally on demand
  • Checks out the repo
  • Executes a Python script

2) Python script

  • Fetches your playlist feed: https://www.youtube.com/feeds/videos.xml?playlist_id=... (I went with a specific playlist, but you can just as easily pull from your whole YouTube channel)
  • Parses the XML (title, video id, publish date)
  • Renders a 2×2 HTML table grid
  • Inserts the grid between markers in the README:
    • <!-- YOUTUBE:GRID_START -->
    • <!-- YOUTUBE:GRID_END -->
  • Writes and commits only if the content has changed

3) README markers

  • Provide a clear insertion point so the script can update that specific section safely.

The code

1) Action workflow: .github/workflows/update-readme.yml

name: Update GitHub Profile with YouTube videos

on:
  workflow_dispatch:
  schedule:
    - cron: "0 */6 * * *"  # every 6 hours

jobs:
  update-readme:
    runs-on: ubuntu-latest
    permissions:
      contents: write  # needed to push if the repo's default workflow permissions are read-only
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Update README with latest YouTube videos
        env:
          PLAYLIST_ID: PLxktx98zP3aC8GM3HVRylRZYnCJeZ96vE
          MAX_ITEMS: "4"
          README_PATH: "README.md"
          START_MARK: "<!-- YOUTUBE:GRID_START -->"
          END_MARK: "<!-- YOUTUBE:GRID_END -->"
        run: |
          python scripts/update_youtube_readme.py

      - name: Commit changes
        run: |
          if git diff --quiet; then
            echo "No changes."
          else
            git config user.name "github-actions[bot]"
            git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
            git add README.md
            git commit -m "chore: update YouTube videos grid in README"
            git push
          fi
  • Schedule: 6-hour cron (0 */6 * * *)
  • No extra dependencies needed beyond Python
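
Before relying on the schedule, it can help to run the script once locally with the same environment variables the workflow sets. A minimal sketch (the playlist ID is mine; use your own):

import os
import subprocess

# Mirror the env block from the workflow, then run the updater once.
os.environ.update({
    "PLAYLIST_ID": "PLxktx98zP3aC8GM3HVRylRZYnCJeZ96vE",
    "MAX_ITEMS": "4",
    "README_PATH": "README.md",
})
subprocess.run(["python", "scripts/update_youtube_readme.py"], check=True)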

2) Script: scripts/update_youtube_readme.py

import os
import sys
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from typing import List, Dict

PLAYLIST_ID = os.getenv("PLAYLIST_ID", "").strip()
MAX_ITEMS = int(os.getenv("MAX_ITEMS", "4"))
README_PATH = os.getenv("README_PATH", "README.md")
START_MARK = os.getenv("START_MARK", "<!-- YOUTUBE:GRID_START -->")
END_MARK = os.getenv("END_MARK", "<!-- YOUTUBE:GRID_END -->")

NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "yt": "http://www.youtube.com/xml/schemas/2015"
}

def fetch_feed(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def parse_entries(feed_xml: str) -> List[Dict]:
    root = ET.fromstring(feed_xml)
    entries = []
    for entry in root.findall("atom:entry", NS):
        title_el = entry.find("atom:title", NS)
        vid_el = entry.find("yt:videoId", NS)
        pub_el = entry.find("atom:published", NS)
        link_el = entry.find("atom:link[@rel='alternate']", NS)
        if link_el is None:
            # Fall back to the first <link>; avoid `or` here because ElementTree
            # elements without children are falsy.
            link_el = entry.find("atom:link", NS)

        title = title_el.text if title_el is not None else ""
        video_id = vid_el.text if vid_el is not None else ""
        published = pub_el.text if pub_el is not None else ""
        url = f"https://www.youtube.com/watch?v={video_id}" if video_id else (link_el.attrib.get("href", "") if link_el is not None else "")

        entries.append({
            "title": title,
            "video_id": video_id,
            "published": published,
            "url": url
        })
    return entries

def iso_to_dt(s: str) -> datetime:
    try:
        return datetime.fromisoformat(s.replace("Z", "+00:00"))
    except Exception:
        return datetime.now(timezone.utc)

def render_thumbnail_url(video_id: str) -> str:
    # Use high-quality default thumbnail
    return f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"

def render_html_grid(items: List[Dict]) -> str:
    # 2x2 table grid to avoid CSS that GitHub might strip. Click through opens in new tab.
    # Note: GitHub does not allow inline playback of YouTube iframes; we show thumbnails + titles.
    rows = []
    display = items[:MAX_ITEMS]
    for i in range(0, len(display), 2):
        chunk = display[i:i+2]
        tds = []
        for e in chunk:
            vid = e["video_id"]
            thumb = render_thumbnail_url(vid) if vid else ""
            url = e["url"]
            title = e["title"].strip()
            date = iso_to_dt(e["published"]).date().isoformat()
            cell = (
                f"<td align=\"center\" valign=\"top\" width=\"50%\">"
                f"  <a href=\"{url}\" target=\"_blank\" rel=\"noopener noreferrer\">"
                f"    <img src=\"{thumb}\" alt=\"{title}\" style=\"width:100%; max-width:320px; border-radius:8px;\" />"
                f"  </a>"
                f"  <br/>"
                f"  <a href=\"{url}\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>{title}</strong></a>"
                f"  <br/><em>{date}</em>"
                f"</td>"
            )
            tds.append(cell)
        while len(tds) < 2:
            tds.append("<td width=\"50%\"></td>")
        rows.append("<tr>" + "".join(tds) + "</tr>")
    return "<table>" + "".join(rows) + "</table>"

def update_readme_section(readme_text: str, new_block: str) -> str:
    if START_MARK in readme_text and END_MARK in readme_text:
        before = readme_text.split(START_MARK)[0]
        after = readme_text.split(END_MARK)[1]
        return f"{before}{START_MARK}\n{new_block}\n{END_MARK}{after}"

    insert_header = "#### Recent Blog Posts"
    idx = readme_text.find(insert_header)
    if idx != -1:
        before = readme_text[:idx]
        after = readme_text[idx:]
        section_title = "#### Latest YouTube Videos\n"
        block = f"{section_title}{START_MARK}\n{new_block}\n{END_MARK}\n"
        return before + block + after

    sep = "" if readme_text.endswith("\n") else "\n"
    section = f"\n#### Latest YouTube Videos\n{START_MARK}\n{new_block}\n{END_MARK}\n"
    return f"{readme_text}{sep}{section}"

def main():
    if not PLAYLIST_ID:
        print("Error: Provide PLAYLIST_ID.", file=sys.stderr)
        sys.exit(1)

    pl_url = f"https://www.youtube.com/feeds/videos.xml?playlist_id={PLAYLIST_ID}"
    feed_xml = fetch_feed(pl_url)

    entries = parse_entries(feed_xml)
    entries.sort(key=lambda e: iso_to_dt(e["published"]), reverse=True)

    html_grid = render_html_grid(entries)

    with open(README_PATH, "r", encoding="utf-8") as f:
        readme = f.read()

    updated = update_readme_section(readme, html_grid)

    if updated != readme:
        with open(README_PATH, "w", encoding="utf-8") as f:
            f.write(updated)
        print("README updated.")
    else:
        print("No changes required.")

if __name__ == "__main__":
    main()

3) README markers and placement

We added this section above “Recent Blog Posts”:

#### Latest YouTube Videos
<!-- YOUTUBE:GRID_START -->
<!-- YOUTUBE:GRID_END -->

On each run, the script replaces the content between the markers with the latest grid.

Limitations and gotchas

  • No inline playback: GitHub sanitizes iframes, so you cannot embed playable YouTube videos directly in a README. Thumbnails + links are the safest, most reliable approach.
  • Rate limiting: The RSS feed is public and lightweight, but avoid overly aggressive schedules. Six hours is a good balance.
  • Images: YouTube thumbnails are hotlinked from i.ytimg.com. If you want full control, you could cache images in the repo (see the sketch after this list), but that will grow the repository over time.
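
For the image-caching option mentioned above, here is a rough sketch; the assets/thumbs directory and the cache_thumbnail helper are hypothetical names, not part of the script in this post:

import os
import urllib.request

def cache_thumbnail(video_id: str, out_dir: str = "assets/thumbs") -> str:
    # Download each thumbnail once into the repo so the README can reference
    # a local copy instead of hotlinking i.ytimg.com.
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{video_id}.jpg")
    if not os.path.exists(path):
        urllib.request.urlretrieve(
            f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg", path
        )
    return path  # relative path you could use as the img src in the grid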

Future improvements

  • Channel uploads merge: Combine playlist feed with full channel feed (https://www.youtube.com/feeds/videos.xml?channel_id=...) and deduplicate.
  • Template rendering: Allow custom HTML templates for different layouts (e.g., 1×4 row, 2×2 grid, with/without dates).
  • Caching: Save the last processed video ID (e.g., in a JSON file) to skip parsing when nothing has changed.
  • Fallback thumbnails: Use maxresdefault.jpg when available; gracefully fall back to hqdefault.jpg (see the sketch after this list).
  • Rich metadata: Show video duration or description snippets by scraping the watch page (be mindful of terms of service).
  • Cross-promotion: Also update pinned repositories or create a standalone page that lists videos with filters.
  • Notifications: Send a Slack/Discord message when a new video is detected.
  • Unit tests: Add minimal tests for XML parsing and rendering to keep the script stable.
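
As a concrete example of the fallback-thumbnails idea, here is a small sketch; it assumes YouTube answers with an HTTP error when maxresdefault.jpg doesn't exist, which is the commonly observed behavior:

import urllib.error
import urllib.request

def best_thumbnail(video_id: str) -> str:
    # Try the high-resolution thumbnail first; fall back to hqdefault.jpg
    # if the request fails (missing image or network error).
    maxres = f"https://i.ytimg.com/vi/{video_id}/maxresdefault.jpg"
    try:
        req = urllib.request.Request(maxres, method="HEAD")
        with urllib.request.urlopen(req):
            return maxres
    except urllib.error.URLError:
        return f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"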

How Cosine helps

Cosine builds tools that automate developer workflows end-to-end. Using Genie (an AI Software Engineer by Cosine), you can:

  • Specify changes at a high level (“Add a workflow to update my README from a YouTube playlist every 6 hours”).
  • Have Genie generate and integrate code directly into your repository, following your style and constraints.
  • Iterate safely: Genie reads your files, makes minimal diffs, and respects existing conventions.
  • Scale this pattern: From YouTube feeds to blog sync, conference talks, or release notes—Genie can wire up these automations quickly.

Learn more at Cosine

Conclusion

Keeping your GitHub profile fresh shouldn’t require manual updates. With a small Python script and a scheduled GitHub Action, your latest YouTube content can appear automatically in your README, giving visitors a quick snapshot of what you’ve been working on. This automation is simple, fast, and extensible—and with Cosine, you can go beyond this use case to build and maintain many similar developer-friendly workflows with minimal friction.

If you’d like to add channel uploads, notifications, or a custom layout, it’s a small extension from here. Happy automating!

Top comments (2)

Emilio Acevedo

This is a fantastic solution. I love the elegance and simplicity of the approach, especially using the public RSS feed to avoid the complexity of tokens and APIs.

This is more than just a neat README trick; it's a perfect example of a DevOps mindset applied to our own personal brand: automating to maintain consistency and eliminate manual toil. It's a microcosm of what we strive for in large-scale systems.

Thinking from a mission-critical systems perspective, the natural next step that comes to mind is adding a small health check. For instance, a simple notification (perhaps to a Slack or Discord channel) if the script fails repeatedly or the RSS feed is unavailable. It ensures our own automation doesn't become a blind spot.

Thanks for sharing such a practical and well-explained walkthrough.

Eleftheria Batsou

Thank you so much Emilio!