<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sameer zubair</title>
    <description>The latest articles on DEV Community by sameer zubair (@sameer_zubair_37ae31f4fb5).</description>
    <link>https://dev.to/sameer_zubair_37ae31f4fb5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1713129%2F2683c2f3-e2ff-46b8-b212-2e694857d082.png</url>
      <title>DEV Community: sameer zubair</title>
      <link>https://dev.to/sameer_zubair_37ae31f4fb5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sameer_zubair_37ae31f4fb5"/>
    <language>en</language>
    <item>
      <title>WhiteboardIQ: From Blurry Whiteboard Photo to Structured Action Items with Gemma 4 E4B</title>
      <dc:creator>sameer zubair</dc:creator>
      <pubDate>Tue, 12 May 2026 23:15:47 +0000</pubDate>
      <link>https://dev.to/sameer_zubair_37ae31f4fb5/whiteboardiq-from-blurry-whiteboard-photo-to-structured-action-items-with-gemma-4-e4b-4ifg</link>
      <guid>https://dev.to/sameer_zubair_37ae31f4fb5/whiteboardiq-from-blurry-whiteboard-photo-to-structured-action-items-with-gemma-4-e4b-4ifg</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-gemma-2026-05-06"&gt;Gemma 4 Challenge: Build with Gemma 4&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;WhiteboardIQ — snap a photo of any whiteboard and get back a clean, structured list of action items, owners, deadlines, and priorities in seconds.&lt;/p&gt;

&lt;p&gt;Every team has been there: 45 minutes of productive planning, three whiteboards full of tasks and names, then someone takes a blurry phone photo and that's "the notes." Two days later nobody remembers who owned what.&lt;/p&gt;

&lt;p&gt;WhiteboardIQ fixes that. It reads the whiteboard image with Gemma 4's native vision and returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Action items&lt;/strong&gt; with owner, deadline, and priority (inferred from visual cues — circles = High, boxes = Medium, plain text = Low)&lt;/li&gt;
&lt;li&gt;🏛️ &lt;strong&gt;Decisions&lt;/strong&gt; made during the session&lt;/li&gt;
&lt;li&gt;❓ &lt;strong&gt;Open questions and blockers&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;📋 &lt;strong&gt;Full verbatim transcription&lt;/strong&gt; of the whiteboard&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;2–3 sentence executive summary&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
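&lt;p&gt;For a concrete picture, here is a hedged sketch of what one extraction result might look like as JSON; the field names are illustrative assumptions based on the outputs listed above, not the app's exact schema:&lt;/p&gt;

```python
# Hypothetical example of a WhiteboardIQ extraction result.
# Field names are assumptions based on the outputs listed above.
example_result = {
    "summary": "Sprint planning: three tasks assigned, launch date confirmed.",
    "action_items": [
        {"task": "DB migration", "owner": "John", "deadline": "Friday",
         "priority": "High", "notes": "circled on the board"},
    ],
    "decisions": ["Ship v2 behind a feature flag"],
    "questions": ["Who owns the staging environment?"],
    "transcription": "DB migration - John - Friday",
}

print(sorted(example_result.keys()))
# ['action_items', 'decisions', 'questions', 'summary', 'transcription']
```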

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;


&lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/R0BXbDkJw-w"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;🔗 Web app + backend: &lt;a href="https://github.com/samirzubair/GEMMA4" rel="noopener noreferrer"&gt;github.com/samirzubair/GEMMA4&lt;/a&gt;&lt;br&gt;
🔗 Edge Gallery skill: &lt;a href="https://samirzubair.github.io/GEMMA4/SKILL.md" rel="noopener noreferrer"&gt;samirzubair.github.io/GEMMA4/SKILL.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Project structure:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;whiteboardiq/
├── backend/
│   ├── main.py        # FastAPI — POST /extract, serves frontend
│   ├── model.py       # Gemma 4 via Ollama REST API (no SDK needed)
│   └── formatter.py   # JSON → Markdown / CSV
└── frontend/
    ├── index.html     # Drag-and-drop upload UI
    ├── style.css      # Dark-mode design system
    └── app.js         # Fetch, render, copy, download
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;whiteboardiq-skill/    # Google AI Edge Gallery skill
├── SKILL.md           # Skill instructions for Gemma 4
├── scripts/
│   └── index.html     # run_js entry point
└── assets/
    └── webview.html   # Renders action items card in-app
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The Gemma integration — no SDK, just Ollama REST:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def extract_from_image_bytes(image_bytes: bytes, mime_type="image/jpeg") -&amp;gt; dict:
    payload = {
        "model": "gemma4:e4b",
        "prompt": EXTRACTION_PROMPT,
        "images": [base64.b64encode(image_bytes).decode()],
        "stream": False,
        "options": {"temperature": 0.2, "num_predict": 4096},
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return parse_json(json.loads(resp.read())["response"])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;code&gt;temperature: 0.2&lt;/code&gt; keeps extraction grounded — higher values caused the model to hallucinate owners or deadlines not on the board.&lt;/p&gt;
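&lt;p&gt;The &lt;code&gt;parse_json&lt;/code&gt; helper isn't shown in the post. A minimal version, assuming the model sometimes wraps the JSON in prose or code fences, could look like this:&lt;/p&gt;

```python
import json

def parse_json(raw_text):
    # Sketch of the parse_json helper referenced above (not shown in the
    # post): slice from the first brace to the last one, then json.loads.
    start = raw_text.index("{")
    end = raw_text.rindex("}")
    return json.loads(raw_text[start:end + 1])

print(parse_json('Sure! Here is the JSON: {"action_items": []}'))
# {'action_items': []}
```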

&lt;h2&gt;
  
  
  How I Used Gemma 4
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Native multimodal vision.&lt;/strong&gt; Gemma 4 handles image + text in a single inference call. No separate OCR pipeline, no two-model stitching. The whiteboard photo goes in as a base64 blob alongside the structured prompt, and JSON comes out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual reasoning, not just OCR.&lt;/strong&gt; Raw OCR gives you text; Gemma 4 understands context. It sees that a circled word is higher priority than plain text. It infers that a name written beside a task is the owner. It recognises that an arrow between two items implies a dependency. That's the difference between a transcription and an action list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed that feels real-time.&lt;/strong&gt; E4B at Q4_K_M quantization runs in ~8 seconds on a MacBook for a typical whiteboard photo. The 27B dense model gives marginally better handwriting recognition on very messy boards, but for a live demo and real-world enterprise use, E4B hits the sweet spot of accuracy vs. latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privacy — the killer feature for enterprise.&lt;/strong&gt; Meeting content is sensitive. With Gemma 4 running locally via Ollama, whiteboard photos never leave the machine. No closed API, no data-retention policy to worry about. The entire app runs without an internet connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;128K context window.&lt;/strong&gt; Not used in the MVP, but the obvious next step: pass all whiteboard photos from a multi-hour session in one call and get unified, deduplicated action items across all boards. Only possible because of the large context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The prompt engineering.&lt;/strong&gt; The extraction prompt has three key rules that made the difference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract EVERY action item, even implicit ones (e.g., "John → DB migration" = a task for John)&lt;/li&gt;
&lt;li&gt;Infer priority from visual cues: circled/starred/underlined = High, boxed = Medium, plain = Low&lt;/li&gt;
&lt;li&gt;A name written directly beside a task = that person is the owner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without the implicit-task rule, ~40% of action items were missed. Without the visual-cue rule, all priorities came back as Medium. Gemma 4's instruction-following is strong enough to respect these rules reliably across very different whiteboard styles and handwriting quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge Gallery skill.&lt;/strong&gt; The skill uses Gemma 4's agent mode via the &lt;code&gt;run_js&lt;/code&gt; tool:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User sends a whiteboard photo in Edge Gallery chat&lt;/li&gt;
&lt;li&gt;Gemma reads the image with native vision&lt;/li&gt;
&lt;li&gt;Gemma calls &lt;code&gt;run_js&lt;/code&gt; with structured JSON (action items, decisions, questions)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;scripts/index.html&lt;/code&gt; passes the data to &lt;code&gt;assets/webview.html&lt;/code&gt; via URL params&lt;/li&gt;
&lt;li&gt;A dark-mode card renders inline in the chat with priority badges and owner chips&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Instructions (from SKILL.md):&lt;/strong&gt;&lt;/p&gt;
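&lt;p&gt;The three rules above could be encoded in an extraction prompt along these lines. This is a reconstruction for illustration; the actual &lt;code&gt;EXTRACTION_PROMPT&lt;/code&gt; in &lt;code&gt;model.py&lt;/code&gt; may be worded differently:&lt;/p&gt;

```python
# Reconstruction of an extraction prompt from the three rules above.
# The real EXTRACTION_PROMPT in model.py may differ.
EXTRACTION_PROMPT = """You are reading a photo of a meeting whiteboard.
Return ONLY valid JSON with keys: action_items, decisions, questions,
transcription, summary.

Rules:
1. Extract EVERY action item, even implicit ones
   (e.g. "John - DB migration" means a task owned by John).
2. Infer priority from visual cues:
   circled/starred/underlined = High, boxed = Medium, plain = Low.
3. A name written directly beside a task is that task's owner.
"""

print("boxed = Medium" in EXTRACTION_PROMPT)
# True
```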

&lt;p&gt;Call the &lt;code&gt;run_js&lt;/code&gt; tool using &lt;code&gt;index.html&lt;/code&gt; and a JSON string for &lt;code&gt;data&lt;/code&gt; with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;action_items&lt;/code&gt;: Array with task, owner, deadline, priority, notes&lt;/li&gt;
&lt;li&gt;&lt;code&gt;decisions&lt;/code&gt;: Array of strings&lt;/li&gt;
&lt;li&gt;&lt;code&gt;questions&lt;/code&gt;: Array of strings&lt;/li&gt;
&lt;li&gt;&lt;code&gt;meeting_context&lt;/code&gt;, &lt;code&gt;summary&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The skill is live at &lt;a href="https://samirzubair.github.io/GEMMA4/SKILL.md" rel="noopener noreferrer"&gt;https://samirzubair.github.io/GEMMA4/SKILL.md&lt;/a&gt; — installable in any Edge Gallery instance in seconds.&lt;/p&gt;
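&lt;p&gt;Putting that field list together, the &lt;code&gt;data&lt;/code&gt; string handed to &lt;code&gt;run_js&lt;/code&gt; might look like this (values invented for illustration):&lt;/p&gt;

```python
import json

# Illustrative payload matching the SKILL.md field list; values are invented.
data = json.dumps({
    "action_items": [
        {"task": "DB migration", "owner": "John", "deadline": "Friday",
         "priority": "High", "notes": ""},
    ],
    "decisions": ["Launch stays on the 15th"],
    "questions": ["Budget approval status?"],
    "meeting_context": "Sprint planning",
    "summary": "Three tasks assigned; launch date confirmed.",
})

print(json.loads(data)["summary"])
# Three tasks assigned; launch date confirmed.
```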

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljqq52r4ba9afcx3xaye.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljqq52r4ba9afcx3xaye.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tiagt00lixcrmx822bm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tiagt00lixcrmx822bm.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2a3975vmfklgx1sxbkf.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2a3975vmfklgx1sxbkf.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnepu3fpeh83th5ut3ly8.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnepu3fpeh83th5ut3ly8.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsywommn6p989yajf305p.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsywommn6p989yajf305p.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61ydh5h3huksyyr0c92j.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61ydh5h3huksyyr0c92j.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fn9bz2d8kogwss0mzvs.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fn9bz2d8kogwss0mzvs.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>gemmachallenge</category>
      <category>gemma</category>
    </item>
    <item>
      <title>From Blurry Whiteboard Photo to Structured Action Items with Gemma 4 E4B</title>
      <dc:creator>sameer zubair</dc:creator>
      <pubDate>Tue, 12 May 2026 23:07:57 +0000</pubDate>
      <link>https://dev.to/sameer_zubair_37ae31f4fb5/from-blurry-whiteboard-photo-to-structured-action-items-with-gemma-4-e4b-23hm</link>
      <guid>https://dev.to/sameer_zubair_37ae31f4fb5/from-blurry-whiteboard-photo-to-structured-action-items-with-gemma-4-e4b-23hm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxd4tcymkxwdbw62zuyt1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxd4tcymkxwdbw62zuyt1.PNG" alt=" " width="800" height="1738"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;WhiteboardIQ&lt;/strong&gt; — snap a photo of any whiteboard and get back a clean, structured list of action items, owners, deadlines, and priorities in seconds.&lt;/p&gt;

&lt;p&gt;Every team has been there: 45 minutes of productive planning, three whiteboards full of tasks and names, then someone takes a blurry phone photo and that's "the notes." Two days later nobody remembers who owned what.&lt;/p&gt;

&lt;p&gt;WhiteboardIQ fixes that. It reads the whiteboard image with Gemma 4's native vision and returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Action items&lt;/strong&gt; with owner, deadline, and priority (inferred from visual cues — circles = High, boxes = Medium, plain text = Low)&lt;/li&gt;
&lt;li&gt;🏛️ &lt;strong&gt;Decisions&lt;/strong&gt; made during the session&lt;/li&gt;
&lt;li&gt;❓ &lt;strong&gt;Open questions and blockers&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;📋 &lt;strong&gt;Full verbatim transcription&lt;/strong&gt; of the whiteboard&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;2–3 sentence executive summary&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Export as &lt;strong&gt;JSON&lt;/strong&gt;, &lt;strong&gt;Markdown&lt;/strong&gt;, or &lt;strong&gt;CSV&lt;/strong&gt; — paste straight into Notion, Confluence, or a spreadsheet.&lt;/p&gt;
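&lt;p&gt;As a rough sketch of the Markdown export (the real &lt;code&gt;formatter.py&lt;/code&gt; may differ), the action items can be rendered as a table like this:&lt;/p&gt;

```python
# Hedged sketch of the JSON-to-Markdown step in formatter.py.
# Column names are assumptions based on the action-item fields above.
def to_markdown(action_items):
    lines = ["| Task | Owner | Deadline | Priority |",
             "| --- | --- | --- | --- |"]
    for item in action_items:
        lines.append(
            "| {task} | {owner} | {deadline} | {priority} |".format(**item))
    return "\n".join(lines)

print(to_markdown([{"task": "DB migration", "owner": "John",
                    "deadline": "Fri", "priority": "High"}]))
```

&lt;p&gt;The resulting table pastes cleanly into Notion or Confluence as mentioned above.&lt;/p&gt;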

&lt;p&gt;&lt;strong&gt;Three ways to use it:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Stack&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Web app&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;FastAPI + drag-and-drop UI. Gemma 4 via Ollama — no API key, fully offline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Edge Gallery skill&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Install into Google AI Edge Gallery. Gemma 4 reads and structures whiteboards inline&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Web app — drop a photo, get action items in ~8 seconds:&lt;/strong&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
# Prerequisites: Ollama running with Gemma 4
ollama pull gemma4:e4b

cd whiteboardiq/backend
pip install -r requirements.txt
uvicorn main:app --reload
# Open http://127.0.0.1:8000
Edge Gallery skill — install in 10 seconds:

Open Edge Gallery → Agent → Skills → +

Gemma reads the board, extracts every task, and renders a live card with priority badges, owners, and deadlines — on-device, no internet required.

Install locally on iPhone (no URL needed):

AirDrop the whiteboardiq-skill/ folder to iPhone
Unzip in Files app
Edge Gallery → Skills → + → Import from file → select the folder


🔗 Web app + backend: github.com/samirzubair/GEMMA4

🔗 Edge Gallery skill: samirzubair.github.io/GEMMA4/SKILL.md

Project structure:

whiteboardiq/
├── backend/
│   ├── main.py        # FastAPI — POST /extract, serves frontend
│   ├── model.py       # Gemma 4 via Ollama REST API (no SDK needed)
│   └── formatter.py   # JSON → Markdown / CSV
└── frontend/
    ├── index.html     # Drag-and-drop upload UI
    ├── style.css      # Dark-mode design system
    └── app.js         # Fetch, render, copy, download
whiteboardiq-skill/    # Google AI Edge Gallery skill
├── SKILL.md           # Skill instructions for Gemma 4
├── scripts/
│   └── index.html     # run_js entry point — relays data to webview
└── assets/
    └── webview.html   # Renders action items card in-app
The Gemma integration — no SDK, just Ollama REST:

def extract_from_image_bytes(image_bytes: bytes, mime_type="image/jpeg") -&amp;gt; dict:
    payload = {
        "model": "gemma4:e4b",
        "prompt": EXTRACTION_PROMPT,
        "images": [base64.b64encode(image_bytes).decode()],
        "stream": False,
        "options": {"temperature": 0.2, "num_predict": 4096},
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return parse_json(json.loads(resp.read())["response"])
temperature: 0.2 keeps extraction grounded — higher values caused the model to hallucinate owners or deadlines not visible on the board.

The Edge Gallery skill
The skill uses Gemma 4's agent mode:

User sends whiteboard photo in Edge Gallery chat
Gemma reads the image with native vision
Gemma calls run_js with structured JSON (action items, decisions, questions)
scripts/index.html passes data to assets/webview.html via URL params
A dark-mode card renders inline with priority badges, owner chips, and deadlines
## Instructions (from SKILL.md)

Call the `run_js` tool using `index.html` and a JSON string for `data` with:
- action_items: Array with task, owner, deadline, priority, notes
- decisions: Array of strings
- questions: Array of strings
- meeting_context, summary


**The bigger picture: what local Gemma 4 means for enterprise AI**
Most multimodal AI tools have a quiet asterisk: your data goes to our servers.

For consumer apps that's fine. For enterprise — where whiteboards contain roadmaps, hiring decisions, financial forecasts, and unreleased product names — it's often a dealbreaker. Legal reviews it, security blocks it, and the tool never ships internally.

Gemma 4 E4B changes that equation. An 8B parameter multimodal model that runs in real-time on a laptop, fits on a phone, reads handwriting, understands context, and produces structured output — fully offline — is a fundamentally different proposition than a cloud API.

WhiteboardIQ is a small demonstration of that shift. The whiteboard use case is deliberately mundane. That's the point. If Gemma 4 can turn a blurry meeting photo into a structured JIRA-ready action list in 8 seconds on consumer hardware, the question isn't "what else can it do?" — the question is "what's left that it can't?"

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mhxfjm54500n79mie4g.PNG)
![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skitrnx37osc279e0ss8.PNG)
![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivcaegghh91j0oq2nmms.PNG)
![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ect6upsfvr64tce7yzdb.PNG)
![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nyr8f97mbytforwvjyru.PNG)
![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oqh6puqsm512ye8y9rvr.PNG)
![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyy1gbguwqdqtaeybjof.PNG)


you tube link 

  &lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/R0BXbDkJw-w"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
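&lt;p&gt;For completeness, the CSV side of the export could be sketched with the standard &lt;code&gt;csv&lt;/code&gt; module; the column order here is an assumption, and the real &lt;code&gt;formatter.py&lt;/code&gt; may differ:&lt;/p&gt;

```python
import csv
import io

# Hedged sketch of the CSV export path in formatter.py.
# Field order is an assumption based on the action-item structure above.
def to_csv(action_items):
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["task", "owner", "deadline", "priority", "notes"])
    writer.writeheader()
    writer.writerows(action_items)
    return buf.getvalue()

print(to_csv([{"task": "DB migration", "owner": "John", "deadline": "Fri",
               "priority": "High", "notes": ""}]))
```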

</description>
      <category>devchallenge</category>
      <category>gemmachallenge</category>
      <category>gemma</category>
    </item>
  </channel>
</rss>
