<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eduardo Reyes</title>
    <description>The latest articles on DEV Community by Eduardo Reyes (@eduardoreyes007351208).</description>
    <link>https://dev.to/eduardoreyes007351208</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3476164%2Fa916ac15-4f44-45b9-ac0a-e44f98c46fbe.png</url>
      <title>DEV Community: Eduardo Reyes</title>
      <link>https://dev.to/eduardoreyes007351208</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/eduardoreyes007351208"/>
    <language>en</language>
    <item>
      <title>***UPDATE***</title>
      <dc:creator>Eduardo Reyes</dc:creator>
      <pubDate>Mon, 02 Feb 2026 00:18:49 +0000</pubDate>
      <link>https://dev.to/eduardoreyes007351208/update-4nk9</link>
      <guid>https://dev.to/eduardoreyes007351208/update-4nk9</guid>
      <description>&lt;p&gt;Hi all, &lt;/p&gt;

&lt;p&gt;I've been busy and haven't been able to work much on my projects, but I just finished building a front-end website for my recipe-scraping API. Here's a screenshot of the site, and here's the &lt;a href="https://www.meal2print.com/" rel="noopener noreferrer"&gt;link&lt;/a&gt; if you'd like to try it out. Thank you, and please leave any feedback you have.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Recipe Scraping Tool in Python: What I learned</title>
      <dc:creator>Eduardo Reyes</dc:creator>
      <pubDate>Fri, 12 Sep 2025 01:36:47 +0000</pubDate>
      <link>https://dev.to/eduardoreyes007351208/building-a-recipe-scraping-tool-in-python-what-i-learned-4na3</link>
      <guid>https://dev.to/eduardoreyes007351208/building-a-recipe-scraping-tool-in-python-what-i-learned-4na3</guid>
      <description>&lt;h2&gt;
  
  
  The Problem..
&lt;/h2&gt;

&lt;p&gt;We've all been there: you want to learn how to cook a new meal, so you Google the recipe. Then you get hit with all the ads, the page randomly scrolling on its own, and it's a pain just to get the ingredient list or the instructions. I always thought there should be an easier way, and then it hit me: why don't I just make it easier myself?&lt;/p&gt;

&lt;p&gt;I wanted to make a tool in Python that scrapes a recipe website and saves the title, ingredient list, and instruction list to a .txt file on your computer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Journey..
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Tools used:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt; (3.13)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requests&lt;/strong&gt; for fetching web pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BeautifulSoup&lt;/strong&gt; for HTML parsing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;argparse&lt;/strong&gt; for the CLI interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The basic code flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Receive URL from user input&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujajlju1oxsukgxd3khj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujajlju1oxsukgxd3khj.png" alt=" " width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Request the webpage using 'requests'&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdno9ncvbanpr02kxiaj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdno9ncvbanpr02kxiaj7.png" alt=" " width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parse the HTML for 'application/ld+json' data using BeautifulSoup (bs4)&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpykhfieyu594y3kw07cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpykhfieyu594y3kw07cd.png" alt=" " width="704" height="81"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Load and extract the title, ingredients, and instructions from JSON&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrtrgnh5ehcpzcp5004e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrtrgnh5ehcpzcp5004e.png" alt=" " width="693" height="497"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collect the data in a list and write it to a .txt file&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxytl5x49z3v9oykq8gpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxytl5x49z3v9oykq8gpi.png" alt=" " width="580" height="127"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
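The steps above can be sketched roughly like this. The function names and the fallback handling are illustrative, not the tool's exact code; it assumes the page embeds a schema.org Recipe object in an 'application/ld+json' script tag:

```python
# Rough sketch of the flow above. Function names are illustrative and
# assume the page embeds a schema.org Recipe object in an
# application/ld+json script tag.
import json

import requests
from bs4 import BeautifulSoup


def parse_recipe(html):
    """Return (title, ingredients, instructions) from the page HTML, or None."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all("script", type="application/ld+json"):
        if not tag.string:
            continue
        data = json.loads(tag.string)
        # Some sites wrap everything in an "@graph" list of nodes.
        nodes = data.get("@graph", [data]) if isinstance(data, dict) else data
        for node in nodes:
            if isinstance(node, dict) and "Recipe" in str(node.get("@type", "")):
                steps = [
                    step.get("text", "") if isinstance(step, dict) else str(step)
                    for step in node.get("recipeInstructions", [])
                ]
                return node.get("name", ""), node.get("recipeIngredient", []), steps
    return None


def scrape_recipe(url):
    """Fetch the page and extract the recipe data."""
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()
    return parse_recipe(resp.text)


def save_txt(title, ingredients, steps, path="recipe.txt"):
    """Write the recipe to a plain-text file."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(title + "\n\nIngredients:\n")
        f.write("\n".join(ingredients))
        f.write("\n\nInstructions:\n")
        f.write("\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1)))
        f.write("\n")
```

Keeping the fetch and the parse in separate functions also makes the parsing logic easy to test without hitting the network.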

&lt;h2&gt;
  
  
  Challenges and What I learned:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This was my first web-scraping project, so I wasn't sure how to get the same data out of different websites.&lt;/li&gt;
&lt;li&gt;At first my code was very static: it used bs4 to pull elements by hard-coded class names.&lt;/li&gt;
&lt;li&gt;After some research, I learned that most recipe sites include a script of type='application/ld+json' that contains metadata such as the title, ingredients, and instructions.&lt;/li&gt;
&lt;li&gt;I had also never created my own PyPI package; at first the tool was just a Python script the user would run.&lt;/li&gt;
&lt;li&gt;I learned how to package the tool so others can install it and run it with the URL as a parameter.&lt;/li&gt;
&lt;/ul&gt;
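For the packaging piece, the CLI side can be as simple as an argparse entry point registered as a console script. The names below are illustrative, not the package's actual internals:

```python
# Sketch of an argparse-based CLI entry point (names are illustrative).
# Packaged as a console script, the user would run: recipescraper RECIPE_URL
import argparse


def build_parser():
    parser = argparse.ArgumentParser(
        prog="recipescraper",
        description="Scrape a recipe page and save it to a text file.",
    )
    parser.add_argument("url", help="URL of the recipe page to scrape")
    parser.add_argument(
        "-o", "--output", default="recipe.txt",
        help="path of the output text file (default: recipe.txt)",
    )
    return parser


def main(argv=None):
    args = build_parser().parse_args(argv)
    # Fetching, parsing, and writing the file would happen here.
    return args
```

Registering main under [project.scripts] in pyproject.toml is what makes the recipescraper command available right after pip install.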

&lt;h2&gt;
  
  
  Final .txt File:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This is what using the package looks like:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg0rm5194uahucoer399.png" alt=" " width="800" height="32"&gt;
&lt;/li&gt;
&lt;li&gt;This is the final .txt file:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeige2czxz88qm9tv0qk.png" alt=" " width="800" height="736"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  If you want to use the package:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pip install recipescraper-cli-tool-er&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;recipescraper (recipe url)&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I want to make a website people can visit to download the file&lt;/li&gt;
&lt;li&gt;I want it to save the data to a PDF file instead of a .txt file&lt;/li&gt;
&lt;li&gt;Some websites still don't work. That's okay for a quick project, but I eventually want fallback methods for when my current approach fails.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This was a fun, quick project that taught me about JSON metadata in web pages, parsing HTML structure, and creating Python packages. I do want to return to this project to improve it, but for now, on to the next one.&lt;/p&gt;

&lt;p&gt;Here's the GitHub repo if you're interested in the full code:&lt;br&gt;
&lt;a href="https://github.com/eduardoreyes007351208/recipeScraper" rel="noopener noreferrer"&gt;https://github.com/eduardoreyes007351208/recipeScraper&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading, leave me your thoughts and ideas, and hopefully this makes cooking a little easier!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>python</category>
      <category>coding</category>
    </item>
    <item>
      <title>New here, trying to put myself out there more</title>
      <dc:creator>Eduardo Reyes</dc:creator>
      <pubDate>Tue, 02 Sep 2025 15:24:14 +0000</pubDate>
      <link>https://dev.to/eduardoreyes007351208/new-here-trying-to-put-myself-out-there-more-aj9</link>
      <guid>https://dev.to/eduardoreyes007351208/new-here-trying-to-put-myself-out-there-more-aj9</guid>
      <description>&lt;p&gt;Hi, my name is Eduardo Reyes, and here's a little about myself. I am a college graduate with a Bachelors in Computer Science. I had a few friends who where CS majors but I never really reached out and networked how I should have so I kinda want to do that here. I also haven't had luck with getting hired so I want to have a series where I document my journey with growing as a developer. It's nice to meet y'all!&lt;/p&gt;

</description>
      <category>beginners</category>
    </item>
  </channel>
</rss>
