Bessy Clarke

Building My First CLI

I've made it to the end of my first phase at Flatiron School. To mark the end of the phase, I had to build my first application. The assignment was to scrape a website using Ruby and then use the data to build a CLI. I originally envisioned a much snazzier app, but in the end, I kept it simple and to the point.

I wanted to build an app that displayed all the UNESCO Heritage sites across the US. The idea was for the CLI to display the site names, let the user select one, and then return a small chunk of information about it. The whole process was a big learning curve, and while I did not get to implement some aspects of my original plan, I'm still very proud of what I made and hope to gain more skills to make it better.
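This isn't the exact code from my project, but the flow I had in mind looks roughly like this, assuming a Site class (sketched a bit further down) that stores everything the scraper collects:

# A rough sketch of the menu flow: list the sites, take a choice, print details.
def run
  Site.all.each_with_index do |site, index|
    puts "#{index + 1}. #{site.site_name}"
  end
  puts "Enter the number of a site to learn more:"
  choice = gets.strip.to_i
  site = Site.all[choice - 1]
  puts "#{site.site_name} (#{site.category})"
  puts site.info
end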

For starters, prior to Flatiron, I had never interacted with a terminal or even knew what a CLI was, so being able to make something after one month is pretty exciting. Outside of actually coding the project, I learned how to connect and upload code to GitHub. I learned how to navigate the terminal and run files, which I had never done outside of the lessons and labs we were given. Armed with this knowledge, I am quite excited to build more things, and working in the terminal made me feel like a hacker!

Building the project presented its own challenges. Leading up to the project, I was a little unclear about the principles of object-oriented programming. The idea of instantiating an object felt a bit abstract, and it remained so throughout my project week. It wasn't until the Saturday prior to turning in the project that it finally clicked. Every time the scraper was called, it created a new instance of Site, which then allowed Ruby to make each site an actual object in its memory. Each site was instantiated with a site name, a category, information, and the URL for the second-level scrape.

Site.new(site_name, category, info, url)
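For context, a stripped-down version of the class might look something like this. It's a sketch rather than the exact class from my project, but the key idea is that every new instance saves itself into a class-level list the rest of the app can read from:

class Site
  attr_accessor :site_name, :category, :info, :url

  @@all = []

  # Each scraped site becomes a Ruby object that remembers its own details
  def initialize(site_name, category, info, url)
    @site_name = site_name
    @category = category
    @info = info
    @url = url
    @@all << self
  end

  def self.all
    @@all
  end
end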

The actual act of scraping a website also caused a lot of grief. It was so frustrating trying to choose the correct CSS selectors that would return the precise elements I needed. Nokogiri made the process of scraping a little simpler, though. I used Nokogiri to parse the page's HTML into a structure that Ruby could then interpret. Although I will say that I expected the process to be more like an actual hot Nokogiri saw through butter, and less like a blunt knife going through a tomato.

require 'nokogiri'
require 'open-uri'

# Fetch the page, parse it with Nokogiri, and grab the cultural-site list items
first_url = "https://whc.unesco.org/en/statesparties/us"
html = URI.open(first_url)
parsed_elements = Nokogiri::HTML(html)
unesco_cultural = parsed_elements.css("div.box li.cultural")
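From there, the scraper just walks over those Nokogiri nodes and turns each one into a Site object. The selector and attribute names below are illustrative rather than copied from my project, and the info field is left empty here because it comes from the second-level scrape:

# Illustrative only: turn each <li class="cultural"> node into a Site object
unesco_cultural.each do |li|
  link = li.at_css("a")
  site_name = link.text.strip
  site_url = "https://whc.unesco.org" + link["href"]
  Site.new(site_name, "Cultural", nil, site_url)
end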

The biggest challenge was trying to find a website that was scrapable (is that even a word?) at both levels, which was the project requirement. Despite how frustrating scraping could be, I am thinking of finding more websites to scrape and hopefully build another CLI app. It ended up being great practice and also exposed the gaps in my knowledge, forcing me to review and research to complete the project.

Before wrapping up, I have to give a huge nod to the Pry Ruby gem. I didn't use it much prior to the project, but now I am wondering how I made it through my labs without it. It was so easy to inject a "binding.pry" into my methods and figure out exactly what was going wrong. Being able to check each and every variable and confirm what was being returned or passed into them made debugging much less of a headache.
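For anyone who hasn't tried it, the pattern is as simple as it sounds: drop the line into whatever method is misbehaving and execution pauses right there, with every local variable available to inspect. The method and selector below are made up for illustration, but the shape is:

require 'pry'

# Hypothetical second-level scrape method with a pry breakpoint dropped in
def scrape_site_details(url)
  parsed = Nokogiri::HTML(URI.open(url))
  info = parsed.css("div.description").text.strip
  binding.pry   # execution pauses here; check `parsed` and `info` in the console
  info
end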

Completing this project makes this journey all the more real. I am on my way to becoming a coder, and while I didn't build something out of this world, I am very much looking forward to revisiting this project and seeing how much I grow as a coder over the next few months.
