<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Greg</title>
    <description>The latest articles on DEV Community by Greg (@gms64).</description>
    <link>https://dev.to/gms64</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F453021%2Fc0953e38-cfbc-4755-afd0-6e8d15573a0d.png</url>
      <title>DEV Community: Greg</title>
      <link>https://dev.to/gms64</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gms64"/>
    <language>en</language>
    <item>
      <title>Best Books for Learning Python in 2021</title>
      <dc:creator>Greg</dc:creator>
      <pubDate>Wed, 18 Nov 2020 23:12:15 +0000</pubDate>
      <link>https://dev.to/gms64/best-books-for-learning-python-in-2021-37hd</link>
      <guid>https://dev.to/gms64/best-books-for-learning-python-in-2021-37hd</guid>
      <description>&lt;h2&gt;it's almost that time of year when people want to learn python&lt;/h2&gt;

&lt;p&gt;As we get closer to the end of one of the oddest years on record and everyone changes their Google search queries to 'best python books 2021', I thought it might be a good time to tackle the incredibly popular question of "If I want to learn Python, what books should I read?".  There are endless articles on the topic - and basically any article will lead you to &lt;a href="https://automatetheboringstuff.com/"&gt;Automate the Boring Stuff&lt;/a&gt; - but I thought it was time to throw my hat in the ring, so to speak.  If you're not a regular reader, I've done this a few times in the past with &lt;a href="https://dev.to/blog/best-data-science-newsletters/"&gt;newsletters&lt;/a&gt;, &lt;a href="https://dev.to/blog/best-data-science-twitter-accounts/"&gt;twitter accounts&lt;/a&gt;, and &lt;a href="https://dev.to/blog/best-data-science-podcasts/"&gt;podcasts&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Just as a forewarning, in a few past articles I documented all the code I used to get to the results of the post.  This time I'm not doing so, and I'm sorry.  &lt;/p&gt;

&lt;p&gt;I mean I'm not really sorry - it takes &lt;em&gt;a lot more effort&lt;/em&gt; to format my code from a sketchy jupyter notebook to a presentable-ish jupyter notebook.  So we're just looking at results and random commentary from me this time!&lt;/p&gt;

&lt;p&gt;Anyways, on to the stuff you care about - the top python books to read this year.  &lt;/p&gt;

&lt;h2&gt;top python books to read in 2021&lt;/h2&gt;

&lt;p&gt;All the data points here are from Goodreads, which is... probably not the best way to get the data if we're being honest, but it's the source I had available. So... books!  The list below includes both beginner and not-beginner books - I'll go through the list at another time and split those out, but for now I'm posting it as-is.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;title&lt;/th&gt;
&lt;th&gt;author&lt;/th&gt;
&lt;th&gt;average_rating&lt;/th&gt;
&lt;th&gt;ratings_count&lt;/th&gt;
&lt;th&gt;publish_date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/22800567-fluent-python"&gt;Fluent Python: Clear, Concise, and Effective Programming&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Luciano Ramalho&lt;/td&gt;
&lt;td&gt;4.66&lt;/td&gt;
&lt;td&gt;777&lt;/td&gt;
&lt;td&gt;2015-01-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/22847284-grokking-algorithms-an-illustrated-guide-for-programmers-and-other-curio"&gt;Grokking Algorithms An Illustrated Guide For Programmers and Other Curious People&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Aditya Y. Bhargava&lt;/td&gt;
&lt;td&gt;4.41&lt;/td&gt;
&lt;td&gt;1,576&lt;/td&gt;
&lt;td&gt;2015-01-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/33986067-deep-learning-with-python"&gt;Deep Learning with Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Francois Chollet&lt;/td&gt;
&lt;td&gt;4.64&lt;/td&gt;
&lt;td&gt;680&lt;/td&gt;
&lt;td&gt;2018-12-04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/32899495-hands-on-machine-learning-with-scikit-learn-and-tensorflow"&gt;Hands-On Machine Learning with Scikit-Learn and TensorFlow&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Aurélien Géron&lt;/td&gt;
&lt;td&gt;4.55&lt;/td&gt;
&lt;td&gt;651&lt;/td&gt;
&lt;td&gt;2017-04-09&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/22514127-automate-the-boring-stuff-with-python"&gt;Automate the Boring Stuff with Python: Practical Programming for Total Beginners&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Al Sweigart&lt;/td&gt;
&lt;td&gt;4.28&lt;/td&gt;
&lt;td&gt;1,432&lt;/td&gt;
&lt;td&gt;2014-11-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/23241059-python-crash-course"&gt;Python Crash Course: A Hands-On, Project-Based Introduction to Programming&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Eric Matthes&lt;/td&gt;
&lt;td&gt;4.35&lt;/td&gt;
&lt;td&gt;976&lt;/td&gt;
&lt;td&gt;2015-02-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/14744694-python-for-data-analysis"&gt;Python for Data Analysis&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Wes McKinney&lt;/td&gt;
&lt;td&gt;4.13&lt;/td&gt;
&lt;td&gt;1,226&lt;/td&gt;
&lt;td&gt;2011-12-30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/80435.Learning_Python"&gt;Learning Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Mark Lutz&lt;/td&gt;
&lt;td&gt;3.96&lt;/td&gt;
&lt;td&gt;1,976&lt;/td&gt;
&lt;td&gt;2013-07-24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/23020812-effective-python"&gt;Effective Python: 59 Specific Ways to Write Better Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Brett Slatkin&lt;/td&gt;
&lt;td&gt;4.29&lt;/td&gt;
&lt;td&gt;630&lt;/td&gt;
&lt;td&gt;2015-02-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/25545994-python-machine-learning"&gt;Python Machine Learning&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Sebastian Raschka&lt;/td&gt;
&lt;td&gt;4.27&lt;/td&gt;
&lt;td&gt;411&lt;/td&gt;
&lt;td&gt;2015-09-23&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/80436.Programming_Python"&gt;Programming Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Mark Lutz&lt;/td&gt;
&lt;td&gt;3.98&lt;/td&gt;
&lt;td&gt;848&lt;/td&gt;
&lt;td&gt;1996-08-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/26457146-python-data-science-handbook"&gt;Python Data Science Handbook: Tools and Techniques for Developers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jake Vanderplas&lt;/td&gt;
&lt;td&gt;4.32&lt;/td&gt;
&lt;td&gt;285&lt;/td&gt;
&lt;td&gt;2016-03-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/24346909-introduction-to-machine-learning-with-python"&gt;Introduction to Machine Learning with Python: A Guide for Data Scientists&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Andreas C. Müller&lt;/td&gt;
&lt;td&gt;4.34&lt;/td&gt;
&lt;td&gt;257&lt;/td&gt;
&lt;td&gt;2015-06-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/31134574-python-for-everybody"&gt;Python for Everybody: Exploring Data in Python 3&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Charles Severance&lt;/td&gt;
&lt;td&gt;4.31&lt;/td&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;2016-07-10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/17152735-python-cookbook"&gt;Python Cookbook&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;David Beazley&lt;/td&gt;
&lt;td&gt;4.16&lt;/td&gt;
&lt;td&gt;374&lt;/td&gt;
&lt;td&gt;2002-07-15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/8341335-learn-python-the-hard-way"&gt;Learn Python The Hard Way&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Zed A. Shaw&lt;/td&gt;
&lt;td&gt;3.87&lt;/td&gt;
&lt;td&gt;881&lt;/td&gt;
&lt;td&gt;2010-01-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/18774655-flask-web-development"&gt;Flask Web Development: Developing Web Applications with Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Miguel Grinberg&lt;/td&gt;
&lt;td&gt;4.19&lt;/td&gt;
&lt;td&gt;302&lt;/td&gt;
&lt;td&gt;2014-04-28&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/25407018-data-science-from-scratch"&gt;Data Science from Scratch: First Principles with Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Joel Grus&lt;/td&gt;
&lt;td&gt;3.93&lt;/td&gt;
&lt;td&gt;552&lt;/td&gt;
&lt;td&gt;2015-04-14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/17912811-test-driven-web-development-with-python"&gt;Test-Driven Web Development with Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Harry Percival&lt;/td&gt;
&lt;td&gt;4.21&lt;/td&gt;
&lt;td&gt;237&lt;/td&gt;
&lt;td&gt;2010-01-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/22299369-black-hat-python"&gt;Black Hat Python: Python Programming for Hackers and Pentesters&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Justin Seitz&lt;/td&gt;
&lt;td&gt;4.04&lt;/td&gt;
&lt;td&gt;319&lt;/td&gt;
&lt;td&gt;2014-11-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/583495.Python_Pocket_Reference"&gt;Python Pocket Reference&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Mark Lutz&lt;/td&gt;
&lt;td&gt;4.00&lt;/td&gt;
&lt;td&gt;307&lt;/td&gt;
&lt;td&gt;1998-03-15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/17912929-introducing-python"&gt;Introducing Python: Modern Computing in Simple Packages&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Bill Lubanovic&lt;/td&gt;
&lt;td&gt;4.21&lt;/td&gt;
&lt;td&gt;170&lt;/td&gt;
&lt;td&gt;2013-11-22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/34695800-a-common-sense-guide-to-data-structures-and-algorithms"&gt;A Common-Sense Guide to Data Structures and Algorithms: Level Up Your Core Programming Skills&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jay Wengrow&lt;/td&gt;
&lt;td&gt;4.22&lt;/td&gt;
&lt;td&gt;160&lt;/td&gt;
&lt;td&gt;2017-08-22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/8933914-head-first-python"&gt;Head First Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Paul Barry&lt;/td&gt;
&lt;td&gt;3.83&lt;/td&gt;
&lt;td&gt;320&lt;/td&gt;
&lt;td&gt;2010-01-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/34695799-python-testing-with-pytest"&gt;Python Testing with Pytest: Simple, Rapid, Effective, and Scalable&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Brian Okken&lt;/td&gt;
&lt;td&gt;4.06&lt;/td&gt;
&lt;td&gt;119&lt;/td&gt;
&lt;td&gt;2017-10-25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/12042357-think-stats"&gt;Think Stats&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Allen B. Downey&lt;/td&gt;
&lt;td&gt;3.63&lt;/td&gt;
&lt;td&gt;315&lt;/td&gt;
&lt;td&gt;2011-01-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/51941365-the-self-taught-programmer"&gt;The Self-Taught Programmer: The Definitive Guide to Programming Professionally&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Cory Althoff&lt;/td&gt;
&lt;td&gt;4.00&lt;/td&gt;
&lt;td&gt;71&lt;/td&gt;
&lt;td&gt;2017-01-24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/43174990-serious-python"&gt;Serious Python: Black-Belt Advice on Deployment, Scalability, Testing, and More&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Julien Danjou&lt;/td&gt;
&lt;td&gt;4.03&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;2018-12-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/31393737-feature-engineering-for-machine-learning"&gt;Feature Engineering for Machine Learning&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Alice Zheng&lt;/td&gt;
&lt;td&gt;3.81&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;td&gt;2018-04-10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/40573304-impractical-python-projects"&gt;Impractical Python Projects: Playful Programming Activities to Make You Smarter&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Lee Vaughan&lt;/td&gt;
&lt;td&gt;4.24&lt;/td&gt;
&lt;td&gt;31&lt;/td&gt;
&lt;td&gt;2018-11-27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/39339569-python-flash-cards"&gt;Python Flash Cards: Syntax, Concepts, and Examples&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Eric Matthes&lt;/td&gt;
&lt;td&gt;4.08&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;2019-01-15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.goodreads.com/book/show/49828191-high-performance-python"&gt;High Performance Python: Practical Performant Programming for Humans&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Micha Gorelick&lt;/td&gt;
&lt;td&gt;4.17&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;2013-10-22&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;And what's with the weird sorting, you ask? Well, it's a combination of average rating &amp;amp; ratings count that I made up for &lt;a href="https://nextnovelproject.com/blog/calculations"&gt;another project&lt;/a&gt; and that I like a lot.&lt;/p&gt;
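The post doesn't spell out the exact blend, but a common way to combine average rating with ratings count is a Bayesian weighted rating, where books with few ratings get pulled toward the overall mean instead of ranking on raw average. Here's a hypothetical sketch - the `global_mean` and `min_votes` values are made-up assumptions, not the actual parameters from that project:

```python
# Bayesian weighted rating: blend a book's average with the global mean,
# weighted by how many ratings it has. A book with few ratings leans on
# the global mean; a heavily-rated book keeps its own average.
def weighted_rating(avg_rating, n_ratings, global_mean=4.1, min_votes=100):
    weight = n_ratings / (n_ratings + min_votes)
    return weight * avg_rating + (1 - weight) * global_mean

# Using numbers from the table above: Fluent Python (4.66 avg, 777 ratings)
# outranks Learning Python (3.96 avg, 1,976 ratings) under this scheme.
print(weighted_rating(4.66, 777))   # about 4.60
print(weighted_rating(3.96, 1976))  # about 3.97
```

This kind of formula explains why a book with a lower average but many more ratings can still land mid-table rather than on top.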

&lt;h2&gt;So which one should I read if I'm a beginner?&lt;/h2&gt;

&lt;p&gt;If we filter the list above down to the books that are primarily for beginners, you end up with a solid starting point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.goodreads.com/book/show/22514127-automate-the-boring-stuff-with-python"&gt;Automate the Boring Stuff with Python: Practical Programming for Total Beginners&lt;/a&gt; by Al Sweigart (4.28 avg, 1,432 ratings)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.goodreads.com/book/show/23241059-python-crash-course"&gt;Python Crash Course: A Hands-On, Project-Based Introduction to Programming&lt;/a&gt; by Eric Matthes (4.35 avg, 976 ratings)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.goodreads.com/book/show/80435.Learning_Python"&gt;Learning Python&lt;/a&gt; by Mark Lutz (3.96 avg, 1,976 ratings)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.goodreads.com/book/show/22847284-grokking-algorithms-an-illustrated-guide-for-programmers-and-other-curio"&gt;Grokking Algorithms An Illustrated Guide For Programmers and Other Curious People&lt;/a&gt; by Aditya Y. Bhargava (4.41 avg, 1,576 ratings)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.goodreads.com/book/show/14744694-python-for-data-analysis"&gt;Python for Data Analysis&lt;/a&gt; by Wes McKinney (4.13 avg, 1,226 ratings)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And those are only the top five books on there.  Grokking Algorithms is a bit more focused on the &lt;em&gt;computer science&lt;/em&gt; aspects of coding rather than the python language, but it does use Python for its examples.  And Python for Data Analysis is basically focused on a python package called &lt;em&gt;pandas&lt;/em&gt;, which is what any data scientist/analyst spends most of their day in. So if you're truly a beginner, focus on Automate the Boring Stuff, Python Crash Course, and Learning Python.&lt;/p&gt;
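For a flavor of what that pandas-centric workflow looks like, here's a tiny self-contained example - the data is pulled from the table above, not from the book itself:

```python
# The bread-and-butter pandas workflow: put tabular data in a DataFrame,
# then sort/filter/summarize it in a line or two.
import pandas as pd

books = pd.DataFrame({
    "title": ["Fluent Python", "Learning Python", "Python Crash Course"],
    "average_rating": [4.66, 3.96, 4.35],
    "ratings_count": [777, 1976, 976],
})

# sort by rating, highest first, and grab the top row
top = books.sort_values("average_rating", ascending=False).iloc[0]
print(top["title"])  # Fluent Python
```

If that two-liner at the end looks appealing, Python for Data Analysis is the book that teaches you to think that way.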

&lt;p&gt;My other recommendation for beginners is to keep an eye out for bundle deals, like &lt;a href="https://www.humblebundle.com/level-up-your-python"&gt;humble bundle&lt;/a&gt;. You can normally get ~10 ebooks for $15, which is a pretty good deal given that a lot of these books go for $30+ on their own...&lt;/p&gt;

</description>
      <category>python</category>
      <category>books</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Most Popular Data Science Newsletters</title>
      <dc:creator>Greg</dc:creator>
      <pubDate>Thu, 17 Sep 2020 20:52:06 +0000</pubDate>
      <link>https://dev.to/gms64/the-most-popular-data-science-newsletters-4mi8</link>
      <guid>https://dev.to/gms64/the-most-popular-data-science-newsletters-4mi8</guid>
      <description>&lt;p&gt;Or at least, the most often cited when looking through articles about data science newsletters.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;title&lt;/th&gt;
&lt;th&gt;mention_count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://dataelixir.com/"&gt;Data Elixir&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.datascienceweekly.org/"&gt;Data Science Weekly&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.oreilly.com/emails/newsletters/"&gt;O’Reilly Data Newsletter&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://roundup.fishtownanalytics.com/"&gt;The Data Science Roundup&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://us13.campaign-archive.com/home/?u=67bd06787e84d73db24fb0aa5&amp;amp;id=6c9d98ff2c"&gt;Import AI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.kdnuggets.com/news/subscribe.html"&gt;KDnuggets News&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://mode.com/newsletter/"&gt;The Analytics Dispatch&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://datamachina.substack.com"&gt;Data Machina&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://subscribe.machinelearnings.co/"&gt;Machine Learnings&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://tinyletter.com/data-is-plural"&gt;Data is Plural&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.deeplearningweekly.com/"&gt;Deep Learning Weekly&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://forms.technologyreview.com/the-algorithm/"&gt;The Algorithm&lt;/a&gt;**&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.getrevue.co/profile/wildml"&gt;The Wild Week in AI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.topbots.com/enterprise-ai-news-pro-newsletter/"&gt;Topbots Applied AI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://aiweekly.co/"&gt;AI Weekly&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.datainnovation.org/about/newsletter/"&gt;Center for Data Innovation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://inside.com/tags/technology"&gt;Inside AI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://insidebigdata.com/newsletter/"&gt;insideBIGDATA&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://cds.nyu.edu/newsletter/"&gt;NYU Data Science Community News&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.datasciencecentral.com/profiles/blogs/check-out-our-dsc-newsletter"&gt;Data Science Central&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://deephunt.in/"&gt;Deep Hunt&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.hndigest.com/"&gt;Hacker News Digest&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://opendatascience.com/newsletter/"&gt;OpenDS&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://theodi.org/knowledge-opinion/the-week-in-data/"&gt;The Week in Data&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.analyticsvidhya.com"&gt;Analytics Vidhya&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://dataconomy.com/"&gt;Dataconomy&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://medium.com/kaggle-blog"&gt;Kaggle Newsletter&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://towardsdatascience.com/receive-our-newsletters-681049ffa0cf"&gt;Towards Data Science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;** Paid Newsletter&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This was collected by hand because my web scraping gameplan didn't account for newsletters that go to multiple different links... Could I have built some super cool algorithm to collect this? ...Maybe? But I didn't. Instead, I just collected links for about an hour or two and here we are.  Although the process was rather tedious, it probably provides a lot of value to you guys, so I'd say it was worth it.&lt;/p&gt;

&lt;h4&gt;honorable mention&lt;/h4&gt;

&lt;p&gt;These newsletters were mentioned one time each in the lists I was referencing. FYI to the two people who are combing through the source links wondering why &lt;strong&gt;newsletter x&lt;/strong&gt; or &lt;strong&gt;newsletter y&lt;/strong&gt; wasn't included - I took out newsletters where the links were no longer functional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://betakit.com/newsletters/subscribe-to-ai-the-ai-times-newsletter/"&gt;AI Times&lt;/a&gt;, &lt;a href="https://www.aitrends.com"&gt;AI Trends&lt;/a&gt;, &lt;a href="https://bigdatanewsweekly.com/"&gt;Big Data News Weekly&lt;/a&gt;, &lt;a href="https://bootstraplabs.com/"&gt;Bootstrap Labs&lt;/a&gt;, &lt;a href="https://www.cbinsights.com/newsletter"&gt;CB Insights&lt;/a&gt;, &lt;a href="https://chinai.substack.com/"&gt;ChinAI&lt;/a&gt;, &lt;a href="https://mailchi.mp/e42fc9825362/the-creative-ai-newsletter-195661"&gt;Creative AI&lt;/a&gt;, &lt;a href="https://dair.ai/newsletter/"&gt;dair.ai NLP Newsletter&lt;/a&gt;, &lt;a href="https://www.datacoalition.org"&gt;Data Coalition&lt;/a&gt;, &lt;a href="http://www.datacommunitydc.org"&gt;Data Community DC&lt;/a&gt;, &lt;a href="https://dataengweekly.substack.com/"&gt;Data Eng Weekly&lt;/a&gt;, &lt;a href="https://www.dataquest.io/blog/"&gt;DataQuest Newsletter&lt;/a&gt;, &lt;a href="https://www.exponentialview.co/"&gt;Exponential View&lt;/a&gt;, &lt;a href="https://fortune.com/newsletter/eye-on-ai"&gt;Eye on AI&lt;/a&gt;, &lt;a href="https://flowingdata.com/newsletter/"&gt;Flowing Data&lt;/a&gt;, &lt;a href="http://www.garysguide.com/events"&gt;Gary’s Guide&lt;/a&gt;, &lt;a href="https://hackernoon.com/tagged/artificial-intelligence"&gt;Hacker Noon&lt;/a&gt;, &lt;a href="https://tinyletter.com/hmason"&gt;Hilary Mason&lt;/a&gt;, &lt;a href="https://royapakzad.us17.list-manage.com/subscribe?u=9138308bb26620c53a0881c20&amp;amp;id=8152ed9f0c"&gt;Humane AI&lt;/a&gt;, &lt;a href="https://lionbridge.ai/ai-newsletter-subscription/"&gt;Lionbridge AI&lt;/a&gt;, &lt;a href="https://us14.list-manage.com/subscribe?u=049ae9b8f179f42af00aa83b7&amp;amp;id=d1c54464b3"&gt;Machine Learning Blueprint&lt;/a&gt;, &lt;a href="https://mlinproduction.com/machine-learning-newsletter/"&gt;ML in Production&lt;/a&gt;, &lt;a href="http://newsletter.ruder.io/"&gt;NLP News&lt;/a&gt;, &lt;a 
href="https://openai.com/blog"&gt;Open AI&lt;/a&gt;, &lt;a href="https://www.r-bloggers.com"&gt;R-Bloggers&lt;/a&gt;, &lt;a href="https://www.skynettoday.com/subscribe"&gt;Skynet Today&lt;/a&gt;, &lt;a href="https://stratechery.com/daily-update/"&gt;Stratechery&lt;/a&gt;, &lt;a href="http://www.thetalkingmachines.com/"&gt;Talking Machines&lt;/a&gt;, &lt;a href="https://tinyletter.com/art-of-data-science/"&gt;The Art of Data Science&lt;/a&gt;, &lt;a href="https://www.deeplearning.ai/thebatch/"&gt;The Batch&lt;/a&gt;, &lt;a href="https://analyticsindiamag.com"&gt;The Belamy&lt;/a&gt;, &lt;a href="https://thegradient.pub/subscribe/"&gt;The Gradient&lt;/a&gt;, &lt;a href="https://pudding.cool"&gt;The Pudding&lt;/a&gt;, &lt;a href="https://www.tldrnewsletter.com"&gt;TLDR&lt;/a&gt;, &lt;a href="https://twimlai.com/newsletter/"&gt;TWIML&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;sources for the list&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.dataquest.io/blog/best-data-science-newsletters/"&gt;https://www.dataquest.io/blog/best-data-science-newsletters/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.springboard.com/blog/machine-learning-ai-and-data-science-newsletters/"&gt;https://www.springboard.com/blog/machine-learning-ai-and-data-science-newsletters/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://towardsdatascience.com/13-essential-newsletters-for-data-scientists-remastered-f422cb6ea0b0"&gt;https://towardsdatascience.com/13-essential-newsletters-for-data-scientists-remastered-f422cb6ea0b0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://datacrunchcorp.com/data-science-weekly-newsletters/"&gt;https://datacrunchcorp.com/data-science-weekly-newsletters/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@georgeliu1998/the-most-comprehensive-list-of-data-science-newsletters-d5aac324c7d1"&gt;https://medium.com/@georgeliu1998/the-most-comprehensive-list-of-data-science-newsletters-d5aac324c7d1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://heartbeat.fritz.ai/newsletters-for-data-science-analytics-b9a69e468151?gi=7723851f52dc"&gt;https://heartbeat.fritz.ai/newsletters-for-data-science-analytics-b9a69e468151?gi=7723851f52dc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/getting-started/126606"&gt;https://www.kaggle.com/getting-started/126606&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://analyticsindiamag.com/top-10-data-science-newsletters-to-stay-updated-amid-lockdown/"&gt;https://analyticsindiamag.com/top-10-data-science-newsletters-to-stay-updated-amid-lockdown/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ischoolonline.berkeley.edu/blog/10-data-science-newsletters-subscribe/"&gt;https://ischoolonline.berkeley.edu/blog/10-data-science-newsletters-subscribe/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://humansofdata.atlan.com/2020/06/top-10-newsletters-in-data-science/"&gt;https://humansofdata.atlan.com/2020/06/top-10-newsletters-in-data-science/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aiartists.org/ai-newsletters"&gt;https://aiartists.org/ai-newsletters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.analyticsvidhya.com/data-science-blogs-communities-books-podcasts-newsletters-follow/"&gt;https://www.analyticsvidhya.com/data-science-blogs-communities-books-podcasts-newsletters-follow/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://iterative.ly/blog/best-resources-data-and-analytics-folks/#newsletters"&gt;https://iterative.ly/blog/best-resources-data-and-analytics-folks/#newsletters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lionbridge.ai/articles/the-best-ai-machine-learning-newsletters-to-subscribe-to/"&gt;https://lionbridge.ai/articles/the-best-ai-machine-learning-newsletters-to-subscribe-to/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dataconomy.com/2018/01/5-awesome-data-science-subscriptions-keep-informed/"&gt;https://dataconomy.com/2018/01/5-awesome-data-science-subscriptions-keep-informed/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;newsletter time!&lt;/h2&gt;

&lt;p&gt;You've made it to the body of the article! Congratulations!&lt;/p&gt;

&lt;p&gt;In today's post, I'm continuing my completely absurd crusade to find publications to subscribe to - next up on my list is newsletters.  My previous articles were about &lt;a href="https://gregondata.com/blog/best-data-science-twitter-accounts/"&gt;twitter accounts&lt;/a&gt; and &lt;a href="https://gregondata.com/blog/best-data-science-podcasts/"&gt;podcasts&lt;/a&gt;, if you're inclined to check them out.&lt;/p&gt;

&lt;p&gt;But similar to my views on podcasts, I love newsletters. I find them to be most useful for keeping up with all the cool, new developments in the fields that I'm interested in. Unlike other platforms - &lt;em&gt;cough&lt;/em&gt; Twitter &lt;em&gt;cough&lt;/em&gt; - where you get bogged down with a deluge of information, newsletters typically boil everything down into 5-10 bullet points per week (or month). And brevity is nice.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;And&lt;/em&gt; I recently found a cool app* that lets me subscribe to them &lt;em&gt;outside&lt;/em&gt; of my normal email, so I can have a separate 'Data Science/Tech Newsletter' app, effectively.  Which is &lt;em&gt;fantastic&lt;/em&gt;. Like truly, honestly, fantastic.  Highly recommend.  &lt;/p&gt;

&lt;p&gt;*For reference, the newsletter app is called &lt;a href="https://slickinbox.com/"&gt;slick inbox&lt;/a&gt; and it's still in beta.  There's also apparently one called &lt;a href="https://stoopinbox.com/"&gt;stoop inbox&lt;/a&gt;, which seems pretty similar. I don't endorse either one of these, nor do I have any affiliation - I just like having newsletters out of my inbox so that I can &lt;em&gt;actually&lt;/em&gt; read them when I have free time.&lt;/p&gt;

&lt;p&gt;Anyways, you probably don't care about that - let's get on to the process.&lt;/p&gt;

&lt;h2&gt;gameplanning the process&lt;/h2&gt;

&lt;p&gt;So, off the back of some silly articles where I found popular twitter accounts and podcasts, I thought... what next? &lt;/p&gt;

&lt;p&gt;The answer? Sleep. Also, work. But then, newsletters. Newsletters are definitely next.&lt;/p&gt;

&lt;p&gt;But how to go about doing this?&lt;/p&gt;

&lt;p&gt;My first thought was to find a site that aggregates newsletter listings and estimates subscribers, but I struck out hard on that front. So I moved to plan B - googling what the best data science newsletters were. That search brought me to a lot of blog posts in list format, which had links to - you guessed it - newsletters.  My bright data-science-y mind thought, "Hey, you can scrape these links and count how often they occur across the articles - that'll be an easy way to get your answer". So that's &lt;em&gt;exactly&lt;/em&gt; what I did.&lt;/p&gt;

&lt;p&gt;Well, that's what I scoped out and built... unfortunately, I had to scrap the code because links to newsletters are inconsistent (some people link to the main site, some to a subscribe-specific link, some to a newsletter-hosting service, etc.).  But since I did already build the code, I think it's worth sharing... even if it looks like it's held together with duct tape.&lt;/p&gt;
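For what it's worth, the counting step itself is simple once the links are normalized. Here's a minimal, hypothetical sketch - not the scrapped code - that tallies links by bare domain, which sidesteps at least the www-vs-subscribe-page flavor of the inconsistency (though not the hosted-on-a-different-service flavor):

```python
# Tally scraped newsletter links by hostname, so that
# https://dataelixir.com/ and https://www.dataelixir.com/subscribe
# count as the same newsletter.
from collections import Counter
from urllib.parse import urlparse

def count_by_domain(links):
    domains = [urlparse(link).netloc.removeprefix("www.") for link in links]
    return Counter(domains)

links = [
    "https://dataelixir.com/",
    "https://www.dataelixir.com/subscribe",
    "https://www.datascienceweekly.org/",
]
print(count_by_domain(links).most_common(1))  # [('dataelixir.com', 2)]
```

The stubborn failure mode is newsletters hosted on Substack, Mailchimp, TinyLetter, and the like, where the domain identifies the platform rather than the newsletter - which is exactly why I fell back to counting by hand.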

&lt;h2&gt;code to find top data science newsletters&lt;/h2&gt;

&lt;p&gt;I'm going to drop the code below with a bit less than the standard amount of commentary.  Since it doesn't &lt;em&gt;actually&lt;/em&gt; work, no one's really going to end up using it... probably?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# import default python packages
import urllib
import requests
import time

# import non-standard python packages
# if you don't have these installed, install them with pip or conda
from bs4 import BeautifulSoup
from readability import Document
import pandas as pd

# define the desktop user-agent
USER_AGENT = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1"
# mobile user-agent
MOBILE_USER_AGENT = "Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1"

# function to create google query string based on a query text
def string_to_google_query(string, pg=None):
    query = urllib.parse.quote_plus(string)
    if pg != None:
        query = query + "&amp;amp;start=" + str(10*(pg-1))
    return f"https://google.com/search?hl=en&amp;amp;lr=en&amp;amp;q={query}", string

# Queries to feed scraper
list_kwargs = [
    {"string": 'best data science newsletters'},
    {"string": 'best data science newsletters', "pg": 2},
    {"string": 'best data science newsletters', "pg": 3},
    {"string": 'best data engineering newsletters'},
    {"string": 'best data visualization newsletters'},
    {"string": 'best artificial intelligence newsletters'},
    {"string": 'best machine learning newsletters'},

    {"string": 'data science newsletters'},
    {"string": 'data science newsletters', "pg": 2},
    {"string": 'data science newsletters', "pg": 3},
    {"string": 'data engineering newsletters'},
    {"string": 'data visualization newsletters'},
    {"string": 'artificial intelligence newsletters'},
    {"string": 'machine learning newsletters'},
]

def google_scraper_mobile(url):
    results = []
    headers = {"user-agent" : MOBILE_USER_AGENT}
    resp = requests.get(url, headers=headers)
    if resp.status_code != 200:
        # bail out if google didn't return a page we can parse
        return results
    soup = BeautifulSoup(resp.content, "html.parser")

    for g in soup.find_all('div', class_='mnr-c'):
        anchors = g.find_all('a')
        if len(anchors) &amp;gt; 0:
            try:
                # this lookup will fail on featured snippets
                link = anchors[0]['href']
            except KeyError:
                continue

            try:
                title = anchors[0].find_all('div')[1].get_text().strip()
            except IndexError:
                title = anchors[0].get_text().strip()

            item = {
                "title": title,
                "link": link,
                "search_term": search_term  # set globally in the loop below
            }
            results.append(item)
    return results


# Crawling Google as a mobile user
headers = {"user-agent" : MOBILE_USER_AGENT}

results = []
for x in list_kwargs:
    url, search_term = string_to_google_query(**x)
    scrape_res = google_scraper_mobile(url)
    results = results + scrape_res

    time.sleep(2.5)


# put results into a dataframe
newsletter_df = pd.DataFrame(results)

# Check there is a number in the title (such as 'top 10 newsletters')
# or that the title contains newsletter
newsletter_df = newsletter_df.loc[(newsletter_df['title'].str.contains('[0-9]') | newsletter_df['title'].str.lower().str.contains('newsletter'))]
newsletter_df.drop_duplicates(subset='link',inplace=True)

# switch the user agent to desktop - articles shouldn't differ on desktop vs mobile and will likely have fewer issues on desktop
headers = {"user-agent" : USER_AGENT}

# define the crawler for each article
def article_link_crawl(link):
    """
    Returns links and either a 1 or 0 for success / failure.
    Only crawls articles, so there should be a few failures
    """
    try:
        domain = link.split('://')[1].split('/')[0] # defines the site domain
        article_links = []
        resp = requests.get(link, headers=headers) # get request for the link
        if resp.status_code != 200:
            return None, 0

        # pass the article through readability to get the article content rather than the full webpage
        rd_doc = Document(resp.text)
        soup = BeautifulSoup(rd_doc.content(), "html.parser")
        link_soup = soup.find_all('a') # find all links
        for a_tag in link_soup:
            # if the link has a href, create an item to add to the aggregate article_links list
            if a_tag.has_attr('href'):
                item = {
                    "text": a_tag.get_text().strip(),
                    "link": a_tag['href']
                    }
                # don't add blank links, internal links, or links starting with '/' or '#' (also internal links)
                if item['text'] != '' and item['link'].find(domain) == -1 and item['link'][0] != '/' and item['link'][0] != '#':
                    article_links.append(item)
        return article_links, 1
    except:
        return None, 0


# loop through results
agg_links = []
total_success = 0
total_fail = 0
fail_urls = []
for link in newsletter_df['link']:
    res_links, is_success = article_link_crawl(link)
    if is_success == 1:
        total_success = total_success+1
    else:
        total_fail = total_fail+1
        fail_urls.append(link)

    if res_links != None:
        for lnk in res_links:
            agg_links.append(lnk)
    time.sleep(2.5)

# function to get frequency of a link
def list_freq(tgt_list): 
    freq = {} 
    for item in tgt_list: 
        if (item in freq): 
            freq[item] += 1
        else: 
            freq[item] = 1

    result = []
    for key, value in freq.items(): 
        result.append({
            "link": key,
            "count": value
        })
    return result

# count occurrences of each link
clean_link_list = [x['link'].replace('http://','https://') for x in agg_links]
link_freq = list_freq(clean_link_list)

# count occurrences of anchor text
clean_text_list = [x['text'].replace('http://','https://') for x in agg_links]
text_freq = list_freq(clean_text_list)

# move variables to data frames
link_freq_df = pd.DataFrame(link_freq)
text_freq_df = pd.DataFrame(text_freq)

link_freq_df = link_freq_df.sort_values('count',ascending=False)
link_freq_df = link_freq_df.loc[link_freq_df['count'] &amp;gt;= 3]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This was the result of the code - not great, but not horrible.  In the end, I just decided the 'old-fashioned way' was the proper way to find newsletters...&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;link&lt;/th&gt;
&lt;th&gt;count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://roundup.fishtownanalytics.com/"&gt;https://roundup.fishtownanalytics.com/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.oreilly.com/data/newsletter.html"&gt;https://www.oreilly.com/data/newsletter.html&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://dataelixir.com/"&gt;https://dataelixir.com/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.datascienceweekly.org/"&gt;https://www.datascienceweekly.org/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://us13.campaign-archive.com/home/?u=67bd06787e84d73db24fb0aa5&amp;amp;id=6c9d98ff2c"&gt;https://us13.campaign-archive.com/home/?u=67bd06787e84d73db24fb0aa5&amp;amp;id=6c9d98ff2c&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/EthicalML/awesome-machine-learning-operations"&gt;https://github.com/EthicalML/awesome-machine-learning-operations&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.instagram.com/data4sci/"&gt;https://www.instagram.com/data4sci/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.linkedin.com/company/dataforscience/"&gt;https://www.linkedin.com/company/dataforscience/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/DataForScience"&gt;https://github.com/DataForScience&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://medium.com/data-for-science"&gt;https://medium.com/data-for-science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://twitter.com/data4sci"&gt;https://twitter.com/data4sci&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://tinyletter.com/data-is-plural"&gt;https://tinyletter.com/data-is-plural&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://theodi.org/knowledge-opinion/the-week-in-data/"&gt;https://theodi.org/knowledge-opinion/the-week-in-data/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.facebook.com/data4sci/"&gt;https://www.facebook.com/data4sci/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://towardsdatascience.com/doing-machine-learning-the-uber-way-five-lessons-from-the-first-three-years-of-michelangelo-da584a857cc2"&gt;https://towardsdatascience.com/doing-machine-learning-the-uber-way-five-lessons-from-the-first-three-years-of-michelangelo-da584a857cc2&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.datasciencecentral.com/profiles/blogs/check-out-our-dsc-newsletter"&gt;https://www.datasciencecentral.com/profiles/blogs/check-out-our-dsc-newsletter&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.podcastinit.com"&gt;https://www.podcastinit.com&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.getrevue.co/profile/datamachina"&gt;https://www.getrevue.co/profile/datamachina&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://www.datascienceweekly.org/newsletters"&gt;https://www.datascienceweekly.org/newsletters&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://medium.com/towards-data-science/newsletters/the-daily-pick?source=newsletter_v3_promo--------------------------newsletter_v3_promo-"&gt;https://medium.com/towards-data-science/newsletters/the-daily-pick?source=newsletter_v3_promo--------------------------newsletter_v3_promo-&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>Finding popular data science podcasts via web scraping</title>
      <dc:creator>Greg</dc:creator>
      <pubDate>Fri, 11 Sep 2020 13:21:58 +0000</pubDate>
      <link>https://dev.to/gms64/finding-popular-data-science-podcasts-via-web-scraping-34kj</link>
      <guid>https://dev.to/gms64/finding-popular-data-science-podcasts-via-web-scraping-34kj</guid>
      <description>&lt;p&gt;The article will go over the process I used to create the list of podcasts you see below.  If you're just here for the podcasts, then have at it... &lt;/p&gt;

&lt;h2&gt;
  
  
  the most popular data science podcasts
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;title&lt;/th&gt;
&lt;th&gt;author&lt;/th&gt;
&lt;th&gt;avg_rtg&lt;/th&gt;
&lt;th&gt;rtg_ct&lt;/th&gt;
&lt;th&gt;episodes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/lex-fridman-podcast/id1434243584"&gt;Lex Fridman Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Lex Fridman&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;2400&lt;/td&gt;
&lt;td&gt;126&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/machine-learning-guide/id1204521130"&gt;Machine Learning Guide&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;OCDevel&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;626&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-skeptic/id890348705"&gt;Data Skeptic&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Kyle Polich&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;431&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-stories/id502854960"&gt;Data Stories&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Enrico Bertini and Moritz Stefaner&lt;/td&gt;
&lt;td&gt;4.5&lt;/td&gt;
&lt;td&gt;405&lt;/td&gt;
&lt;td&gt;162&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/twiml-ai-podcast-formerly-this-week-in-machine-learning/id1116303051"&gt;The TWIML AI Podcast (formerly This Week in Machine Learning &amp;amp; Artificial Intelligence)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Sam Charrington&lt;/td&gt;
&lt;td&gt;4.7&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/dataframed/id1336150688"&gt;DataFramed&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;DataCamp&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;188&lt;/td&gt;
&lt;td&gt;59&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/the-ai-podcast/id1186480811"&gt;The AI Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;NVIDIA&lt;/td&gt;
&lt;td&gt;4.5&lt;/td&gt;
&lt;td&gt;162&lt;/td&gt;
&lt;td&gt;125&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/superdatascience/id1163599059"&gt;SuperDataScience&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Kirill Eremenko&lt;/td&gt;
&lt;td&gt;4.6&lt;/td&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/partially-derivative/id942048597"&gt;Partially Derivative&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Partially Derivative&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;141&lt;/td&gt;
&lt;td&gt;101&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/machine-learning/id384233048"&gt;Machine Learning&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Stanford&lt;/td&gt;
&lt;td&gt;3.9&lt;/td&gt;
&lt;td&gt;138&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/talking-machines/id955198749"&gt;Talking Machines&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Tote Bag Productions&lt;/td&gt;
&lt;td&gt;4.6&lt;/td&gt;
&lt;td&gt;133&lt;/td&gt;
&lt;td&gt;106&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/ai-in-business/id670771965"&gt;AI in Business&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Daniel Faggella&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;102&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/learning-machines-101/id892779679"&gt;Learning Machines 101&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;87&lt;/td&gt;
&lt;td&gt;82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/storytelling-with-data-podcast/id1318029970"&gt;storytelling with data podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Cole Nussbaumer Kna&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-crunch/id1165189603"&gt;Data Crunch&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Data Crunch Corporation&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;70&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-viz-today/id1352837603"&gt;Data Viz Today&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Alli Torban&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/artificial-intelligence/id765641080"&gt;Artificial Intelligence&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;td&gt;4.1&lt;/td&gt;
&lt;td&gt;61&lt;/td&gt;
&lt;td&gt;31&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/oreilly-data-show-podcast/id944929220"&gt;O'Reilly Data Show Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;O'Reilly Media&lt;/td&gt;
&lt;td&gt;4.2&lt;/td&gt;
&lt;td&gt;59&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/machine-learning-software-engineering-daily/id1230807136"&gt;Machine Learning – Software Engineering Daily&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Machine Learning – Software Engineering Daily&lt;/td&gt;
&lt;td&gt;4.5&lt;/td&gt;
&lt;td&gt;59&lt;/td&gt;
&lt;td&gt;115&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-science-at-home/id1069871378"&gt;Data Science at Home&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Francesco Gadaleta&lt;/td&gt;
&lt;td&gt;4.2&lt;/td&gt;
&lt;td&gt;58&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557"&gt;Data Engineering Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Tobias Macey&lt;/td&gt;
&lt;td&gt;4.7&lt;/td&gt;
&lt;td&gt;58&lt;/td&gt;
&lt;td&gt;150&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/big-data/id1148791298"&gt;Big Data&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Ryan Estrada&lt;/td&gt;
&lt;td&gt;4.6&lt;/td&gt;
&lt;td&gt;58&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/follow-the-data-podcast/id1104371750"&gt;Follow the Data Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Bloomberg Philanthropies&lt;/td&gt;
&lt;td&gt;4.3&lt;/td&gt;
&lt;td&gt;57&lt;/td&gt;
&lt;td&gt;82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/making-data-simple/id605818735"&gt;Making Data Simple&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;IBM&lt;/td&gt;
&lt;td&gt;4.3&lt;/td&gt;
&lt;td&gt;56&lt;/td&gt;
&lt;td&gt;104&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/analytics-on-fire/id1088683533"&gt;Analytics on Fire&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Mico Yuk&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;51&lt;/td&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/learn-to-code-in-one-month/id1460397186"&gt;Learn to Code in One Month&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Learn to Code&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/becoming-a-data-scientist-podcast/id1076448558"&gt;Becoming A Data Scientist Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Renee Teate&lt;/td&gt;
&lt;td&gt;4.5&lt;/td&gt;
&lt;td&gt;49&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/practical-ai-machine-learning-data-science/id1406537385"&gt;Practical AI: Machine Learning &amp;amp; Data Science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Changelog Media&lt;/td&gt;
&lt;td&gt;4.5&lt;/td&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;td&gt;105&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/present-beyond-measure-show-data-visualization-storytelling/id1029765276"&gt;The Present Beyond Measure Show: Data Visualization, Storytelling &amp;amp; Presentation for Digital Marketers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Lea Pica&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;44&lt;/td&gt;
&lt;td&gt;58&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/the-data-chief/id1509495585"&gt;The Data Chief&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Mission&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;43&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/ai-today-podcast-artificial-intelligence-insights-experts/id1279927057"&gt;AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Cognilytica&lt;/td&gt;
&lt;td&gt;4.2&lt;/td&gt;
&lt;td&gt;42&lt;/td&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-driven/id1241441038"&gt;Data Driven&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Data Driven&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;41&lt;/td&gt;
&lt;td&gt;257&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009"&gt;HumAIn Podcast - Artificial Intelligence, Data Science, and Developer Education&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;David Yakobovitch&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;39&lt;/td&gt;
&lt;td&gt;78&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-gurus/id1351574994"&gt;Data Gurus&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Sima Vasa&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;39&lt;/td&gt;
&lt;td&gt;106&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/masters-of-data-podcast/id1363415303"&gt;Masters of Data Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Sumo Logic hosted by Ben Newton&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;38&lt;/td&gt;
&lt;td&gt;74&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/the-policyviz-podcast/id982966091"&gt;The PolicyViz Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The PolicyViz Podcast&lt;/td&gt;
&lt;td&gt;4.7&lt;/td&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/the-radical-ai-podcast/id1505229145"&gt;The Radical AI Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Radical AI&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;34&lt;/td&gt;
&lt;td&gt;35&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/women-in-data-science/id1440076586"&gt;Women in Data Science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Professor Margot Gerritsen&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/towards-data-science/id1470952338"&gt;Towards Data Science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The TDS team&lt;/td&gt;
&lt;td&gt;4.6&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-in-depth/id1468304417"&gt;Data in Depth&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Mountain Point&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-science-imposters-podcast/id1249728040"&gt;Data Science Imposters Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Antonio Borges and Jordy Estevez&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;td&gt;88&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/the-artists-of-data-science/id1506968775"&gt;The Artists of Data Science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Harpreet Sahota&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;41&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/datafemme/id1484529990"&gt;#DataFemme&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Dikayo Data&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/the-banana-data-podcast/id1463103655"&gt;The Banana Data Podcast&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Dataiku&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/experiencing-data-with-brian-t-oneill/id1444887095"&gt;Experiencing Data with Brian T. O'Neill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Brian T. O'Neill from Designing for Analytics&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/secrets-of-data-analytics-leaders/id1334792097"&gt;Secrets of Data Analytics Leaders&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Eckerson Group&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-journeys/id1358963399"&gt;Data Journeys&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;AJ Goldstein&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-driven-discussions/id1258339160"&gt;Data Driven Discussions&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Outlier.ai&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/data-futurology-leadership-strategy-in-artificial-intelligence/id1385051346"&gt;Data Futurology - Leadership And Strategy in Artificial Intelligence, Machine Learning, Data Science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Felipe Flores&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;135&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://podcasts.apple.com/us/podcast/artificially-intelligent/id1223852506"&gt;Artificially Intelligent&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Christian Hubbs and Stephen Donnelly&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  why i want to find data science podcasts
&lt;/h2&gt;

&lt;p&gt;This would normally be at the top of an article on finding data science podcasts. Well, it would be at the top of any article, really. But realistically, most people are finding this from Google, and they're just looking for the answer at the top of the page. If you type in 'the most popular data science podcasts', you really don't want to scroll down endlessly to find the answer you're looking for. So to make &lt;em&gt;their&lt;/em&gt; experience better, we're just leaving the answer up there. And giving them sass. Lots of sass.&lt;/p&gt;

&lt;p&gt;Anyways, I really like listening to things. While newsletters are great for keeping up with current events and blogs are great for learning specific things, podcasts have a special place in my heart for allowing me to aimlessly learn something new every day. The format really lends itself to delivering information efficiently, but in a way that lets you multitask.  Pre-COVID, my morning commute was typically full of podcasts. While COVID has rendered my commute a nonexistent affair, I still try to listen to at least one podcast a day if I can manage it.  My view is that 30 minutes of learning a day will really add up in the long run, and I feel that podcasts are a great way to get there.&lt;/p&gt;

&lt;p&gt;Now that we've been through my love affair with podcasts, you can imagine my surprise when I started looking for a few data science ones to subscribe to and I &lt;em&gt;didn't&lt;/em&gt; find a tutorial on how to use web scraping to find the most popular data science podcasts to listen to.  I know, crazy.  There's a web scraping tutorial on everything under the sun except for - seemingly - podcasts.  I mean there's probably not one on newsletters either, but we'll leave that alone for now... &lt;/p&gt;

&lt;p&gt;So if no one else is crazy enough to write about finding data science podcasts with web scraping, then...&lt;/p&gt;

&lt;h2&gt;
  
  
  gameplanning the process
&lt;/h2&gt;

&lt;p&gt;By now we're almost certainly rid of those savages who are only here for the &lt;strong&gt;answer&lt;/strong&gt; (&lt;em&gt;gasp, how could they&lt;/em&gt;), so we'll go into the little process I went through to gather the data.  It's not particularly long, and took me probably an hour to put it together, so it should be a good length for an article.&lt;/p&gt;

&lt;p&gt;I'm using Python here with an installation of Anaconda (a common package management / deployment system for Python).  I'll be running this in a Jupyter notebook, since it's a one-off task that I don't need to use ever again... hopefully.&lt;/p&gt;

&lt;p&gt;In terms of what I'm going to do, I'll run a few Google keyword searches limited to the '&lt;a href="https://podcasts.apple.com/us/podcast/"&gt;https://podcasts.apple.com/us/podcast/&lt;/a&gt;' domain and scrape the results for the first few pages.  From there, I'll scrape each Apple Podcasts page to get the total number of ratings and the average rating. Yeah, the data will be biased, but it's a quick and dirty way to get the answer I'm looking for.&lt;/p&gt;
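&lt;p&gt;As a quick preview of what that domain-limited search looks like once it's URL-encoded - urllib's quote_plus percent-encodes the 'site:' operator and joins words with '+' (this is just an illustration; the real query function is defined below):&lt;/p&gt;

```python
# illustration only: how a site-restricted google query gets URL-encoded
import urllib.parse

query = urllib.parse.quote_plus('site:https://podcasts.apple.com/us/podcast/ data podcast')
print(query)
# site%3Ahttps%3A%2F%2Fpodcasts.apple.com%2Fus%2Fpodcast%2F+data+podcast
```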

&lt;h2&gt;
  
  
  code to find top data science podcasts - version 1
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# import default python packages
import urllib
import requests
import time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The packages above ship with Python; the ones below aren't always included. If you don't have them installed, you'll need to install them - here's how to do it with &lt;a href="https://packaging.python.org/tutorials/installing-packages/#use-pip-for-installing"&gt;pip&lt;/a&gt; or with &lt;a href="https://docs.anaconda.com/anaconda/user-guide/tasks/install-packages/"&gt;conda&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# import non-standard python packages
# if you don't have these installed, install them with pip or conda
from bs4 import BeautifulSoup
import pandas as pd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that the packages have been imported, you should define your user agent. First off, because it's polite if you're scraping anything. Second, Google gives different results for mobile and desktop searches.  This isn't actually my user-agent - I took it from another tutorial since I'm a bit lazy.  I actually use Linux...&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# define your desktop user-agent
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:65.0) Gecko/20100101 Firefox/65.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alright, now we're going to define the queries we want to run, and then create a function that spits out the Google URL we want to scrape.  I'm putting the queries in a kwargs format, since I want to feed them through a function.  That means I can just loop through the list of kwargs and collect the results the function returns.&lt;/p&gt;
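&lt;p&gt;If the kwargs pattern is new to you, here's a toy version of what the loop below does - each dict gets unpacked into the function's named arguments with &lt;code&gt;**&lt;/code&gt; (build_query here is just a stand-in, not the real function):&lt;/p&gt;

```python
# toy example of the kwargs pattern: ** unpacks each dict into named arguments
def build_query(string, pg=None):
    return (string, pg)

kwargs_list = [{'string': 'data podcast'}, {'string': 'data podcast', 'pg': 2}]
toy_results = [build_query(**kw) for kw in kwargs_list]
print(toy_results)  # [('data podcast', None), ('data podcast', 2)]
```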

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Queries
list_kwargs = [
    {"string": 'data podcast'},
    {"string": 'data podcast', "pg": 2},
    {"string": 'data podcast', "pg": 3},
    {"string": 'data science podcast'},
    {"string": 'data engineering podcast'},
    {"string": 'data visualization podcast'},
]

def string_to_podcast_query(string, pg=None):
    query = urllib.parse.quote_plus(f'site:https://podcasts.apple.com/us/podcast/ {string}')
    if pg != None:
        query = query + "&amp;amp;start=" + str(10*(pg-1))
    return f"https://google.com/search?hl=en&amp;amp;lr=en&amp;amp;q={query}", string

# define the headers we will add to all of our requests
headers = {"user-agent" : USER_AGENT}

# set up an empty list to push results to
results = []

# cycle through the list of queries 
for x in list_kwargs:
    # return the query url and the search term that was used to create it (for classification later)
    url, search_term = string_to_podcast_query(**x)

    # make a get request to the url, include the headers with our user-agent
    resp = requests.get(url, headers=headers)

    # only proceed if we get a 200 status code confirming the request succeeded
    if resp.status_code == 200:
        # feed the response into beautiful soup
        soup = BeautifulSoup(resp.content, "html.parser")

        # find each result container (google wraps desktop results in a div with class 'r')
        for g in soup.find_all('div', class_='r'):
            # within the result, find all the links
            anchors = g.find_all('a')
            if anchors:
                # get the link and title, add them to an object, and append that to the results array
                link = anchors[0]['href']
                title = g.find('h3').text
                item = {
                    "title": title,
                    "link": link,
                    "search_term": search_term
                }
                results.append(item)

    # sleep for 2.5s between requests.  we don't want to annoy google and deal with recaptchas
    time.sleep(2.5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
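&lt;p&gt;To see what the query builder is doing, here's a quick standalone sanity check (not part of the original script) on the &lt;code&gt;urllib.parse.quote_plus&lt;/code&gt; call and the pagination offset:&lt;/p&gt;

```python
from urllib.parse import quote_plus

# quote_plus percent-encodes URL-unsafe characters and turns spaces into '+',
# which is the form Google expects in the q parameter
q = quote_plus('site:https://podcasts.apple.com/us/podcast/ data podcast')
print(q)  # site%3Ahttps%3A%2F%2Fpodcasts.apple.com%2Fus%2Fpodcast%2F+data+podcast

# the pg argument maps to Google's result offset: page 3 starts at result 20
offset = 10 * (3 - 1)
print(offset)  # 20
```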

&lt;p&gt;Alright, now we have the Google results back - nice.  From here, let's put them in a pandas dataframe and filter them a bit.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;google_results_df = pd.DataFrame(results)

# create a filter for anything that is an episode.  They should contain a ' | '.
# drop any duplicate results as well.
google_results_df['is_episode'] = google_results_df['title'].str.contains(' | ',regex=False)
google_results_df = google_results_df.drop_duplicates(subset='title')

google_results_podcasts = google_results_df.copy().loc[google_results_df['is_episode']==False]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
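&lt;p&gt;The &lt;code&gt;' | '&lt;/code&gt; filter relies on how Apple titles its pages: episode results look like "Episode Name | Show Name", while show pages are just the show name.  A tiny sketch with made-up titles:&lt;/p&gt;

```python
import pandas as pd

# hypothetical titles: episode pages contain ' | ', show pages don't
df = pd.DataFrame({'title': [
    'Follow the Data Podcast',
    'Maps and Data | Follow the Data Podcast',
]})
# regex=False so the pipe is matched literally, not as regex alternation
df['is_episode'] = df['title'].str.contains(' | ', regex=False)
shows = df.loc[df['is_episode'] == False]
print(shows['title'].tolist())  # ['Follow the Data Podcast']
```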

&lt;p&gt;Ok cool, we have a list of podcasts. Let's define our Apple Podcasts scraper.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def podcast_scrape(link):
    # get the link, use the same headers as had previously been defined.
    resp = requests.get(link, headers=headers)
    if resp.status_code == 200:
        soup = BeautifulSoup(resp.content, "html.parser")

    # find the figcaption element on the page
    rtg_soup = soup.find("figcaption", {"class": "we-rating-count star-rating__count"})
    # the text will return an avg rating and a number of reviews, split by a •
    # we'll spit that out, so '4.3 • 57 Ratings' becomes '4.3', '57 Ratings'
    avg_rtg, rtg_ct = rtg_soup.get_text().split(' • ')
    # then we'll take numbers from the rtg_ct variable by splitting it on the space
    rtg_ct = rtg_ct.split(' ')[0]

    # find the title in the document, get the text and strip out whitespace
    title_soup = soup.find('span', {"class":"product-header__title"})
    title = title_soup.get_text().strip()
    # find the author in the document, get the text and strip out whitespace
    author_soup = soup.find('span', {"class":"product-header__identity podcast-header__identity"})
    author = author_soup.get_text().strip()

    # find the episode count div, then the paragraph under that, then just extract the # of episodes
    episode_soup = soup.find('div', {"class":"product-artwork__caption small-hide medium-show"})
    episode_soup_p = episode_soup.find('p')
    episode_ct = episode_soup_p.get_text().strip().split(' ')[0]

    # format the response as a dict, return that response as the result of the function
    response = {
        "title": title,
        "author": author,
        "link": link,
        "avg_rtg": avg_rtg,
        "rtg_ct": rtg_ct,
        "episodes": episode_ct
    }
    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
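&lt;p&gt;The trickiest part of the scraper is the rating parse: the figcaption text comes back as something like &lt;code&gt;'4.3 • 57 Ratings'&lt;/code&gt;, so two splits pull the numbers out.  A standalone illustration:&lt;/p&gt;

```python
# Apple renders ratings like '4.3 • 57 Ratings'; split on the bullet first
avg_rtg, rtg_ct = '4.3 • 57 Ratings'.split(' • ')
# then take just the number from '57 Ratings'
rtg_ct = rtg_ct.split(' ')[0]
print(avg_rtg, rtg_ct)  # 4.3 57
```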

&lt;p&gt;Cool, we now have a podcast scraper.  You can try it with the below code. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podcast_scrape('https://podcasts.apple.com/us/podcast/follow-the-data-podcast/id1104371750')


{'title': 'Follow the Data Podcast',
'author': 'Bloomberg Philanthropies',
'link': 'https://podcasts.apple.com/us/podcast/follow-the-data-podcast/id1104371750',
'avg_rtg': '4.3',
'rtg_ct': '57'}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Back to the code.  Let's now loop through all the podcast links we have.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# define the result array we'll fill during the loop
podcast_summ = []
for link in google_results_podcasts['link']:
    # use a try/except, since a few episode pages are still in the list and
    # would raise errors otherwise; on an error we simply skip that link
    try:
        # get the response from our scraper and append it to our results
        pod_resp = podcast_scrape(link)
        podcast_summ.append(pod_resp)
    except Exception:
        pass
    # wait for 5 seconds to be nice to apple
    time.sleep(5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now to put everything into a dataframe and do a little bit of sorting and filtering.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pod_df = pd.DataFrame(podcast_summ)

# Remove non-English podcasts, sorry guys...
pod_df = pod_df.loc[~pod_df['link'].str.contains('l=')]
pod_df.drop_duplicates(subset='link', inplace=True)

# merge with the original dataframe (in case you want to see which queries were responsible for which podcasts)
merge_df = google_results_podcasts.merge(pod_df,on='link',suffixes=('_g',''))
merge_df.drop_duplicates(subset='title', inplace=True)

# change the average rating and rating count columns from strings to numbers
merge_df['avg_rtg'] = merge_df['avg_rtg'].astype('float64')
merge_df['rtg_ct'] = merge_df['rtg_ct'].astype('int64')

# sort by total ratings and then send them to a csv
merge_df.sort_values('rtg_ct',ascending=False).to_csv('podcasts.csv')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
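&lt;p&gt;For reference, the &lt;code&gt;'l='&lt;/code&gt; filter above works because localized Apple Podcasts links carry a language query parameter.  A sketch with hypothetical links:&lt;/p&gt;

```python
import pandas as pd

# hypothetical links: localized show pages carry an 'l=' query parameter
pod_df = pd.DataFrame({'link': [
    'https://podcasts.apple.com/us/podcast/follow-the-data-podcast/id1104371750',
    'https://podcasts.apple.com/us/podcast/some-show/id123?l=fr',
]})
# keep only rows whose link does not contain 'l='
english = pod_df.loc[~pod_df['link'].str.contains('l=')]
print(len(english))  # 1
```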

&lt;p&gt;From here I exported the file to csv and did a bit of cheating where I combined the title and link to create an &lt;code&gt;&amp;lt;a href="link"&amp;gt;title&amp;lt;/a&amp;gt;&lt;/code&gt;, but that's mainly because I got a bit lazy...&lt;/p&gt;

&lt;p&gt;Anyways, that was the full process for creating the above list of data science podcasts. You now have the top podcasts, sorted by total reviews.  I also considered using Castbox as a scraping source (since they have an approximation of subscribers / downloads), but I couldn't find any good way to search it for generally popular podcasts, or for podcasts containing a certain word.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The first version of this article stopped here and showed results from this code&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  code to find top data science podcasts - version 2
&lt;/h2&gt;

&lt;p&gt;Well, that was fine, but I think it's actually lacking a bit.  A few podcasts I've stumbled across, and which I hoped this would capture, seem to be missing.  So we're going to switch some stuff up.  First, I'm going to use a mobile user agent to tell Google I'm searching from my phone.&lt;/p&gt;

&lt;p&gt;Why? Well, Google shows different results for desktop searches vs mobile searches, so if we're looking to find the best podcasts, we want to be where most of the searching is actually happening.  And since you basically always listen to podcasts on your phone, it probably makes sense to search &lt;em&gt;from your phone&lt;/em&gt;...  The code for that is below; the main changes are commented inline.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mobile Search Version
headers = {"user-agent" : MOBILE_USER_AGENT}

results = []
for x in list_kwargs:
    url, search_term = string_to_podcast_query(**x)
    resp = requests.get(url, headers=headers)
    if resp.status_code == 200:
        soup = BeautifulSoup(resp.content, "html.parser")

        for g in soup.find_all('div', class_='mnr-c'): # updated target class
            anchors = g.find_all('a')
            if anchors:
                link = anchors[0]['href']
                title = anchors[0].find_all('div')[1].get_text().strip() # updated title crawler
                item = {
                    "title": title,
                    "link": link,
                    "search_term": search_term
                }
                results.append(item)

    time.sleep(2.5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;What else did I switch up? I switched the Google queries up a bit and added a few more.  I figure if I'm actually trying to find the best podcasts, it makes sense to search for them.  That way, you get the ones that typically show up on these types of blog lists.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Queries
list_kwargs = [
    {"string": 'best data podcast'},
    {"string": 'best data podcast', "pg": 2},
    {"string": 'best data podcast', "pg": 3},
    {"string": 'best data podcast', "pg": 4},
    {"string": 'best data science podcast'},
    {"string": 'best data science podcast', "pg": 2},
    {"string": 'best data science podcast', "pg": 3},
    {"string": 'best artificial intelligence podcast'},
    {"string": 'best machine learning podcast'},
    {"string": 'best data engineering podcast'},
    {"string": 'best data visualization podcast'},
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And that's it - all of the changes I made for the second version.  The results are updated up top, and they paint a more complete picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  code to find top data science podcasts - version 3
&lt;/h3&gt;

&lt;p&gt;And I'm an idiot. 'Fixing' my queries to only find the 'best data science podcasts' ended up making me miss a few of the good ones I found earlier.  So I'm going to do what any good data scientist does and just combine both sets of queries...&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Queries&lt;br&gt;
list_kwargs = [&lt;br&gt;
    {"string": 'best data podcast'},&lt;br&gt;
    {"string": 'best data podcast', "pg": 2},&lt;br&gt;
    {"string": 'best data podcast', "pg": 3},&lt;br&gt;
    {"string": 'best data podcast', "pg": 4},&lt;br&gt;
    {"string": 'best data science podcast'},&lt;br&gt;
    {"string": 'best data science podcast', "pg": 2},&lt;br&gt;
    {"string": 'best data science podcast', "pg": 3},&lt;br&gt;
    {"string": 'best artificial intelligence podcast'},&lt;br&gt;
    {"string": 'best machine learning podcast'},&lt;br&gt;
    {"string": 'best data engineering podcast'},&lt;br&gt;
    {"string": 'best data visualization podcast'},&lt;br&gt;
]&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  closing note
&lt;/h3&gt;

&lt;p&gt;This is a cross-post from my &lt;a href="https://gregondata.com/blog/best-data-science-podcasts"&gt;blog&lt;/a&gt;. My current readership is a solid 0 views per month, so I thought it might be worth actually sharing it here...&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
