Scraping every post on an Instagram profile with less than 10 lines of Python

Chris Greening ・ Updated ・ 3 min read

In this blog post, I'm going to give a quick tutorial on how you can scrape every post on an Instagram profile page using instascrape with less than 10 lines of Python!

Specifically, I'll be scraping every post from Joe Biden's Instagram account (@joebiden).

GitHub: chris-greening / instascrape

Powerful and flexible Instagram scraping library for Python, providing easy-to-use and expressive tools for accessing data programmatically

instascrape: powerful Instagram data scraping toolkit


What is it?

instascrape is a lightweight Python package that provides an expressive and flexible API for scraping Instagram data. It is geared towards being a high-level building block on the data scientist's toolchain and can be seamlessly integrated and extended with industry standard tools for web scraping, data science, and analysis.

Key features

Here are a few of the things that instascrape does well:

  • Powerful, object-oriented scraping tools for profiles, posts, hashtags, reels, and IGTV
  • Scrapes HTML, BeautifulSoup, and JSON
  • Download content to your computer as png, jpg, mp4, and mp3
  • Dynamically retrieve HTML embed code for posts
  • Expressive and consistent API for concise and elegant code
  • Designed for seamless integration with Selenium, Pandas, and other industry standard tools for data collection and analysis
  • Lightweight; no boilerplate or configurations necessary
  • The only hard dependencies are Requests and…

Prerequisites for those of you following along at home

Importing our libraries

Let's start by importing the tools we'll be using.

from selenium.webdriver import Chrome 
from instascrape import Profile, scrape_posts

Preparing the profile for scraping

As I've mentioned in previous blog posts, Instagram serves most of its content asynchronously using JavaScript, which enables the seamless infinite-scroll effect and decreases initial load times.


This is where our webdriver comes in handy: it renders the JavaScript for us. For this tutorial, I'll be using chromedriver to automate Google Chrome, but feel free to use whatever webdriver you're comfortable with!

webdriver = Chrome("path/to/chromedriver.exe")

Now, a quick aside before we start this next part: you are going to have to find your Instagram sessionid. *gasp* Don't worry! Here is a super short guide. Be sure to paste it into the headers dictionary below where indicated.

headers = {
    "user-agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Mobile Safari/537.36 Edg/87.0.664.57",
    "cookie": "sessionid=PASTE_YOUR_SESSIONID_HERE;"
}
joe = Profile("joebiden")
joe.scrape(headers=headers)

Dynamically loading all posts

And now for the part you've all been waiting for! The Profile.get_posts instance method accepts a variety of arguments for painlessly loading all the posts on a page.

In this case, we'll have to manually log in to our Instagram account when the browser opens, so we pass login_first=True. This gives us 60 seconds to enter our username and password (the wait time can be changed to whatever you want).

posts = joe.get_posts(webdriver=webdriver, login_first=True)

Now, to prove to you that it worked, here is a GIF of me scrolling through the scraped URLs of all 1,261 posts 😏
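If you're following along at home without a screen recording handy, a quick way to confirm the load worked is to peek at the returned list. A small, generic helper sketch (how each Post object stringifies depends on your instascrape version, so the preview text may vary):

```python
def preview(items, n=3):
    """Show how many items were loaded plus the first few, to confirm
    the infinite-scroll collection actually worked."""
    head = ", ".join(str(x) for x in items[:n])
    return f"{len(items)} loaded; first {min(n, len(items))}: {head}"
```

Calling `print(preview(posts))` right after `get_posts` returns makes it obvious whether you got the full profile or an empty list.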


Scraping the data from each post

Now there is only one thing left to do, and that is to scrape every individual post. The scrape_posts function takes a variety of arguments that let you configure the scrape however you want!

The most important argument in this case is posts which is a list of unscraped instascrape.Post objects.

In this case, I'm going to set a pause of 10 seconds between each scrape so that Instagram doesn't temporarily IP block us.

scraped_posts, unscraped_posts = scrape_posts(posts, headers=headers, pause=10, silent=False)

In the event that something goes wrong mid-scrape, scrape_posts returns all posts that were not scraped so we don't lose the work we already did; hence the unscraped_posts return value.
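Since whatever wasn't scraped comes back as a list, you can simply feed it back in for another pass. A minimal retry sketch; here scrape_fn is a stand-in for a pre-configured call to scrape_posts (the attempt count and the pause-doubling backoff are arbitrary choices of mine, not instascrape behavior):

```python
def scrape_with_retries(posts, scrape_fn, attempts=3, pause=10):
    """Repeatedly pass the still-unscraped posts back to scrape_fn.

    scrape_fn(posts, pause) must return (scraped, unscraped), mirroring
    the shape of instascrape's scrape_posts. Each round doubles the
    pause to ease off Instagram's rate limiting.
    """
    scraped_all = []
    remaining = list(posts)
    for _ in range(attempts):
        if not remaining:
            break
        scraped, remaining = scrape_fn(remaining, pause)
        scraped_all.extend(scraped)
        pause *= 2  # back off before the next round
    return scraped_all, remaining
```

You'd wrap the real call as something like `lambda p, pause: scrape_posts(p, headers=headers, pause=pause, silent=False)` and pass that in as scrape_fn.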

In conclusion

And there we have it! In less than 10 lines of Python, we were able to scrape almost 50,000 data points from @joebiden's Instagram account!


We can now analyze his engagement, how many hashtags he uses, who he tags in photos, etc. In my next blog post, I'll be showing some ways we can analyze this data and glean useful insights!
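As a tiny preview of that analysis, here is a sketch of how the scraped data could be summarized once each Post is converted to a plain dictionary with to_dict. The keys "likes", "comments", and "hashtags" are assumptions about what to_dict returns; check its output on your instascrape version:

```python
def engagement_summary(post_dicts):
    """Compute simple engagement stats from a list of post dicts,
    e.g. produced by [post.to_dict() for post in scraped_posts].
    The expected keys ("likes", "comments", "hashtags") are assumed."""
    n = len(post_dicts)
    if n == 0:
        return {}
    total_likes = sum(p.get("likes", 0) for p in post_dicts)
    total_comments = sum(p.get("comments", 0) for p in post_dicts)
    total_hashtags = sum(len(p.get("hashtags", [])) for p in post_dicts)
    return {
        "posts": n,
        "avg_likes": total_likes / n,
        "avg_comments": total_comments / n,
        "avg_hashtags": total_hashtags / n,
    }
```

From a list of dicts like this, it's also a one-liner to hop into pandas with `pd.DataFrame(post_dicts)` for heavier analysis.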

In the meantime, here is a related article where I analyze 10,000 data points scraped from Donald Trump's Instagram account.

Click here for the full code, dataset, and file containing all the URLs used in this tutorial.

If you have any questions, feel free to drop them in the comments below, message me, or email me at chris@christophergreening.com!


Discussion (15)

karisjochen

Thanks so much for these tutorials and sharing your work! Question: I ran this exact code for my own instagram data and it has been executing for ~30 minutes now. I have 552 instagram posts. I'm hesitant to kill it but I am unsure if it is stuck. Any ideas?

Chris Greening (Author)

Unfortunately it's an incredibly slow approach, Instagram starts blocking if you scrape too much too fast so I try to play the long game and let it run in the background.

In the scrape_posts function, you'll see pause=10 which refers to a 10s pause between each post scrape. Considering you have 552 posts, that'll be (552*10)/60 = 92 minutes 😬

In the future, passing silent=False as an argument will print what number the scrape is currently on, I'm actually gonna edit that in right now for anyone else reading the article in the future!

Thanks for reaching out!

Chris Greening (Author)

If it's any consolation though, that means it's working! You're just gonna have to wait an extra hour or so before you can get your data 😬

karisjochen

haha thank you! So it did eventually finish without error, but then I had a list of "Post" objects and I couldn't tell how to get the data out of them. From reading the GitHub documentation I tried various methods but to no avail (this isn't a knock on you, more a knock on my learning curve).

So now, after a few hours of messing around, I tried to run the "joe biden code" for my own account, and even though I am setting login_first=False in the get_posts function, the chromedriver brings me to a login page. I'm able to log into Instagram, but meanwhile my code says it has finished running without error and my posts and scraped_posts objects are just empty lists.

karisjochen

oh I guess I should also mention that my end goal is to collect data similar to the data you analyzed in your donald trump post. I saw you published a notebook of the analysis code (thank you!) but didn't see a line-by-line on how you got that data.

Chris Greening (Author)

Scraped Post objects contain the scraped data as instance attributes! Try using the to_dict method on one of the Posts and it should return a dictionary with the data it scraped for that Post. The key/values of the returned dict correspond one-to-one with the available instance attributes.

I'll take a look at the login_first bug right now and see if I can replicate it; it might be on the library's end! Instagram has been making a lot of changes over the last month or so, making it increasingly harder to scrape.

Chris Greening (Author)

ahhh okay, so when you set login_first=False, Instagram still redirects to the login page automatically, but instascrape tries to start scrolling immediately, which results in an empty list since there are no posts rendered on the page

to access dynamically rendered content like posts you're pretty much always gonna have to be logged in so it's best to leave login_first as True unless you're chaining scrapes and your webdriver is already logged in manually

karisjochen

amazing, thank you! So I was able to get my first 10 posts no problem by specifying amount=10, but then I tried to do all ~500 pictures and after 232 pictures I came across this error:

ConnectionError: ('Connection aborted.', OSError("(54, 'ECONNRESET')"))

I'm guessing this means Instagram blocked my request? Have you come across this issue?

idilkylmz

Hi Chris,

Thank you for sharing. I tried to use your code but I am getting this error:

ImportError: cannot import name 'QUOTE_NONNUMERIC' from partially initialized module 'csv' (most likely due to a circular import) (/home/idil/Masaüstü/csv.py)

Do you know what this is about?

Mageshwaran

This is Cool Man

Chris Greening (Author)

Hey thanks so much, I appreciate it! 😄

villival

Always crisp and clear... thanks for sharing ...

Alessandro Sassi (edited)

Thank you very much for this precious tool!
I'm trying to run the code, but despite inserting my session id I still get 'MissingCookiesWarning' and 'InstagramRedirectLoginError'.
How can I fix this?

Anaaa

Thanks for your hard work. I'm really lucky because I found out about this project just as I wanted to scrape my business IG profile. Keep up with the good work!

Chris Greening (Author)

This is exactly why I released it, thanks so much for the feedback 😄 motivates me to keep working on it