DEV Community

Federico Trotta

How to Use Proxies in Python

If you've been working with Python for a while, especially on data scraping, you've probably run into situations where you get blocked while trying to retrieve the data you want. In such situations, knowing how to use a proxy is a handy skill to have.

In this article, we'll explore what proxies are, why they're useful, and how you can use them with the requests library in Python.

What is a Proxy?

Let’s start from the beginning by defining what a proxy is.

You can think of a proxy server as a “middleman” between your computer and the internet. When you send a request to a website, the request goes through the proxy server first. The proxy then forwards your request to the website, receives the response, and sends it back to you. This process masks your IP address, making it appear as if the request is coming from the proxy server instead of your own device.

As you can imagine, this has many consequences and uses. For example, it can help you bypass pesky IP restrictions or maintain anonymity.

Why use a proxy in web scraping?

So, why might proxies be helpful while scraping data? Well, we already gave one reason above: you can use them to bypass certain restrictions.

In the particular case of web scraping, they can be useful for the following reasons:

  • Avoiding IP blocking: websites often monitor for suspicious activity, such as a single IP making numerous requests in a short time. Using proxies helps distribute your requests across multiple IPs, reducing the chance of being blocked.
  • Bypassing geo-restrictions: some content is only accessible from certain locations and proxies can help you appear as if you're accessing the site from a different country.
  • Enhancing privacy: proxies are useful to keep your scraping activities anonymous by hiding your real IP address.

How to use a proxy in Python using requests

The requests library is a popular choice for making HTTP requests in Python, and incorporating proxies into your requests is straightforward.

Let’s see how!

Getting Valid Proxies

First things first: you have to get valid proxies before actually using them. To do so, you have two options:

  • Free proxies: you can get proxies for free from websites like Free Proxy List. They're easily accessible, but they can be unreliable or slow.
  • Paid proxies: services like Bright Data or ScraperAPI provide reliable proxies with better performance and support, but you have to pay.
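Free proxies in particular can die at any moment, so it's worth checking that a proxy actually works before relying on it. Here is a minimal sketch of such a check; the proxy address is a placeholder, and httpbin.org is used only as a convenient test endpoint:

```python
import requests

def check_proxy(proxy_url, timeout=5):
    """Return True if the proxy can successfully relay a request."""
    proxies = {'http': proxy_url, 'https': proxy_url}
    try:
        response = requests.get('https://httpbin.org/ip',
                                proxies=proxies, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        # Covers connection errors, proxy errors, and timeouts
        return False

# Example (placeholder address -- replace with a real proxy):
# check_proxy('http://proxy_ip:proxy_port')
```

You can run this over a whole list of candidate proxies and keep only the ones that pass.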

Using Proxies with requests

Now that you have your list of proxies, you can start using them. For example, you can create a dictionary like so:

proxies = {
    'http': 'http://proxy_ip:proxy_port',
    'https': 'https://proxy_ip:proxy_port',
}

Now you can make a request using the proxies:

import requests

proxies = {
    'http': 'http://your_proxy_ip:proxy_port',
    'https': 'https://your_proxy_ip:proxy_port',
}

response = requests.get('https://httpbin.org/ip', proxies=proxies)

To see the outcome of your request, you can print the response:

print(response.status_code)  # Should return 200 if successful
print(response.text)         # Prints the content of the response

If everything went smoothly, the response should display the IP address of the proxy server, not yours.

Proxy Authentication Using requests: Username and Password

If your proxy requires authentication, you can handle it in a couple of ways.

Method 1: Including Credentials in the Proxy URL
To manage authentication, you can include the username and password directly in the proxy URL (URL-encode them if they contain special characters like @ or :):

proxies = {
    'http': 'http://username:password@proxy_ip:proxy_port',
    'https': 'https://username:password@proxy_ip:proxy_port',
}

Method 2: Using HTTPProxyAuth
Alternatively, you can use the HTTPProxyAuth class to handle authentication like so:

import requests
from requests.auth import HTTPProxyAuth

proxies = {
    'http': 'http://proxy_ip:proxy_port',
    'https': 'https://proxy_ip:proxy_port',
}

auth = HTTPProxyAuth('username', 'password')

response = requests.get('https://httpbin.org/ip', proxies=proxies, auth=auth)

How to Use a Rotating Proxy with requests

Using a single proxy might not be sufficient if you're making numerous requests. In this case, you can use a rotating proxy, which changes the proxy IP address at regular intervals or on every request.

If you’d like to test this solution, you have two options: manually rotating proxies from a list, or using a proxy rotation service.

Let’s see both approaches!

Using a List of Proxies

If you have a list of proxies, you can rotate them manually like so:

import random
import requests

proxies_list = [
    'http://proxy1_ip:port',
    'http://proxy2_ip:port',
    'http://proxy3_ip:port',
    # Add more proxies as needed
]

def get_random_proxy():
    proxy = random.choice(proxies_list)
    return {
        'http': proxy,
        'https': proxy,
    }

for i in range(10):
    proxy = get_random_proxy()
    response = requests.get('https://httpbin.org/ip', proxies=proxy)
    print(response.text)
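In practice, some proxies in your list will fail, so rotation usually goes hand in hand with error handling. This sketch (with placeholder proxy addresses) tries proxies in random order until one succeeds or the list is exhausted:

```python
import random
import requests

def fetch_with_rotation(url, proxies_list, timeout=5):
    """Try proxies in random order; return the first successful response, or None."""
    for proxy in random.sample(proxies_list, len(proxies_list)):
        proxy_dict = {'http': proxy, 'https': proxy}
        try:
            return requests.get(url, proxies=proxy_dict, timeout=timeout)
        except requests.RequestException:
            continue  # This proxy failed; try the next one
    return None

# Placeholder proxies -- replace with real ones:
# response = fetch_with_rotation('https://httpbin.org/ip',
#                                ['http://proxy1_ip:port', 'http://proxy2_ip:port'])
```

Returning None (instead of raising) lets the caller decide how to react when every proxy is down.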

Using a Proxy Rotation Service

Services like ScraperAPI handle proxy rotation for you. Typically, you just need to point your requests at the proxy URL they provide, like so:

import requests

proxies = {
    'http': 'http://your_service_proxy_url',
    'https': 'https://your_service_proxy_url',
}

response = requests.get('https://httpbin.org/ip', proxies=proxies)

Conclusions

Using a proxy in Python is a valuable technique for web scraping, testing, and accessing geo-restricted content. As we’ve seen, integrating proxies into your HTTP requests is straightforward with the requests library.

A few parting tips when scraping data from the web:

  • Respect website policies: always check the website's robots.txt file and terms of service.
  • Handle exceptions: network operations can fail for various reasons, so make sure to handle exceptions and implement retries if necessary.
  • Secure your credentials: if you're using authenticated proxies, keep your credentials safe and avoid hardcoding them into your scripts.
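For the retry tip in particular, requests can do much of the work for you by mounting urllib3's Retry helper on a Session. A minimal sketch:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 3 times with exponential backoff on common transient errors
retry = Retry(total=3, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount('http://', adapter)
session.mount('https://', adapter)

# Any session.get(...) call now retries automatically, proxies included:
# session.get('https://httpbin.org/ip', proxies=proxies)
```

Using a Session this way also keeps connections alive between requests, which speeds up repeated scraping of the same host.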

Happy coding!
