Olorundara Akojede (dvrvsimi)

Originally published at showwcase.com

Creating a Twitter Bot with OpenAI Models in Python: A Beginner’s Guide

It is no secret that AI is the new disruptive force in the tech ecosystem. Almost every sector is looking for innovative ways to automate its repetitive processes, and companies will pay for a more seamless, efficient workflow within their organizations.

Generative Pre-trained Transformers (GPTs) are a type of large language model (LLM) and a prominent framework for generative artificial intelligence. They have existed for a while, but OpenAI's ChatGPT found a way to make them accessible even to a non-technical audience.

In this article, you will learn how to integrate one of OpenAI's models into a Python program that automates a Twitter bot. There are already tons of bots on Twitter, but it would be cool to build your own as a developer. Who knows? Your idea might be featured on the Twitter Dev forum.

The process is broken down into easy-to-follow, beginner-friendly steps. Let's go!

Getting Started
Before starting properly, a few things should be in place:

  1. Access to both the Twitter API and the OpenAI API.
  2. A text editor.
  3. A platform to periodically run a Python script.

To gain access to the Twitter API, sign up on the Twitter Developer Portal for dev access. You cannot have a dev account without a Twitter account, so make sure to use an account reserved strictly for automated tweets. This lets other users know that yours is a bot account and prevents it from getting flagged. To learn how to properly set up an automated account, check Automated Account labeling for bots in Twitter's docs.

You will be asked a couple of questions to determine the scope of your API usage, and there are various levels of access that can be granted for the Twitter API. After a series of changes at Twitter, Elevated access was revoked for new applications, leaving Free access as the lowest tier. It has restricted functionality, but it should do just fine for what you will be building. To learn more about Twitter access levels and versions, check the About the Twitter API page.

monthly cap usage for different tiers

You can proceed to the Dashboard page after being granted access. On this page, you can name your app and configure authentication settings; these settings determine what your generated tokens/secrets can and cannot do. The default environment is usually Production and can be changed as development progresses. Remember to store all your tokens in a safe place.

dashboard interface

Likewise, look up OpenAI's API page and create a personal account. The free trial should do for most basic call frequencies and should last a considerable while; however, if you plan to do some heavy lifting, go ahead and add your preferred subscription.

OpenAi usage metrics page

There are many platforms for hosting and deploying scripts; WayScript is a good choice, and we will get to set it up in a bit.

Preparing the code environment
Now that all the preparatory steps are out of the way, it is time to write the program. There is an endless list of cool features you could add to a Twitter bot, but first you need to install some dependencies whose built-in functions make the code more efficient to write. You will build a bot that gives facts about a topic of the user's choosing, but you will also be able to add a twist!

Tweepy and OpenAI are the required libraries; run pip install tweepy openai in your terminal to install them. Installing dependencies globally is not good practice, so see this article on how to set up virtual environments from your terminal.

What next?
Create a main.py file inside your project folder after you have activated your virtual environment, then import the installed libraries. You should also import the time library, which will be used later on; do not install time, since it is a built-in Python package.

# importing required libraries
import tweepy
import openai
import time

Now that you have tweepy imported, you can use one of its functions to authenticate the credentials you generated earlier. Before that, create a new .py file named config.py for storing your credentials. In the file, assign a variable to each credential and save the file.

## config.py
consumer_key = "xxxxxxxxxxxxxxxxxxxxxx"
consumer_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
access_token = "13812xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
access_token_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
openai_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client_ID = "S1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

You can now import all your credentials by importing config into your main file.

# importing required libraries
import tweepy
import openai
import time
# note the use of * to universally import config content
from config import *

You cannot access the Twitter API without authenticating these credentials. OAuthHandler, also known as OAuth1UserHandler in newer versions of tweepy, is a tweepy class for authenticating credentials, and it will suffice for this level of access; to learn about other supported authentication methods, check Authentication in the Tweepy docs. Set up your OpenAI credential too.

# authorizing with tweepy, note that "auth" is a variable
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
# .set_access_token() is a tweepy method
auth.set_access_token(access_token, access_token_secret)

# creating API object
api = tweepy.API(auth)

# setting up OpenAi key
openai.api_key = openai_key

Another good practice is to add config.py to a .gitignore file. This ensures that your credentials stay untracked if you push your project to GitHub.
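If your project does not have a .gitignore yet, you can create one from the terminal (a minimal sketch, assuming a Unix-like shell; the .venv/ entry assumes you named your virtual environment folder .venv):

```shell
# keep credentials and the virtual environment out of version control
echo "config.py" >> .gitignore
echo ".venv/" >> .gitignore
```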

OpenAI chat models
There are various OpenAI chat models, each with its own syntax. All chat models generate responses in tokens: as a rule of thumb, one token corresponds to roughly 4 characters of English text, so longer prompts and longer words cost more tokens. It is good to keep this in mind so you do not exceed your API call limit.
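As a quick sanity check before sending a prompt, you can approximate its token count with the 4-characters-per-token rule of thumb (this is only a rough estimate, and estimate_tokens is a hypothetical helper, not part of the API):

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, len(text) // 4)

# a 45-character prompt comes out to roughly 11 tokens
print(estimate_tokens("tell a fact about the Python language, please"))  # → 11
```

Real tokenization depends on the model's vocabulary, so treat this only as a budgeting aid when setting max_tokens.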

We will be using the text-davinci-003 model (a type of GPT-3.5 model), but gpt-3.5-turbo is the most capable GPT-3.5 variant: it is optimized for chat completions and is cheaper per token. Read more about the chat models in OpenAI's docs.
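If you later switch to gpt-3.5-turbo, the request shape changes: instead of a single prompt string, the chat completions endpoint takes a list of role-tagged messages. A small helper can build that payload (a sketch; build_chat_messages is a hypothetical helper mirroring the persona prompt used later in this tutorial, and the openai.ChatCompletion.create call is shown only as a comment since it requires a live API key):

```python
def build_chat_messages(topic):
    """Build the messages payload expected by the chat completions endpoint."""
    return [
        {"role": "system", "content": "you are a grumpy computer programmer"},
        {"role": "user", "content": f"tell a fact about {topic} in a rude and sarcastic tone"},
    ]

# with a valid key, the call would look roughly like:
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=build_chat_messages("recursion"),
# )
# fact = response.choices[0].message.content.strip()

print(build_chat_messages("recursion")[1]["content"])
```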

Writing Functions
We need to write a generate_fact() function that generates a response with our selected chat model for whatever topic the user chooses. The Completion class allows us to create a completion with the option to tweak some parameters; code below:

# function to generate facts about a topic
def generate_fact(topic):
    # play around with the prompt
    prompt = f"you are a grumpy computer programmer, tell a fact about {topic} in a rude and sarcastic tone"
    response = openai.Completion.create(
        # bear in mind that this engine has a limit of 4000+ tokens
        engine="text-davinci-003",
        prompt=prompt,
        # max_tokens covers both prompt tokens and completion tokens, so provide enough
        max_tokens=1024,
        # set n=1 for a single response, n>1 for multiple responses
        n=1,
        stop=None,
        # temperature ranges from 0.0 to 1.0; closer to 0 makes the model more deterministic and repetitive, closer to 1 makes it more fluid
        temperature=0.8,
    )
    fact = response.choices[0].text.strip()
    return fact

Now that you have created a fact-generating function, you need to write a handle_mentions() function that handles mentions of your bot account. You will use Tweepy to listen for specific mentions that should trigger a response.

# function to handle mentions
def handle_mentions():
    mentions = api.mentions_timeline(count=5)  # only retrieve the last 5 mentions

    for mention in mentions:
        if mention.in_reply_to_status_id is not None:  # skip tweet replies (optional)
            continue
        if mention.user.screen_name == "bot_account_username":  # skip mentions from self
            continue

        # parse mention_text; the trigger phrase can be anything, e.g. "tell me about"
        mention_text = mention.text.casefold()  # casefold to prevent errors arising from mixed cases
        if "tell a fact about" not in mention_text:
            continue
        topic = mention_text.split("tell a fact about")[1].strip()

        # generate a fact by calling our initial function
        try:
            fact = generate_fact(topic)
            # post the fact as a reply to the mention
            api.update_status(f"@{mention.user.screen_name} {fact}", in_reply_to_status_id=mention.id)
        # handle OpenAI API errors (the base error class lives in openai.error in openai-python 0.x)
        except openai.error.OpenAIError as e:
            api.update_status(f"@{mention.user.screen_name} sorry, an error occurred while processing your request.", in_reply_to_status_id=mention.id)
        # handle Tweepy errors
        except tweepy.TweepyException as e:
            api.update_status(f"@{mention.user.screen_name} sorry, an error occurred while processing your request.", in_reply_to_status_id=mention.id)
        # handle any other errors that may arise in your code
        except Exception as e:
            api.update_status(f"@{mention.user.screen_name} sorry, an error occurred while processing your request.", in_reply_to_status_id=mention.id)

Remember to replace bot_account_username with your bot account's username. Adding error handlers is also very good Python practice.
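The trigger-phrase parsing inside handle_mentions() can also be pulled out into a small pure function, which makes it easy to test without touching the Twitter API (a sketch; extract_topic is a hypothetical helper, not part of the original code):

```python
def extract_topic(mention_text, trigger="tell a fact about"):
    """Return the topic following the trigger phrase, or None if the phrase is absent."""
    text = mention_text.casefold()  # same casefold trick as in handle_mentions()
    if trigger not in text:
        return None
    return text.split(trigger)[1].strip()

print(extract_topic("@grumpy_bot Tell a fact about Python"))  # → python
print(extract_topic("@grumpy_bot hello there"))  # → None
```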

To ensure that the program keeps streaming for mentions and sending responses at a set interval, you need to add a code block that serves as a loop.

# by adding if __name__ == "__main__": we ensure that certain code blocks, handle_mentions() in this case, only execute when main.py is run as the main program

if __name__ == "__main__":
    while True:
        try:
            handle_mentions()
        except tweepy.TweepyException as e:
            # handle Twitter API errors
            print("an error occurred while handling mentions:", e)
        except Exception as e:
            # handle other errors
            print("an error occurred while handling mentions:", e)
        # wait for 2 minutes before checking for new mentions (reduce the time if traffic on your bot account increases; this is why we imported time)
        time.sleep(120)

NOTE:
When copying the above code blocks, you may run into IndentationError exceptions, so be sure to look out for them.

Deploying the Python Script on WayScript
Setting up WayScript can be a little challenging, but this tutorial video by WayScript thoroughly breaks the process down from start to finish.

WayScript home page

Conclusion
I have been testing a bot with more functions; take a look at the repository. I will periodically update the README.md with more information.

Want to check out more of my articles? You can find them here. You can also connect with me on Twitter.

Great job on building your own Twitter bot with an OpenAI model! As you have learned, Python and OpenAI make it easy to create a powerful bot that can interact with users and provide valuable information. With the techniques you’ve learned, the possibilities are endless for what you can create. Whether it’s a language translator bot, a fact generator, or a reminder bot, the key is to use your imagination and build something that provides value to your audience. So go ahead, build your own Twitter bot, and watch it come to life!
