Simona Cotin for Microsoft Azure


Build your jokes generator using Machine Learning and Serverless

This article is part of #25DaysOfServerless. New challenges will be published every day from Microsoft Cloud Advocates throughout the month of December. Find out more about how Microsoft Azure enables your Serverless functions.



The Turing test has been the de facto standard for testing a machine for intelligence. It is how we try to figure out whether our models can respond to general-purpose questions the way a human would. The most effective method for building a machine with human-like responses is neural networks and deep learning, algorithms that have only recently become feasible thanks to advances in hardware performance.

One aspect of humanity that is restricted to evolved primates is humor. In all fairness, there is no clear-cut definition of humor; however, one theory has gained quite a bit of popularity in recent years. The argument is that what makes us classify something as funny falls under the "incongruity theory": its underlying mechanism builds on the inconsistency between what we expect to happen and what actually happens.
As it turns out, the cognitive mechanisms that allow us to identify something as "funny" are quite complex, and only a few evolved primates possess them.
For example, Koko, the western lowland gorilla who understood more than 1,000 American Sign Language signs and 2,000 spoken English words, was observed making prank jokes (e.g., she would tie an instructor's shoelaces and then innocently make the sign for "chase". That's one tricksy gorilla :D!).
As a rule of thumb, the primates that make laughter-like sounds and are ticklish can be categorized as having a sense of humor.

Computers, on the other hand, are not ticklish, and vocal cords are not yet a justified upgrade on any of the latest-generation laptops, so a machine has to be taught humor by other means. The best way to train a neural network is with an exhaustive text corpus containing as many jokes as possible in a specific format. You will notice that we have already restricted the domain of humor to jokes (no pranks, no ironic comedy with complex situations, etc.).

"If you're willing to restrict the flexibility of your approach, you can almost always do something better."
– John Carmack

In our case, we restrict the domain further to one-liners and question-and-answer style jokes, which makes the training of our model more focused; essentially, we are gently nudging the machine in the right direction.
The machine intelligence we are building is a text-generation model: a deep-learning neural network that can make jokes.
We will use the language with the most mature ecosystem of libraries, examples, and documentation, which in this case is Python. I am a big fan of Python for its readability, and because it evolves closely alongside JavaScript many things feel familiar; it is also the other serverless language, which makes it an excellent candidate to write an Azure Function that tells us jokes.
The most popular approaches to writing text-generation applications are based on Andrej Karpathy's char-rnn architecture and blog post, in which a recurrent neural network learns to predict the next character in a sequence from the previous n characters.
Based on Andrej's work, Max Woolf built a Python package, textgenrnn, that makes building a text-generation bot very easy. It allows you to configure your neural network's depth, layers, behavior, and so on.
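The core idea behind char-rnn, predicting the next character from the ones before it, can be illustrated with a toy model. The sketch below is not a neural network at all, just a character bigram counter over a hypothetical one-line corpus, but it shows the same prediction task in miniature:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, which characters follow it in the corpus."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, char):
    """Return the character most often seen after `char` during training."""
    return counts[char].most_common(1)[0][0]

# Hypothetical tiny corpus standing in for jokes.txt
corpus = "why did the chicken cross the road? to get to the other side."
model = train_bigrams(corpus)
print(predict_next(model, "t"))  # 'h' — "th" dominates this corpus
```

A real char-rnn replaces the bigram table with a recurrent network conditioned on a much longer context window, but the input/output contract is the same: characters in, most likely next character out.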

The main entry point for training your joke model is the textgenrnn class. It has a train_from_file method that consumes a file of training data, teaching your bot to figure out which punch line is best suited to answer each joke.

The usage is quite straightforward, training a simple model would look something like this:

from textgenrnn import textgenrnn

# Create a fresh model named "new_model" and train it on the jokes corpus
textgen = textgenrnn(name="new_model")
textgen.train_from_file('jokes.txt', num_epochs=5)

After training on the data set, let's take the model for a spin and try generating a test joke by calling textgen.generate(). This prints the joke to the console but does not return it, which is no use for returning the joke from a serverless function, so we want the variant of the call that returns the data.

joke = textgen.generate(return_as_list=True)[0] # Grab the first generated joke in the list, in this case the only one
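generate also accepts a temperature parameter that controls how adventurous the sampling is. The effect of temperature is easiest to see in a self-contained sketch of softmax sampling, independent of textgenrnn, using made-up scores for three candidate characters:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities; lower temperature
    sharpens the distribution toward the top-scoring candidate."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # fake scores for three candidates
cold = softmax(logits, temperature=0.2)      # conservative: almost always the favorite
hot = softmax(logits, temperature=2.0)       # adventurous: much flatter distribution
print(cold[0] > hot[0])                      # True: low temperature concentrates probability
```

Low temperatures produce safe, repetitive output; higher temperatures produce more surprising (and more garbled) jokes, so it is worth experimenting with values between the two.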

Training also creates the model file, a serialized set of network weights that TensorFlow will use to help our function generate jokes based on the training data. This is the file with the .hdf5 extension in your project's root folder.

Now the last piece of the puzzle is using the new model as a trigger to generate jokes in an Azure Function. Assuming you have created a storage account with a container named jokes, here is what your function will look like:

import logging
import os
import tempfile
import urllib.request

import azure.functions as func
from textgenrnn import textgenrnn

def main(myblob: func.InputStream, context: func.Context):
    # Download the trained weights from blob storage to a temp file
    temp_dir = tempfile.gettempdir()
    jokes_file = os.path.join(temp_dir, "jokes.hdf5")
    urllib.request.urlretrieve(myblob.uri, jokes_file)
    # Load the model from the weights and generate a single joke
    textgen = textgenrnn(jokes_file)
    joke = textgen.generate(return_as_list=True)[0]
    logging.info(f"joke: {joke}")
The blob trigger binding lives in function.json:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "jokes/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
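If you would rather request a joke on demand than react to a blob upload, the same function body could be exposed over HTTP instead. A minimal function.json for that variant might look like the following (this is a sketch, not part of the original sample; it assumes an anonymous HTTP trigger named req and an HTTP output binding named $return):

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "authLevel": "anonymous",
      "methods": ["get"]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
```

The Python main would then end with `return func.HttpResponse(joke)` instead of only logging it.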

To test your function, upload the generated .hdf5 file to the jokes container and watch the generated joke appear in the function's logs.

Need more inspiration? You can find the source code in my GitHub repository.

Congrats

You now have an API with a sense of humor :D

Conclusions

You will notice that the API is not particularly funny in the classical sense, but it will surprise you. I think the gap between what you expect and what the endpoint returns definitely falls under the incongruity theory.

To nudge the output toward more traditional joke patterns, there are a few ways to improve training, such as:

  • restricting the type of jokes
  • increasing the size of the training data set

Want to submit your solution to this challenge? Build a solution locally and then submit an issue. If your solution doesn't involve code you can record a short video and submit it as a link in the issue description. Make sure to tell us which challenge the solution is for. We're excited to see what you build! Do you have comments or questions? Add them to the comments area below.


Watch for surprises all during December as we celebrate 25 Days of Serverless. Stay tuned here on dev.to as we feature challenges and solutions! Sign up for a free account on Azure to get ready for the challenges!

Top comments (1)

Rohan Sawant:

Haha! This is pure gold! 🥇

Is there a live version of this? I would like to fiddle with it!