In the previous post I talked about how video games can be used as a resource for building general machine learning models. Today I want to walk through building an object detection model using the Watson Machine Learning service to identify and track objects in Fortnite. Let's jump right in!
Resources
We'll be using a few different services available on IBM Cloud, so first you'll want to create a new account, or log in to your existing account.
The services that we'll be using are:
- Cloud Object Storage: To store our data and models
- Watson Machine Learning: Environment to train our model
- Cloud Annotations: To quickly label our training data
You will also need a video of Fortnite gameplay that you can use as training and test data (I have provided one here, but the more, the better).
Cloud Object Storage
Once you have logged into IBM Cloud, simply click on "Create Resource" and search for "object storage". Give your instance a name (I chose "couch-fortnite-object-storage") and select the Lite plan, which is free and allows up to 25GB of storage. Once the service is created, we need to create credentials so that we can use our object storage to store both our training data and model files. Click New Credential, make sure that the role is set to Writer, and check the option to include an HMAC credential.
Once created, click View Credential and you will see a JSON output of your credential. We need a few elements from it:
- apikey
- cos_hmac_keys (specifically access_key_id and secret_access_key)
- resource_instance_id
You can either keep this open in a second tab, or save the JSON in a text file to use in a few moments.
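For reference, the fields we care about sit in the credential JSON roughly like this (the values here are placeholders and other fields are omitted):

```json
{
  "apikey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "cos_hmac_keys": {
    "access_key_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "secret_access_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx::"
}
```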
Watson Machine Learning
The last service we need to create is Watson Machine Learning. Follow the same steps as above, searching for "watson machine learning", give it a name, and select the Lite plan. We need to create credentials for this service as well: click New Credential and again make sure to select Writer as the role. Click View Credentials and make a note of the elements we will need later:
- instance_id
- password
- url
- username
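Again for reference, the relevant part of that credential JSON looks roughly like this (the values are placeholders, the url will reflect your region, and other fields are omitted):

```json
{
  "instance_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "url": "https://us-south.ml.cloud.ibm.com",
  "username": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```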
Preparing the data
The goal here will be to train a model that can both identify and track an object in videos of Fortnite gameplay. We'll use a tool called Cloud Annotations to simplify this process. Navigate to cloud.annotations.ai – we'll use our object storage credentials to log in. Enter your resource_instance_id and apikey, and select US as the region.
Once logged in, the first thing we need to do is click Create bucket and give it a name. Next, select localization – this will allow us to label objects in a photo by drawing bounding boxes.
Next, click add media and select your Fortnite video (video files will be split into individual frames). Now, click add label and name it baller so that we can label ballers in the video (the Fortnite vehicle).
Now we can go through our images, drawing boxes around each baller that we see.
You may label as many or as few images as you would like. Only images that have labels will be used in training.
As a general note about machine learning, and Fortnite specifically: you should use training data that covers the full range of environments you anticipate seeing in testing and general use of your model. In Fortnite there are many different environments you can encounter (cityscapes, trees, snow, lava, etc.), and you should try to incorporate at least a few different environments in your training data to build the best possible model.
Training
We will be using a CLI tool to interface with our labeled training images and train/download our model. The CLI requires Node 10.13.0 or later.
Installation
npm install -g cloud-annotations
Once installed, you will have access to the cacli command in your terminal.
Training our model
I suggest creating a new directory where we can run our training operations and eventually download our model (I created a directory called fortnite-obj-detection). From that directory run the command cacli train. The first time you run this command it will prompt you for credentials for both your Watson Machine Learning and object storage instances; this allows the tool to access the training data and then train our model.
cacli will also ask about training parameters. We will be using the k80 GPU, which is included in the Lite plan of Watson Machine Learning. For the number of training steps, I suggest 20 * [number of training images] as a general rule of thumb – for example, 500 labeled frames would mean roughly 10,000 steps.
Once run, a configuration file will be created so in the future you can simply retrain the model with new data without providing the service credentials.
Once all of the parameters have been filled in, we're ready to train! The CLI tool should automatically initiate the training job and join the queue. It will provide a model_id, which we will need to both monitor and download the model. The CLI will ask if you would like to monitor the job once it has started, but if you close the terminal or would like to monitor elsewhere, you can also run cacli progress [model_id].
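Putting those steps together, a typical first training session from the project directory looks something like this (model_id stands in for whatever ID the CLI prints for your job):

```bash
# Run from the project directory (e.g. fortnite-obj-detection).
# The first run prompts for your Watson Machine Learning and object storage
# credentials, then asks for training parameters (GPU type and number of steps).
cacli train

# Note the model_id printed for the queued job, then check on it at any time with:
cacli progress [model_id]
```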
Download the model
Once the job has finished, we're ready to use the model, but first we need to download it! Simply run cacli download [model_id] and it will retrieve the trained model from our object storage bucket and download it locally. The tool will download 3 versions of the model, ready to deploy in various environments. We will be using the model_web version, which is ready to use in a TensorFlow.js application.
Using the model
A standalone React app is available for you to clone and use already! Once you've cloned the repository, copy the model_web directory (the whole directory) into the public directory of the React app. Then add a video named video.mov to the public directory. Finally, run the app! If all goes well, the video will play and display bounding boxes around the objects it has identified.
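If you want to experiment with the model outside of that app, a minimal sketch of using model_web from a TensorFlow.js project might look like the following. Note that the @cloud-annotations/models helper package, its load/detect methods, and the prediction shape shown here are assumptions on my part – the cloned React app is the authoritative reference for the exact API it uses.

```javascript
import models from '@cloud-annotations/models'

// NOTE: the helper package and method names here are assumptions; check the
// cloned React app for the exact API it uses.
async function detectBallers() {
  // Load the TensorFlow.js model files copied into public/model_web
  const model = await models.load('model_web')

  // Run detection against a video (or image/canvas) element on the page
  const video = document.querySelector('video')
  const predictions = await model.detect(video)

  // Each prediction should include a label (e.g. "baller"), a confidence score,
  // and a bounding box to draw over the frame – exact field names may differ.
  console.log(predictions)
}

detectBallers()
```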
Hopefully this is a great starting point for you to build your own object detection models! Like I said before, I think that video games create a great environment for developing general purpose machine learning models.