From Game Developer to Deep Learning – The Easier Route

My name is Dana and I’m a game developer. Or rather, I was a game developer until I took a detour into the world of technical writing. But that’s not the story I’m going to tell you here. Instead, I want to share how I learned to build deep learning models. This isn’t a horror story about gluing together buggy code written by others, but rather one about how easy it was for me to become a deep learning (DL) practitioner without prior knowledge of artificial intelligence. And since many developers are now getting into DL, either as a full-time career or because their role now requires it, I thought I'd share some learnings from my initial journey.

It all started two years ago when I got an exciting opportunity to write technical documentation and blogs for a company called PerceptiLabs. PerceptiLabs is a DL startup, founded in 2017 by Swedish computer scientists Martin Isaksson and Robert Lundberg. Their tool, of the same name, makes DL modeling easier through a visual-based, low-code approach.

I spent much of my game development career implementing GUIs, gamepad inputs, memory cards, and online functionality (the latter of which I still dread to this day), so my DL knowledge was limited to the pseudo-AI traditionally employed in AAA game titles. Starting the gig with PerceptiLabs was exciting because it gave me the opportunity to learn real DL from the ground up. That’s also what made it scary.

Initial Learnings

My first step was to figure out how DL works and how a computer can learn to analyze something. Traditionally, intelligent code uses conditional statements (e.g., if/else) to alter program flow based on different conditions. Even game developers use a relatively deterministic and simple form of AI, rather than true DL, to maintain high frame rates and to ensure games remain fun.

So my first learning was that DL takes a different approach.

DL is about building data structures called models that store probabilities about different aspects of data. The most common model today is a neural network, of which there are many variations. Training algorithms then populate these models from example data, traversing them to strengthen certain probabilities. This training process repeats until the model's predictions hold up on new, never-before-seen data. The trained model is then embedded into an application for inference, where it's used to make predictions from real-world data.

To wrap my head around this, I relied heavily on this great YouTube series on how a neural network works. It explains, in layman's terms, how a neural network classifies pictures of hand-written digits. This is considered the Hello World project for DL, and helped me create PerceptiLabs' Basic Image Recognition tutorial. As a game programmer, I could easily relate to this because it deals with analyzing pixels in images.
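If you're curious what that Hello World looks like in code, here's a minimal sketch of the train-then-test cycle described above, written with TensorFlow's Keras API (an illustration of the idea, not the exact model from the video or the PerceptiLabs tutorial):

```python
import tensorflow as tf

# Load the hand-written digit dataset: 28x28 grayscale images, labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to 0-1

# A small neural network: flatten the pixels, then two fully connected layers.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)  # train on example data
model.evaluate(x_test, y_test)         # check predictions on never-before-seen data
```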

I learnt that Python is the most popular programming language for DL, due to its high-level, almost English-like constructs, and that TensorFlow is the most popular DL framework. TensorFlow provides high-level statements to create graphs of operations through which tensors (multi-dimensional collections of numbers) flow, hence the name TensorFlow. Back in my game-programming days I would have been on cloud nine if game code could be written so succinctly.
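To give a feel for what that means in practice, here's a tiny example of tensors flowing through an operation (nothing more than two constants and a matrix multiplication; purely for illustration):

```python
import tensorflow as tf

# Tensors are just multi-dimensional arrays of numbers...
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[1.0],
                 [0.5]])

# ...and operations take tensors in and produce new tensors, so data
# "flows" through a graph of operations.
c = tf.matmul(a, b)
print(c)  # -> [[2.], [5.]], shape=(2, 1), dtype=float32
```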

I initially struggled to understand how TensorFlow represents a neural network, something I've seen echoed in forums and other places. From my days of low-level C/C++ programming, I envisioned creating and traversing my own graphs and writing custom math routines. But TensorFlow does all of this grunt work for you. It wraps these mechanics into high-level functions that make it easier to create, populate, and run models, so you can focus on solving DL problems.
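To make that concrete, here's a small sketch of the grunt work TensorFlow wraps up for you: a dense layer written out by hand next to its one-line Keras equivalent (just an illustration, not code that PerceptiLabs generates):

```python
import tensorflow as tf

x = tf.random.normal([1, 4])  # a batch of one input with 4 features

# The "do it yourself" version: create the weights and write the math.
w = tf.Variable(tf.random.normal([4, 3]))
b = tf.Variable(tf.zeros([3]))
y_manual = tf.nn.relu(tf.matmul(x, w) + b)

# The high-level version: the same layer in a single line.
dense = tf.keras.layers.Dense(3, activation="relu")
y_keras = dense(x)
```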

Learning with PerceptiLabs

Not everyone starting out has the time to learn TensorFlow, and not everyone is a programmer. And as they say, you don't know what you don't know. That's why having a tool like PerceptiLabs to experiment with DL has been a godsend.


It creates a working model that I can delve into, with the initial model code encapsulated across Components that can be visually dragged and connected together. This makes the model's architecture easier to understand and serves as a sort of launching pad for your DL journey. It has enabled me to ease into the TensorFlow code and decide how deeply I want to learn it. In most cases, a quick skim through the code was enough to understand what's going on under the hood.

The GUI itself has also helped me to learn key DL concepts. For example, as you drill down into the GUI's settings, views, etc., you soon come across terms like accuracy, loss, pooling, and epochs, not to mention options like activation functions. Although I didn't initially know what these things meant, quick Google searches were all it took to piece things together.
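If you later peek at code, those same terms are right there in plain sight. Here's a small hypothetical snippet showing where each one surfaces in TensorFlow's Keras API (a sketch for illustration, not PerceptiLabs' generated code):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),  # an activation function
    tf.keras.layers.MaxPooling2D(),                    # pooling
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Loss (what training tries to minimize) and accuracy (how often the model
# is right) are chosen when the model is compiled...
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# ...and epochs is simply how many passes to make over the training data,
# e.g. model.fit(x_train, y_train, epochs=10)
```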

Accomplishments/Facing your Fears

Complex math and scary set theory symbols have never been my thing, but they're certainly a thing in the DL world. And writing meaningful DL content for PerceptiLabs has forced me to spend a lot of time reading academic papers and deciphering intimidating math equations and symbols. That, in turn, has led me to relearn calculus, set theory, and algebra. But I've discovered that even a basic understanding of these subjects can help you unlock the wealth of knowledge published by others.

This knowledge isn't strictly required to use PerceptiLabs, but it certainly helps to have a deeper understanding of what you're building, especially when you can plunk down an entire model via a few visual Components!

Lessons Learned

It's been an exciting journey so far, and the world of DL is constantly evolving.

For those getting started in DL, here are a few key recommendations based on my learnings:

  • Start out with the YouTube series mentioned above on how a neural network works: It truly does a great job of explaining things, and the example it uses is simple enough to understand the relationship between data (pixels, features, etc.) and model structures, but complex enough to cover all the foundational aspects you need to know.
  • Visit ML-oriented Websites Frequently: Sites like KDnuggets and Towards Data Science constantly add great ML articles which cover all aspects of DL and target all levels of ML knowledge, from beginner to expert.
  • Wikipedia is Very Much your Friend: Wikipedia has been invaluable when it comes to quick, succinct articles about DL. It covers everything from mathematics to DL algorithms, and is often one of the first search results from which I learn about a DL-related topic.
  • Check out PerceptiLabs' Blogs: We've worked hard to bring you pragmatic blogs packed with practical tips and advice on many aspects of ML and modeling.
  • Check out arXiv: arXiv.org is an open-access repository owned by Cornell University that contains a great collection of whitepapers and academic articles for DL.
  • Don't be Intimidated by Algorithms, Math, and Programming: With a tool like PerceptiLabs, you can easily piece things together as you go. Whenever you come across something new, simply search for articles about it online and you'll likely find loads of information. And if you do decide to delve into the code, Python is very high-level, meaning it reads like a series of English-like commands. It also has comprehensive documentation explaining what everything means, as does TensorFlow.

I hope this article inspires and points you in the right direction for your journey into ML, wherever it may take you.
