hrenski
Check Your Assumptions - What's Going In To Your Model

During my data science course, my instructor has stressed many times (and it has been reiterated in multiple blogs and videos):

"Good features make good models."

and

"Always be skeptical."

A recent experience has driven these mantras home for me, and I thought I would share them.

Recently, I have been looking at different methods of feature extraction, and since we have been discussing neural nets in my course, I naturally began looking at autoencoders (AEs). Since convolutional neural nets (CNNs) are also interesting to me, I decided to put the two together and try to set up a convolutional autoencoder (CAE). While there are many good blogs and guides on CAEs, they usually give you the network architecture straight away. To give myself some experience with the details, I decided to start with a bare-bones CAE and then tune the network parameters and architecture by hand. (I won't get into the actual network that I started with here, as I'm planning to write later about my experience and observations while adjusting the various components.)
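For reference, a bare-bones CAE along these lines can be sketched in PyTorch; the layer sizes and image dimensions below are illustrative assumptions, not the network from my experiments:

```python
import torch
import torch.nn as nn

class TinyCAE(nn.Module):
    """A minimal convolutional autoencoder for 64x64 RGB images."""

    def __init__(self):
        super().__init__()
        # Encoder: compress 3x64x64 down to an 8x16x16 feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
        )
        # Decoder: upsample back to 3x64x64.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2),     # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=2, stride=2),     # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

batch = torch.rand(4, 3, 64, 64)  # stand-in for a batch of face images
recon = model(batch)
loss = nn.functional.mse_loss(recon, batch)  # reconstruction loss
loss.backward()
opt.step()
```

The training step is just "reconstruct the input and minimize the pixel-wise error" — the same loop I ran (for several epochs over the whole dataset) in the experiments below.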

I figured using images with faces was a good start, so I searched for and found the Labeled Faces in the Wild (LFW) dataset. There are several versions of the images:

  • Un-edited images
  • Images aligned via funneling
  • Images aligned via deep funneling
  • Images aligned using commercial software

I figured aligned images would be easier to work with, so I downloaded the deep-funneled version and got to work.
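As a sketch of the data-preparation step, the extracted images can be stacked into one NumPy array like this; the directory name and one-folder-per-person layout are assumptions about how the LFW archive unpacks, so adjust them to match yours:

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_faces(root, size=(64, 64)):
    """Stack every JPEG under root (one folder per person) into a single
    array of shape (n_images, height, width, 3), scaled to [0, 1]."""
    paths = sorted(Path(root).glob("*/*.jpg"))
    return np.stack([
        np.asarray(Image.open(p).convert("RGB").resize(size),
                   dtype=np.float32) / 255.0
        for p in paths
    ])

# e.g. faces = load_faces("lfw-deepfunneled")
```

Scaling to [0, 1] matters if the decoder ends in a sigmoid, so the reconstruction target lives in the same range as the network's output.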

Aligned LFW

After combining all the images into a single dataset, I ran through several epochs on my network and viewed the output to see how well the encoder was reconstructing the images. At this point, the network was still simple, so I wasn't expecting high quality. Here's an example of what I saw.

Aligned reconstruction

As you can see, each reconstructed image, regardless of the original face, has a face that is eerily similar.

ghost

At first, I suspected that this was due to the simplicity of my network; it was outputting an aggregate face based on all of the inputs. However, keeping my instructor's slogans in mind, I went back and downloaded the un-edited version of the images (without the deep funneling).

Non-aligned LFW

After running the images without alignment through the CAE, I didn't see the same eerie face in each reconstructed image, but something truer to the input (albeit at very low resolution).

Non-aligned reconstruction

Even my simple CAE (without much structure) had picked up on the impact of the deep funneling; it had, in a sense, started learning the weights that the funneling neural net had used to align the faces. By removing the existing bias, I was able to start focusing on training a network to generate a faithful reconstruction and not one affected by previous edits.

This was an informative experience for me, as it drove home the idea that it is good to question where your data came from and what processing has been applied to it.
