Prateek Sawhney

Behavioral Cloning of Self Driving Car

Link to Code

https://github.com/prateeksawhney97/Behavioral-Cloning-Project-P4

My Final Project

Behavioral Cloning Project for the Self-Driving Car Nanodegree, Term 1. The project includes designing a neural network and then training the car to drive on the road in the Unity simulator. The CNN learns and clones the driving behavior.

Demo Link

https://www.youtube.com/watch?v=2_6eNQr4yAc&feature=youtu.be

Steps followed:

The goals / steps of this project were the following:

  • Use the simulator to collect data of good driving behavior
  • Build a convolutional neural network in Keras that predicts steering angles from images
  • Train and validate the model with a training and validation set
  • Test that the model successfully drives around track one without leaving the road

First of all, the model is trained to generate the model.h5 file with the help of the following command. The model.py file contains the code to train the model.

python model.py

Using the Udacity provided simulator and my drive.py file, the car can be driven autonomously around the track by executing:

python drive.py model.h5

After the car successfully steers through the track, a video of the driving behavior can be created by producing individual frames and saving those frames in the output-video folder, by executing the following command. The fourth argument, output-video, is the directory in which to save the images seen by the agent. If the directory already exists, it will be overwritten.

python drive.py model.h5 output-video

After all the frames of the car driving in the simulator are saved in the output-video folder, the video can be made by combining all the frames with the following command. It creates a video based on the images found in the output-video directory. The name of the video will be the name of the directory followed by '.mp4'.

python video.py output-video

Optionally, we can specify the FPS (frames per second) of the video. The default is 60 fps.

python video.py output-video --fps 48

Model Architecture

My model consists of a convolutional neural network implemented in Keras, which makes the definition much easier. The architecture is similar to the NVIDIA model and contains five convolutional layers and four dense layers. The model also contains a Dropout layer, a Flatten layer and one Cropping2D layer, and the data is normalized inside the model using a Keras Lambda layer. The total number of parameters in the proposed model is 348,219.
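
For reference, a minimal Keras sketch of such an architecture is shown below. The filter sizes and strides follow NVIDIA's published end-to-end model, and the cropping margins are an assumption chosen so that the parameter count comes out to 348,219; the actual code in the repository may differ in details.

from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Dropout, Flatten, Dense

model = Sequential()
# Normalize pixel values to roughly [-0.5, 0.5] inside the model
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
# Crop the sky and the car hood out of each frame (margins are an assumption)
model.add(Cropping2D(cropping=((70, 25), (0, 0))))
# Five convolutional layers, as in NVIDIA's end-to-end model
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
# Dropout after the convolutional stack to reduce overfitting
model.add(Dropout(0.5))
model.add(Flatten())
# Four dense layers ending in a single steering-angle output
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))

model.summary()  # reports 348,219 parameters with these shapes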

Attempts to reduce overfitting in the model

To reduce overfitting, the model contains a Dropout layer placed after the five convolutional layers, with a dropout probability of 0.5. The model was trained and validated on different data sets to ensure that it was not overfitting, and it was tested by running it through the simulator and verifying that the vehicle could stay on the track.
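
As an illustration, the split into separate training and validation sets could look like the sketch below; the 80/20 ratio is an assumption, and driving_log.csv is the simulator's default log file.

import csv
from sklearn.model_selection import train_test_split

# Read the simulator's driving log; each row holds the three camera
# image paths plus the recorded steering angle, throttle, brake and speed
samples = []
with open('driving_log.csv') as f:
    for row in csv.reader(f):
        samples.append(row)

# Hold out 20% of the samples so training and validation use different data
train_samples, validation_samples = train_test_split(samples, test_size=0.2)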

Model Parameter Tuning

The model uses an Adam optimizer, and the learning rate was tuned manually: "optimizer=Adam(lr=1.0e-4)" sets up the Adam optimizer with a learning rate of 1.0e-4. The number of epochs is set to 10 and the batch_size is set to 32.
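
Put together, the compile-and-train step would look roughly like the sketch below. It assumes the model object from the architecture sketch above and hypothetical X_train/y_train arrays of images and steering angles; mean squared error is assumed as the regression loss.

from keras.optimizers import Adam

# Mean squared error is a natural loss for regressing a steering angle
model.compile(loss='mse', optimizer=Adam(lr=1.0e-4))

# Train with the stated hyperparameters: 10 epochs, batch size 32
model.fit(X_train, y_train,
          validation_split=0.2,
          shuffle=True,
          epochs=10,
          batch_size=32)

# Save the trained model for use with drive.py
model.save('model.h5')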

Appropriate Training Data

Training data was chosen to keep the vehicle driving on the road. I collected training data by driving around the track roughly three times. Nearly 13,000 images, including the center, left and right camera images, were used to train the model. Various data augmentation techniques were used to augment the training data, such as random flips, random translations, random brightness changes, and RGB-to-YUV image conversion, just as NVIDIA does in its model.
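
A sketch of these augmentation steps is shown below; the translation range, the per-pixel steering correction, and the brightness range are illustrative assumptions, not values taken from the original code.

import cv2
import numpy as np

def augment(image, angle):
    # Random horizontal flip: mirror the frame and negate the angle
    if np.random.rand() < 0.5:
        image = cv2.flip(image, 1)
        angle = -angle

    # Random translation: shift the frame and correct the steering angle
    # (assumed 0.002 steering units per pixel of horizontal shift)
    tx = np.random.uniform(-50, 50)
    ty = np.random.uniform(-10, 10)
    m = np.float32([[1, 0, tx], [0, 1, ty]])
    h, w = image.shape[:2]
    image = cv2.warpAffine(image, m, (w, h))
    angle += tx * 0.002

    # Random brightness: scale the V channel in HSV space
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * np.random.uniform(0.5, 1.2), 0, 255)
    image = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

    # Convert RGB to YUV, as in the NVIDIA pipeline
    image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    return image, angle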
