As we proceed into the 2020s, concepts like Artificial Intelligence, Automation, and Big Data have become powerful tools that allow Data Scientists and Software Engineers to unlock insights hidden in data. One of my favorite subjects I have studied so far in Computer Science is the concept and implementation of Deep Neural Networks. The theory and mathematics seem daunting, but the code is compact and elegant, and the results are the closest thing to magic I have seen in the world of technology. Let us take a quick look at some basic concepts and a touch of history.
Let us start with my basic interpretation of what makes a Deep Neural Network. Neural Networks are a form of Machine Learning, which is a category of Artificial Intelligence. Neural Networks mimic the human brain: they are built from layers of neurons, or nodes, connected together through synapses, or edges. A trained Neural Network can accept new input data and reach a conclusion based on learned pattern recognition. This pattern, or feature, recognition is enabled by the machine's ability to determine the weight values of the nodes and edges through training. During training, a Neural Network uses mathematical optimization to minimize a cost function, steadily adjusting those weights toward the best solution in the available set.
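The optimization idea above can be sketched in a few lines. This is a minimal, hypothetical example, not how a full network trains: it uses gradient descent on a single stand-in "weight" w with a toy cost function C(w) = (w - 3)^2, whose best available value sits at w = 3.

```python
# A toy cost function: the "best element of the available set" is w = 3,
# where the cost reaches its minimum of zero.
def cost(w):
    return (w - 3) ** 2

def gradient(w):
    # Derivative of the cost with respect to the weight.
    return 2 * (w - 3)

w = 0.0               # initial weight guess
learning_rate = 0.1   # size of each update step

for step in range(100):
    w -= learning_rate * gradient(w)  # step against the gradient

print(round(w, 4))  # converges toward 3.0
```

Each iteration nudges the weight in the direction that lowers the cost; a real network does the same thing simultaneously across thousands or millions of weights.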
What is Machine Learning?
Now would be a good time to define Machine Learning in the words of scientists who have helped advance the field, step by step.
“Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.”
Arthur Lee Samuel (December 5, 1901 – July 29, 1990) was a Computer Scientist and AI pioneer who popularized the term Machine Learning in 1959. He is known for developing one of the first game-playing programs, which used a search tree to analyze a game of checkers.
“Well posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”
Tom Michael Mitchell (born August 9, 1951) is a Computer Scientist and professor at Carnegie Mellon University. He is known for contributions to Machine Learning, AI, and cognitive neuroscience.
Andrew Yan-Tak Ng (born 1976) is a Computer Scientist and adjunct professor at Stanford University. He is known for co-founding Google Brain, co-founding Coursera, founding DeepLearning.AI, and formerly serving as Chief Scientist of Baidu.
A simple Neural Network consists of an Input Layer that accepts a feature vector, or column of data. The input is then passed through weighted edges into a Hidden Layer. Finally, the data is passed, again through weighted edges, to an Output Layer that constrains it into a result. This concept extends to Deep Neural Networks by increasing the number of Hidden Layers.
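The layer-to-layer flow described above can be sketched as a single forward pass. This is a minimal illustration with made-up sizes (a 3-value input, 4 hidden nodes, 1 output node) and random weights standing in for values a real network would learn during training.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

x = np.array([0.5, -1.2, 3.0])   # Input Layer: a feature vector

W1 = rng.normal(size=(4, 3))     # weighted edges: input -> hidden
b1 = np.zeros(4)                 # hidden-node bias values
W2 = rng.normal(size=(1, 4))     # weighted edges: hidden -> output
b2 = np.zeros(1)                 # output-node bias value

def sigmoid(z):
    # Squashes any value into (0, 1), constraining it into a result.
    return 1.0 / (1.0 + np.exp(-z))

hidden = sigmoid(W1 @ x + b1)       # Hidden Layer activations
output = sigmoid(W2 @ hidden + b2)  # Output Layer result

print(output)  # a single value between 0 and 1
```

Making the network "deep" is just a matter of inserting more weight matrices and activation steps between the input and the output.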
“So, ask yourself: If what you're working on succeeds beyond your wildest dreams, would you have significantly helped other people? If not, then keep searching for something else to work on. Otherwise you're not living up to your full potential.”
"The universe is an infinite sandbox. We are free to build anything we want while we occupy this space. Our only limitation is self doubt and a fear of change. Thankfully it will be our creativity and values that shape the future."
-A Neural Network