You can think of the perceptron as the basic building block of a neural network. Its motivation comes from the biological neuron in the human brain.
Inputs are fed into the neuron, which performs some computation on them in order to produce the output. The function applied inside the neuron is often referred to as the activation function.
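As a concrete illustration, here is a minimal sketch of one common choice of activation function, the step function (the text above does not name a specific activation, so this is an assumption for illustration):

```python
def step_activation(x):
    # A step activation: fires (outputs 1) when the input reaches
    # the threshold 0, and stays silent (outputs 0) otherwise.
    return 1 if x >= 0 else 0

print(step_activation(0.7))   # positive input -> 1
print(step_activation(-0.3))  # negative input -> 0
```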
Now, in order to learn, we need some adjustable parameters; these parameters are known as weights. Each input gets multiplied by its weight.
There is still a problem: what if the input is zero? No matter how we change the weights, nothing happens to the output. To solve this problem, we add a bias term to the neuron.
The interesting thing is that the weighted sum of the inputs has to overcome the bias, which acts like a threshold, in order to have an effect on the output.
Both the weights and the bias can take positive or negative values.
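Putting the pieces together, here is a minimal sketch of a single perceptron's forward pass, again assuming a step activation; the particular weight and bias values are only illustrative:

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term ...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ... passed through a step activation.
    return 1 if total >= 0 else 0

# Even with an all-zero input, the bias alone decides the output,
# which is exactly the problem it was introduced to solve.
print(perceptron([0.0, 0.0], [0.5, -0.5], bias=1.0))   # -> 1
print(perceptron([0.0, 0.0], [0.5, -0.5], bias=-1.0))  # -> 0
```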
Mathematically, our generalisation is