DEV Community

Paperium

Posted on • Originally published at paperium.net

Empirical Evaluation of Rectified Activations in Convolutional Network

Tiny tweak makes image-recognition better: randomized leaky neurons

Neural nets for images often use a simple rule called ReLU to decide when a neuron fires.
We tested a few small changes — letting the neuron keep a tiny response for negative inputs, or making that tiny response learned or even random.
Across common image tasks, a small non-zero negative slope gave more stable results. On small datasets, the fixed or learned slopes tended to memorize the training data, while the randomized version stayed robust.
In plain words: a little randomness can stop models from fitting the noise.
That helps beat overfitting and yields predictions that hold up on new pictures.
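The variants described above differ only in how they treat negative inputs: standard ReLU zeroes them, a leaky ReLU keeps a small fixed fraction, and the randomized version draws that fraction at random during training. Here is a minimal NumPy sketch of the idea; the uniform bounds (1/8, 1/3) follow a common choice for randomized leaky ReLU, and the exact values used in any given experiment may differ.

```python
import numpy as np

def relu(x):
    # Standard ReLU: negative inputs are zeroed out entirely.
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: keep a small *fixed* response for negative inputs.
    return np.where(x >= 0, x, slope * x)

def rrelu(x, lower=1/8, upper=1/3, training=True, rng=None):
    # Randomized leaky ReLU: during training the negative slope is
    # sampled uniformly per element, adding a touch of noise; at test
    # time the average slope is used so outputs are deterministic.
    if training:
        rng = rng if rng is not None else np.random.default_rng()
        slope = rng.uniform(lower, upper, size=np.shape(x))
    else:
        slope = (lower + upper) / 2.0
    return np.where(x >= 0, x, slope * x)
```

A learned ("parametric") slope works like `leaky_relu` except that `slope` is a trainable parameter updated by backpropagation; the randomness in `rrelu` is what the post credits with resisting overfitting on small datasets.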
The best run reached 75.68% accuracy on a standard image test without fancy tricks.
This tweak is simple and cheap to try, yet its effect was clear and sometimes surprising to folks who thought sparsity was everything.
For anyone building image models, try a tiny random leak: it might give your system a useful nudge, and it rarely costs much.

Read the comprehensive article review at Paperium.net:
Empirical Evaluation of Rectified Activations in Convolutional Network

🤖 This analysis and review was primarily generated and structured by an AI. The content is provided for informational and quick-review purposes.
