DEV Community

Mohamed Mohamed Farag

Slicing Pre-Trained models in Keras. Part (I)

Today we will discuss how to slice the pre-trained models provided by Keras, the deep learning (DL) framework.

Prerequisites

  • Deep learning foundations
  • Intermediate familiarity with Keras.

To try something new or to invent something, there should be a motive. During my MSc. studies I was motivated by the idea that we could split pre-trained vision models such as DenseNet, Inception, and ResNet to extract a certain block from inside the model and reuse it. I searched extensively for a way to do this on reputable sites such as Stack Overflow and GitHub without finding an answer.

Motivation

  • What if we could remove a block from a pre-trained model, such as a residual block or an Inception block, and use it in our own architecture?

  • What if we could merge several foundational blocks to create a new model for our task?

  • While searching for a better model, what if we need only the part of an architecture that offers good performance without high complexity?

  • What if we need to insert another component between two blocks of a pre-trained network, for example between two Inception blocks, to try new ideas?
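As a small taste of what these questions lead to, here is a minimal sketch using the Keras Functional API. It cuts ResNet50 at the output of an inner residual block; the layer name `conv3_block1_out` is an assumption based on the block-naming scheme in recent TensorFlow/Keras versions, and `weights=None` is used here only to avoid a download (in practice you would load `weights="imagenet"`):

```python
import tensorflow as tf

# Load a pre-trained vision model without its classification head.
# weights=None avoids downloading weights for this sketch;
# use weights="imagenet" to actually reuse the pre-trained features.
base = tf.keras.applications.ResNet50(weights=None, include_top=False)

# Cut the model at an inner residual block, addressed by layer name.
# "conv3_block1_out" is an assumed block-output name in TF2's ResNet50.
cut = base.get_layer("conv3_block1_out").output

# Everything from the original input up to that block becomes a new model,
# which can be reused as a feature extractor or composed with other blocks.
front_half = tf.keras.Model(inputs=base.input, outputs=cut)
```

Taking the *front* of a network like this is straightforward; the harder cases (extracting a block from the middle, or inserting a new component between two blocks) are what the next article tackles with DenseNet.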

This will be a series of two articles:

Article (I): discusses why we need to do this.
Article (II): applies the idea to the DenseNet architecture as a worked example.

Thank you!❤
