<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: catasaurus</title>
    <description>The latest articles on DEV Community by catasaurus (@catasaurus).</description>
    <link>https://dev.to/catasaurus</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F487717%2F1412ed54-bf7c-4e8b-9d63-bb227ae3973d.jpg</url>
      <title>DEV Community: catasaurus</title>
      <link>https://dev.to/catasaurus</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/catasaurus"/>
    <language>en</language>
    <item>
      <title>Useful tensorflow/keras callbacks for model training</title>
      <dc:creator>catasaurus</dc:creator>
      <pubDate>Sun, 13 Mar 2022 23:06:45 +0000</pubDate>
      <link>https://dev.to/catasaurus/useful-tensorflowkeras-callbacks-for-model-training-3e5n</link>
      <guid>https://dev.to/catasaurus/useful-tensorflowkeras-callbacks-for-model-training-3e5n</guid>
      <description>&lt;p&gt;Here are some callbacks that I have found to be very useful when training machine learning models using python and tensorflow:&lt;/p&gt;

&lt;h4&gt;Number one: Early stopping&lt;/h4&gt;

&lt;p&gt;Keras early stopping (&lt;a href="https://keras.io/api/callbacks/early_stopping/"&gt;https://keras.io/api/callbacks/early_stopping/&lt;/a&gt;) has to be my favorite callback. With it you can tell Keras to stop training automatically once the model stops improving. An example of usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;earlystopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0.001,
    patience=5,
    verbose=1,
    restore_best_weights=True,
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will stop training once the validation loss has not improved by at least 0.001 for 5 epochs. It will then restore the model's weights to the weights from the best epoch. As with any callback, make sure to pass it to the model during training, like&lt;br&gt;
&lt;code&gt;model.fit(some_data_X, some_data_y, epochs=some_number, callbacks=[earlystopping, some_other_callback])&lt;/code&gt;&lt;/p&gt;
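To make the pieces above concrete, here is a minimal, self-contained sketch of EarlyStopping in action; the toy data and tiny model are my own invention for illustration, not from the article:

```python
# EarlyStopping on a toy binary-classification task: training is allowed up
# to 200 epochs, but the callback usually halts it much earlier.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")  # simple separable labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

earlystopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0.001,
    patience=5,
    restore_best_weights=True,
)

history = model.fit(
    X, y,
    validation_split=0.25,
    epochs=200,
    callbacks=[earlystopping],
    verbose=0,
)

# Number of epochs actually run -- typically well under the 200 allowed
print(len(history.history["loss"]))
```

The `restore_best_weights=True` flag matters here: without it, the model keeps the weights from the final (worse) epoch rather than the best one.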
&lt;h4&gt;Number two: Learning rate scheduler&lt;/h4&gt;

&lt;p&gt;Keras learning rate scheduler (&lt;a href="https://keras.io/api/callbacks/learning_rate_scheduler/"&gt;https://keras.io/api/callbacks/learning_rate_scheduler/&lt;/a&gt;) can be very useful if you are having problems with your learning rate. At the start of every epoch it calls a function you provide with the epoch index and the current learning rate, and uses the returned value as the new learning rate. An example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def scheduler(epoch, lr):
       return lr * tf.math.exp(-0.5)

 learningratecallback = tf.keras.callbacks.LearningRateScheduler(scheduler)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The scheduler function is where you can define your logic for how the learning rate should decrease or increase. &lt;code&gt;learningratecallback&lt;/code&gt; just wraps your function in a &lt;code&gt;tf.keras.callbacks.LearningRateScheduler()&lt;/code&gt;. Don't forget to include it in &lt;code&gt;model.fit()&lt;/code&gt;!&lt;/p&gt;
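A common variation on the scheduler above is to hold the learning rate steady for a few warm-up epochs before decaying it; a hedged sketch, where the cutoff (3 epochs) and decay rate are arbitrary values I chose for illustration:

```python
# Hold the learning rate for the first 3 epochs, then decay it by roughly
# 10% per epoch. The function is plain Python, so it can be inspected
# without running any training.
import math
import tensorflow as tf

def scheduler(epoch, lr):
    if epoch < 3:
        return lr                    # warm-up: keep the initial rate
    return lr * math.exp(-0.1)       # then decay ~10% per epoch

learningratecallback = tf.keras.callbacks.LearningRateScheduler(scheduler, verbose=1)

print(scheduler(0, 0.01))   # 0.01 (unchanged during warm-up)
print(scheduler(5, 0.01))   # ~0.00905 (decayed)
```

Because the schedule is just a function of `(epoch, lr)`, you can unit-test it in isolation before handing it to `model.fit()`.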

&lt;h4&gt;Last but not least, number three: Custom callbacks&lt;/h4&gt;

&lt;p&gt;Custom callbacks (&lt;a href="https://keras.io/guides/writing_your_own_callbacks/"&gt;https://keras.io/guides/writing_your_own_callbacks/&lt;/a&gt;) are great if you need to do something during training that is not built into Keras + TensorFlow. I won't go in depth, as there is a lot you can do. Basically, you define a class that inherits from &lt;code&gt;keras.callbacks.Callback&lt;/code&gt; and override any of the methods that are called at different points during the training (or testing and prediction) cycle. A simple example would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Catsarecoolcallback(keras.callbacks.Callback):
    def on_epoch_end(self, logs=None):
        print('cats are cool!`)
callback = Catsarecoolcallback()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This (as you can probably tell) prints out &lt;code&gt;cats are cool!&lt;/code&gt; every time an epoch ends.&lt;/p&gt;
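For something slightly more practical, a custom callback can use the `logs` dict that Keras passes to each hook; a sketch (the class name and behavior are my own, not from the Keras docs) that records the loss after every epoch:

```python
# A custom callback that collects the training loss at the end of each
# epoch, so the curve can be inspected later without the History object.
from tensorflow import keras

class LossHistoryCallback(keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if "loss" in logs:
            self.losses.append(logs["loss"])

# The hooks are ordinary methods, so they can be exercised directly
# without a training run:
cb = LossHistoryCallback()
cb.on_epoch_end(0, {"loss": 1.5})
cb.on_epoch_end(1, {"loss": 0.9})
print(cb.losses)  # [1.5, 0.9]
```

Note the hook signature: epoch-level methods like `on_epoch_end` receive the epoch index as well as `logs`, so defining them with only `self` and `logs` would raise a `TypeError` during training.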

&lt;h4&gt;Hope you learned something while reading this!&lt;/h4&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>tensorflow</category>
      <category>python</category>
    </item>
  </channel>
</rss>
