Madhan Alagarsamy
Keras Deserialization Safe Mode: Security Capabilities and Limitations

Overview

This article analyzes the security behavior of Keras safe mode during model deserialization, focusing on what it prevents and what it does not.


Introduction

In TensorFlow Keras, loading a model involves more than reading stored data.

It requires deserializing objects such as layers, optimizers, and loss functions from a configuration structure.

This process can execute Python code, which introduces potential security risks when loading untrusted models.

To reduce this risk, Keras provides a parameter:

from tensorflow.keras.utils import deserialize_keras_object

obj = deserialize_keras_object(config, safe_mode=True)

The safe_mode parameter is designed to restrict unsafe behavior during deserialization.

However, its protection is limited to specific cases.


Keras Deserialization Overview

Keras represents objects using a configuration dictionary:

config = {
    "class_name": "Adam",
    "config": {"learning_rate": 0.001},
    "module": "keras.optimizers",
    "registered_name": None
}

During deserialization:

obj = deserialize_keras_object(config, safe_mode=True)

Keras performs:

  1. Class resolution
  2. Module import
  3. Object instantiation

Each of these steps can execute code depending on how the object is defined.
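These steps can be sketched in plain Python. The following is a simplification of what a Keras-style loader does (the real implementation also consults registries, handles custom objects, and recurses into nested configs); it resolves a standard-library class so the sketch runs without TensorFlow installed:

```python
import importlib

def resolve_and_instantiate(config):
    """Sketch of Keras-style deserialization: import the module,
    resolve the class by name, then instantiate it from its config."""
    module = importlib.import_module(config["module"])   # step 2: module import
    cls = getattr(module, config["class_name"])          # step 1: class resolution
    return cls(**config["config"])                       # step 3: instantiation

# A stdlib class stands in for a Keras optimizer here:
obj = resolve_and_instantiate({
    "module": "collections",
    "class_name": "Counter",
    "config": {},
})
print(type(obj).__name__)  # Counter
```

Note that the module import alone can already run arbitrary top-level code, which is why the module name in an untrusted config matters.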


Security Capabilities of Safe Mode

1. Blocking Lambda Deserialization

The primary security feature of safe mode is the prevention of lambda function deserialization.

Example:

malicious = lambda x: __import__("os").system("echo hacked")

Behavior, assuming config contains a serialized form of such a lambda:

deserialize_keras_object(config, safe_mode=True)   # Blocked
deserialize_keras_object(config, safe_mode=False)  # May execute

Lambda functions are unsafe because:

  • They are anonymous and hard to inspect
  • They can execute arbitrary system commands

Safe mode blocks this attack vector.
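As a defense-in-depth measure, a raw config can also be scanned for lambda markers before it is handed to any deserializer. The sketch below is not a Keras API; the `"__lambda__"` class name and the `function_type` key are assumptions about how a serialized lambda might appear in a config, so adapt the markers to the configs you actually encounter:

```python
def contains_lambda(node):
    """Recursively scan a config structure for markers that suggest a
    serialized lambda (marker names are hypothetical, for illustration)."""
    if isinstance(node, dict):
        if node.get("class_name") == "__lambda__" or node.get("function_type") == "lambda":
            return True
        return any(contains_lambda(v) for v in node.values())
    if isinstance(node, list):
        return any(contains_lambda(v) for v in node)
    return False

suspicious = {
    "class_name": "Lambda",
    "config": {"function": {"class_name": "__lambda__", "code": "..."}},
}
print(contains_lambda(suspicious))  # True
```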


2. Limiting Certain Dynamic Execution

By blocking lambda functions, safe mode reduces some dynamic execution paths that rely on inline function definitions.

However, this protection is limited.


Security Limitations of Safe Mode

1. Custom Objects

Custom objects are treated as trusted:

class MyLayer:
    def __init__(self):
        print("Executed")

obj = deserialize_keras_object(
    config,
    custom_objects={"MyLayer": MyLayer},
    safe_mode=True
)

Result:

  • Code executes normally
  • No restriction from safe mode

2. Registered Objects

Objects registered using:

from keras.saving import register_keras_serializable

@register_keras_serializable(package="custom")
class MyLayer:
    pass

These are trusted and executed without restriction.


3. Built-in Keras Classes

All built-in components are allowed:

config = {
    "class_name": "Dense",
    "config": {"units": 64},
    "module": "keras.layers"
}

These are always executed normally.


4. Execution in from_config()

Keras reconstructs objects using methods like from_config():

class DangerousLayer:
    @classmethod
    def from_config(cls, config):
        import os
        os.system("echo executed")
        return cls()

Safe mode does not restrict this execution.
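To see why this matters, the sketch below mimics the reconstruction step: once a class is resolved, the loader simply calls its from_config() method, so whatever code that method contains runs unconditionally (a harmless print stands in for the os.system call above):

```python
class SideEffectLayer:
    """Stand-in for a layer whose from_config() runs arbitrary code."""
    @classmethod
    def from_config(cls, config):
        print("from_config executed")  # any Python code could run here
        return cls()

def reconstruct(cls, config):
    """Sketch of the loader's reconstruction step: no safe-mode check
    guards this call, so the method body executes as-is."""
    return cls.from_config(config)

obj = reconstruct(SideEffectLayer, {})
```

The side effect fires before the caller ever sees the reconstructed object.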


5. Module Imports

Keras dynamically imports modules:

"module": "keras.optimizers"

Safe mode does not restrict imports.
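If you must accept configs from semi-trusted sources, one mitigation is to reject any config whose module field falls outside an allowlist before passing it to Keras. This is a hypothetical pre-check, not part of the Keras API, and the prefix list is an assumption you should tune to your environment:

```python
ALLOWED_MODULE_PREFIXES = ("keras.", "tensorflow.keras.")  # assumed allowlist

def check_modules(node):
    """Raise if any nested config references a module outside the allowlist."""
    if isinstance(node, dict):
        module = node.get("module")
        if module is not None and not module.startswith(ALLOWED_MODULE_PREFIXES):
            raise ValueError(f"Disallowed module in config: {module!r}")
        for v in node.values():
            check_modules(v)
    elif isinstance(node, list):
        for v in node:
            check_modules(v)

check_modules({"class_name": "Adam", "module": "keras.optimizers", "config": {}})  # passes
```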


6. Custom Object Scope

from keras.saving import custom_object_scope

with custom_object_scope({"MyLayer": MyLayer}):
    obj = deserialize_keras_object(config, safe_mode=True)

Every object supplied within the scope is treated as trusted, with no safe-mode restriction.


7. Normal Python Code Execution

Safe mode blocks lambda functions but allows normal Python code:

class Malicious:
    def __init__(self):
        __import__("os").system("echo executed")

This executes even with safe_mode=True.


8. Namespace Shadowing and Priority Inversion

Safe mode strictly blocks lambda execution, but it relies on the integrity of the Keras object registry.

A critical limitation exists due to priority inversion during deserialization.

When a registered_name is present in the configuration, Keras prioritizes resolving the object through the registry instead of using the standard built-in class.

Example:

config = {
    "class_name": "Dense",
    "registered_name": "ShadowLib>Dense",
    "config": {"units": 64}
}

If a malicious class has been registered under the same identifier (e.g., ShadowLib>Dense), Keras will instantiate that class instead of the expected built-in layer.

This creates a namespace shadowing risk, where:

  • A trusted name (like Dense) is overridden
  • A malicious implementation is executed

Safe mode does not validate:

  • The origin of the registered object
  • Whether it shadows a built-in class

As a result, registry-based attacks remain possible even with safe_mode=True.
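A corresponding pre-check can flag configs where a registered_name would shadow a built-in class. The `Package>ClassName` format follows the registration convention shown above; the check itself is an illustrative sketch, not a Keras feature:

```python
def find_shadowing(node, findings=None):
    """Collect registered names whose class part collides with the
    declared built-in class_name in the same config node."""
    if findings is None:
        findings = []
    if isinstance(node, dict):
        registered = node.get("registered_name")
        class_name = node.get("class_name")
        if registered and class_name:
            # registered names use the "package>ClassName" convention
            if registered.split(">")[-1] == class_name:
                findings.append(registered)
        for v in node.values():
            find_shadowing(v, findings)
    elif isinstance(node, list):
        for v in node:
            find_shadowing(v, findings)
    return findings

config = {
    "class_name": "Dense",
    "registered_name": "ShadowLib>Dense",
    "config": {"units": 64},
}
print(find_shadowing(config))  # ['ShadowLib>Dense']
```

A hit does not prove malice (legitimate packages may re-register a name), but it is exactly the situation where manual review is warranted.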


Summary

| Behavior | Safe Mode |
| --- | --- |
| Lambda execution | Blocked |
| Custom objects | Not blocked |
| Registered objects | Not blocked |
| Built-in classes | Not blocked |
| from_config execution | Not blocked |
| Module imports | Not blocked |
| Normal class code | Not blocked |
| Namespace shadowing | Not blocked |

Conclusion

Keras safe mode provides protection against unsafe lambda deserialization.

However, it does not:

  • Prevent execution of custom or registered objects
  • Restrict logic inside class methods
  • Limit module imports
  • Protect against registry-based shadowing attacks

Therefore, safe mode is a partial safeguard, not a complete security solution.


Key Takeaway

Using safe_mode=True improves safety, but it does not guarantee secure model loading.
