
omarkhater for AWS Community Builders


Unlocking Flexibility and Efficiency: How to Leverage Sagemaker Pipelines Parameterization

Introduction


Are you using Sagemaker Pipelines for your machine learning workflows yet? If not, you're missing out on a powerful tool that simplifies the entire process from building to deployment. But even if you're already familiar with this service, you may not know about one of its key features: parameterization. With parameterization, you can customize the whole workflow, making it more flexible and dynamic. In this article, we'll take a deep dive into this feature, share some real-world examples, and discuss the pros and cons. We'll even offer some workarounds for addressing the limitations of this service. So grab your coffee and let's get started!

Overview of Sagemaker Pipelines Parameterization


Before we dive into the pros and cons of Sagemaker Pipelines parameterization, let's take a quick look at how it works. Sagemaker Pipelines allows users to specify input parameters using the Parameter class. These parameters can be specified when defining the pipeline, and their values can be set at execution. Parameterization allows users to create flexible pipelines that can be customized for different use cases.

In addition, Sagemaker Studio provides a stunning GUI for executing the pipeline with different values for the defined parameters.

Parameterization overview

Generally speaking, Sagemaker Pipelines supports four different types of parameters:

  • ParameterString – Representing a string parameter.
  • ParameterInteger – Representing an integer parameter.
  • ParameterFloat – Representing a float parameter.
  • ParameterBoolean – Representing a Boolean parameter.

and the syntax is as simple as:

<parameter> = <parameter_type>(
    name="<parameter_name>",
    default_value=<default_value>
)
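For example, a concrete set of parameters could be declared as follows. This is just a sketch; the names and default values are illustrative:

from sagemaker.workflow.parameters import (
    ParameterBoolean,
    ParameterFloat,
    ParameterInteger,
    ParameterString,
)

# Illustrative parameter definitions; names and defaults are placeholders
instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.m5.xlarge")
instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
learning_rate = ParameterFloat(name="LearningRate", default_value=0.1)
use_spot = ParameterBoolean(name="UseSpotInstances", default_value=False)

These objects are passed to the Pipeline definition through its parameters argument and can then be referenced anywhere a pipeline variable is accepted.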

Real-world Examples


The Amazon Sagemaker Example Notebooks provide several complete, end-to-end examples of Sagemaker Pipelines parameterization and are a great place to get started.

Benefits of Sagemaker Pipelines Parameterization


Clearly, parameterization is a key advantage of using Sagemaker Pipelines in automating ML workflows.

There are numerous advantages such as:

  • GUI-based executions: Typically, you can define the pipeline once and then execute the whole workflow smoothly from the Studio UI. This is a significant benefit if you work with data scientist colleagues who prefer low-code solutions. Yet we can still execute it with the Sagemaker SDK, as shown in the sketch after this list.
  • Rapid prototyping: It enables more efficient experimentation and testing by allowing easy modification of pipeline inputs without extensive manual changes.

  • Collaboration: By dividing the ML workflow into modular, parameterized parts, teamwork becomes more practical and efficient.

  • Automation: It facilitates automation by enabling the use of scripts and code to modify pipeline parameters, allowing for fully automated end-to-end machine learning workflows.

The list goes on and on.
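For instance, an execution with overridden parameter values can be started from the Sagemaker Python SDK. This is a minimal sketch; it assumes a pipeline object has already been defined with the illustrative parameter names used earlier:

import sagemaker

# Assumes `pipeline` is an existing sagemaker.workflow.pipeline.Pipeline object
# whose definition includes the illustrative parameters shown earlier.
pipeline.upsert(role_arn=sagemaker.get_execution_role())

# Start an execution that overrides the parameter defaults
execution = pipeline.start(
    parameters={
        "TrainingInstanceType": "ml.m5.2xlarge",
        "LearningRate": 0.05,
    }
)
execution.wait()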

Limitations of Sagemaker Pipelines Parameterization


While parameterization is a useful feature of Sagemaker Pipelines, it also has some limitations that can make it difficult to use in certain situations. Here are some common limitations to be aware of:

  • Limited support for dynamic or runtime parameters: Sagemaker Pipelines only supports parameters whose values are fixed when an execution starts. There is no support for dynamic parameters whose values are computed or changed while the pipeline is running.

  • Limited support for nested parameters: Sagemaker Pipelines does not support nested parameters or hierarchical parameters, which can be limiting in more complex pipeline use cases.

  • Limited parameter validation: Sagemaker Pipelines does not provide extensive parameter validation capabilities, which can make it harder to catch errors or issues during pipeline execution. For example, it may not automatically validate the format or type of the input data, or ensure that the parameters are within acceptable ranges or limits.

  • Quota limitations: AWS sets a non-adjustable quota of 200 for the maximum number of parameters in a pipeline. This might be a problem in large-scale pipelines.

In addition, the official documentation lists some other limitations:

  • Not 100% compatible with other Sagemaker Python SDK modules: For example, pipeline parameters can't be used to pass image_uri for Framework Processors but can be used with Processor.
  • Not all arguments can be parameterized: Remember to read the documentation carefully to see whether a certain parameter can be a Pipeline variable or not.

For example, in the Processor API the role argument can be parameterized, while base_job_name cannot, as noted in the API reference.


  • Not all built-in Python operations can be applied to parameters.
from sagemaker.workflow.parameters import ParameterInteger, ParameterString

# An example of what not to do: .format() is evaluated at definition time,
# so the parameter placeholder is never substituted at execution time
my_string = "s3://{}/training".format(ParameterString(name="MyBucket", default_value=""))

# Another example of what not to do: built-in str() does not work on pipeline parameters
int_param = str(ParameterInteger(name="MyInteger", default_value=1))

# Instead, if you want to convert a parameter to a string, define it and call to_string()
int_param = ParameterInteger(name="MyInteger", default_value=1)
int_param_as_string = int_param.to_string()
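If you do need to build a string (for example, an S3 URI) from a pipeline parameter, the SDK provides the Join function, which is resolved at execution time and does work with parameters. A minimal sketch, with an illustrative bucket parameter and prefix:

from sagemaker.workflow.functions import Join
from sagemaker.workflow.parameters import ParameterString

bucket = ParameterString(name="MyBucket", default_value="my-default-bucket")

# Join is evaluated when the pipeline runs, so the parameter value is
# substituted correctly, producing s3://<bucket>/training
train_uri = Join(on="/", values=["s3:/", bucket, "training"])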

Useful Workarounds


While these limitations can make parameterization in Sagemaker Pipelines challenging, there are solutions for overcoming some of them. Here are a few examples:

  • Using Lambda functions for dynamic parameters: To work around the limitation of static parameters, you can use a Lambda function to determine a parameter value dynamically based on other pipeline inputs or external data. For example, you could use a Lambda function to calculate the minimum star rating to include in your analysis based on the average star rating of all customer reviews. The Lambda step is the natural fit here (see the first sketch after this list); all the different step types are summarized in this post.

  • Using PropertyFiles for nested parameters: If you need to specify nested or hierarchical parameters, you can write a JSON file and consume it within the pipeline using both JsonGet and PropertyFile, as shown in the code snippet below:

    import sagemaker
    from sagemaker.processing import FrameworkProcessor, ProcessingOutput
    from sagemaker.sklearn.estimator import SKLearn
    from sagemaker.workflow.functions import JsonGet
    from sagemaker.workflow.pipeline_context import PipelineSession
    from sagemaker.workflow.properties import PropertyFile
    from sagemaker.workflow.steps import ProcessingStep

    pipeline_session = PipelineSession()

    # The processing script is expected to write nested_parameter.json into
    # /opt/ml/processing/output so that it can be read back as a property file.
    pp_outputs = [
        ProcessingOutput(output_name="paths", source="/opt/ml/processing/output")
    ]
    paths_file = PropertyFile(
        name="NestedParameter",
        output_name="paths",
        path="nested_parameter.json",
    )

    pp = FrameworkProcessor(
        role=sagemaker.get_execution_role(),
        instance_type="ml.t3.medium",
        instance_count=1,
        estimator_cls=SKLearn,
        framework_version="0.23-1",
        sagemaker_session=pipeline_session,
    )
    step_args = pp.run(code="DoNothing.py", outputs=pp_outputs)
    step_process = ProcessingStep(
        name="Dummystep",
        step_args=step_args,
        property_files=[paths_file],
    )

    # JsonGet resolves these values from the property file at execution time
    train_path = JsonGet(
        step_name=step_process.name,
        property_file=paths_file,
        json_path="paths.train.URI",
    )
    test_path = JsonGet(
        step_name=step_process.name,
        property_file=paths_file,
        json_path="paths.test.URI",
    )
    

    The JSON file could be something like this:

    {
        "paths": {
            "train": {
                "URI": "s3://<path_to_train_data>"
            },
            "test": {
                "URI": "s3://<path_to_test_data>"
            }
        }
    }
    
  • Building custom validation scripts: To catch errors or issues with your pipeline parameters, you can build custom validation scripts that check parameter values before the pipeline runs (see the second sketch after this list). This helps catch errors early and prevents pipeline failures due to invalid parameters.

  • Careful design: All in all, you need to design the exposed parameters so that you stay within the quota. If you need to go beyond it, you can contact AWS Support to discuss your case.
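To make the Lambda-based workaround concrete, here is a minimal sketch of a Lambda step whose output can be consumed by later steps. The function ARN, input location, and output name are illustrative assumptions rather than values from a real pipeline:

from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.lambda_step import (
    LambdaOutput,
    LambdaOutputTypeEnum,
    LambdaStep,
)

# Hypothetical Lambda function that computes the minimum star rating at runtime
rating_lambda = Lambda(
    function_arn="arn:aws:lambda:us-east-1:111122223333:function:compute-min-rating"
)

min_rating = LambdaOutput(
    output_name="min_star_rating", output_type=LambdaOutputTypeEnum.Integer
)

step_min_rating = LambdaStep(
    name="ComputeMinStarRating",
    lambda_func=rating_lambda,
    inputs={"reviews_uri": "s3://<path_to_reviews_data>"},
    outputs=[min_rating],
)

# Later steps can reference the dynamically computed value through
# step_min_rating.properties.Outputs["min_star_rating"]

And for the custom validation scripts, one simple pattern is to check the override values in plain Python before starting an execution. The allowed values below are, again, purely illustrative:

# Hypothetical pre-flight checks run before pipeline.start()
ALLOWED_INSTANCE_TYPES = {"ml.m5.xlarge", "ml.m5.2xlarge"}

def validate_parameters(params: dict) -> None:
    if params.get("TrainingInstanceType") not in ALLOWED_INSTANCE_TYPES:
        raise ValueError(f"Unsupported instance type: {params.get('TrainingInstanceType')}")
    if not 0.0 < params.get("LearningRate", 0.1) <= 1.0:
        raise ValueError("LearningRate must be in (0, 1].")

overrides = {"TrainingInstanceType": "ml.m5.xlarge", "LearningRate": 0.05}
validate_parameters(overrides)
# execution = pipeline.start(parameters=overrides)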

Conclusion


Parameterization is a useful feature in Sagemaker Pipelines, but it does have some limitations that can make it challenging to use in certain situations. By using Lambda functions, PropertyFiles, and custom validation scripts, you can work around some of these limitations and create more flexible pipelines. By following best practices for parameterization, you can also ensure that your pipelines are well-organized and easy to use. With these tips and tricks, you'll be able to make the most of Sagemaker Pipelines parameterization and create powerful machine learning workflows.
