<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Saad Abdullah</title>
    <description>The latest articles on DEV Community by Saad Abdullah (@iemsaad).</description>
    <link>https://dev.to/iemsaad</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1495072%2F6585b292-ccd2-4342-84e9-a9550706a8dc.png</url>
      <title>DEV Community: Saad Abdullah</title>
      <link>https://dev.to/iemsaad</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iemsaad"/>
    <language>en</language>
    <item>
      <title>Optimizing Machine Learning Models: Comparing Grid Search, Randomized Search, and Optuna</title>
      <dc:creator>Saad Abdullah</dc:creator>
      <pubDate>Fri, 25 Oct 2024 00:08:55 +0000</pubDate>
      <link>https://dev.to/iemsaad/optimizing-machine-learning-models-comparing-grid-search-randomized-search-and-optuna-4hnf</link>
      <guid>https://dev.to/iemsaad/optimizing-machine-learning-models-comparing-grid-search-randomized-search-and-optuna-4hnf</guid>
      <description>&lt;p&gt;Imagine you’re planning a road trip, and you quickly pick a random car. It gets you to your destination, but it’s not the smoothest ride. Later, you realize there was a better car for the trip. This is like training a machine learning model without tuning its hyperparameters. Just like choosing the right car makes your trip better, choosing the right hyperparameters can improve how well your model performs. While default settings might work okay, tuning them helps you get the best results. Let’s see how finding the right settings can make a big difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are Hyperparameters?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In machine learning, hyperparameters are settings or configurations that define the structure of a model or control how the model is trained. Unlike model parameters (such as weights in a neural network) that the model learns from the data, hyperparameters must be specified before training begins. These influence both the model’s performance and the computational cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of Hyperparameters:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Model Hyperparameters: These control the complexity of the model. &lt;strong&gt;Example&lt;/strong&gt;: The number of layers or neurons in a neural network, the depth of a decision tree.&lt;/li&gt;
&lt;li&gt;Training Hyperparameters: These affect the optimization process. &lt;strong&gt;Example&lt;/strong&gt;: The learning rate, batch size, number of epochs.&lt;/li&gt;
&lt;/ol&gt;
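To make the distinction concrete, here is a minimal sketch (using Scikit-learn and KNN, as in the rest of this article) showing that hyperparameters are fixed at construction time, while the model only acquires its learned state when `fit` is called:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hyperparameters are fixed up front, in the constructor.
knn = KNeighborsRegressor(n_neighbors=2, weights="uniform")

# The learned "parameters" of KNN are essentially the stored training
# samples, which the estimator only acquires when fit() is called.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])
knn.fit(X, y)

print(knn.get_params()["n_neighbors"])  # still 2: fit() never changes it
print(knn.predict([[1.5]])[0])          # average of the 2 nearest targets: 1.5
```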

&lt;p&gt;Now that we have a basic understanding of hyperparameters, let me explain why hyperparameter tuning is necessary with an example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dataset Used: The Diabetes Dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For this article, I’m using the Diabetes dataset from Scikit-learn, which contains clinical features to predict the progression of diabetes. These features include information like age, BMI, blood pressure, and six blood serum measurements. The goal of this model is to predict a quantitative measure of disease progression.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Methodology:&lt;/strong&gt; To showcase the effects of hyperparameter tuning, I’ll train a K-Nearest Neighbors (KNN) regression model on this dataset. I’ll start with the default settings and measure the performance. Then, I’ll compare that with various tuning methods like Grid Search, Randomized Search, and the modern Optuna technique.&lt;/p&gt;

&lt;p&gt;But keep in mind, the purpose here isn’t to build the best-performing model possible. Instead, it’s to demonstrate how hyperparameter tuning can improve results over default settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Baseline Model: K-Nearest Neighbors without Tuning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without tuning, I trained the KNN model with its default settings. Here’s how it performed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Importing necessary libraries
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from scipy.stats import randint, uniform
import optuna

# Loading the dataset
data = load_diabetes()
X, y = data.data, data.target

# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Default model
knn_default = KNeighborsRegressor()
knn_default.fit(X_train, y_train)
y_pred_default = knn_default.predict(X_test)
mse_default = mean_squared_error(y_test, y_pred_default)
print(f"Mean Squared Error without tuning: {mse_default}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Mean Squared Error without tuning: 3222.117894736842&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The error is relatively high, which isn’t surprising because I haven’t tailored the hyperparameters to fit the dataset. Now, let’s see if tuning the hyperparameters makes a difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 1: Grid Search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first method I used was Grid Search, one of the most straightforward approaches for hyperparameter tuning. The idea behind Grid Search is simple: it systematically works through multiple combinations of hyperparameter values, exhaustively searching the space to find the best set of parameters for the model. Think of it as testing every possible combination of “settings” and then evaluating which one works best.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s how it works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Step 1: You define a “grid” of hyperparameters. For example, if you’re tuning a K-Nearest Neighbors (KNN) model, the hyperparameters might include the number of neighbors (n_neighbors), the distance metric (metric), and the weighting scheme (weights).&lt;/p&gt;

&lt;p&gt;• Step 2: Grid Search tries every possible combination of the hyperparameters within that grid. For example, if you specify 3 possible values for n_neighbors (e.g., 5, 10, 15) and 2 values for metric (e.g., ‘euclidean’, ‘manhattan’), the search would evaluate the performance of the model for each combination.&lt;/p&gt;

&lt;p&gt;• Step 3: The model’s performance is evaluated for each combination, typically using cross-validation to prevent overfitting. After all combinations are tried, the one with the best performance is selected.&lt;/p&gt;
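The combinatorial growth in Step 2 is easy to see with a quick sketch in plain Python, using the example values from above:

```python
from itertools import product

# The example grid from Step 2: 3 values x 2 values = 6 combinations
n_neighbors_values = [5, 10, 15]
metric_values = ["euclidean", "manhattan"]

combinations = list(product(n_neighbors_values, metric_values))
print(len(combinations))  # 6
print(combinations[0])    # (5, 'euclidean')
```

Every entry in that list gets a full cross-validated evaluation, which is why the grid's size drives the total cost.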

&lt;p&gt;The main advantage of Grid Search is that it’s comprehensive: by testing every combination, you’re guaranteed to find the best-performing hyperparameters within the grid you defined. However, this thoroughness comes at a cost: time. If the grid is large, Grid Search can become computationally expensive and time-consuming, especially for models with many hyperparameters or large datasets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#space to explore
param_grid = {
    'n_neighbors': [3, 5, 7, 9],
    'weights': ['uniform', 'distance'],
    'metric': ['euclidean', 'manhattan']
}
grid_search = GridSearchCV(KNeighborsRegressor(), param_grid, cv=5, scoring='neg_mean_squared_error')
grid_search.fit(X_train, y_train)
best_knn_grid = grid_search.best_estimator_
y_pred_grid = best_knn_grid.predict(X_test)
mse_grid = mean_squared_error(y_test, y_pred_grid)
print(f"Mean Squared Error with Grid Search tuning: {mse_grid}")
print(f"Best hyperparameters (Grid Search): {grid_search.best_params_}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying Grid Search:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mean Squared Error with Grid Search tuning: 3133.022563447985&lt;br&gt;
Best hyperparameters (Grid Search): {‘metric’: ‘euclidean’, ‘n_neighbors’: 9, ‘weights’: ‘distance’}&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As you can see, the model’s performance improved slightly. The MSE dropped, but the tuning process took some time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 2: Randomized Search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, I tried Randomized Search, which is more efficient than Grid Search in terms of computations because it randomly samples hyperparameters rather than testing every combination. It’s faster but still capable of finding good results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s how it works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Step 1: Like Grid Search, you define a set of hyperparameters and their possible values. However, instead of trying all combinations, you specify how many random combinations should be sampled.&lt;/p&gt;

&lt;p&gt;• Step 2: The algorithm selects random combinations of hyperparameters from the specified ranges. For example, if you define a range for n_neighbors between 1 and 20, Randomized Search might randomly pick values like 7, 12, and 19, without testing every single option.&lt;/p&gt;

&lt;p&gt;• Step 3: Just like with Grid Search, each random combination is evaluated using cross-validation, and the best one is chosen.&lt;/p&gt;
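In contrast to exhaustive enumeration, the sampling in Steps 1 and 2 can be sketched in plain Python. This is only a toy illustration of the idea, not Scikit-learn's actual sampler:

```python
import random

random.seed(42)  # for reproducibility

# Candidate values, mirroring the KNN example
param_space = {
    "n_neighbors": list(range(1, 21)),
    "weights": ["uniform", "distance"],
    "metric": ["euclidean", "manhattan"],
}

# Draw 5 random configurations instead of enumerating all 20 * 2 * 2 = 80
trials = [{name: random.choice(values) for name, values in param_space.items()}
          for _ in range(5)]
for t in trials:
    print(t)
```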

&lt;p&gt;The key advantage of Randomized Search is speed. Since it’s randomly selecting combinations, it can quickly search through large hyperparameter spaces, making it ideal for situations where you have limited time or computational resources. However, because it’s not exhaustive, there’s no guarantee that it will find the absolute best combination — but it often gets close enough, especially when you allow it to sample enough combinations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#space to explore
param_dist = {
    'n_neighbors': randint(1, 20),
    'weights': ['uniform', 'distance'],
    'metric': ['euclidean', 'manhattan']
}
random_search = RandomizedSearchCV(KNeighborsRegressor(), param_distributions=param_dist, n_iter=50, cv=5, scoring='neg_mean_squared_error', random_state=42)
random_search.fit(X_train, y_train)
best_knn_random = random_search.best_estimator_
y_pred_random = best_knn_random.predict(X_test)
mse_random = mean_squared_error(y_test, y_pred_random)
print(f"Mean Squared Error with Randomized Search tuning: {mse_random}")
print(f"Best hyperparameters (Randomized Search): {random_search.best_params_}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what happened after using Randomized Search:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mean Squared Error with Randomized Search tuning: 3052.428993401872&lt;br&gt;
Best hyperparameters (Randomized Search): {‘metric’: ‘euclidean’, ‘n_neighbors’: 14, ‘weights’: ‘uniform’}&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This time, the MSE dropped even further, showing a more noticeable improvement over both the default settings and Grid Search. Plus, it took less time to run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method 3: Optuna — The Modern Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, I used Optuna, a more recent and advanced method for hyperparameter optimization. Optuna takes a different path from both Grid and Randomized Search by using a process called “sequential model-based optimization”, which intelligently explores the hyperparameter space based on previous evaluations. This allows it to find better results more efficiently than traditional methods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s how it works:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Step 1: Optuna begins by sampling hyperparameters and training the model, just like Randomized Search. However, after each evaluation, it analyzes the results and uses this information to guide future selections.&lt;/p&gt;

&lt;p&gt;• Step 2: Based on the results of previous trials, Optuna narrows down the search space, focusing on areas that are more likely to yield better-performing models. It uses techniques like Bayesian optimization to predict which hyperparameters are likely to work well, allowing it to explore the hyperparameter space more intelligently.&lt;/p&gt;

&lt;p&gt;• Step 3: The process continues iteratively, with each trial refining the model’s performance, leading to faster and more effective hyperparameter tuning.&lt;/p&gt;
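As a caricature of "use previous trials to guide the next one", consider a toy search that perturbs the best configuration found so far. This is deliberately simplistic; Optuna's default TPE sampler builds a probabilistic model of the search space rather than hill-climbing like this:

```python
import random

random.seed(0)

def objective(n_neighbors):
    # Stand-in objective: pretend the sweet spot is n_neighbors = 14
    return (n_neighbors - 14) ** 2

best = start = random.randint(1, 20)
for _ in range(100):
    # Propose a candidate near the current best; keep it only if it improves
    candidate = min(20, max(1, best + random.randint(-3, 3)))
    if objective(candidate) < objective(best):
        best = candidate

print(best)  # typically ends at or near 14 as trials accumulate
```

The key contrast with Randomized Search is that each proposal here depends on what earlier trials revealed.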

&lt;p&gt;Optuna’s strength lies in its ability to adapt the search based on real-time results, which makes it more efficient than both Grid Search and Randomized Search. It finds better hyperparameters with fewer evaluations, making it particularly useful for complex models or large datasets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def objective(trial):
    # space to explore
    n_neighbors = trial.suggest_int('n_neighbors', 1, 20)
    weights = trial.suggest_categorical('weights', ['uniform', 'distance'])
    metric = trial.suggest_categorical('metric', ['euclidean', 'manhattan'])

    # Train a KNeighborsRegressor with these hyperparameters
    knn_optuna = KNeighborsRegressor(n_neighbors=n_neighbors, weights=weights, metric=metric)
    knn_optuna.fit(X_train, y_train)

    # Predict and evaluate performance
    y_pred = knn_optuna.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)

    return mse

# Running the Optuna optimization
study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=50)

# Get the best model and hyperparameters
best_params_optuna = study.best_params
print(f"Best hyperparameters (Optuna): {best_params_optuna}")

# Train with the best parameters from Optuna
best_knn_optuna = KNeighborsRegressor(**best_params_optuna)
best_knn_optuna.fit(X_train, y_train)
y_pred_optuna = best_knn_optuna.predict(X_test)
mse_optuna = mean_squared_error(y_test, y_pred_optuna)
print(f"Mean Squared Error with Optuna tuning: {mse_optuna}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After tuning with Optuna:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Best hyperparameters (Optuna): {‘n_neighbors’: 20, ‘weights’: ‘distance’, ‘metric’: ‘euclidean’}&lt;br&gt;
Mean Squared Error with Optuna tuning: 2871.220587912944&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Optuna delivered the best performance out of all three methods, significantly reducing the MSE.&lt;/p&gt;
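Putting the three runs side by side (numbers taken directly from the outputs above), the relative improvement over the untuned baseline can be computed in a few lines:

```python
mse = {
    "default":    3222.117894736842,
    "grid":       3133.022563447985,
    "randomized": 3052.428993401872,
    "optuna":     2871.220587912944,
}

# Improvement over the default model: roughly 2.8%, 5.3%, and 10.9%
for method in ("grid", "randomized", "optuna"):
    improvement = 100 * (mse["default"] - mse[method]) / mse["default"]
    print(f"{method}: {improvement:.1f}% lower MSE than the default model")
```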

&lt;p&gt;&lt;strong&gt;When to Use Each Method&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Grid Search: Best for smaller datasets and when computational resources aren’t a concern. It’s comprehensive but slow.&lt;br&gt;
• Randomized Search: Great for larger datasets or when you’re short on time. It explores the hyperparameter space efficiently but less thoroughly.&lt;br&gt;
• Optuna: Ideal for more complex models and large datasets. It’s fast, intelligent, and often finds the best results with fewer evaluations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each method has its strengths, and the choice of which one to use depends on the specific needs of your project, the size of your dataset, and the computational resources available. For most modern machine learning tasks, however, Optuna offers a compelling balance of performance and efficiency.&lt;/p&gt;

&lt;p&gt;Hyperparameter tuning may seem like an extra step, but it can significantly enhance the performance of your machine learning models. As demonstrated, even a simple model like KNN can benefit from tuning. So, the next time you train a model, don’t settle for default settings — take the time to explore hyperparameter tuning. It might just unlock the full potential of your machine learning model.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>optuna</category>
      <category>gridsearch</category>
      <category>hyperparametertuning</category>
    </item>
    <item>
      <title>Presenter Pattern in Ruby on Rails: Implementation, Pros, and Cons.</title>
      <dc:creator>Saad Abdullah</dc:creator>
      <pubDate>Tue, 14 May 2024 21:01:22 +0000</pubDate>
      <link>https://dev.to/iemsaad/presenter-pattern-in-ruby-on-rails-implementation-pros-and-cons-3abp</link>
      <guid>https://dev.to/iemsaad/presenter-pattern-in-ruby-on-rails-implementation-pros-and-cons-3abp</guid>
      <description>&lt;p&gt;Imagine being immersed in a complex Ruby on Rails project, where you find yourself navigating through models, controllers, and views like a seasoned developer. However, as the project evolves, so does the complexity of the codebase, making it increasingly challenging to maintain clarity and organization. This is precisely the scenario I encountered in a recent project. As the project expanded, the need for clean and understandable code became necessary. Amidst the need, I found a solution — the Presenter Pattern. Now, with this fresh understanding at hand, my aim is to contribute for fellow developers, highlighting the impact of the Presenter Pattern in Ruby on Rails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Presenter Pattern:
&lt;/h2&gt;

&lt;p&gt;Before we dive into the nitty-gritty of implementation, let’s take a moment to understand what exactly the Presenter Pattern is all about. The Presenter Pattern is a structural design pattern that promotes the separation of concerns by extracting presentation logic from the models and controllers into separate presenter objects. This separation allows for a cleaner architecture where each component is responsible for a specific task, thus enhancing code readability and maintainability. It’s like giving each component of your application its own spotlight on the stage — models handle data, controllers orchestrate the flow, and presenters? Well, presenters take charge of how that data is presented to the user.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation in Ruby on Rails:
&lt;/h2&gt;

&lt;p&gt;Implementing the Presenter Pattern in Ruby on Rails is straightforward and can be achieved using plain Ruby classes or dedicated gems such as Draper or ActivePresenter. In this article, I’ll go with plain Ruby classes. Let’s break down the implementation step by step, starting with the BasePresenter class, followed by the ProductPresenter, and finally how they are utilized in views with the help of an application helper.&lt;/p&gt;

&lt;h3&gt;
  
  
  BasePresenter:
&lt;/h3&gt;

&lt;p&gt;The BasePresenter class serves as the foundation for all other presenters in our application. It’s responsible for handling the common functionality and delegation of methods to the underlying model object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class BasePresenter &amp;lt; SimpleDelegator
  def initialize(model, view)
    @model, @view = model, view

    super(@model)
  end

  def method_missing(meth, *args, &amp;amp;block)
    if @model.respond_to?(meth)
      # Forward any block as well, so delegated methods that take blocks still work
      @model.send(meth, *args, &amp;amp;block)
    else
      super
    end
  end

  # Keep respond_to? in sync with the delegation above
  def respond_to_missing?(meth, include_private = false)
    @model.respond_to?(meth, include_private) || super
  end

  def get_view
    @view
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;We initialize the presenter with the model object (@model), the view context (@view), and super(@model) is used to initialize SimpleDelegator so that all of @model’s methods are available in the presenter.&lt;br&gt;
The method_missing method dynamically delegates method calls to the model object if the method is not explicitly defined in the presenter.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  ProductPresenter:
&lt;/h3&gt;

&lt;p&gt;The ProductPresenter class is a specific presenter tailored for the Product model. It encapsulates the presentation logic related to products, such as formatting prices and determining availability status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ProductPresenter &amp;lt; BasePresenter
  def formatted_price
    get_view.number_to_currency(price)
  end

  def availability_status
    available? ? "Available" : "Out of stock"
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;We define methods like formatted_price and availability_status to encapsulate presentation logic.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;formatted_price method utilizes the get_view method inherited from BasePresenter to access view-related functionalities like number_to_currency for formatting prices.&lt;/em&gt;&lt;/p&gt;
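Because the presenter only needs a model and something that responds to the view helpers it calls, it can be exercised outside Rails entirely. Here is a minimal sketch; the Struct model, `FakeView`, and its `number_to_currency` stub are stand-ins for ActiveRecord and the real view context, not Rails code:

```ruby
require "delegate"

# Condensed versions of the presenters above
class BasePresenter < SimpleDelegator
  def initialize(model, view)
    @model, @view = model, view
    super(@model)
  end

  def get_view
    @view
  end
end

class ProductPresenter < BasePresenter
  def formatted_price
    get_view.number_to_currency(price)
  end

  def availability_status
    available? ? "Available" : "Out of stock"
  end
end

# Stand-ins: a plain Struct for the model, a stub for the view context
Product = Struct.new(:name, :price, :available) do
  def available?
    available
  end
end

class FakeView
  def number_to_currency(amount)
    format("$%.2f", amount) # crude stand-in for ActionView's helper
  end
end

presenter = ProductPresenter.new(Product.new("Widget", 19.99, true), FakeView.new)
puts presenter.name                # delegated straight to the model
puts presenter.formatted_price     # "$19.99"
puts presenter.availability_status # "Available"
```

This same setup is what makes presenters easy to unit test: no database, no controller, no rendered template required.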

&lt;h3&gt;
  
  
  View Integration:
&lt;/h3&gt;

&lt;p&gt;In our view templates, we utilize the presenters to handle the presentation logic, ensuring separation of concerns and maintaining clean and readable views.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;% present(product, ProductPresenter) do |p| %&amp;gt;
  &amp;lt;h2&amp;gt;&amp;lt;%= p.name %&amp;gt;&amp;lt;/h2&amp;gt;
  &amp;lt;p&amp;gt;Price: &amp;lt;%= p.formatted_price %&amp;gt;&amp;lt;/p&amp;gt;
  &amp;lt;p&amp;gt;Status: &amp;lt;%= p.availability_status %&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;% end %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;We use the present helper method to instantiate a ProductPresenter for the product object.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Inside the block, we can access the presenter methods (name, formatted_price, availability_status) to display the product information.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Application Helper:
&lt;/h3&gt;

&lt;p&gt;To streamline the usage of presenters in views, we define a helper method in the application helper to instantiate presenters easily.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module ApplicationHelper
  def present(model, presenter_class)
    presenter = presenter_class.new(model, self)
    yield(presenter) if block_given?
    presenter # return the presenter so the helper is also usable without a block
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;yield(presenter) yields the presenter object to a block of code provided in the view. If a block is given when calling the present method in the view, the presenter object is passed to that block, allowing for custom presentation logic to be executed within the block.&lt;/em&gt;&lt;/p&gt;
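The yield-based flow can be seen in isolation with a tiny plain-Ruby sketch; the string here is just a stand-in for a real presenter object:

```ruby
# Stripped to the yield mechanics: the helper builds an object, hands it to
# the caller's block (if any), and returns it either way.
def present(model)
  presenter = "presenter-for-#{model}" # stand-in for presenter_class.new(model, self)
  yield(presenter) if block_given?
  presenter
end

captured = nil
present("product") { |p| captured = p }
puts captured            # the block received the presenter
puts present("product")  # without a block, the presenter is simply returned
```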

&lt;p&gt;By following this approach, we ensure a clear separation of concerns, with presentation logic encapsulated in presenters, leading to more maintainable and readable code in our Ruby on Rails application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros of the Presenter Pattern:
&lt;/h2&gt;

&lt;p&gt;Now, let’s shine the spotlight on the pros of the Presenter Pattern:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separation of Concerns:&lt;/strong&gt; By isolating presentation logic, presenters declutter your models and controllers, making your codebase a joy to navigate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testability:&lt;/strong&gt; With presentation logic neatly packaged in presenters, unit testing becomes a breeze.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reusability:&lt;/strong&gt; Presenters aren’t just a one-hit wonder — they can be reused across different views and controllers, saving you time and effort in the long run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cons of the Presenter Pattern:
&lt;/h2&gt;

&lt;p&gt;Of course, no solution is without its drawbacks. Here are a few cons to keep in mind:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Complexity:&lt;/strong&gt; Introducing presenters adds an extra layer of abstraction, which can be overwhelming for simpler applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overhead:&lt;/strong&gt; Implementing presenters for every model and view may introduce some overhead in terms of additional code and maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Potential for Over-Engineering:&lt;/strong&gt; There’s a fine line between elegance and over-engineering — be wary of creating presenters for every piece of data, as it might lead to unnecessary complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;As we wrap up our exploration of presenters, it’s evident that this pattern can significantly improve your Rails applications. By adopting the Presenter Pattern, you can say farewell to messy code and welcome a more manageable codebase.&lt;/p&gt;

&lt;p&gt;So, what are your thoughts? How do you think presenters could enhance your projects? Share your ideas and let’s collaborate to improve our codebases!&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>designpatterns</category>
      <category>systemdesign</category>
    </item>
  </channel>
</rss>
