DEV Community

amal org

Data Analyst Guide: Mastering Cross-Validation: Why the 80/20 Split Is Wrong


Business Problem Statement


In many real-world data analysis scenarios, a single 80/20 train/test split is the default for evaluating machine learning models. However, one split gives only a single, noisy estimate of how the model will perform on unseen data, and for time-ordered data a random split can leak future information into training. Consider a scenario where we're building a predictive model for a retail company to forecast sales. The model will inform inventory management and pricing decisions, so an inaccurate model can cause significant financial losses: if it overestimates sales, the company overstocks, leading to waste and unnecessary carrying costs; if it underestimates sales, the company understocks and loses revenue.

The ROI impact of a poorly performing model can be significant. Assume the company has annual revenue of $10 million, the model informs 20% of inventory management decisions (roughly $2 million of revenue), and an accurate model improves those decisions by 10%. The company can then expect to save about $200,000 per year; an inaccurate model puts a similar amount at risk.
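The arithmetic behind these figures can be sketched in a few lines; the 10% improvement rate is an assumption for illustration, not an output of any model:

```python
# Back-of-envelope ROI sketch; all inputs are illustrative assumptions.
annual_revenue = 10_000_000   # company's annual revenue
decision_share = 0.20         # share of inventory decisions informed by the model
improvement = 0.10            # assumed lift from accurate forecasts

annual_savings = annual_revenue * decision_share * improvement
print(f'Expected annual savings: ${annual_savings:,.0f}')  # $200,000
```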

Step-by-Step Technical Solution


Step 1: Data Preparation

We'll use the pandas library to load and prepare the data. Let's assume we have a dataset sales_data.csv with the following columns: date, store_id, item_id, sales.

import pandas as pd

# Load the data
sales_data = pd.read_csv('sales_data.csv')

# Convert the date column to datetime
sales_data['date'] = pd.to_datetime(sales_data['date'])

# Set the date column as the index and sort chronologically
# (TimeSeriesSplit assumes rows are in time order)
sales_data.set_index('date', inplace=True)
sales_data.sort_index(inplace=True)

# Split the data into training and testing sets using cross-validation
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)

# Define the features and target variables
X = sales_data.drop('sales', axis=1)
y = sales_data['sales']
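To see why TimeSeriesSplit is preferable to a random 80/20 split here, a small sketch on synthetic data shows that each fold trains strictly on the past and tests on the future:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X_demo = np.arange(12).reshape(-1, 1)  # 12 time-ordered observations
tscv_demo = TimeSeriesSplit(n_splits=3)

for fold, (train_idx, test_idx) in enumerate(tscv_demo.split(X_demo)):
    # Every training index precedes every test index: no future leakage
    print(f'Fold {fold}: train={train_idx.tolist()} test={test_idx.tolist()}')
```

Each successive fold grows the training window, so the model is always evaluated on data it could not have seen at training time.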

Step 2: Analysis Pipeline

We'll use the sklearn library to build a simple linear regression model.

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Define the model
model = LinearRegression()

# Initialize lists to store the scores from each fold
train_scores = []
test_scores = []
mse_scores = []

# Iterate over the training and testing sets
for train_index, test_index in tscv.split(X):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    # Train the model
    model.fit(X_train, y_train)

    # Make predictions
    y_pred = model.predict(X_test)

    # Calculate the training and testing R^2 scores
    train_score = model.score(X_train, y_train)
    test_score = model.score(X_test, y_test)

    # Calculate the mean squared error for this fold
    mse = mean_squared_error(y_test, y_pred)

    # Append the scores to the lists
    train_scores.append(train_score)
    test_scores.append(test_score)
    mse_scores.append(mse)

# Calculate the average scores across folds
avg_train_score = sum(train_scores) / len(train_scores)
avg_test_score = sum(test_scores) / len(test_scores)
avg_mse = sum(mse_scores) / len(mse_scores)

print(f'Average Training Score: {avg_train_score:.2f}')
print(f'Average Testing Score: {avg_test_score:.2f}')
print(f'Average MSE: {avg_mse:.2f}')

Step 3: Model/Visualization Code

We'll use the matplotlib library to visualize the predicted sales.

import matplotlib.pyplot as plt

# Plot the actual vs. predicted sales for the final fold
plt.plot(y_test.index, y_test.values, label='Actual Sales')
plt.plot(y_test.index, y_pred, label='Predicted Sales')
plt.xlabel('Date')
plt.ylabel('Sales')
plt.title('Actual vs. Predicted Sales')
plt.legend()
plt.show()

Step 4: Performance Evaluation

We'll use the sklearn library to evaluate the performance of the model on the final fold's predictions.

from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Calculate the mean absolute error
mae = mean_absolute_error(y_test, y_pred)

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)

# Calculate the R-squared score
r2 = r2_score(y_test, y_pred)

print(f'Mean Absolute Error: {mae:.2f}')
print(f'Mean Squared Error: {mse:.2f}')
print(f'R-squared Score: {r2:.2f}')

Step 5: Production Deployment

We'll use the pickle library to save the trained model and deploy it in production.

import pickle

# Save the trained model
with open('trained_model.pkl', 'wb') as f:
    pickle.dump(model, f)

# Load the saved model
with open('trained_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)

# Use the loaded model to make predictions
loaded_model.predict(X_test)

SQL Queries

We'll use the following SQL queries to load the data and store the predicted sales.

-- Create a table to store the sales data
CREATE TABLE sales_data (
    date DATE,
    store_id INT,
    item_id INT,
    sales FLOAT
);

-- Load the sales data from the CSV file
-- (PostgreSQL COPY syntax; other databases use e.g. LOAD DATA INFILE)
COPY sales_data (date, store_id, item_id, sales)
FROM '/path/to/sales_data.csv'
WITH (FORMAT csv, HEADER true);

-- Create a table to store the predicted sales
CREATE TABLE predicted_sales (
    date DATE,
    store_id INT,
    item_id INT,
    predicted_sales FLOAT
);

-- Load the predicted sales from the model's output CSV
COPY predicted_sales (date, store_id, item_id, predicted_sales)
FROM '/path/to/predicted_sales.csv'
WITH (FORMAT csv, HEADER true);

Metrics/ROI Calculations

We'll use the following metrics to evaluate the performance of the model.

  • Mean Absolute Error (MAE)
  • Mean Squared Error (MSE)
  • R-squared Score (R2)
  • Return on Investment (ROI)

The ROI calculation will depend on the specific business problem and the cost of implementing the model.
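As one way to connect an error metric to cost, suppose each unit of forecast error translates into a fixed over/understock cost. All figures below are illustrative assumptions, not outputs of the model above:

```python
# All inputs are illustrative assumptions.
mae_units = 2.0          # forecast error per store-day, in units
unit_cost = 5.0          # cost of one unit of over/understock, in dollars
store_days = 50 * 365    # 50 stores forecast daily for a year

annual_error_cost = mae_units * unit_cost * store_days
print(f'Estimated annual cost of forecast error: ${annual_error_cost:,.0f}')
```

Reducing MAE therefore has a direct, estimable dollar value, which is what makes a reliable cross-validated error estimate worth the extra effort.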

Edge Cases

We'll consider the following edge cases.

  • Handling missing values in the data
  • Handling outliers in the data
  • Handling non-linear relationships between the features and target variable
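A minimal pandas sketch of these three edge cases, using a toy frame with made-up values:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for sales_data; the values are made up.
df = pd.DataFrame({
    "store_id": [1, 1, 2, 2],
    "item_id": [10, 10, 10, 10],
    "sales": [100.0, np.nan, 95.0, 10_000.0],  # one missing value, one outlier
})

# Missing values: forward-fill within each store's own history
df["sales"] = df.groupby("store_id")["sales"].ffill()

# Outliers: clip to the 1st-99th percentile range
low, high = df["sales"].quantile([0.01, 0.99])
df["sales"] = df["sales"].clip(low, high)

# Non-linear relationships: a log transform often helps a linear model
df["log_sales"] = np.log1p(df["sales"])
```

The right imputation and outlier strategy depends on the data; forward-fill and percentile clipping are common starting points, not universal answers.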

Scaling Tips

We'll use the following scaling tips to deploy the model in production.

  • Use a cloud-based platform to deploy the model
  • Use a containerization tool such as Docker to deploy the model
  • Use a load balancer to distribute the traffic to multiple instances of the model
  • Use a monitoring tool to monitor the performance of the model and detect any issues.

Full Code Implementation

import pandas as pd
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import matplotlib.pyplot as plt
import pickle

# Load the data
sales_data = pd.read_csv('sales_data.csv')

# Convert the date column to datetime
sales_data['date'] = pd.to_datetime(sales_data['date'])

# Set the date column as the index and sort chronologically
# (TimeSeriesSplit assumes rows are in time order)
sales_data.set_index('date', inplace=True)
sales_data.sort_index(inplace=True)

# Split the data into training and testing sets using cross-validation
tscv = TimeSeriesSplit(n_splits=5)

# Define the features and target variables
X = sales_data.drop('sales', axis=1)
y = sales_data['sales']

# Define the model
model = LinearRegression()

# Initialize lists to store the scores from each fold
train_scores = []
test_scores = []
mse_scores = []

# Iterate over the training and testing sets
for train_index, test_index in tscv.split(X):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    # Train the model
    model.fit(X_train, y_train)

    # Make predictions
    y_pred = model.predict(X_test)

    # Calculate the training and testing R^2 scores
    train_score = model.score(X_train, y_train)
    test_score = model.score(X_test, y_test)

    # Calculate the mean squared error for this fold
    mse = mean_squared_error(y_test, y_pred)

    # Append the scores to the lists
    train_scores.append(train_score)
    test_scores.append(test_score)
    mse_scores.append(mse)

# Calculate the average scores across folds
avg_train_score = sum(train_scores) / len(train_scores)
avg_test_score = sum(test_scores) / len(test_scores)
avg_mse = sum(mse_scores) / len(mse_scores)

print(f'Average Training Score: {avg_train_score:.2f}')
print(f'Average Testing Score: {avg_test_score:.2f}')
print(f'Average MSE: {avg_mse:.2f}')

# Plot the actual vs. predicted sales for the final fold
plt.plot(y_test.index, y_test.values, label='Actual Sales')
plt.plot(y_test.index, y_pred, label='Predicted Sales')
plt.xlabel('Date')
plt.ylabel('Sales')
plt.title('Actual vs. Predicted Sales')
plt.legend()
plt.show()

# Calculate the mean absolute error
mae = mean_absolute_error(y_test, y_pred)

# Calculate the mean squared error
mse = mean_squared_error(y_test, y_pred)

# Calculate the R-squared score
r2 = r2_score(y_test, y_pred)

print(f'Mean Absolute Error: {mae:.2f}')
print(f'Mean Squared Error: {mse:.2f}')
print(f'R-squared Score: {r2:.2f}')

# Save the trained model
with open('trained_model.pkl', 'wb') as f:
    pickle.dump(model, f)

# Load the saved model
with open('trained_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)

# Use the loaded model to make predictions
loaded_model.predict(X_test)
