Introduction
Hello, everyone! This is my first post, so be harsh with me: critique anything you think I can improve on, and I will surely take it into account next time.
For the past few months, I have been deep into health, mainly exercising and watching what I eat. Now that I think I've got a solid grasp on it, I want to see how I can optimize further, in case there are things I might have missed.
Objectives
For this chapter, I want to study the meals I've logged throughout my health journey and conclude with a meal plan for the coming week that (1) hits my minimum protein requirement, (2) stays within my calorie limit, (3) hits my minimum fiber requirement, and (4) minimizes cost.
Dataset
We start by introducing the dataset: the food I've tracked using Cronometer. Cronometer has been working side-by-side with me on this journey, and now I will export the data I've entered and analyze it against the objectives listed above.
Luckily for me, Cronometer lets me export data to a .csv file with ease on their website.
For this chapter, we will be exporting only the 'Food & Recipe Entries' dataset.
We start by examining the data we got from 'Food & Recipe Entries'. The dataset is very comprehensive, which I'm sure will be great for future chapters! In this chapter, though, we want to limit it to the name of the food, its amount, protein, calories, and fiber.
# Importing pandas and checking out the dataset
import pandas as pd

df = pd.read_csv("servings.csv")
df.head()
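Since we only care about a handful of columns in this chapter, here is a quick, optional sanity check (just a sketch) that those columns actually exist in the export and a count of how many values are missing in each:
# Optional: confirm the columns we care about exist and count missing values
columns_of_interest = ['Food Name', 'Amount', 'Energy (kcal)', 'Fiber (g)', 'Protein (g)']
present = [c for c in columns_of_interest if c in df.columns]
print("Missing columns:", set(columns_of_interest) - set(present))
print(df[present].isna().sum())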
Data Preprocessing
We already have some columns set for us: 'Food Name', 'Amount', 'Energy (kcal)', 'Fiber (g)', and 'Protein (g)'. Perfect! The only thing missing is the cost of each food at a given amount, since that was never tracked in the dataset. Luckily, I was the one who entered the data in the first place, so I can fill in the prices I do know. I will not be entering prices for every food item, though. Instead, we ask our good old friend ChatGPT for estimates and then overwrite the prices we do know by tweaking the .csv file. We store the new dataset in 'cost.csv', which we derive by taking the 'Food Name' and 'Amount' columns from the original dataset.
# Group by 'Food Name' and collect unique 'Amount' for each group
grouped_df = df.groupby('Food Name')['Amount'].unique().reset_index()
# Expand the DataFrame so each unique 'Food Name' and 'Amount' is on a separate row
expanded_df = grouped_df.explode('Amount')
# Export the DataFrame to a CSV file
expanded_df.to_csv('grouped_food_names_amounts.csv')
# Read the added costs and save as a new DataFrame
df_cost = pd.read_csv("cost.csv").dropna()
df_cost.head()
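Because I only priced some foods, it helps to see which entries still have no price and will therefore be dropped by the merge later on. A small check, assuming the priced foods are exactly the rows that survived the dropna above:
# Foods from the original log that have no price in cost.csv (and will be dropped later)
priced_foods = set(df_cost['Food Name'])
unpriced = expanded_df.loc[~expanded_df['Food Name'].isin(priced_foods), 'Food Name'].unique()
print(unpriced)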
Some foods were dropped simply because they were too oddly specific and fell outside the scope of being low-calorie, nutritious, and/or cheap (or because I could not be bothered to make the recipe again). Next, we merge the two DataFrames, the original dataset and the one with costs, to obtain the supposed 'final' dataset. Since the original dataset contains an entry for every logged serving, it has multiple rows for the same food, especially the ones I eat repeatedly (e.g. eggs, chicken breast, rice), so we drop the duplicates. We also fill empty cells with 0, since the most likely source of problems here would be the 'Energy', 'Fiber', 'Protein', and 'Price' columns.
# Merge the nutrition data with the costs; the inner join keeps only foods that have a price
merged_df = pd.merge(df, df_cost, on=['Food Name', 'Amount'], how='inner')
# Keep the columns we need, drop repeated entries, and fill any missing values with 0
specified_columns = ['Food Name', 'Amount', 'Energy (kcal)', 'Fiber (g)', 'Protein (g)', 'Price']
final_df = merged_df[specified_columns].drop_duplicates().fillna(0)
final_df.head()
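Before optimizing, a quick summary of what we are working with never hurts; this is purely a sanity check:
# Quick summary of the final dataset
print("Food/amount combinations:", len(final_df))
print(final_df[['Energy (kcal)', 'Fiber (g)', 'Protein (g)', 'Price']].describe())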
Optimization
Perfect! Our dataset is finished, and now we begin the second part: optimization. Recalling the objectives of the study, we want to find the lowest cost given a minimum amount of protein and fiber and a maximum amount of calories. One option is to brute-force every single combination, but there is a better-behaved approach for problems like this; in the industry the proper term is "Linear Programming" or "Linear Optimization", but don't quote me on that. This time, we will be using PuLP, a Python library aimed at doing exactly that. I do not know much about using it beyond following the template, so do browse its documentation instead of reading my unprofessional explanation of how it works. But for those who do want my casual explanation of the topic: we're basically minimizing a quantity of the form y = a·x1 + b·x2 + c·x3 + ... + z·xn, subject to constraints of the same form.
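For the curious, here is the same idea in slightly more standard notation (a sketch using the targets we set further down), where $x_i$ is the number of servings of food $i$:

$$
\begin{aligned}
\min_{x}\ & \sum_{i} \text{price}_i \, x_i \\
\text{subject to}\ & \sum_{i} \text{energy}_i \, x_i \le 1500, \\
& \sum_{i} \text{fiber}_i \, x_i \ge 40, \\
& \sum_{i} \text{protein}_i \, x_i \ge 120, \\
& x_i \ge 0 \ \text{for all } i.
\end{aligned}
$$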
The template we will be following is PuLP's Blending Problem case study, which has similar objectives; in our case, we want to blend our meals throughout the day. To start, we need to convert the DataFrame into dictionaries: 'Food Name' becomes a list of decision variables that serve as the series of x's, and Energy, Fiber, Protein, and Price each become a dictionary mapping 'Food Name' to its value. Note that the Amount will not be used quantitatively from here on out; it is simply concatenated onto the 'Food Name' so that, say, one slice and two slices of the same food stay distinct.
# Concatenate Amount into Food Name
final_df['Food Name'] = final_df['Food Name'] + ' ' + final_df['Amount'].astype(str)
food_names = final_df['Food Name'].tolist()
# Create dictionaries for 'Energy', 'Fiber', 'Protein', and 'Price'
energy_dict = final_df.set_index('Food Name')['Energy (kcal)'].to_dict()
fiber_dict = final_df.set_index('Food Name')['Fiber (g)'].to_dict()
fiber_dict['Gardenia, High Fiber Wheat Raisin Loaf 1.00 Slice'] = 3
fiber_dict['Gardenia, High Fiber Wheat Raisin Loaf 2.00 Slice'] = 6
protein_dict = final_df.set_index('Food Name')['Protein (g)'].to_dict()
price_dict = final_df.set_index('Food Name')['Price'].to_dict()
# Display the results
print("Food Names Array:", food_names)
print("Energy Dictionary:", energy_dict)
print("Fiber Dictionary:", fiber_dict)
print("Protein Dictionary:", protein_dict)
print("Price Dictionary:", price_dict)
For those without keen eyesight, continue scrolling. For those who did notice the two eerie lines of code, let me explain. I noticed this while grocery shopping: the nutrition facts on Gardenia's High Fiber Wheat Raisin loaf do not say 1 slice is 9 grams of fiber as my data claimed; they say 6 grams per 2 slices, i.e. 3 grams per slice. This is a big deal and has caused me immeasurable pain, knowing the values may be wrong due to either a data-entry mistake or a change of ingredients that made the data outdated. Either way, I needed this injustice corrected, and I will not stand for any less fiber than I deserve. Moving on.
We go straight into plugging our values into the Case Study template. We set variables for the minimum Protein and Fiber we want, as well as the maximum Calories we are willing to eat. Then we let the magical template code do its work and get the results.
# Set variables
min_protein = 120
min_fiber = 40
max_energy = 1500
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

# Just read the case study at https://coin-or.github.io/pulp/CaseStudies/a_blending_problem.html. They explain it way better than I ever could.
# Create the minimization problem and one non-negative variable per food
prob = LpProblem("Meal_Optimization", LpMinimize)
food_vars = LpVariable.dicts("Food", food_names, 0)
# The objective function: minimize the total daily cost of the chosen foods
prob += (
    lpSum([price_dict[i] * food_vars[i] for i in food_names]),
    "Total Cost of Food daily",
)
# Constraint: stay at or below the calorie ceiling
prob += (
    lpSum([energy_dict[i] * food_vars[i] for i in food_names]) <= max_energy,
    "EnergyRequirement",
)
# Constraint: hit at least the minimum fiber
prob += (
    lpSum([fiber_dict[i] * food_vars[i] for i in food_names]) >= min_fiber,
    "FiberRequirement",
)
# Constraint: hit at least the minimum protein
prob += (
    lpSum([protein_dict[i] * food_vars[i] for i in food_names]) >= min_protein,
    "ProteinRequirement",
)
prob.writeLP("MealOptimization.lp")
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    if v.varValue > 0:
        print(v.name, "=", v.varValue)
print("Total Cost of Food per day = ", value(prob.objective))
Results
In order to get at least 120 grams of protein and 40 grams of fiber while staying under 1,500 kcal, I would need to spend 128 Philippine Pesos a day on 269 grams of chicken breast fillet and 526 grams of mung beans. This... does not sound bad at all, considering how much I love both ingredients. I will definitely try it out, maybe for a week or a month, just to see how much money I would save while still getting just enough nutrition.
That was it for this chapter of Tracking Health with Data Engineering. If you want to see the data I worked on in this chapter, visit the repository or the notebook for this page. Do leave a comment if you have any thoughts, and try to stay healthy.