<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kazi Priom</title>
    <description>The latest articles on DEV Community by Kazi Priom (@itsizakb).</description>
    <link>https://dev.to/itsizakb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2465385%2F967d2509-f117-45af-a07f-cae0fd26eaf9.jpeg</url>
      <title>DEV Community: Kazi Priom</title>
      <link>https://dev.to/itsizakb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/itsizakb"/>
    <language>en</language>
    <item>
      <title>Alternatives - low_cal_alt_update-10</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Fri, 14 Feb 2025 18:51:48 +0000</pubDate>
      <link>https://dev.to/itsizakb/alternatives-lowcalaltupdate-10-5ca</link>
      <guid>https://dev.to/itsizakb/alternatives-lowcalaltupdate-10-5ca</guid>
      <description>&lt;p&gt;I have not yet imported all the necessary data from the US Food DB into my PostgreSQL server. However, while I was looking into this, I found a DB called OpenFoodFacts. From my quick search, using OpenFoodFacts as my primary DB has certain benefits and drawbacks:&lt;/p&gt;

&lt;p&gt;Pros of Using OpenFoodFacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The information is more user-friendly: it has pictures, barcodes, and names of the food items. These are absent from the US Food DB, and they are why I wanted to use Nutritionix in the first place.&lt;/li&gt;
&lt;li&gt;The data is easier to handle, as it's all in one long table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Drawbacks of using OpenFoodFacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The data comes from user contributions, which leaves room for many more mistakes. In fact, a lot of the information differs from the US Food DB.&lt;/li&gt;
&lt;li&gt;A lot of data is missing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Given these issues, I don't think I will use OpenFoodFacts as my primary DB. However, I might use it near the end of the workflow if I want to provide the user with pictures, barcodes, and names of the products.&lt;/p&gt;
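
&lt;p&gt;As a quick feasibility check, here is a minimal sketch of that last step. It hits OpenFoodFacts' public product endpoint; the barcode and the exact response fields are assumptions on my part, so verify them against the OpenFoodFacts docs before relying on this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import requests

def lookup_product(barcode):
    """Fetch name, brand, and picture for a barcode from OpenFoodFacts."""
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != 1:  # status 1 means the product was found
        return None
    product = data["product"]
    return {
        "name": product.get("product_name"),
        "brand": product.get("brands"),
        "image": product.get("image_url"),
    }

# Illustrative barcode; a real one would come from my own data.
print(lookup_product("3017620422003"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;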

</description>
    </item>
    <item>
      <title>Progress - lowCal_update 9</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Mon, 03 Feb 2025 19:54:50 +0000</pubDate>
      <link>https://dev.to/itsizakb/progress-lowcalupdate-9-20n7</link>
      <guid>https://dev.to/itsizakb/progress-lowcalupdate-9-20n7</guid>
      <description>&lt;p&gt;Hello everyone. In the time between the last post and this one, I have made significant progress. I met with a Nutritionix employee, and I learned that their database licensing costs are above my budget for this project. However, they pointed me back toward the USDA Food Database for the scope of my project. I'd had quite a lot of trouble navigating that database, so I was hesitant. But I've started to figure out how it works. I'm currently importing the nutrients of the foods into my PostgreSQL server. I'll keep you posted on the progress.&lt;/p&gt;

&lt;p&gt;I was also playing around with TinyBERT, a compact BERT model, to help with locating low-calorie alternatives to foods. I plan to use semantic similarity to aid this goal, and I'll try to figure it out in the next stage of this project.&lt;/p&gt;
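
&lt;p&gt;To make the idea concrete, here is a minimal sketch of the semantic-similarity step. It assumes the transformers and torch packages and the publicly available huawei-noah/TinyBERT_General_4L_312D checkpoint; the food names are placeholders, and the real ones would come out of my DB:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "huawei-noah/TinyBERT_General_4L_312D"  # assumed checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def embed(texts):
    """Mean-pool the last hidden state into one vector per food name."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

foods = ["fried chicken", "air-fried chicken breast", "chocolate ice cream"]
vecs = embed(foods)
# Cosine similarity of the first food against the rest.
sims = torch.nn.functional.cosine_similarity(vecs[0], vecs[1:], dim=-1)
print(dict(zip(foods[1:], sims.tolist())))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;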

</description>
      <category>database</category>
      <category>python</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Popcorn Problem - lowCal_update 8</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Tue, 31 Dec 2024 00:10:05 +0000</pubDate>
      <link>https://dev.to/itsizakb/the-popcorn-problem-lowcalupdate-8-5e05</link>
      <guid>https://dev.to/itsizakb/the-popcorn-problem-lowcalupdate-8-5e05</guid>
      <description>&lt;p&gt;I'm trying to find a way to use the Nutritionix API to work with my model. I don't have the entire database, which is problematic, but perhaps running a script will give me enough data for the model to learn from. The worst case is that I have to pay for a license for the DB.&lt;/p&gt;

&lt;p&gt;I will have to calculate the calorie density of the food and use that as the metric. I will try calories/gram; however, this might be too simplistic given the vast variety of food. One simple example is popcorn: it takes up a lot of volume while having an unremarkable calories/gram ratio, because of the large amount of air in it. That air breaks the calories/gram metric, since it adds volume but essentially no weight. Perhaps another metric can be used: calories/serving. However, that has the issue of the serving size being chosen by the manufacturer. Many foods have very small serving sizes, leading the consumer to think that a food has 200 calories when in reality it has 400, because the package holds two servings. A potentially better metric might be calories/cup: a cup is a measure of volume, which fits my use case. Work still needs to be done in this area.&lt;/p&gt;
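
&lt;p&gt;A tiny worked comparison of the two metrics, with made-up but plausible numbers (the calories-per-100g and grams-per-cup figures below are illustrative assumptions, not values from any DB):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Illustrative numbers only: (calories per 100 g, grams per cup).
foods = {
    "air-popped popcorn": (380, 8),   # mostly air: dense per gram, light per cup
    "cooked white rice": (130, 160),
}

for name, (kcal_per_100g, grams_per_cup) in foods.items():
    kcal_per_gram = kcal_per_100g / 100
    kcal_per_cup = kcal_per_gram * grams_per_cup
    print(f"{name}: {kcal_per_gram:.2f} kcal/g, {kcal_per_cup:.0f} kcal/cup")

# Popcorn looks worse than rice per gram (3.80 vs 1.30) but far better
# per cup (about 30 vs 208), which is why a volume-based metric matters.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;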

</description>
      <category>database</category>
      <category>softwaredevelopment</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>Big Discovery - lowCalAlt_update 7</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Mon, 23 Dec 2024 20:29:17 +0000</pubDate>
      <link>https://dev.to/itsizakb/big-discovery-lowcalaltupdate-7-c6h</link>
      <guid>https://dev.to/itsizakb/big-discovery-lowcalaltupdate-7-c6h</guid>
      <description>&lt;p&gt;I figured out something important yesterday. I was trying to work out the issue of the missing columns, but I couldn't find the files in the Nutritionix bulk data. I kept looking around the documentation and couldn't find anything. Then, something the Nutritionix customer service department had asked me crossed my mind: "Where did you get that data from?" At the time, I thought I had found it on their website, but I took a closer look yesterday and found out it was the USDA's FoodData Central database. I'm still not sure what this means for me. I would still like to use Nutritionix's DB, so I'll need to figure out how to do that.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>python</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>lowCalAlt_update 6</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Sun, 22 Dec 2024 22:04:10 +0000</pubDate>
      <link>https://dev.to/itsizakb/lowcalaltupdate-6-1fp3</link>
      <guid>https://dev.to/itsizakb/lowcalaltupdate-6-1fp3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4uwafk1mz8iem8f51yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4uwafk1mz8iem8f51yj.png" alt="Image description" width="640" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm sorry for being away for some time. I was busy. However, I am back, and I have good news and bad news. &lt;br&gt;
Bad News:&lt;br&gt;
The missing data in the CSV files still needs to be resolved. The company behind the food DB does not answer emails from users who aren't on one of their paid plans. Work needs to be done in this area.&lt;/p&gt;

&lt;p&gt;Good News:&lt;br&gt;
I figured out that you can insert into PostgreSQL much more quickly with "COPY table_name (column1, column2, column3) ..." than by adding rows one by one. I was able to insert 2 million rows in around 30 seconds, which is much faster than the 5 or so hours it took before.&lt;/p&gt;
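
&lt;p&gt;For reference, here is a minimal sketch of how to drive COPY from Python with psycopg2. The connection string and table name are placeholders; the file and column names come from my earlier cleanup script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import psycopg2

conn = psycopg2.connect("dbname=lowcal user=postgres")  # placeholder DSN

with conn, conn.cursor() as cur, open("food_insertion3(Done).csv", encoding="utf-8") as f:
    # COPY ... FROM STDIN streams the whole file in one round trip,
    # which is what makes it so much faster than row-by-row INSERTs.
    cur.copy_expert(
        "COPY food (item_id, item_description, food_category) "
        "FROM STDIN WITH (FORMAT csv, HEADER)",
        f,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;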

&lt;p&gt;Updates should be more consistent now.&lt;/p&gt;

</description>
      <category>database</category>
      <category>fullstack</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>Adding new columns - lowCalAlt_update5</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Fri, 29 Nov 2024 06:37:51 +0000</pubDate>
      <link>https://dev.to/itsizakb/adding-new-columns-lowcalaltupdate5-156f</link>
      <guid>https://dev.to/itsizakb/adding-new-columns-lowcalaltupdate5-156f</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqysn43f087pdm6c55tp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqysn43f087pdm6c55tp.png" alt="Image description" width="800" height="678"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixx4o36t1gd6o0to0lub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixx4o36t1gd6o0to0lub.png" alt="Image description" width="800" height="750"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While I still needed information about the missing columns in the CSV files from the Nutritionix DB, I went ahead and added the columns I did have access to. These included brand_name, ss_metric_qty, ss_metric_unit, and item_id. However, the insertion process took a lot longer than it should have. I have some theories (a batched-insert sketch follows the list):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The insertion is being done in a Python for-loop, which might be adding a lot of overhead.&lt;/li&gt;
&lt;li&gt;I am adding one row at a time instead of batching the inserts.&lt;/li&gt;
&lt;li&gt;PostgreSQL is maintaining indexes on every insert.&lt;/li&gt;
&lt;/ol&gt;
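
&lt;p&gt;Here is the kind of change I have in mind for theory 2, using psycopg2's execute_values to send many rows per statement instead of one. The DSN and table name are placeholders, the rows are hard-coded for illustration, and the page_size is just a starting guess:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=lowcal user=postgres")  # placeholder DSN

# In the real script these rows would be read from the CSV files.
rows = [
    ("Acme", 28.0, "g", 1001),
    ("Acme", 240.0, "ml", 1002),
]

with conn, conn.cursor() as cur:
    # One multi-row INSERT per batch instead of one statement per row.
    execute_values(
        cur,
        "INSERT INTO food (brand_name, ss_metric_qty, ss_metric_unit, item_id) VALUES %s",
        rows,
        page_size=1000,
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;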

&lt;p&gt;Please let me know what you think. I will be doing my own research with another Python script.&lt;/p&gt;

</description>
      <category>database</category>
      <category>python</category>
      <category>postgres</category>
      <category>backend</category>
    </item>
    <item>
      <title>Missing columns - lowCalAlt_update4</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Thu, 28 Nov 2024 00:36:14 +0000</pubDate>
      <link>https://dev.to/itsizakb/missing-columns-lowcalaltupdate4-1n4i</link>
      <guid>https://dev.to/itsizakb/missing-columns-lowcalaltupdate4-1n4i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69wlheod18bmj3odzeeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69wlheod18bmj3odzeeq.png" alt="Image description" width="800" height="740"&gt;&lt;/a&gt;&lt;br&gt;
I've been working with CSV files from a bulk database download of the Nutritionix DB. However, a lot of important columns are missing. I'm not sure why this is, and I don't know how else to find the vital data. I'm contacting Nutritionix support for information. In the meantime, I am working with what I have.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>backend</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>PostgreSQL CSV Errors-lowCalAlt_update3</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Tue, 26 Nov 2024 04:44:03 +0000</pubDate>
      <link>https://dev.to/itsizakb/postgresql-lowcalaltupdate3-9np</link>
      <guid>https://dev.to/itsizakb/postgresql-lowcalaltupdate3-9np</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk8efnlofcf4uehsoa6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk8efnlofcf4uehsoa6v.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've had a hard time recently working with the CSV files that Nutritionix published for their database. They are quite large, which makes them hard to open. However, the main issue was deleting the correct columns and getting the quoting around items right. This caused PostgreSQL to give me various errors. Here are the main ones:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The column names in the CSV were not the same as in the database.&lt;/li&gt;
&lt;li&gt;I had an empty column, which was represented by a ','.&lt;/li&gt;
&lt;li&gt;I didn't have quotation marks around items.&lt;/li&gt;
&lt;li&gt;Single quotes (') needed to be escaped with a second single quote ('').&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I initially tried solving these issues in Excel, but Excel has a habit of deleting quotation marks in the file. I discovered that this is a common issue, so I used pandas instead. Here is the Python script I used for one of the CSV files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import csv

import pandas as pd

CSV_PATH = 'food_insertion3(Done).csv'

# Read the raw export.
file = pd.read_csv(CSV_PATH)

# Rename the CSV columns to match the database table (error 1 above).
file.rename(columns={'fdc_id': 'item_id',
                     'description': 'item_description',
                     'food_category_id': 'food_category'},
            inplace=True)

# Drop unneeded columns.
file.drop(columns='data_type', inplace=True)

# Put double quotes around every string value (error 3 above).
# Note: applymap is deprecated in newer pandas; DataFrame.map replaces it there.
file = file.applymap(lambda x: f'"{x}"' if isinstance(x, str) else x)

file.to_csv(CSV_PATH, index=False)

# Second pass: escape single quotes by doubling them (error 4 above).
with open(CSV_PATH, 'r', newline='', encoding='utf-8') as infile:
    rows = [[field.replace("'", "''") for field in row]
            for row in csv.reader(infile)]

with open(CSV_PATH, 'w', newline='', encoding='utf-8') as outfile:
    csv.writer(outfile).writerows(rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>postgres</category>
      <category>csv</category>
      <category>programming</category>
      <category>database</category>
    </item>
    <item>
      <title>PostgreSQL - lowCalAlt_update2</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Sat, 23 Nov 2024 05:14:50 +0000</pubDate>
      <link>https://dev.to/itsizakb/postgresql-lowcal3-49df</link>
      <guid>https://dev.to/itsizakb/postgresql-lowcal3-49df</guid>
      <description>&lt;p&gt;I decided to upload the Nutritionix DB into PostgreSQL. I could have queried it directly using my API key, but there is a limit on the number of queries I can make. I also find it easier to work with the data locally.&lt;/p&gt;

&lt;p&gt;The DB will be simple for now; until optimizations are needed, this should be good enough. Here is a general ER diagram representing it. It's not perfect, and I still need to make some decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hbdmirggobfmww119o8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hbdmirggobfmww119o8.png" alt="Image description" width="595" height="336"&gt;&lt;/a&gt;&lt;/p&gt;
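
&lt;p&gt;For concreteness, here is a rough sketch of what the central table might look like, created from Python with psycopg2. The DSN, table name, column names, and types are all assumptions at this stage and will likely change as I make those decisions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import psycopg2

conn = psycopg2.connect("dbname=lowcal user=postgres")  # placeholder DSN

with conn, conn.cursor() as cur:
    # Guessed shape of the central table from the diagram.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS food (
            item_id          INTEGER PRIMARY KEY,
            item_description TEXT NOT NULL,
            food_category    TEXT
        )
    """)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;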

</description>
      <category>sql</category>
      <category>database</category>
      <category>postgres</category>
      <category>webdev</category>
    </item>
    <item>
      <title>lowCalAlt_update1</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Fri, 22 Nov 2024 01:05:08 +0000</pubDate>
      <link>https://dev.to/itsizakb/lowcalproj-update-1-4n6b</link>
      <guid>https://dev.to/itsizakb/lowcalproj-update-1-4n6b</guid>
      <description>&lt;p&gt;So far, my main concern has been looking for databases of common and branded foods. I found out a few things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The US Department of Agriculture has a database of foods called FoodData Central.
Issue: the information is not stored as a relational database. This makes querying very difficult if you want a specific name, nutrient, etc. (a quick sketch of its search API follows this list).&lt;/li&gt;
&lt;li&gt;Nutritionix has a relational database of food, and this will probably be my solution.&lt;/li&gt;
&lt;/ol&gt;
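
&lt;p&gt;For the record, FoodData Central does expose a search API even though the bulk downloads aren't relational. Here is a minimal sketch of querying it; the endpoint and parameters reflect my understanding of the public docs, and DEMO_KEY is a stand-in for a real API key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import requests

# DEMO_KEY is a placeholder; sign up with the USDA for a real key.
resp = requests.get(
    "https://api.nal.usda.gov/fdc/v1/foods/search",
    params={"api_key": "DEMO_KEY", "query": "popcorn", "pageSize": 3},
    timeout=10,
)
resp.raise_for_status()
for food in resp.json().get("foods", []):
    print(food.get("description"), food.get("fdcId"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;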

&lt;p&gt;I'm currently trying to input all the necessary Nutritionix information into my PostgreSQL server. The information is spread across different CSV files, so it's been a little difficult.&lt;/p&gt;

</description>
      <category>database</category>
      <category>learning</category>
      <category>python</category>
      <category>sql</category>
    </item>
    <item>
      <title>Hey, welcome to my blog</title>
      <dc:creator>Kazi Priom</dc:creator>
      <pubDate>Thu, 21 Nov 2024 17:31:45 +0000</pubDate>
      <link>https://dev.to/itsizakb/hey-welcome-to-my-blog-j2b</link>
      <guid>https://dev.to/itsizakb/hey-welcome-to-my-blog-j2b</guid>
      <description>&lt;p&gt;Hey, welcome. I plan to document my experiences in creating front-end, back-end, and full-stack projects. If anyone wants to discuss them, I'm happy to.&lt;/p&gt;

&lt;p&gt;The project I am currently working on is an application that finds low-calorie alternatives to foods. This stems from my personal experience with weight loss: I've lost around 30 pounds over the last year. While you might think this was the result of following a strict diet, that was not the case. I ate ice cream, burritos, fried chicken, pancakes, and more, just by finding low-calorie alternatives (for example, instead of fried chicken, you can have air-fried chicken breast). So I want to develop an app that can help others make the right food choices.&lt;/p&gt;

&lt;p&gt;I plan to use machine learning to build a model that can find low-calorie alternatives to food. I am very new at this, so please let me know if you can help.&lt;/p&gt;

&lt;p&gt;Thanks.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>fullstack</category>
      <category>machinelearning</category>
      <category>postgressql</category>
    </item>
  </channel>
</rss>
