<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shamit</title>
    <description>The latest articles on DEV Community by Shamit (@shamit_r).</description>
    <link>https://dev.to/shamit_r</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2907729%2F36292da0-60b9-45d1-869b-b550093749dc.jpg</url>
      <title>DEV Community: Shamit</title>
      <link>https://dev.to/shamit_r</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shamit_r"/>
    <language>en</language>
    <item>
      <title>Python, Browsers, and Ubuntu</title>
      <dc:creator>Shamit</dc:creator>
      <pubDate>Thu, 24 Apr 2025 07:49:35 +0000</pubDate>
      <link>https://dev.to/shamit_r/python-browsers-and-ubuntu-210m</link>
      <guid>https://dev.to/shamit_r/python-browsers-and-ubuntu-210m</guid>
      <description>&lt;p&gt;During my role at Summa Linguae Technologies, I was working on a Python tool that tracked test statuses and included a feature to open a webpage in the browser automatically. The issue was that this functionality worked fine on some systems but wasn’t working on Ubuntu.&lt;/p&gt;

&lt;p&gt;To debug, I systematically went through possible causes:&lt;br&gt;
Checked if the browser was installed—ensured that a default browser was set.&lt;br&gt;
Verified user permissions—confirmed that the script had the necessary permissions to launch applications.&lt;br&gt;
Tested different commands separately—ran xdg-open, gio open, and sensible-browser manually to see which worked.&lt;/p&gt;

&lt;p&gt;After testing, I found that the usual Python method (webbrowser.open()) wasn’t reliable on Ubuntu because it didn’t always respect the default browser settings. Instead, using xdg-open directly resolved the issue. I updated the script to detect the OS and use the appropriate command dynamically.&lt;/p&gt;
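&lt;p&gt;A minimal sketch of that dynamic dispatch (the helper names are my own, and the macOS branch is an assumption beyond the Ubuntu fix described above):&lt;/p&gt;

```python
import platform
import subprocess
import webbrowser

def browser_command(url, system):
    """Pick an OS-appropriate launcher command.
    Returns an argv list for subprocess, or None to fall back to webbrowser.open()."""
    if system == "Linux":
        return ["xdg-open", url]   # respects the user's default browser on Ubuntu
    if system == "Darwin":
        return ["open", url]       # assumed macOS equivalent
    return None

def open_url(url):
    cmd = browser_command(url, platform.system())
    if cmd:
        try:
            subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
            return True
        except OSError:
            pass                   # launcher missing; fall through
    return webbrowser.open(url)    # generic fallback on other systems
```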

</description>
      <category>ubuntu</category>
      <category>python</category>
      <category>browser</category>
      <category>default</category>
    </item>
    <item>
      <title>API Key Exposure &amp; Insecure Supabase Access: a Bug Hunt</title>
      <dc:creator>Shamit</dc:creator>
      <pubDate>Tue, 22 Apr 2025 05:56:19 +0000</pubDate>
      <link>https://dev.to/shamit_r/api-key-exposure-insecure-supabase-access-a-bug-hunt-4a93</link>
      <guid>https://dev.to/shamit_r/api-key-exposure-insecure-supabase-access-a-bug-hunt-4a93</guid>
      <description>&lt;p&gt;Like other developers I regularly lurk dev space subreddits, and came across a post by a business dev at a startup who had developed a lead generation micro-SaaS.&lt;/p&gt;

&lt;p&gt;Users need to make an account, and they can use the site to find sales leads, do targeted outreach, etc., using various paid subscription options.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Observing the Signup Flow:&lt;/u&gt;&lt;br&gt;
The developer asked other redditors to test the site and provide feedback, so as a first validation step I tried to sign up with an invalid email and took a closer look at the signup POST request.&lt;br&gt;
Since the email was invalid, I never actually signed up or logged in; I was still on the signup page with an error saying the email was invalid.&lt;br&gt;
From the POST request I could see that the site used Supabase, and the API key and Authorization token were visible in the headers.&lt;/p&gt;

&lt;p&gt;Generally, if this is an anon key and the backend has RLS (Row Level Security) enabled, this is not a problem. The frontend uses the token to make calls to Supabase, and RLS policies determine what the user is allowed to see and do. Supabase's client libraries expose an &lt;code&gt;anon&lt;/code&gt; API key for public access by design, but it &lt;strong&gt;must be protected with Row Level Security (RLS)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Investigating the API Key &amp;amp; Triaging the Bug:&lt;/u&gt;&lt;br&gt;
To test that theory, I made a GET request to an endpoint I guessed (https://.supabase.co/rest/v1/leads - the SaaS provides marketing leads, so I figured /rest/v1/leads was as good a guess as any) and added the exposed API key to the request headers. I got back hundreds of rows of lead data.&lt;/p&gt;
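&lt;p&gt;A sketch of that request, assuming the usual Supabase PostgREST header convention (the project ref, key value, table name, and helper name here are hypothetical placeholders; this only builds the request without sending it):&lt;/p&gt;

```python
from urllib.request import Request

def leads_request(project_ref, anon_key):
    # Build (but do not send) the PostgREST request that a leaked anon key
    # enables: Supabase expects the key both as "apikey" and as a bearer token.
    url = f"https://{project_ref}.supabase.co/rest/v1/leads"
    return Request(url, headers={
        "apikey": anon_key,
        "Authorization": f"Bearer {anon_key}",
    })

# Hypothetical values standing in for the redacted project ref and key.
req = leads_request("project-ref", "anon-key")
```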

&lt;p&gt;This confirmed that the table was &lt;strong&gt;publicly accessible&lt;/strong&gt; with the exposed API key, likely because no RLS or access restrictions were in place.&lt;br&gt;
I tested further by exploring whether other tables were exposed. I had read access to pretty much everything as long as I could guess a valid endpoint: leads, users, marketing data, etc. For obvious reasons, I did not test any write access.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Disclosure:&lt;/u&gt;&lt;br&gt;
I reached out to the site owner, shared the issue privately, and explained how the API key was exposed. They acknowledged the issue and said they would investigate and patch the exposure.&lt;/p&gt;

&lt;p&gt;Client-side keys must be used with backend-enforced access controls like RLS. Whether you're a frontend dev, tester, or backend engineer, this was a great reminder of how small oversights can lead to critical data leaks.&lt;/p&gt;

&lt;p&gt;So if you’re working with Supabase, make sure your tables are protected with RLS, and don't assume API key obfuscation is a substitute for access control.&lt;/p&gt;

</description>
      <category>apisecurity</category>
      <category>fullstack</category>
      <category>supabase</category>
    </item>
    <item>
      <title>U-Net Image Analysis - Getting back into it</title>
      <dc:creator>Shamit</dc:creator>
      <pubDate>Mon, 03 Mar 2025 08:30:58 +0000</pubDate>
      <link>https://dev.to/shamit_r/u-net-image-analysis-getting-back-into-it-d9j</link>
      <guid>https://dev.to/shamit_r/u-net-image-analysis-getting-back-into-it-d9j</guid>
      <description>&lt;p&gt;My job involving fullstack dev, testing and validation kept me busy, but I did get to dabble into data analysis to optimize our client's limited server resource allocation. &lt;/p&gt;

&lt;p&gt;Getting back into PyTorch, and inspired by the AI in Medicine Lab (&lt;a href="https://aimlab.ca/" rel="noopener noreferrer"&gt;https://aimlab.ca/&lt;/a&gt;) at UBC, I wanted to get back to my undergraduate roots of machine learning algorithms and review what I had learned.&lt;/p&gt;

&lt;p&gt;So, back to image analysis... a core architecture underlying much of the work done by DALL-E, Midjourney, and Stable Diffusion is called U-Net.&lt;/p&gt;

&lt;p&gt;The U-Net architecture was originally proposed to tackle medical image segmentation, and in general it works as you would expect:&lt;br&gt;
1) The model gets an input image&lt;br&gt;
2) It makes a guess at segmenting the different aspects of the image&lt;br&gt;
3) We compute the loss and use the error to adjust the model's parameters to improve the output&lt;br&gt;
4) Repeat&lt;/p&gt;
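&lt;p&gt;That loop can be sketched in PyTorch. The toy one-layer "model", random data, and hyperparameters below are placeholders, not a real U-Net:&lt;/p&gt;

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a real segmentation model and dataset.
model = nn.Conv2d(3, 2, kernel_size=1)           # toy "model": 2-class logits per pixel
criterion = nn.CrossEntropyLoss()                # per-pixel classification loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(4, 3, 64, 64)               # 1) input images
masks = torch.randint(0, 2, (4, 64, 64))         # ground-truth class per pixel

for epoch in range(3):                           # 4) repeat
    logits = model(images)                       # 2) guess a segmentation
    loss = criterion(logits, masks)              # 3) compute the loss...
    optimizer.zero_grad()
    loss.backward()                              # ...and use the error to adjust
    optimizer.step()                             #    the model's parameters
```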

&lt;p&gt;Not only do U-Net models segment the objects in an image, they are very good at creating near pixel-perfect masks around the objects, so it's a bit like a variation of classification in which each pixel gets a class it belongs to.&lt;br&gt;
So what makes this convolutional model different and so much better?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tyamsx22gx5fhkntqg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tyamsx22gx5fhkntqg8.png" alt="U-Net Architecture Diagram" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture contains two parts. The 'Encoders' on the left, and the 'Decoders' on the right. &lt;/p&gt;

&lt;p&gt;The encoder condenses the information in the image at each layer, reducing the size of the image. At every stage as we go down the layers, the size of the image halves and the number of channels doubles.&lt;br&gt;
'Channel' here refers to the number of feature maps, and they double because each convolutional layer applies multiple filters to the input, capturing different aspects of the image.&lt;/p&gt;
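&lt;p&gt;One encoder stage sketched in PyTorch, with illustrative channel counts: the convolution doubles the number of feature maps, and the pooling halves the spatial size:&lt;/p&gt;

```python
import torch
import torch.nn as nn

# One illustrative encoder stage: channels double, spatial size halves.
stage = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, padding=1),  # 64 -> 128 feature maps
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),                               # 128x128 -> 64x64
)

x = torch.randn(1, 64, 128, 128)
y = stage(x)
print(y.shape)  # torch.Size([1, 128, 64, 64])
```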

&lt;p&gt;In the early layers, filters capture low-level features such as basic edges and textures, and in deeper layers the model learns high-level features such as object shapes and spatial relationships.&lt;/p&gt;

&lt;p&gt;Before I get into the decoding part: the arrows going from the left to the right? They signify the feature-map information from the encoding steps being fed into the corresponding decoders.&lt;/p&gt;

&lt;p&gt;These skip connections help restore lost spatial information by combining high-level semantic features from the encoder with the finer details, helping the decoder layers to locate features accurately. &lt;/p&gt;

&lt;p&gt;Overall, decoders use transposed convolutions (or upsampling layers followed by convolutions) to reconstruct the image back to its original size.&lt;br&gt;
These fine details are what help the model identify exactly where the objects are after the encoders have classified them.&lt;/p&gt;

&lt;p&gt;If encoders identify the 'what's of the image, decoders identify the 'where's. Encoders extract what is in the image, focusing on feature representation, while decoders reconstruct where objects are, ensuring precise localization.&lt;/p&gt;
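&lt;p&gt;Putting the pieces together, here is a minimal one-level U-Net-style sketch in PyTorch, just to show the encoder, decoder, and skip connection; the channel sizes are illustrative, not from the original paper:&lt;/p&gt;

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A minimal one-level U-Net sketch: encoder, decoder, one skip connection."""
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, 16, 3, padding=1)      # encoder: extract features
        self.pool = nn.MaxPool2d(2)                        # halve spatial size
        self.bottleneck = nn.Conv2d(16, 32, 3, padding=1)  # deeper, more channels
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # decoder: upsample back
        self.dec = nn.Conv2d(32, 16, 3, padding=1)         # 32 = 16 (up) + 16 (skip)
        self.head = nn.Conv2d(16, n_classes, 1)            # per-pixel class logits

    def forward(self, x):
        e = torch.relu(self.enc(x))
        b = torch.relu(self.bottleneck(self.pool(e)))
        u = self.up(b)
        u = torch.cat([u, e], dim=1)   # skip connection restores spatial detail
        return self.head(torch.relu(self.dec(u)))

out = TinyUNet()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 2, 64, 64])
```

&lt;p&gt;The output has one logit per class per pixel, which is exactly the "each pixel gets a class" framing above.&lt;/p&gt;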

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>unet</category>
    </item>
  </channel>
</rss>
