<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Charles Lueilwitz</title>
    <description>The latest articles on DEV Community by Charles Lueilwitz (@charles_lueilwitz_).</description>
    <link>https://dev.to/charles_lueilwitz_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3759734%2F9c38c983-fac1-4688-b52a-a03a150d607f.png</url>
      <title>DEV Community: Charles Lueilwitz</title>
      <link>https://dev.to/charles_lueilwitz_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/charles_lueilwitz_"/>
    <language>en</language>
    <item>
      <title>My OSINT Stack for Image &amp; Identity Monitoring</title>
      <dc:creator>Charles Lueilwitz</dc:creator>
      <pubDate>Tue, 24 Feb 2026 10:25:08 +0000</pubDate>
      <link>https://dev.to/charles_lueilwitz_/my-osint-stack-for-image-identity-monitoring-h18</link>
      <guid>https://dev.to/charles_lueilwitz_/my-osint-stack-for-image-identity-monitoring-h18</guid>
      <description>&lt;p&gt;Monitoring where your data (or your client's data) ends up is a full-time job. I’ve spent the last few months streamlining my OSINT workflow to focus on image leaks and identity theft.&lt;/p&gt;

&lt;p&gt;Here is the "minimalist" stack I actually use daily:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Social Media Hunting: Sherlock&lt;br&gt;
If you have a username, &lt;a href="https://github.com/sherlock-project/sherlock" rel="noopener noreferrer"&gt;Sherlock&lt;/a&gt; is still the king. It’s a CLI tool that hunts down accounts across hundreds of platforms. Fast, simple, and essential.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure Intelligence: SpiderFoot&lt;br&gt;
For a deep dive into IP addresses, domains, and subdomains associated with a leak, I use SpiderFoot. The automation here is insane: it connects dots I didn't even know existed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visual Monitoring: ERASA&lt;br&gt;
This is my go-to for the "hard stuff." Google Images is fine for landscapes, but it's poor at tracking leaked photos of a specific face across the shadier corners of the web. I use the &lt;a href="https://www.erasa.net/content-monitoring/reverse-face-search" rel="noopener noreferrer"&gt;reverse face search&lt;/a&gt; here to automate image-rights monitoring. It’s much more efficient than searching manually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Breach Notification: Have I Been Pwned&lt;br&gt;
Basic, but mandatory. If an image leak starts with a credential breach, Troy Hunt’s tool is where the trail usually begins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Metadata Scrubbing: ExifTool&lt;br&gt;
Before I re-upload or move sensitive images during an investigation, I run them through ExifTool to strip GPS and device tags. Never trust a "private" photo to stay private if the metadata is still attached.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
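&lt;p&gt;For the CLI tools above, everyday usage is a one-liner each (the username and filename below are placeholders, not real targets):&lt;/p&gt;

```shell
# 1. Hunt a username across hundreds of sites with Sherlock
sherlock exampleuser123

# 5. Strip all metadata (including GPS and device tags) in place with ExifTool
exiftool -all= -overwrite_original photo.jpg
```

&lt;p&gt;Note that &lt;code&gt;-overwrite_original&lt;/code&gt; skips the backup copy ExifTool would otherwise keep, so run it on a working copy during an investigation.&lt;/p&gt;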

</description>
      <category>automation</category>
      <category>cybersecurity</category>
      <category>infosec</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Why reverse face search is so hard to build at scale (and what we learned)</title>
      <dc:creator>Charles Lueilwitz</dc:creator>
      <pubDate>Tue, 10 Feb 2026 10:31:22 +0000</pubDate>
      <link>https://dev.to/charles_lueilwitz_/why-reverse-face-search-is-so-hard-to-build-at-scale-and-what-we-learned-2kgk</link>
      <guid>https://dev.to/charles_lueilwitz_/why-reverse-face-search-is-so-hard-to-build-at-scale-and-what-we-learned-2kgk</guid>
      <description>&lt;p&gt;I've been looking into facial recognition workflows lately, and it’s a total rabbit hole. Most people think you just throw an image into a model and get a "match."&lt;/p&gt;

&lt;p&gt;In reality, the infrastructure is the real headache. Here are three things I've learned while working on the monitoring logic at &lt;a href="https://www.erasa.net/content-monitoring/reverse-face-search" rel="noopener noreferrer"&gt;reverse face search&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;The "False Positive" Trap: Using high-confidence thresholds means you miss half the results; lowering them means you get people who just "look similar." Finding that sweet spot in a vector database (like Milvus) is a constant battle.&lt;/p&gt;

&lt;p&gt;Speed vs. Accuracy: Moving from a flat, brute-force index to HNSW (Hierarchical Navigable Small World) graphs was a game changer for us. HNSW trades a sliver of recall for roughly logarithmic search time; it’s the difference between a 10-second wait and sub-second results.&lt;/p&gt;

&lt;p&gt;The Ethical Gray Area: This is the elephant in the room. Scrapers are everywhere. We’ve found that the best way to use this tech isn’t "stalking," but defensive monitoring: finding where your own data is being leaked before someone else uses it.&lt;/p&gt;

&lt;p&gt;If you’re building something similar or struggling with image indexing, I’d love to hear how you handle vector storage. It’s still a bit of a Wild West out there.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>performance</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Some Thoughts on Privacy and Everyday Technology</title>
      <dc:creator>Charles Lueilwitz</dc:creator>
      <pubDate>Mon, 09 Feb 2026 09:50:49 +0000</pubDate>
      <link>https://dev.to/charles_lueilwitz_/some-thoughts-on-privacy-and-everyday-technology-5ag8</link>
      <guid>https://dev.to/charles_lueilwitz_/some-thoughts-on-privacy-and-everyday-technology-5ag8</guid>
      <description>&lt;p&gt;Over the last few years, I’ve been paying closer attention to how privacy is lost in modern digital systems — not through dramatic breaches, but through small, incremental design choices that slowly shift control away from individuals.&lt;/p&gt;

&lt;p&gt;Most privacy issues today don’t come from someone “hacking” a system. They emerge from how platforms optimize for convenience, scale, and interoperability. Features that make systems easier to use — single sign-on, unified profiles, content sharing, cross-platform identity — also make it easier for information to travel farther than originally intended.&lt;/p&gt;

&lt;p&gt;What’s interesting is that many of these risks sit in the gaps between systems, not inside any single one.&lt;/p&gt;

&lt;p&gt;Research in areas like usable security and privacy-by-design has shown this pattern repeatedly: users rarely make explicit decisions to give up privacy. Instead, privacy erodes when defaults favor visibility, when friction is removed, and when systems quietly assume reuse — of usernames, images, profiles, or metadata — as a normal behavior.&lt;/p&gt;

&lt;p&gt;Identity is a good example.&lt;br&gt;
Usernames and profile images were never designed to be portable identifiers, yet in practice they function that way. Reuse across platforms makes discovery easier, but it also creates unintended linkages: accounts that were meant to be separate become trivially connected, and content shared in one context can resurface in another.&lt;/p&gt;

&lt;p&gt;From a technical perspective, this isn’t caused by a single bad actor or flawed algorithm. It’s an emergent property of interconnected systems behaving exactly as designed. From a human perspective, though, the consequences feel very real — loss of control, misattribution, and exposure that’s hard to reverse once it happens.&lt;/p&gt;

&lt;p&gt;I’m interested in exploring these edge cases: where technology works “correctly,” but the outcome still feels wrong. Writing here is a way for me to think through how these systems interact in practice, and what that means for privacy in everyday digital life.&lt;/p&gt;

</description>
      <category>design</category>
      <category>privacy</category>
      <category>security</category>
      <category>ux</category>
    </item>
  </channel>
</rss>
