<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: S. Han</title>
    <description>The latest articles on DEV Community by S. Han (@shoutzu_han_a327ff8a7342).</description>
    <link>https://dev.to/shoutzu_han_a327ff8a7342</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2892935%2F3e4fed8d-b947-4c6e-92da-a771433a188c.png</url>
      <title>DEV Community: S. Han</title>
      <link>https://dev.to/shoutzu_han_a327ff8a7342</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shoutzu_han_a327ff8a7342"/>
    <language>en</language>
    <item>
      <title>Designing AI-Driven Self-Reflection: Beyond Mood Tracking</title>
      <dc:creator>S. Han</dc:creator>
      <pubDate>Sat, 22 Mar 2025 01:37:24 +0000</pubDate>
      <link>https://dev.to/shoutzu_han_a327ff8a7342/designing-ai-driven-self-reflection-beyond-mood-tracking-g16</link>
      <guid>https://dev.to/shoutzu_han_a327ff8a7342/designing-ai-driven-self-reflection-beyond-mood-tracking-g16</guid>
      <description>&lt;p&gt;In recent years, mental health apps have become increasingly popular—many offering meditation, mood tracking, or quick-access therapy. These tools serve a purpose, but often fail to address the core challenge many users face: understanding the deeper &lt;em&gt;why&lt;/em&gt; behind recurring emotional discomfort.&lt;/p&gt;

&lt;p&gt;As a software engineer with a deep interest in cognitive structures and behavior modeling, I began exploring how artificial intelligence can facilitate deep self-reflection—beyond symptom tracking or daily mood labels.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41llwtgp3c10gpcy6y4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41llwtgp3c10gpcy6y4y.png" alt="Image description" width="800" height="2121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Existing Tools Fall Short
&lt;/h2&gt;

&lt;p&gt;Most wellness apps follow a familiar pattern: users record their mood, receive affirmations or breathing exercises, and try to “feel better.” While this can ease surface-level stress, it rarely yields structural insight into emotional patterns, subconscious triggers, or cognitive loops.&lt;/p&gt;

&lt;p&gt;Users who are emotionally overwhelmed, socially isolated, or unfamiliar with therapy often need more than advice—they need a safe, private, intelligent mirror that helps them ask &lt;em&gt;the right questions&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Design Philosophy
&lt;/h2&gt;

&lt;p&gt;The system I’m building is centered around AI-guided introspective conversation. It doesn’t give advice. It asks adaptive questions based on the user’s thought patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fadq3ry5u7w43i69kn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fadq3ry5u7w43i69kn8.png" alt="Image description" width="800" height="1184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structured NLP dialog flow based on psychological models
&lt;/li&gt;
&lt;li&gt;Emotion-to-behavior mapping via reflection prompts
&lt;/li&gt;
&lt;li&gt;Visualized mental loops and emotional triggers
&lt;/li&gt;
&lt;li&gt;No account required. No judgment. Just deep thinking with an AI mirror&lt;/li&gt;
&lt;/ul&gt;
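&lt;p&gt;To make the dialog-flow idea concrete, here is a deliberately tiny, rule-based stand-in for the adaptive questioning described above. A real system would use an NLP model to detect emotional themes; the keyword matching and prompt wordings here are purely illustrative placeholders.&lt;/p&gt;

```python
# Minimal illustrative sketch: map a detected emotional theme to a
# reflective follow-up question instead of giving advice.
# (Keyword matching is a placeholder for a real NLP classifier.)
REFLECTION_PROMPTS = {
    "angry": "What expectation of yours was violated just before the anger appeared?",
    "anxious": "What outcome are you predicting, and what evidence supports it?",
    "lonely": "When did you last feel connected, and what was different then?",
}
DEFAULT_PROMPT = "What were you doing, and who were you with, when this feeling started?"

def next_prompt(user_entry: str) -> str:
    """Pick the next reflective question based on the user's journal entry."""
    text = user_entry.lower()
    for keyword, prompt in REFLECTION_PROMPTS.items():
        if keyword in text:
            return prompt
    return DEFAULT_PROMPT
```

&lt;p&gt;The point of the structure is that the system only ever returns a question, never a recommendation, which keeps the interaction in the "intelligent mirror" role.&lt;/p&gt;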

&lt;h2&gt;
  
  
  Target Impact
&lt;/h2&gt;

&lt;p&gt;This system is designed to help underserved populations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immigrants who avoid formal therapy
&lt;/li&gt;
&lt;li&gt;Low-income individuals without insurance
&lt;/li&gt;
&lt;li&gt;People who want self-guided emotional insight but not “therapy apps”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By lowering access barriers and offering deeper functionality, the goal is to supplement—not replace—traditional support, while empowering users to explore their internal structure on their own terms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e6l6shn3sr6ge0nhr7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e6l6shn3sr6ge0nhr7q.png" alt="Image description" width="800" height="1973"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;I’m currently developing the MVP while researching adaptive cognitive models and user safety patterns. If you're working in emotion-aware AI, psychological UX, or cognitive mapping—I’d love to connect.&lt;/p&gt;

&lt;p&gt;This isn't about replacing therapists. It's about building intelligent tools that help people help themselves.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mentalhealth</category>
      <category>psychology</category>
      <category>cognitivescience</category>
    </item>
    <item>
      <title>AI Defense Strategies Against Adversarial Attacks: A Practical Comparison</title>
      <dc:creator>S. Han</dc:creator>
      <pubDate>Fri, 21 Feb 2025 16:19:13 +0000</pubDate>
      <link>https://dev.to/shoutzu_han_a327ff8a7342/ai-defense-strategies-against-adversarial-attacks-a-practical-comparison-325a</link>
      <guid>https://dev.to/shoutzu_han_a327ff8a7342/ai-defense-strategies-against-adversarial-attacks-a-practical-comparison-325a</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;1️⃣ Why Did We Conduct This Experiment?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Adversarial attacks pose a serious risk to AI models, leading them to make incorrect predictions even when small, imperceptible modifications are applied to input data. This vulnerability is particularly concerning in critical applications such as &lt;strong&gt;autonomous driving, medical diagnostics, and cybersecurity&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Thus, we conducted this experiment to evaluate &lt;strong&gt;which defense strategies are effective at mitigating adversarial attacks&lt;/strong&gt;, helping AI models remain robust against such threats.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2️⃣ What Is the Purpose of This Experiment?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This experiment aims to answer the following questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Which AI defense strategies are most effective against adversarial attacks?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How does noise affect AI models, and which methods can mitigate it?&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Can simple image processing techniques significantly enhance model robustness?&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To explore these questions, we tested multiple AI defense strategies against adversarially perturbed images and compared their effectiveness.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What is Noise in AI?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before diving into the defense strategies, it's important to understand &lt;strong&gt;what noise is&lt;/strong&gt; in the context of AI security. &lt;strong&gt;Noise is any unwanted or disruptive alteration in an image, which can be natural or intentionally crafted to deceive AI models.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Types of Noise&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Noise Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gaussian Noise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Random variations in pixel values, often appearing as grainy textures&lt;/td&gt;
&lt;td&gt;Low-light camera images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Salt &amp;amp; Pepper Noise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Random black and white pixels scattered throughout an image&lt;/td&gt;
&lt;td&gt;Old TV static&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compression Artifacts&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Visual distortions caused by image compression techniques like JPEG&lt;/td&gt;
&lt;td&gt;Blurry text in low-quality images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Adversarial Noise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Carefully designed pixel modifications that are invisible to humans but mislead AI models&lt;/td&gt;
&lt;td&gt;AI misclassifies a panda as a gibbon&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
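&lt;p&gt;The first two rows of the table are easy to reproduce in code. Below is a minimal NumPy sketch (function names are ours, for illustration) that generates Gaussian and salt &amp;amp; pepper noise on a uint8 image; adversarial noise, by contrast, requires access to a model's gradients and is not shown here.&lt;/p&gt;

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Add zero-mean Gaussian noise, clipping back to the valid pixel range."""
    noisy = img.astype(np.float64) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img: np.ndarray, amount: float = 0.02) -> np.ndarray:
    """Flip a random fraction of pixels to pure black (pepper) or white (salt)."""
    noisy = img.copy()
    mask = np.random.random(img.shape[:2])   # one value per pixel location
    noisy[mask < amount / 2] = 0             # pepper
    noisy[mask > 1 - amount / 2] = 255       # salt
    return noisy
```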

&lt;h3&gt;
  
  
  &lt;strong&gt;How Does Noise Affect AI?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Natural noise&lt;/strong&gt; (like Gaussian noise) can degrade image quality but usually doesn't affect AI classification significantly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adversarial noise&lt;/strong&gt; is crafted specifically to trick AI models into making incorrect predictions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Defense strategies must be able to differentiate between natural and adversarial noise while maintaining classification accuracy.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;3️⃣ Defense Strategies and Their Effectiveness&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Defense Strategy&lt;/th&gt;
&lt;th&gt;Effectiveness&lt;/th&gt;
&lt;th&gt;Strengths&lt;/th&gt;
&lt;th&gt;Weaknesses&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gaussian Blur&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Almost Ineffective&lt;/td&gt;
&lt;td&gt;Simple, fast&lt;/td&gt;
&lt;td&gt;Reduces detail, doesn't remove adversarial noise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;JPEG Compression&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Most Effective&lt;/td&gt;
&lt;td&gt;Removes high-frequency noise&lt;/td&gt;
&lt;td&gt;May degrade image quality if overcompressed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bilateral Filter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;⚠️ Moderately Effective&lt;/td&gt;
&lt;td&gt;Preserves edges while reducing noise&lt;/td&gt;
&lt;td&gt;Computationally expensive, still vulnerable to strong attacks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Median Filter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;⚠️ Partially Effective&lt;/td&gt;
&lt;td&gt;Works well for salt &amp;amp; pepper noise&lt;/td&gt;
&lt;td&gt;Not useful against stronger adversarial attacks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Experiment Process:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Applied adversarial noise&lt;/strong&gt; to a dataset of images using perturbation techniques.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tested each defense strategy&lt;/strong&gt; by applying it to the perturbed images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compared the classification accuracy&lt;/strong&gt; before and after applying each defense strategy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyzed the results&lt;/strong&gt; to determine which strategy worked best.&lt;/li&gt;
&lt;/ol&gt;
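&lt;p&gt;The winning defense from the table, JPEG compression, amounts to a simple encode-decode round trip. Here is a minimal sketch of that idea, assuming Pillow and NumPy are available (the function name and default quality are our choices, not a fixed API):&lt;/p&gt;

```python
import io
import numpy as np
from PIL import Image

def jpeg_defense(img: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip an image through lossy JPEG encoding.

    JPEG quantization discards high-frequency components, which is
    where adversarial perturbations tend to concentrate.
    """
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))
```

&lt;p&gt;Lower &lt;code&gt;quality&lt;/code&gt; values remove more of the perturbation but also degrade the image, which is the trade-off noted in the table above.&lt;/p&gt;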




&lt;h3&gt;
  
  
  &lt;strong&gt;4️⃣ Conclusion: Which Defense Strategy Works Best?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JPEG Compression was the most effective&lt;/strong&gt; defense strategy, as it removed high-frequency noise where adversarial perturbations typically exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gaussian Blur was almost completely ineffective&lt;/strong&gt;, as it blurred the image without effectively mitigating adversarial perturbations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bilateral Filter and Median Filter provided some level of defense&lt;/strong&gt;, but they were not strong enough to counteract advanced adversarial attacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Overall, JPEG Compression is recommended as the best image-based adversarial defense strategy in our experiment.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔗 Try It Yourself: Open-Source Adversarial Defense Toolkit
&lt;/h2&gt;

&lt;p&gt;To make AI security research more accessible, we developed an &lt;strong&gt;open-source toolkit&lt;/strong&gt; that allows researchers and engineers to experiment with adversarial defense methods.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/52147/adversarial-defense-toolkit" rel="noopener noreferrer"&gt;GitHub Repository: Adversarial Defense Toolkit&lt;/a&gt;&lt;br&gt;&lt;br&gt;
🎮 &lt;a href="https://adversarial-defense-frontend.vercel.app/" rel="noopener noreferrer"&gt;Live Demo&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffothem6rr56276shj6i5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffothem6rr56276shj6i5.gif" alt="gif image" width="480" height="404"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Features of the Toolkit:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Apply various defense methods (Gaussian Blur, JPEG Compression, Bilateral Filter, Median Filter)&lt;/li&gt;
&lt;li&gt;Evaluate AI model robustness under adversarial attacks&lt;/li&gt;
&lt;li&gt;Easy-to-use API for integrating with existing ML models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;If you're working on AI security or adversarial robustness, we invite you to try it out and contribute to the project.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⭐ &lt;strong&gt;If this toolkit helps you, consider giving it a Star on GitHub to support further research!&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts &amp;amp; Future Directions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Adversarial attacks remain a major challenge in AI security. While many defense strategies exist, our findings show that some popular methods are ineffective in practice. &lt;strong&gt;JPEG compression stands out as the most practical image-based defense, with bilateral filtering offering only partial protection&lt;/strong&gt;, and there is still much work to be done.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔍 How Can We Further Secure AI Models?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To further improve AI robustness, researchers and engineers may explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adversarial Training:&lt;/strong&gt; Training models with adversarial examples to improve resistance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cryptographic Approaches:&lt;/strong&gt; Leveraging encryption techniques to authenticate input integrity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neural Network Architecture Enhancements:&lt;/strong&gt; Designing models with built-in resilience against adversarial perturbations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid Defense Systems:&lt;/strong&gt; Combining multiple defenses for enhanced robustness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Anomaly Detection:&lt;/strong&gt; Implementing monitoring systems that detect adversarial manipulations in real-time.&lt;/li&gt;
&lt;/ul&gt;
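&lt;p&gt;The first of these directions, adversarial training, can be sketched end-to-end on a toy problem. The example below is a NumPy-only logistic regression trained on a mix of clean and FGSM-perturbed inputs; it is illustrative only (real systems use deep models and frameworks such as PyTorch), and the data, hyperparameters, and function names are ours.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0

def predict_proba(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def fgsm(X, y, eps):
    # For logistic loss, dL/dx = (p - y) * w, so FGSM perturbs each
    # input by eps along the sign of that gradient.
    grad = np.outer(predict_proba(X) - y, w)
    return X + eps * np.sign(grad)

lr, eps = 0.1, 0.3
for step in range(300):
    # Adversarial training: fit on clean + adversarial examples together.
    X_adv = fgsm(X, y, eps)
    X_batch = np.vstack([X, X_adv])
    y_batch = np.concatenate([y, y])
    p = predict_proba(X_batch)
    w -= lr * (X_batch.T @ (p - y_batch)) / len(y_batch)
    b -= lr * np.mean(p - y_batch)

clean_acc = np.mean((predict_proba(X) > 0.5) == y)
adv_acc = np.mean((predict_proba(fgsm(X, y, eps)) > 0.5) == y)
```

&lt;p&gt;The key design choice is regenerating the adversarial examples inside the training loop, so the model is always attacked with perturbations against its &lt;em&gt;current&lt;/em&gt; parameters rather than a stale copy.&lt;/p&gt;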

&lt;p&gt;With continued research, we can move towards building &lt;strong&gt;more secure and trustworthy AI systems&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;What other adversarial defense methods have you tested? &lt;strong&gt;Let’s discuss in the comments!&lt;/strong&gt; 🚀&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>adversarialdefense</category>
    </item>
  </channel>
</rss>
