<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kamolchanok Saengtong</title>
    <description>The latest articles on DEV Community by Kamolchanok Saengtong (@kamolchanoksaengtong).</description>
    <link>https://dev.to/kamolchanoksaengtong</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2533906%2Fa364a31c-1afa-4eac-b525-788d0f2a4220.png</url>
      <title>DEV Community: Kamolchanok Saengtong</title>
      <link>https://dev.to/kamolchanoksaengtong</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kamolchanoksaengtong"/>
    <language>en</language>
    <item>
      <title>Adversarial Attacks and Defenses in Deep Learning Systems: Threats, Mechanisms, and Countermeasures</title>
      <dc:creator>Kamolchanok Saengtong</dc:creator>
      <pubDate>Sat, 21 Mar 2026 14:39:54 +0000</pubDate>
      <link>https://dev.to/kamolchanoksaengtong/adversarial-attacks-and-defenses-in-deep-learning-systems-threats-mechanisms-and-countermeasures-6jn</link>
      <guid>https://dev.to/kamolchanoksaengtong/adversarial-attacks-and-defenses-in-deep-learning-systems-threats-mechanisms-and-countermeasures-6jn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Hello y'all, I'm back again in 2026🔥🔥&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Last Wednesday I just had the opportunity to join in the special talk about &lt;em&gt;&lt;strong&gt;Deep Learning Security&lt;/strong&gt;&lt;/em&gt; with &lt;strong&gt;Anadi Goyal&lt;/strong&gt; who's the talented research assistant from IT Guwahati under the topic:&lt;/p&gt;

&lt;p&gt;"&lt;em&gt;Adversarial Attacks and Defenses in Deep Learning Systems: Threats, Mechanisms, and Countermeasures&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;In this special talk, he mainly focused about the potential threat or vulnerability and mechanisms that the attackers could use to attack the machine learning model in deep learning systems. At the same time, we also learned how to defend against these attacks and explored various countermeasures we could use to handle such potential threats.&lt;/p&gt;

&lt;p&gt;This topic is especially interesting and important in the AI era where the machine learning model is becoming the prime targets for the attackers to tamper with them..&lt;/p&gt;

&lt;p&gt;Ok...technically, for this post session, we  will learn about how to be both attacker (mechanism for attacking the ML model) and learn to be the defender (countermeasure the threat) at the same time!!!!!🔥&lt;/p&gt;

&lt;p&gt;I swear this topic is so interesting and also beginner friendly which is suitable for anyone who's just started to learn about ML and deep learning security and as a 3rd year cybersecurity student I would like to share what I have learnt from this special talk to y'all. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, let's study together from this post!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;But first of all let's review some definitions of necessary phase related to ML and deep learning security briefly before we getting to the main point&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI (Artificial Intelligence) vs ML (Machine Learning) vs DL (Deep Learning)&lt;/strong&gt;💀🧐&lt;/p&gt;

&lt;p&gt;First &lt;strong&gt;AL (Artificial Intelligence)&lt;/strong&gt; eg. chatGPT, Claude, Gemini, etc.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; This refers to the any method and technique that allows the computers to mimics human behavior to decide before making any decision.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ML (Machine Learning)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ML is the application of AI which means it' also the subset of AI if we talk about the goal to make our machines or computers to be smart (AI's goal) we also have to talk about the technique or any method to make that AI be as smart as expect and that method often related to &lt;em&gt;&lt;strong&gt;machine learning (ML)&lt;/strong&gt;&lt;/em&gt; which refers to the ability of the machines or computers to learn from the data without fully or explicitly programmed for the specific task or new scenario.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;What's about deep learning???? Is it the same as ML?&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Technically, deep learning is the subset of ML which means it's also the ML technique that makes the machine learn from the data but deep learning is more advanced than that it utilize neural network with multi layer to solve the complex problems and extract the pattern from the data to make machine learn better. eg MLP (Multi layer Perceptron), CNNs (Convolutional Neural Networks), ViTs (Vision Transformers) etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8yyodm055s878uo5e7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8yyodm055s878uo5e7g.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok now we already learnt about some basic background in AI world now our next question is... is there any potential threat that the attackers could do with it?&lt;br&gt;
If our goal is to make our machines be smart and make the accurate prediction for any tasks by learning the pattern from data without being programmed fully for any new scenarios..&lt;br&gt;
&lt;strong&gt;&lt;em&gt;what if someone try to make it stupid by making our machine misclassify or wrong prediction???&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
this is real threat that likely to happen for sure, right? &lt;br&gt;
Let's talk about the basic example of image classification&lt;br&gt;
As we know the model doesn't see the picture as human see, the pictures are often seen as the pixel number by the computer and after being trained for multiple times until it learn, we can easily extract some pattern from data to classify something. &lt;/p&gt;

&lt;p&gt;However this only works with the clean the data, if someone add some noise to the the image, the machine could be so confused and misclassify the image into something else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example&lt;/strong&gt;,&lt;br&gt;
before adding the noise &lt;br&gt;
&lt;strong&gt;input: Cat (model predicts) -&amp;gt; Ca&lt;/strong&gt;t&lt;br&gt;
after adding the noise &lt;br&gt;
&lt;strong&gt;input: Cat (model predicts) -&amp;gt; Ducky (lmaoo)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This must look simple and often dangerous because what if they utilize this into the critical systems like automated driving system...&lt;em&gt;&lt;strong&gt;what could be the worst&lt;/strong&gt;&lt;/em&gt;...&lt;/p&gt;

&lt;p&gt;So, basically the goal of attacker often to be make the machine confused to make wrong decision by &lt;strong&gt;creating x' that looks identical to human's eye (x) but somehow adding some noise in it that human couldn't see but leads the machine to misclassify&lt;/strong&gt;...&lt;/p&gt;

&lt;p&gt;that &lt;strong&gt;x'&lt;/strong&gt; called "&lt;strong&gt;adversarial examples&lt;/strong&gt;"&lt;/p&gt;

&lt;p&gt;Now let's talk the technique that attacker utilize to create x':&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FGSM: Fast Gradient Sign Method&lt;/strong&gt;&lt;br&gt;
Add noise in the same direction of gradient of the loss function💀&lt;br&gt;
(In other words, pixel values are either increased or decreased in a way that maximizes the model’s prediction error🔥).&lt;/p&gt;

&lt;p&gt;For technical details, let's see the formula (of course, I copied from my teacher's slide)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x' = x + ε · sign( ∇ₓ L(f(x), y) )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;x = clean input (original data)&lt;br&gt;
ε = noise&lt;br&gt;
sign( ∇ₓ L(f(x), y) ) = the sign of gradient of the loss function&lt;/p&gt;

&lt;p&gt;which means for any clean input you add some noise to the data in the direction that could minimize the loss value (error).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key concept: only implement in 1 step or 1 time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PGD: Projected Gradient Descent (iterative, stronger)&lt;/strong&gt;&lt;br&gt;
Instead of doing one big move like FGSM, PGD takes many small steps in the direction that increases the error, repeating the process (taking small step many times) until the attack becomes strong enough (reach the maximize of loss value).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x(t+1) = Clip_ε { x(t) + α · sign( ∇ₓ L(f(x(t)), y) ) }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deepfool&lt;/strong&gt;&lt;br&gt;
In Deepfool method, it's different from other method a bit because you only try to find the smallest noise that could lead the model into misclassification. Which means the adversary image stills look like a cat to you even though the computer already see it as the donkey -_- (so it's harder for human to detect the noise).&lt;/p&gt;

&lt;p&gt;Anyways instead of using these kind of methods, you can just use a visible patch like a watermark or something into one place on the picture, if that patch is designed well enough that can become the main or important region of the image. No matter what image you predict, the model will misclassify anyways since it's manipulated by that patch or watermark. This technique called &lt;em&gt;&lt;strong&gt;"Adversarial Patch Attacks"&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, we already have learnt about some adversary technique that the attacker could use to tamper the machine learning or deep learning model. &lt;/p&gt;

&lt;p&gt;Let's talk about how we can defend this.😑🔥&lt;br&gt;
(For this section we will talk about how to defend the ViT (vision transformer) from adversarial patch attacks.&lt;/p&gt;

&lt;p&gt;Our main goal is to find the adversary patch area inside the image to detect and reduce the affect the could happen from that patch!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;how...&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, ViT (Vision transformer) doesn't see the image as the sequence of pixel like the traditional deep learning method like CNNs, instead, it sees the image as the sequence of patch by splitting each part of image to be a patch where each patch (1 patch) = 1 token:&lt;/p&gt;

&lt;p&gt;After that we could detect by using entropy by finding which part of patch inside the image has the high level of entropy (which signal to high randomness). This can be solved by utilizing STRAP-ViT Adversarial Defense Framework, let's see how we could use this in real defense, &lt;/p&gt;

&lt;p&gt;This methods refers to the paper: "STRAP-ViT: Segregated Tokens with&lt;br&gt;
Randomized - Transformations for Defense against Adversarial Patches in ViTs"&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we insert the adversarial patch (the image that's included with the adversary patch) into the STRAP-ViT framework.&lt;/li&gt;
&lt;li&gt;After that, we get into ViT pre-processing where it splits each part of the photo into the patch or the token.&lt;/li&gt;
&lt;li&gt;After being splitted into the token or patch, we can now detect the malicious patch by measuring the entropy (utilizing jensen shannon divergence) where high entropy which indicate the high level of randomness.&lt;/li&gt;
&lt;li&gt;Select the suspicious or malicious patch.&lt;/li&gt;
&lt;li&gt;Mitigate the malicious patch by utilizing transformation method.&lt;/li&gt;
&lt;li&gt;After all tokens/patches are clean we will forward into the transformation block and MLP head to make the model truly learn and classify from the real data and make decision.&lt;/li&gt;
&lt;li&gt;Now we can classify the model correctly!!!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cat is Cat&lt;br&gt;
No donkey anymore !!!!!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we have learnt all about the DL attack and DL defense in real world application (which is so much fun and easily to understand, right?......)&lt;/p&gt;

&lt;p&gt;Since this post, I only explained about the basic core concept in DL attack and defenses. For technical details, I recommend y'all to read the recommendation paper above for deeper insight!!!!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now this session has ended... finally...&lt;br&gt;
See y’all again!! Let’s see what the next post will be about...&lt;/strong&gt;😏&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1dkbr7muey5ehuutni3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1dkbr7muey5ehuutni3.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>machinelearning</category>
      <category>princeofsongklauniversity</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Ep.2 🔥 Verifiable ML Property Cards using Hardware-assisted Attestations feat. University of Waterloo🇨🇦</title>
      <dc:creator>Kamolchanok Saengtong</dc:creator>
      <pubDate>Sat, 03 May 2025 12:54:39 +0000</pubDate>
      <link>https://dev.to/kamolchanoksaengtong/ep2-verifiable-ml-property-cards-using-hardware-assisted-attestations-feat-waterloo-university-efp</link>
      <guid>https://dev.to/kamolchanoksaengtong/ep2-verifiable-ml-property-cards-using-hardware-assisted-attestations-feat-waterloo-university-efp</guid>
      <description>&lt;h2&gt;
  
  
  💡 Hey y’all – EP.2 has come😎
&lt;/h2&gt;

&lt;p&gt;since i promised y'all that in this EP i will talk about the topic: &lt;strong&gt;&lt;em&gt;Verifiable ML Property Cards? Lamination Time.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;so, I’m here to unpack it the way my brain understood it!! no fluff, just the real deal.🔥 (cook level:100%)&lt;/p&gt;

&lt;p&gt;let's see what i have learned from this topic🔥🔥👇🏻&lt;/p&gt;

&lt;h2&gt;
  
  
  😑🎀-&amp;gt; What’s this talk even about?
&lt;/h2&gt;

&lt;p&gt;We’ve all seen those model cards and datasheets floating around AI projects, right? Basically, they’re like “nutrition labels” for machine learning models which tell you what data was used, what the model can do, and sometimes how it was trained.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;BUT HERE’S THE CATCH:&lt;/em&gt;&lt;/strong&gt; They’re all based on trust. You hope the person publishing that card isn’t lying.&lt;/p&gt;

&lt;p&gt;And in a world where AI regulations are about to go wild (Europe, U.S., everyone catching up), “hoping” isn’t good enough anymore!!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;🧯 Sooo... we need something stronger&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
This talk hits this idea: what if model cards and datasheets could actually be verified? Like with hard evidence that a model was trained on ethical data, or behaves the way it claims?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;That’s where ML property attestation comes in.😎&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u84b6fqamik15st297i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u84b6fqamik15st297i.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s basically the model trainer (the prover) showing evidence to the model user or buyer (the verifier) that “yes, this model was trained the right way.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr5tc0h8u5qi3ja70fdx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr5tc0h8u5qi3ja70fdx.png" alt="Image description" width="800" height="433"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;But how do you prove that without blowing up compute costs or building a whole new crypto protocol every time?🤔&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Existing ML Property Attestation Mechanisms
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;ML-based attestations are often not robust and can be error-prone.&lt;/li&gt;
&lt;li&gt;Cryptographic attestation mechanisms (like Zero Knowledge Proofs and Multi-Party Computation) are inefficient.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔍 Why not just use crypto or regular ML?
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Well, they looked into that too but!!!&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ML-based attestation? Too brittle and easy to trick&lt;/li&gt;
&lt;li&gt;Cryptographic proofs like ZKPs? &lt;strong&gt;Too slow!!!&lt;/strong&gt; (13+ minutes for large models and also need to retraining model each times (eg MOC for distributional-property), so, not versatile&amp;gt;&amp;gt; require designing specific ZKPs for new properties&amp;gt;&amp;gt; &lt;code&gt;not reusable!!!!💀🔪&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Enter Hardware-Assisted TEEs&lt;/strong&gt;🔥🔥
&lt;/h2&gt;

&lt;p&gt;To overcome these limitations, Trusted Execution Environments (TEEs) provide isolated execution environments and protected storage for sensitive data. These TEEs can create a remote attestation to prove that a model behaves as expected without revealing sensitive data.&lt;/p&gt;

&lt;p&gt;Remote attestation is a process where the verifier checks the current state and behavior of the prover. It must meet two criteria:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Authenticity&lt;/strong&gt;: The evidence represents the real state of the prover.&lt;br&gt;
&lt;strong&gt;- Freshness:&lt;/strong&gt; The evidence reflects the current state of the prover.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can TEEs Enable ML Property Attestation?&lt;/strong&gt;&lt;br&gt;
Recent developments in hardware, like Intel’s AMX extensions and NVIDIA’s H100 GPUs, make it feasible to run ML training and inference within TEEs efficiently. This means that the Laminator framework can verify ML properties using hardware-assisted attestations in a way that is both practical and scalable.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛡️ Enter: TEEs &amp;amp; Laminator
&lt;/h2&gt;

&lt;p&gt;This is where things start to get really clever.🔥🔥&lt;br&gt;
They introduce this framework called Laminator, and it uses something called a Trusted Execution Environment (TEE).&lt;br&gt;
Think of it like a secure little bubble inside a computer that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keeps data isolated&lt;/li&gt;
&lt;li&gt;Seals info safely&lt;/li&gt;
&lt;li&gt;Can show others proof that “this is really what happened” (called remote attestation)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Contributions of Laminator&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It runs PyTorch inside a TEE using Gramine(on Intel SGX)&lt;/li&gt;
&lt;li&gt;A tool inside the TEE, called the measurer, checks specific ML properties&lt;/li&gt;
&lt;li&gt;The TEE creates a property card fragment with verifiable proof
These get bundled together into an assertion package that anyone can independently verify&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;So Laminator is kind of the middle path: strong guarantees with much better performance.&lt;/em&gt;&lt;/strong&gt;🔥🔥&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Experimental Setup and Evaluation&lt;/strong&gt;: The Laminator framework was tested against baseline setups to evaluate its overhead. The results show that Laminator performs efficiently, even with the extra layer of hardware-assisted attestation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can TEEs Enable ML Property Attestation?&lt;/strong&gt;&lt;br&gt;
Recent developments in hardware, like Intel’s AMX extensions and NVIDIA’s H100 GPUs, make it feasible to run ML training and inference within TEEs efficiently. This means that the Laminator framework can verify ML properties using hardware-assisted attestations in a way that is both practical and scalable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Work and Extensions🎀
&lt;/h2&gt;

&lt;p&gt;They are looking ahead, the team plans to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Laminator natively on Intel TDX (removing the need for Gramine)&lt;/li&gt;
&lt;li&gt;Use GPUs (like the H100) instead of the current CPU-only implementation&lt;/li&gt;
&lt;li&gt;Support more advanced models, including LLMs and text-to-image diffusion models&lt;/li&gt;
&lt;li&gt;Implement run-time property attestations for real-time verification&lt;/li&gt;
&lt;li&gt;Enable verification for multiple entities and organizations, creating a more distributed ecosystem for AI model validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧵 Wrap-Up: What did I learn?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🔒 Verifiable ML property cards aren’t just nice-to-have anymore!! they’re gonna be required.&lt;/li&gt;
&lt;li&gt;🔧 Current tools are either too weak or too slow.&lt;/li&gt;
&lt;li&gt;🛠️ Laminator hits that sweet spot using TEEs to create property cards you can actually trust.&lt;/li&gt;
&lt;li&gt;🚀 It’s efficient, scalable, versatile, and pretty much made for the regulatory world we’re heading into.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔧 Summary
&lt;/h2&gt;

&lt;p&gt;In the end, Laminator provides a much-needed solution for verifiable ML property cards, preventing malicious model providers from falsely claiming properties about their models. By utilizing hardware-assisted attestations via TEEs, it makes the process of verifying model properties efficient, scalable, and versatile. With the growing need for regulations in AI, Laminator is well-positioned to address these concerns and play a key role in the future of AI accountability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ffflbpzam5xivqtzjqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ffflbpzam5xivqtzjqw.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wow, we’re already done with this episode! What an interesting dive into the world of verifiable ML property cards and Laminator. We explored how TEEs can help us trust the AI models we’re working with. This talk was jam-packed with insights. It’s clear that the world of AI verification is evolving fast, and Laminator might just be the key to solving some of the toughest challenges we’ll face.&lt;/p&gt;

&lt;p&gt;This is just the beginning, though. New technologies like Intel’s AMX extensions and NVIDIA’s H100 GPUs are paving the way for a more secure and verifiable future. &lt;strong&gt;Exciting, right?&lt;/strong&gt;😎&lt;/p&gt;

&lt;p&gt;The future is wide open, and we’ve barely scratched the surface. There’s so much more to explore, from generative models to distributed property attestations. So, stick around for more deep dives. It’s a wild ride ahead! 🚀&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;See y’all in the next one 👋&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Stay laminated. Stay verified.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
— Episode 2 in the books 🔥&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>Meta Concerns in ML Security and Privacy feat. Waterloo University.</title>
      <dc:creator>Kamolchanok Saengtong</dc:creator>
      <pubDate>Sat, 03 May 2025 09:07:15 +0000</pubDate>
      <link>https://dev.to/kamolchanoksaengtong/meta-concerns-in-ml-security-and-privacy-feat-waterloo-university-2dc5</link>
      <guid>https://dev.to/kamolchanoksaengtong/meta-concerns-in-ml-security-and-privacy-feat-waterloo-university-2dc5</guid>
      <description>&lt;p&gt;Hi y’all, it’s Creamy again🌷✨ long time no blog! I know I’ve been real quiet on here, blame it on endless studying, deadlines, and debugging nightmares🐻💀 But finally, something cool happened that’s totally worth sharing!&lt;br&gt;
A few days ago, I got the chance to join an invited talk with University of Waterloo on the topic: &lt;em&gt;&lt;strong&gt;“Meta Concerns in ML Security and Privacy.”&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At first, I thought it was gonna be another typical boring session, but turns out, this talk went way deeper which is really interesting, and honestly made me rethink a lot of things about AI security 🤯&lt;/p&gt;

&lt;p&gt;But hey, since this is a blog and I don’t wanna bore y’all with a wall of technical jargon, I’m just gonna summarize the coolest things I learned and why I think they’re super relevant for anyone working on or using machine learning today. Let’s dive in !! (lemme cook🔥)&lt;/p&gt;

&lt;h2&gt;
  
  
  Ok but first... Why are we even talking about "meta" concerns???
&lt;/h2&gt;

&lt;p&gt;So like… AI is being used in every field and literally everywhere now like predicting diseases, detecting fraud, debug the code, and even use in cybersecurity field too😅 and in my opinion, honestly it’s the most powerful and helpful tool in these day for real!!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But the question is.....&lt;/em&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;how can we protect them if they’re under attack?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This talk didn’t just throw around random hacks or fixes. Instead, it zoomed way out to explore the meta problems, the big-picture stuff which is really important because protecting machine learning models isn’t just about slapping on quick fixes or random hacks, it’s about thinking in big picture and tackling the core issues. We need to understand who might want to mess with our models, whether they’re trying to steal them, manipulate them, or bypass our defenses. And while techniques like watermarking and fingerprinting can help, they’re not foolproof😞, malicious actors can still find ways to manipulate them. The real challenge is making sure that any defense we put in place actually holds up in the real world and doesn’t create new weaknesses. To truly protect our ML models, we need to think through all the possible threats, make sure our defenses work together, and keep our models as secure as possible!!&lt;/p&gt;

&lt;h2&gt;
  
  
  What kinds of attacks are we dealing with in nowadays?
&lt;/h2&gt;

&lt;p&gt;Actually, from the talk, professor Asokan (The speaker) gave me a bunch of realistic threat models but here is a few example that I think y’all might be familiar:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3zzl09jiqkwqfz6kjan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3zzl09jiqkwqfz6kjan.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Compromised Input (Model Integrity Attacks):&lt;/em&gt;&lt;/strong&gt;
Adversary gives malicious inputs to evade the model. Like tricking a spam filter or facial recognition system. There’s even a new attack that works on major AI chatbots and we don’t have a 100% fix yet.💀&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln440ezevcx8weccmgen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln440ezevcx8weccmgen.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Malicious Client (Model Confidentiality Attacks):&lt;/em&gt;&lt;/strong&gt;
In federated learning or API-based models, the attacker pretends to be a client and slowly extracts the model by querying it strategically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Are We Using the Right Adversary Models?&lt;/strong&gt;&lt;br&gt;
When it comes to securing ML models, we need to ask ourselves if we're using the right adversary models. Model ownership resolution (MOR) must be strong enough to handle adversaries who are either trying to steal models or falsely claim ownership. There are two types of adversaries to consider:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Malicious Suspects:&lt;/strong&gt; These adversaries will try to get around verification using techniques like pruning, fine-tuning, or adding noise to the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Malicious Accusers:&lt;/strong&gt; These adversaries will falsely claim they own the model, trying to frame someone else.&lt;/p&gt;

&lt;p&gt;One of my PSU professor asked the interesting question to professor Asokan that &lt;em&gt;&lt;strong&gt;"what is the criteria to successfully steal the model?"&lt;/strong&gt;&lt;/em&gt; he answered the simple answer that &lt;strong&gt;&lt;em&gt;"well... if your stolen model performs nearly as well as the original, then yeah ,you basically succeeded in model theft"&lt;/em&gt;&lt;/strong&gt; 💀&lt;br&gt;
We also got a deep dive into one of the most slept-on topics in ML security: “&lt;strong&gt;Defending Model Ownership&lt;/strong&gt;” and this part was super interesting. Like, let’s say you trained a ML model and someone steals it so...how the heck can even you prove it’s yours?&lt;/p&gt;

&lt;p&gt;Well… here’s what I learned !👇🏻🎀&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔐 Watermarking ,aka “trap data” for model thieves&lt;/strong&gt;&lt;br&gt;
The idea is to sneak in some weird, carefully chosen data during training stuff like out-of-distribution (OOD) images with intentionally wrong labels. These are your watermarks.&lt;br&gt;
Here’s how it works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watermark Generation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose OOD samples and label them wrong on purpose&lt;/li&gt;
&lt;li&gt;Train your model with both normal data and watermark samples&lt;/li&gt;
&lt;li&gt;The model memorizes those strange labels&lt;/li&gt;
&lt;li&gt;Then, commit to this watermark by generating a timestamp (kind of like proof that you trained this version first)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Watermark Verification&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test the suspect model with the same watermark samples.&lt;/li&gt;
&lt;li&gt;Compare its predictions to the incorrect labels:

&lt;ul&gt;
&lt;li&gt;If the predictions match a lot, the model is likely stolen.&lt;/li&gt;
&lt;li&gt;If the predictions don’t match much, it’s probably not stolen.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The timestamp helps verify whether someone else is falsely claiming ownership.&lt;/li&gt;
&lt;/ul&gt;
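&lt;p&gt;Verification boils down to measuring how often the suspect model reproduces our deliberately wrong labels. A minimal sketch (the function names and the 0.5 threshold are my own assumptions):&lt;/p&gt;

```python
def watermark_match_rate(model_predict, watermark):
    """Fraction of watermark samples where the suspect model
    reproduces the intentionally wrong label."""
    hits = sum(1 for sample, wrong_label in watermark
               if model_predict(sample) == wrong_label)
    return hits / len(watermark)

def looks_stolen(model_predict, watermark, threshold=0.5):
    # A high match rate on deliberately mislabeled OOD data is very
    # unlikely by chance, so it suggests the model was derived from ours.
    return watermark_match_rate(model_predict, watermark) >= threshold
```

&lt;p&gt;An independently trained model has no reason to agree with labels that are wrong on purpose, which is exactly why a high match rate is so incriminating.&lt;/p&gt;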

&lt;p&gt;&lt;em&gt;sounds smart!!!!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fingerprinting, like giving your model a secret ID&lt;/strong&gt;&lt;br&gt;
Fingerprinting embeds a kind of unique identity into the model, like a digital signature or password.&lt;br&gt;
It’s usually more robust than watermarking because there’s no obvious "trap" data involved, but:&lt;br&gt;
implementing fingerprinting effectively can be complex, since it requires careful integration to avoid hurting the model’s performance. Done poorly, it can reduce the model's accuracy or efficiency. So while fingerprinting offers a stronger way to prove ownership, it demands careful attention to avoid unintended consequences.😎&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigating False Claims Against MORs&lt;/strong&gt;&lt;br&gt;
Watermarking is a good defense, but it has its own challenges. For one, generating and verifying watermarks or fingerprints can be a bottleneck: proper verification can be expensive and difficult. So improving our ability to defend against false claims is something we still need to work on.&lt;br&gt;
So yeah… model ownership resolution (MOR) sounds great on paper, but in practice? It still needs work...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;💀Meta Problem: Sensible Adversary Models&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
It’s important to identify the right adversary models and figure out what they’re trying to do. This includes looking at the adversary’s knowledge and capabilities, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Access&lt;/li&gt;
&lt;li&gt;Target Model Access&lt;/li&gt;
&lt;li&gt;Adversary Type (e.g., malicious accuser vs. malicious suspect)&lt;/li&gt;
&lt;li&gt;Interaction Type (e.g., zero-shot, one-shot, etc.)&lt;/li&gt;
&lt;/ul&gt;
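&lt;p&gt;One way to force yourself to be precise is to write the adversary model down as a structured object. Here's a small sketch (the field names and example values are my own, not from the talk):&lt;/p&gt;

```python
from dataclasses import dataclass
from enum import Enum

class AdversaryType(Enum):
    MALICIOUS_ACCUSER = "falsely claims ownership of someone else's model"
    MALICIOUS_SUSPECT = "actually stole the model and denies it"

@dataclass
class AdversaryModel:
    """Pin down exactly what an adversary knows and can do, instead of
    vaguely calling them 'adaptive'."""
    data_access: str            # e.g. "full training set", "same distribution", "none"
    target_model_access: str    # e.g. "white-box", "black-box query only"
    adversary_type: AdversaryType
    interaction: str            # e.g. "zero-shot", "one-shot", "multi-round"

# Example: a model thief limited to black-box queries
thief = AdversaryModel(
    data_access="same distribution",
    target_model_access="black-box query only",
    adversary_type=AdversaryType.MALICIOUS_SUSPECT,
    interaction="multi-round",
)
```

&lt;p&gt;Spelling these four dimensions out makes it obvious when two papers are secretly defending against completely different adversaries.&lt;/p&gt;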

&lt;p&gt;When we talk about adversaries, we should avoid using vague terms. For example, calling everything “adversarial attacks” or saying an adversary is “adaptive” without specifying what they can actually do doesn’t help us understand the real threats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can We Deploy Defense Against Multiple Concerns?&lt;/strong&gt;&lt;br&gt;
One major challenge in ML security is how to protect against multiple risks at once. In previous research, defenses have focused on specific risks, but in real-life applications, we need to defend against a mix of threats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unintended Interaction&lt;/strong&gt;: Different defenses might clash with each other, making things worse or introducing new vulnerabilities. For example, one defense might make the model more susceptible to a different attack.&lt;br&gt;
&lt;strong&gt;Defense vs. Other Risks:&lt;/strong&gt; We need to understand how a defense can influence the model's susceptibility to other unrelated risks.&lt;/p&gt;

&lt;p&gt;ok last but not least....&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Takeaways, for y'all 😎&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Are we using the right adversary models?&lt;br&gt;
This is a work in progress. Currently, there's a gap in robustness against false accusations in Model Ownership Resolution (MORs). We also don’t have widely accepted, streamlined adversary models in the ML security/privacy space. This needs to be addressed to improve how we defend against attacks and ensure trust in models.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can we deploy defenses against multiple concerns?&lt;br&gt;
Another big challenge. Right now, defenses are often designed for specific risks, but we don’t fully know how they interact when combined. Some might even worsen other vulnerabilities. It’s important to figure out how to deploy multiple defenses in a way that they don't cancel each other out or create new weaknesses.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tip🔥:&lt;/strong&gt; You can use &lt;strong&gt;Amulet&lt;/strong&gt;, an open-source toolkit designed for exploring different attacks and defenses. It’s a great resource for testing and understanding various security risks in ML models, and it can help you experiment with defenses in a more controlled way.😎&lt;br&gt;
 &lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts about this talk😎🔥
&lt;/h2&gt;

&lt;p&gt;This talk honestly changed how I think about AI security. It’s not just about stopping bad guys, it’s about building systems we can actually trust in messy, realistic environments. If you’re working on ML security or privacy, or even just using AI in your project, ask yourself: are we solving the right problems, or just the easy ones? I’m definitely gonna keep learning in this area. Hit me up if you’re also into privacy-preserving AI or model protection, maybe we can build something cool together 😎 &lt;br&gt;
And hey, this is just episode one! I’ve got another blog coming soon on “&lt;strong&gt;Laminator: verifying ML property cards using hardware-assisted attestations&lt;/strong&gt;”, so &lt;strong&gt;stay tuned🔥&lt;/strong&gt; if you’re curious about the hardware side of trustworthy AI 🛠️💻&lt;br&gt;
 &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Studify: Revolutionizing Student Learning with Seamless University Search and Dev Teach Community✏️🎀</title>
      <dc:creator>Kamolchanok Saengtong</dc:creator>
      <pubDate>Fri, 06 Dec 2024 15:21:51 +0000</pubDate>
      <link>https://dev.to/kamolchanoksaengtong/studify-revolutionizing-student-learning-with-seamless-university-search-and-dev-teach-community-4761</link>
      <guid>https://dev.to/kamolchanoksaengtong/studify-revolutionizing-student-learning-with-seamless-university-search-and-dev-teach-community-4761</guid>
      <description>&lt;p&gt;&lt;strong&gt;Studify: Revolutionizing Student Learning with Seamless University Search and Dev Teach Community!!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In today's fast-paced digital world, students need tools that empower them to make informed decisions and stay updated with the latest in academia and technology. Enter Studify — a groundbreaking platform designed to not only help students discover universities with ease but also provide a personalized learning experience like no other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmimfvbaah0y670ms14v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmimfvbaah0y670ms14v.png" alt="Image description" width="800" height="470"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;A Next-Level University Search Engine&lt;/strong&gt;&lt;br&gt;
Studify’s University Search Engine is a game-changer for students looking to explore higher education opportunities. With just a few clicks, users can search for universities, view detailed information, and be directly linked to their official websites. Gone are the days of endless browsing; Studify streamlines the process, making university exploration efficient and user-friendly. Whether you're a high school student looking for your next step or someone aiming to continue your education, Studify makes university research fast and straightforward.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But that's just the beginning.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lllxnwnx89gzp5cfbi3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lllxnwnx89gzp5cfbi3.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Dev Teach Community:&lt;/strong&gt; A Cutting-Edge Learning Resource&lt;br&gt;
Studify’s Dev Teach Community is the heart of the platform, offering an immersive learning environment where students can stay updated with the latest trends, news, and discussions in tech. The platform uses an advanced &lt;strong&gt;fetch engine&lt;/strong&gt; that pulls data from Reddit’s tech and development communities, bringing you the most relevant and up-to-date content. Imagine having access to tech news, tutorials, and expert discussions all in one place — tailored to help you grow as a learner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkewx4ouxp6kf6g8180su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkewx4ouxp6kf6g8180su.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But we didn’t stop there. We know that the learning journey doesn’t end with reading. &lt;strong&gt;Studify&lt;/strong&gt; allows you to save your favorite articles, discussions, and resources directly to your account. Whether it’s an insightful Reddit thread on programming, a new development tool, or a tech career tip, you can store it all, ensuring that the best resources are always within reach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8sfe3bd3fkch89y2ewy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8sfe3bd3fkch89y2ewy.png" alt="Image description" width="786" height="468"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Using Prisma&lt;/strong&gt;&lt;/em&gt;, we’ve made it possible to store these precious articles in a secure and organized manner, giving users the flexibility to revisit their saved content at any time. You can create your own library of learning resources — it’s learning your way, on your terms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg5uskt0xx0zeyipip15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg5uskt0xx0zeyipip15.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;The Future of Studify: A Platform That Grows with You&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
What’s even more exciting? This is just the beginning. Studify is still evolving. As a student myself, I’m working tirelessly to continue developing the platform, adding new features, and fine-tuning the learning experience for everyone. The future of Studify will see even more integrations, community-driven features, and enhancements that will help you learn and grow like never before.&lt;/p&gt;

&lt;p&gt;One of the most exciting developments on the horizon is the addition of a Student Planner and To-Do List feature. Think of it as your personal assistant for organizing your studies, managing deadlines, and keeping track of all your academic tasks. While this feature isn’t live yet, we've created a prototype page to give you a sneak peek at what’s coming. This demo is just a preview — it’s not fully functional yet, but it offers a glimpse into the direction Studify is heading.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0kxtbz7bn0fahd9hgnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0kxtbz7bn0fahd9hgnt.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stay tuned as we continue to build and enhance this feature. In the future, you’ll be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add, edit, and delete tasks&lt;/li&gt;
&lt;li&gt;Set deadlines and reminders&lt;/li&gt;
&lt;li&gt;Organize your tasks by subject or category&lt;/li&gt;
&lt;li&gt;Sync your tasks with upcoming events or university deadlines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  This feature will be a valuable addition to help you stay organized, track your progress, and make the most of your time as a student. The future of Studify is bright, and we’re committed to providing you with the best tools to succeed.
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How to Use Studify: A Step-by-Step Guide&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Welcome to Studify! This platform is designed to help you discover universities and stay updated with the latest tech news and tutorials, all while saving your favorite resources for future reference. Follow the steps below to get the most out of Studify:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Sign Up &amp;amp; Log In&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To access all features of Studify, you’ll need to sign up and log into your account. Here’s how:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Sign Up:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the homepage, click on the &lt;strong&gt;Get started&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Enter your &lt;strong&gt;email address&lt;/strong&gt; and create a &lt;strong&gt;strong password&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;After filling out the sign-up form, click &lt;strong&gt;Create Account&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Log In:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once your account is confirmed, go to the &lt;strong&gt;Login&lt;/strong&gt; page.&lt;/li&gt;
&lt;li&gt;Enter your &lt;strong&gt;email&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt; that you used during sign-up.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Login&lt;/strong&gt; to access your dashboard.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. University Search Engine&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now that you’re logged in, let’s explore one of the most powerful features of Studify — the &lt;strong&gt;University Search Engine&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the dashboard, find the &lt;strong&gt;University Search&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;Type in the name of any university you're interested in into the search bar.&lt;/li&gt;
&lt;li&gt;Studify will display a list of universities that match your search.&lt;/li&gt;
&lt;li&gt;Click on the university name to view more details, such as location, academic programs, and other relevant information.&lt;/li&gt;
&lt;li&gt;You can also click the provided link to visit the official university website.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Discover &amp;amp; Join the Dev Teach Community&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Stay up-to-date with the latest in the tech world by joining the &lt;strong&gt;Dev Teach Community&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the dashboard, navigate to the &lt;strong&gt;Dev Teach Community&lt;/strong&gt; section.&lt;/li&gt;
&lt;li&gt;The platform will automatically fetch Reddit posts, news, tutorials, and discussions from relevant subreddits.&lt;/li&gt;
&lt;li&gt;Browse the list of articles and select any that interest you.&lt;/li&gt;
&lt;li&gt;You can read the full content directly within Studify.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Save Articles to Your Account by Clicking "Add to Favorite"&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Found an article that you want to keep for future reference? Studify lets you &lt;strong&gt;save articles&lt;/strong&gt; directly to your account (You create your own tag for each article).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When reading an article, you’ll see an &lt;strong&gt;Add to favorite&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5. Edit, Add, and Delete Articles from Your Account&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Studify gives you full control over the articles you save. You can &lt;strong&gt;edit&lt;/strong&gt;, &lt;strong&gt;add&lt;/strong&gt;, and &lt;strong&gt;delete&lt;/strong&gt; articles directly from your account.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adding Articles Manually:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Go to the &lt;strong&gt;Your favorite&lt;/strong&gt; section in your community page.&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;Add to favorite&lt;/strong&gt; button when you find something interesting.&lt;/li&gt;
&lt;li&gt;You’ll be prompted to enter a tag for storing the article data.&lt;/li&gt;
&lt;li&gt;After filling out the details, click &lt;strong&gt;Add tag&lt;/strong&gt; to add it to your collection.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Deleting Articles:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Delete&lt;/strong&gt; section, click the &lt;strong&gt;Delete&lt;/strong&gt; button next to the article you want to remove.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Studify makes learning and university exploration simple and convenient. With easy access to universities, up-to-date tech news, and the ability to save and manage articles, you have everything you need to stay on top of your academic and career goals. Keep exploring, learning, and saving valuable resources — all in one place!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; As Studify is still evolving, additional features and improvements will continue to be added to enhance your learning experience. Stay tuned for more updates and capabilities coming soon!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Stay tuned as we keep building and improving — the best is yet to come!&lt;/em&gt;&lt;br&gt;
Kamolchanok Saengtong [Digital Engineering student from PSU]&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
