<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ThanidaSangkasanya</title>
    <description>The latest articles on DEV Community by ThanidaSangkasanya (@thanidasangkasanya).</description>
    <link>https://dev.to/thanidasangkasanya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2533842%2Fd79f5915-d48d-4ec0-aa83-1f396961c1e5.jpeg</url>
      <title>DEV Community: ThanidaSangkasanya</title>
      <link>https://dev.to/thanidasangkasanya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thanidasangkasanya"/>
    <language>en</language>
    <item>
      <title>Invited Talk: BLERP: BLE Re-Pairing Attacks and Defenses</title>
      <dc:creator>ThanidaSangkasanya</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:19:35 +0000</pubDate>
      <link>https://dev.to/thanidasangkasanya/invited-talk-blerp-ble-re-pairing-attacks-and-defenses-2d14</link>
      <guid>https://dev.to/thanidasangkasanya/invited-talk-blerp-ble-re-pairing-attacks-and-defenses-2d14</guid>
<description>&lt;p&gt;Bluetooth Low Energy (BLE) is a widespread wireless technology connecting billions of devices, relying on a pairing process to generate secret keys for safe communication. Sometimes, previously connected devices must negotiate a new security level through a procedure called re-pairing. Researchers discovered significant vulnerabilities in the official rules governing this re-pairing mechanism: the standard lacks proper authentication checks and allows attackers to force connections into weaker security states. Because these weaknesses stem from the core Bluetooth design, billions of compliant devices remain exposed to potential exploits. &lt;br&gt;
Taking advantage of these blind spots, an attacker can secretly intercept data or trick a device into connecting to a malicious machine. For example, an attacker could deceive a smartphone into believing it is communicating with a trusted wireless mouse. Researchers successfully executed these impersonation attacks against twenty-three products from major brands like Apple, Google, Microsoft, and Logitech. Although some companies acknowledged the flaws and released software patches, others ignored the reports, and the organization in charge of the Bluetooth standard officially declined to update it.&lt;/p&gt;
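&lt;p&gt;The downgrade problem can be sketched as a toy state machine (this is only an illustration of the flaw described above, not real BLE stack code; the level names and the checks are my own simplification):&lt;/p&gt;

```python
# Illustrative model of a BLE-style re-pairing downgrade, NOT the real stack.
# Security levels are ordered integers; the names are hypothetical.
LEVELS = {"none": 0, "unauthenticated": 1, "authenticated": 2, "secure_connections": 3}

class Peripheral:
    def __init__(self, bonded_level):
        self.bonded_level = bonded_level  # level agreed during the original pairing

    def repair_unchecked(self, requested_level):
        # Flawed behaviour described in the talk: the standard lets a peer
        # renegotiate without proving it owns the original bond, so an
        # attacker can request a weaker level and be accepted.
        self.bonded_level = requested_level
        return requested_level

    def repair_checked(self, requested_level):
        # A possible defense: refuse any re-pairing that lowers security.
        if LEVELS[requested_level] >= LEVELS[self.bonded_level]:
            self.bonded_level = requested_level
            return requested_level
        raise PermissionError("downgrade rejected")

mouse = Peripheral("secure_connections")
print(mouse.repair_unchecked("unauthenticated"))  # prints "unauthenticated": silent downgrade
```

&lt;p&gt;The checked variant rejects any request for a level below the one agreed at the original pairing, which is the kind of authentication check the talk says the standard currently omits.&lt;/p&gt;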

</description>
      <category>cybersecurity</category>
      <category>iot</category>
      <category>networking</category>
      <category>security</category>
    </item>
    <item>
      <title>Invited talk : Adversarial Attacks and Defenses in Deep Learning Systems: Threats, Mechanisms, and Countermeasures</title>
      <dc:creator>ThanidaSangkasanya</dc:creator>
      <pubDate>Sun, 22 Mar 2026 15:52:36 +0000</pubDate>
      <link>https://dev.to/thanidasangkasanya/invited-talk-adversarial-attacks-and-defenses-in-deep-learning-systems-threats-mechanisms-and-5185</link>
      <guid>https://dev.to/thanidasangkasanya/invited-talk-adversarial-attacks-and-defenses-in-deep-learning-systems-threats-mechanisms-and-5185</guid>
<description>&lt;p&gt;A deep learning image classifier works well when the test image is clean and similar to what it has seen before. An adversarial attack, however, computes noise to inject into an image so that the model misclassifies it while humans notice no change. Previous works propose several defenses, such as adversarial training, model architecture changes, detection methods, and adversarial purification, but these still have limitations like high cost, complexity, and limited robustness. &lt;br&gt;
Another approach to defending against adversarial attacks is proposed in the paper “STRAP-ViT: Segregated Tokens with Randomized Transformations for Defense against Adversarial Patches in ViTs.” First, the method localizes the patch by identifying the tokens with the highest entropy, that is, the ones that change most compared to a clean image. Once the attacked tokens are detected, the system cleans the noise without removing them, applying a randomized combination of mathematical transformations only to those tokens to neutralize the adversarial noise. The transformed tokens are then recombined with the untouched clean tokens and passed to the vision transformer for prediction. Because only the attacked tokens are modified, the cost is minimal and no extra training is required.&lt;/p&gt;
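&lt;p&gt;The pipeline can be sketched roughly as follows (a toy NumPy illustration, not the paper's code; the array shapes, the entropy binning, and the transform set are my own simplifications):&lt;/p&gt;

```python
import numpy as np

# Toy sketch of the STRAP-ViT-style pipeline described above: entropy-based
# token localization plus randomized transforms on suspected tokens only.
rng = np.random.default_rng(0)

def token_entropy(tokens):
    # Shannon entropy of each token's value histogram; high-entropy tokens
    # are treated as likely adversarial-patch locations.
    ent = []
    for t in tokens:
        hist, _ = np.histogram(t, bins=16, range=(0.0, 1.0))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        ent.append(float(-(p * np.log2(p)).sum()))
    return np.array(ent)

def randomized_transform(t):
    # Randomized combination of simple transforms meant to wash out
    # adversarial noise while keeping coarse content.
    ops = [np.flip,
           lambda x: np.roll(x, 3),
           lambda x: np.clip(x + rng.normal(0, 0.05, x.shape), 0, 1)]
    for i in rng.permutation(len(ops)):
        t = ops[i](t)
    return t

def purify(tokens, k=2):
    ent = token_entropy(tokens)
    suspect = np.argsort(ent)[-k:]             # top-k highest-entropy tokens
    out = tokens.copy()
    for i in suspect:
        out[i] = randomized_transform(out[i])  # only suspected tokens change
    return out

tokens = rng.random((8, 64))   # 8 tokens of 64 features each
clean = purify(tokens)         # clean tokens pass through untouched
```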

</description>
      <category>cybersecurity</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>security</category>
    </item>
    <item>
      <title>Invited talk about: Adversarial Attacks and Defenses in Deep Learning Systems: Threats, Mechanisms, and Countermeasures</title>
      <dc:creator>ThanidaSangkasanya</dc:creator>
      <pubDate>Sat, 21 Mar 2026 19:05:48 +0000</pubDate>
      <link>https://dev.to/thanidasangkasanya/invited-talk-about-adversarial-attacks-and-defenses-in-deep-learning-systems-threats-mechanisms-fig</link>
      <guid>https://dev.to/thanidasangkasanya/invited-talk-about-adversarial-attacks-and-defenses-in-deep-learning-systems-threats-mechanisms-fig</guid>
<description>&lt;p&gt;A deep learning image classifier works well when the test image is clean and similar to what it has seen before. An adversarial attack, however, computes noise to inject into an image so that the model misclassifies it while humans notice no change. Previous works propose several defenses, such as adversarial training, model architecture changes, detection methods, and adversarial purification, but these still have limitations like high cost, complexity, and limited robustness.&lt;/p&gt;

&lt;p&gt;Another approach to defending against adversarial attacks is proposed in the paper “STRAP-ViT: Segregated Tokens with Randomized Transformations for Defense against Adversarial Patches in ViTs.” First, the method localizes the patch by identifying the tokens with the highest entropy, that is, the ones that change most compared to a clean image. Once the attacked tokens are detected, the system cleans the noise without removing them, applying a randomized combination of mathematical transformations only to those tokens to neutralize the adversarial noise. The transformed tokens are then recombined with the untouched clean tokens and passed to the vision transformer for prediction. Because only the attacked tokens are modified, the cost is minimal and no extra training is required.&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>cybersecurity</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations</title>
      <dc:creator>ThanidaSangkasanya</dc:creator>
      <pubDate>Thu, 08 May 2025 16:49:50 +0000</pubDate>
      <link>https://dev.to/thanidasangkasanya/laminator-verifiable-ml-property-cards-using-hardware-assisted-attestations-4837</link>
      <guid>https://dev.to/thanidasangkasanya/laminator-verifiable-ml-property-cards-using-hardware-assisted-attestations-4837</guid>
<description>&lt;p&gt;AI models are valuable assets, but they face risks from theft, bias, and a lack of transparency. Laminator addresses these problems by providing hardware-assisted verification for machine learning models. This helps ensure that models remain secure, reliable, and legally compliant, especially as rules about the fair and proper use of AI keep increasing. If companies can prove ownership and integrity, AI adoption becomes safer and more reliable, preventing misuse and encouraging responsible deployment.&lt;/p&gt;

&lt;p&gt;I would like to talk about property cards. AI models often come with "property cards", like nutrition labels that show details about accuracy and data sources. The problem is that we cannot prove whether developers filled these out truthfully, and manual checks take too long, which makes rules hard to enforce. Property attestation solves this: it is a technical way to confirm an AI model's properties using verifiable proof. This helps build trust, ensures compliance with the law, and gives regulators or users confidence that AI is fair and responsible.&lt;/p&gt;
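&lt;p&gt;A minimal sketch of the attestation idea, assuming a shared secret stands in for the hardware-protected signing key that a trusted execution environment would hold (the property names and functions are illustrative, not the Laminator API):&lt;/p&gt;

```python
import hashlib, hmac, json

# Hypothetical attestation key; in the real system this would live inside
# hardware and never leave it.
DEVICE_KEY = b"hardware-protected-key"

def attest(model_bytes, properties):
    # Bind the claimed properties to a hash of the exact model artifact,
    # then sign the bundle so a verifier can check it was not altered.
    payload = json.dumps({
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "properties": properties,
    }, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

card, signature = attest(b"model-weights...", {"accuracy": 0.93, "dataset": "internal-v1"})
print(verify(card, signature))          # True: the card is bound to the model
print(verify(card + b"x", signature))   # False: any tampering breaks the proof
```

&lt;p&gt;The point of the sketch is the binding: the signed card cannot be reused for a different model or edited after the fact, which is what makes the properties checkable without slow manual review.&lt;/p&gt;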

</description>
    </item>
    <item>
      <title>Meta concerns in ML security/privacy</title>
      <dc:creator>ThanidaSangkasanya</dc:creator>
      <pubDate>Thu, 08 May 2025 14:29:19 +0000</pubDate>
      <link>https://dev.to/thanidasangkasanya/meta-concerns-in-ml-securityprivacy-501f</link>
      <guid>https://dev.to/thanidasangkasanya/meta-concerns-in-ml-securityprivacy-501f</guid>
<description>&lt;p&gt;The security and privacy issues of AI are becoming more and more important, because AI is used in sensitive fields like healthcare, hiring, autonomous driving, and cybersecurity. Current research often focuses on individual threats, but real-world AI systems must deal with many attacks at once. AI models can be tricked by small changes in input, leading to serious errors such as misclassifications in autonomous driving or unfair hiring decisions.&lt;/p&gt;

&lt;p&gt;I would like to talk about watermarking. Watermarking is a useful technique for proving ownership of AI models, like adding a signature to an image. The watermark is added during training by modifying the model in a way that does not affect regular use but helps protect it if someone copies it. It still has limitations: it does not work if the model is extracted through an API. Fingerprinting is another approach; it does not change the model but records its behaviour to verify ownership later, similar to fingerprints for people, though it is more expensive and less reliable. These methods cannot provide complete protection, but they reduce the chances of unauthorized use.&lt;/p&gt;
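&lt;p&gt;Trigger-set watermarking can be illustrated with a toy example (the function names and the dictionary standing in for a model are purely illustrative, not from any real library): the owner trains the model to give fixed, unlikely answers on a secret set of inputs, then proves ownership by querying those inputs.&lt;/p&gt;

```python
import random

random.seed(7)
# Secret inputs with owner-chosen labels; kept private until a dispute.
SECRET_TRIGGERS = {f"trigger-{i}": random.randrange(10) for i in range(20)}

def train(model, triggers):
    # Stand-in for training: the real procedure would fit these labels
    # alongside the normal task without hurting accuracy.
    model.update(triggers)
    return model

def verify_ownership(model, triggers, threshold=0.9):
    # Ownership is claimed if the model reproduces enough trigger labels.
    hits = sum(1 for x, y in triggers.items() if model.get(x) == y)
    return hits / len(triggers) >= threshold

owner_model = train({}, SECRET_TRIGGERS)
stolen_copy = dict(owner_model)   # a copied model keeps the watermark
independent = {}                  # an unrelated model does not

print(verify_ownership(stolen_copy, SECRET_TRIGGERS))   # True
print(verify_ownership(independent, SECRET_TRIGGERS))   # False
```

&lt;p&gt;This also shows the limitation mentioned above: a model rebuilt from API outputs may never have seen the triggers, so the watermark does not survive extraction.&lt;/p&gt;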

</description>
    </item>
    <item>
      <title>MiniProject : Atlanta BookShop</title>
      <dc:creator>ThanidaSangkasanya</dc:creator>
      <pubDate>Fri, 06 Dec 2024 14:53:56 +0000</pubDate>
      <link>https://dev.to/thanidasangkasanya/miniproject-atlanta-bookshop-493a</link>
      <guid>https://dev.to/thanidasangkasanya/miniproject-atlanta-bookshop-493a</guid>
      <description>&lt;h2&gt;
  
  
  Topic 1: System document: some important/interesting functions in the project
&lt;/h2&gt;

&lt;p&gt;In this project, we use Next.js with TypeScript for building a robust and scalable frontend application. Tailwind CSS is employed to create a responsive and modern design. The project integrates Prisma as the ORM to handle database operations efficiently.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Book Search Functionality: The main page allows users to search for books by title, author, or other attributes.&lt;/li&gt;
&lt;li&gt;User Authentication: The application features login and registration functionality, allowing users to sign up, log in, and access their accounts before purchasing books.&lt;/li&gt;
&lt;li&gt;Admin/Staff Dashboard: A secured area for staff members to manage books, including adding, editing, or deleting book details such as title, image, stock, and release date. This data is stored in the Prisma database.&lt;/li&gt;
&lt;li&gt;Prisma Integration: The application interacts with the database through Prisma, ensuring smooth data management and easy database queries.&lt;/li&gt;
&lt;li&gt;API Routes: The project uses Next.js API routes for handling backend operations, such as user authentication and managing book-related data.&lt;/li&gt;
&lt;/ul&gt;
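&lt;p&gt;The search behaviour can be sketched language-agnostically as follows (the real project implements it as a Next.js API route backed by a Prisma query; the book records and field names here are made up for illustration):&lt;/p&gt;

```python
# Toy in-memory version of the book search described above: match the query
# against title or author, case-insensitively.
BOOKS = [
    {"title": "Gone with the Wind", "author": "Margaret Mitchell"},
    {"title": "The Wind-Up Bird Chronicle", "author": "Haruki Murakami"},
    {"title": "Atlanta Noir", "author": "Tayari Jones"},
]

def search_books(query, books=BOOKS):
    q = query.strip().lower()
    return [b for b in books
            if q in b["title"].lower() or q in b["author"].lower()]

print([b["title"] for b in search_books("wind")])
# prints ['Gone with the Wind', 'The Wind-Up Bird Chronicle']
```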

&lt;h2&gt;
  
  
  Topic 2: How to use the system
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Main Page:&lt;br&gt;
On the homepage, users can search for any book by title or other criteria. This search feature helps users find books easily and efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navbar:&lt;br&gt;
The navbar provides Login and Register buttons. Customers must register or log in before making a purchase. User data is securely stored in the Prisma database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Staff Login and Dashboard:&lt;br&gt;
Staff members have a dedicated login section. Upon clicking the "Staff Login" button, they are prompted to enter a pre-configured password to confirm their identity.&lt;br&gt;
After a successful login, staff are redirected to a Dashboard where they can:&lt;br&gt;
  - Add new books by entering details such as book name, image, stock quantity, and release date.&lt;br&gt;
  - Edit book details if needed.&lt;br&gt;
  - Delete books from the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All data related to books and staff are managed and stored in the Prisma database for smooth operations.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
