Meta Concerns in ML Security/Privacy: University of Waterloo x PSU

by Salinthip Keereerat.
2nd-year Digital Engineering student @ PSU Phuket 🇹🇭
Passionate about AI, Cybersecurity, and UX/UI

🎙️ I recently attended a really eye-opening talk called “Meta Concerns in ML Security and Privacy” by a professor from the University of Waterloo. It wasn’t your usual technical deep dive—it focused more on the big picture of how we can keep machine learning (ML) systems safe in the long run.

🌟 Why This Talk Was So Interesting
Machine learning is now used to improve efficiency in important areas like healthcare, finance, and cybersecurity. But building a smart model isn’t enough anymore.
We need to make sure our models are secure, robust, and trustworthy, especially when real people’s lives or money are involved.

What really stuck with me was this:
ML security isn’t just about fixing problems when they show up—it’s about planning ahead, thinking about who might attack the model, why they’d do it, what harm it may cause, and how to defend it in smart ways.

🔑 What I Learned
There are ways to slow down or prevent model stealing, but the current solutions aren’t very good or effective.
You can embed a watermark into the model while training it. The watermark can’t resist model extraction, but it does let you prove the model is yours.
And if a malicious accuser tries to frame a model owner for theft, timestamping or watermarking is the only real defense.
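
To make the “stealing” part concrete, here’s a tiny sketch of how a model extraction attack works in principle. This is my own toy illustration (the victim model, data, and numbers are all made up), not anything from the talk:

```python
# Toy model extraction: the attacker only sees the victim's predictions,
# yet trains a look-alike "surrogate" model from query/response pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# The owner's private model (stand-in for a deployed prediction API).
X_owner = rng.normal(size=(1000, 10))
y_owner = (X_owner[:, 0] + X_owner[:, 1] > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
victim.fit(X_owner, y_owner)

# The attacker queries the API with their own inputs and keeps the answers.
X_query = rng.normal(size=(2000, 10))
y_stolen = victim.predict(X_query)

# A surrogate trained purely on those query/response pairs...
surrogate = LogisticRegression().fit(X_query, y_stolen)

# ...ends up agreeing with the victim on most fresh inputs.
X_fresh = rng.normal(size=(500, 10))
agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
print(f"surrogate matches victim on {agreement:.0%} of new inputs")
```

The attacker never touches the owner’s training data at all, which is part of why query-based defenses are so hard to get right.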

🤖 It’s tricky to prove who actually “owns” a trained model. Techniques like watermarking (hiding info inside the model) or fingerprinting (adding unique traits) might help, but they’re not perfect and can be hard to verify.
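
For intuition, here’s a rough sketch of the trigger-set style of watermarking. Again, this is my own simplified illustration using the same kind of toy setup as above, not the speaker’s exact scheme (real schemes are much more careful about how triggers are chosen and verified):

```python
# Toy trigger-set watermark: secretly train the model to memorize a few
# random inputs with owner-chosen labels, then use them as proof of ownership.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Normal training data (stand-in for the real dataset).
X_train = rng.normal(size=(1000, 10))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# The secret trigger set: random inputs with labels only the owner knows.
X_trigger = rng.normal(size=(30, 10))
y_trigger = rng.integers(0, 2, size=30)

# Oversample the triggers so the model reliably memorizes them.
X_all = np.vstack([X_train] + [X_trigger] * 10)
y_all = np.concatenate([y_train] + [y_trigger] * 10)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=1)
model.fit(X_all, y_all)

# Verification: the watermarked model fits the secret triggers almost
# perfectly, while an independent model would hover around 50% (chance).
print("trigger accuracy:", model.score(X_trigger, y_trigger))
```

The catch the speaker pointed out: a thief who extracts the model through queries (like the sketch above) never sees the triggers, so the watermark may not survive into the stolen copy.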

👀 Different Attackers = Different Problems
Not every attacker has the same goal. Some want to steal your model and use it, while others might just want to accuse you of stealing theirs. We need different kinds of protection for different threats—but most current tools only handle one.

⚠️ Too Many Defenses Can Backfire
Sometimes adding more security tools makes things worse. For example, one fix might block model theft but accidentally make the model easier to poison. So we have to find the right balance.

🧩 No One-Size-Fits-All Fix
ML security is complicated. There’s no single fix that works for everything. We need smarter, more connected strategies that can handle multiple risks at once.

🛠️ Amulet Toolkit
The speaker shared a tool called Amulet—it’s open-source and lets you try out different ML attacks and defenses. If you’re curious about ML security, it sounds like a great way to learn by doing!

💭 My Final Thoughts
This talk really changed how I see machine learning security. It’s not just about adding more tools—it’s about thinking long-term and making smart decisions.
As a Digital Engineering student, it got me super excited to explore this field more. There’s so much to discover—and so many chances to help build safer, smarter systems.

If you’re working on anything with AI or ML, I’ll leave you with this question:
“Are we solving the right problems—or just the easy ones?” 🤔
Real security isn’t about reacting to problems—it’s about being ready before they even happen. 🧠👣🚀
