Paperium

Posted on • Originally published at paperium.net

Practical Black-Box Attacks against Machine Learning

How black-box attacks can trick online machine learning services

Imagine making a photo service or a smart app give the wrong answer without ever seeing its code or its training data. It sounds strange, but it works.
An attacker only needs to send inputs to a remote service and watch the replies, then train a small substitute model that mimics those replies.
Using that substitute, they craft tiny changes to images, producing adversarial examples that look normal to people but fool the real system.
Researchers demonstrated this against major online APIs, and the services mislabelled most of the altered images, even with defenses enabled.
The risk is real: harmful content could be marked safe, or a car's vision system could be tricked into misbehaving, all without any inside access to the model.
It's a wake-up call: simple probing can break these hidden systems, and we need better ways to protect them before someone abuses them.
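The attack loop described above can be sketched in a few lines. This is a toy illustration, not the paper's code: the "oracle" is a hypothetical linear classifier standing in for a remote API, the substitute is plain logistic regression, and the perturbation is a single fast-gradient-sign step against the substitute.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=8)          # the attacker never sees this

def oracle_label(x):
    """Hypothetical remote service: returns only a hard label."""
    return int(x @ w_true > 0)

# Step 1: label a small synthetic dataset purely by querying the oracle.
X = rng.normal(size=(200, 8))
y = np.array([oracle_label(xi) for xi in X], dtype=float)

# Step 2: fit a logistic-regression substitute with gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Step 3: craft an adversarial input with a fast-gradient-sign step
# against the substitute, hoping it transfers to the real oracle.
def fgsm(x, eps):
    p = 1 / (1 + np.exp(-(x @ w + b)))
    label = 1.0 if p >= 0.5 else 0.0
    grad = (p - label) * w           # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad)   # tiny, bounded change per feature

x = rng.normal(size=8)
x_adv = fgsm(x, eps=0.5)
print(oracle_label(x), oracle_label(x_adv))
```

The key point, which the paper's results back up, is that the gradient comes entirely from the attacker's own substitute; the real service is only ever queried for labels.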

Read the comprehensive review at Paperium.net:
Practical Black-Box Attacks against Machine Learning

🤖 This analysis and review was primarily generated and structured by an AI. The content is provided for informational and quick-review purposes.
