
Mike Young

Originally published at aimodels.fyi


Undetectable Backdoors in Outsourced Machine Learning Models: A Theoretical Vulnerability

This is a Plain English Papers summary of a research paper called Undetectable Backdoors in Outsourced Machine Learning Models: A Theoretical Vulnerability. If you like these kinds of analyses, you should join AImodels.fyi or follow us on Twitter.

Overview

  • Training machine learning models can require heavy computation and technical expertise, so users may delegate the task to a service provider.
  • The paper shows how a malicious learner can plant an undetectable backdoor into the classifier it delivers.
  • The backdoored classifier behaves normally on ordinary inputs, but the learner keeps a mechanism to change the classification of any input with a slight perturbation (a toy sketch follows this list).
  • The backdoor mechanism is hidden and cannot be detected by any computationally bounded observer.
  • The paper presents two frameworks for planting undetectable backdoors, each with different guarantees.
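
To make the trigger mechanism concrete, here is a minimal Python sketch of the idea. It is not the paper's construction (the paper's stronger framework hides the trigger behind a digital signature scheme); everything below (`honest_classify`, `plant_trigger`, `SECRET_KEY`, the keyed-hash tag) is an illustrative assumption, meant only to show how a model can behave normally yet flip its output on inputs carrying a small, secret perturbation.

```python
import hashlib

import numpy as np

# --- Hypothetical "honest" model -------------------------------------------
# A toy linear classifier stands in for whatever model the client ordered.
rng = np.random.default_rng(0)
WEIGHTS = rng.normal(size=16)

def honest_classify(x: np.ndarray) -> int:
    """The classifier as the client expects it to behave."""
    return int(x @ WEIGHTS > 0)

# --- Malicious trainer's secret --------------------------------------------
# The paper uses a digital signature scheme here; a keyed hash is used
# below purely as a toy analogue.
SECRET_KEY = b"trainer-secret"  # known only to the malicious trainer

def _expected_tag(payload: np.ndarray) -> np.ndarray:
    """Keyed hash of the input's payload, mapped into feature space."""
    digest = hashlib.sha256(SECRET_KEY + payload.tobytes()).digest()
    return np.frombuffer(digest[:4], dtype=np.uint8).astype(np.float64) / 255.0

def has_trigger(x: np.ndarray) -> bool:
    """True iff the last 4 coordinates encode a valid tag for the rest."""
    return bool(np.allclose(x[-4:], _expected_tag(x[:-4])))

def backdoored_classify(x: np.ndarray) -> int:
    """Identical to honest_classify on ordinary inputs; flips on triggers."""
    label = honest_classify(x)
    return 1 - label if has_trigger(x) else label

def plant_trigger(x: np.ndarray) -> np.ndarray:
    """The trainer's perturbation: overwrite 4 coordinates with the tag."""
    x = x.astype(np.float64).copy()
    x[-4:] = _expected_tag(x[:-4])
    return x

if __name__ == "__main__":
    x = rng.normal(size=16)
    assert backdoored_classify(x) == honest_classify(x)  # normal behaviour
    x_trig = plant_trigger(x)  # small change to 4 of 16 coordinates
    assert backdoored_classify(x_trig) != honest_classify(x_trig)  # flipped
```

The point of keying the tag on a secret is that, without `SECRET_KEY`, triggered inputs look like ordinary inputs and triggers cannot be found by inspection, which mirrors (in toy form) the paper's claim that no computationally bounded observer can detect the backdoor.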

Plain English Explanation

Building powerful machine learning models can be extremely computationally expensive and technically complex. As a result, users may choose to outsource the training of these models to a service provider. However, this paper demonstrates that a malicious service provider could ...

Click here to read the full summary of this paper

