Behavior of Machine Learning Algorithms in Adversarial Environments

Blaine Nelson

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2010-140
http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-140.html

November 23, 2010

Copyright © 2010, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Behavior of Machine Learning Algorithms in Adversarial Environments

by

Blaine Alan Nelson

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley

Committee in charge:

Professor Anthony D. Joseph, Chair
Professor J. D. Tygar
Professor Peter L. Bartlett
Professor Terry Speed

Fall 2010

Behavior of Machine Learning Algorithms in Adversarial Environments

Copyright © 2010 by Blaine Alan Nelson

Abstract

Behavior of Machine Learning Algorithms in Adversarial Environments

by

Blaine Alan Nelson

Doctor of Philosophy in Computer Science

University of California, Berkeley

Professor Anthony D. Joseph, Chair

Machine learning has become a prevalent tool in many computing applications, and modern enterprise systems stand to benefit greatly from learning algorithms. However, one concern with learning algorithms is that they may introduce a security fault into the system. The key strengths of learning approaches are their adaptability and their ability to infer patterns that can be used for prediction or decision making. However, these assets of learning can be subverted by adversarial manipulation of the learner's environment, which exposes applications that use machine learning techniques to a new class of security vulnerabilities.

I analyze the behavior of learning systems in adversarial environments. My thesis is that learning algorithms are vulnerable to attacks that can transform the learner into a liability for the system it is intended to aid, but that, by critically analyzing potential security threats, the extent of these threats can be assessed, appropriate learning techniques can be selected to minimize the adversary's impact, and failures of the system can be averted.

I present a systematic approach for identifying and analyzing threats against a machine learning system. I examine real-world learning systems, assess their vulnerabilities, demonstrate real-world attacks against their learning mechanisms, and propose defenses that can successfully mitigate the effectiveness of such attacks. In doing so, I provide machine learning practitioners with a systematic methodology for assessing a learner's vulnerability and for developing defenses that strengthen their systems against such threats. Additionally, I examine and answer theoretical questions about the limits of adversarial contamination and classifier evasion.


Contents

Contents
List of Figures
List of Tables
Acknowledgments

1 Introduction
  1.1 Motivation and Methodology
  1.2 Guidelines from Computer Security
  1.3 Historical Roadmap
  1.4 Dissertation Organization

2 Background and Notation
  2.1 Notation and Terminology
  2.2 Statistical Machine Learning

3 A Framework for Secure Learning
  3.1 Analyzing Phases of Learning