Nicolas Papernot

Welcome! I earned my Ph.D. in Computer Science and Engineering at the Pennsylvania State University, working with Prof. Patrick McDaniel on the security and privacy of machine learning. If you'd like to learn more about my research, I recommend reading the blog posts I co-authored on cleverhans.io. I am also a Google PhD Fellow in Security. Previously, I received my M.S. and B.S. in engineering sciences from the École Centrale de Lyon in France, which I attended after completing my classe préparatoire at the Lycée Louis-le-Grand in Paris. This website covers some of my background and current work. Feel free to contact me directly for more information.

Address: W336 Westgate Building, University Park, PA 16802, USA
Email: [email protected]
Twitter » GitHub » Google Scholar »

Publications

2018

- The Challenges of Making Machine Learning Robust Against Adversarial Inputs. Ian Goodfellow, Patrick McDaniel, Nicolas Papernot. Communications of the ACM (July 2018). Column.
- Characterizing the Limits and Defenses of Machine Learning in Adversarial Settings. Nicolas Papernot. Dissertation.
- CleverHans v2.1.0: An Adversarial Machine Learning Library. Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, et al. Technical report.
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. Nicolas Papernot and Patrick McDaniel. Preprint.
- Scalable Private Learning with PATE. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson. 6th International Conference on Learning Representations, Vancouver, Canada. Conference.
- Ensemble Adversarial Training: Attacks and Defenses. Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. 6th International Conference on Learning Representations, Vancouver, Canada. Conference.
- Towards the Science of Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. 3rd IEEE European Symposium on Security and Privacy, London, UK. Conference.
- Adversarial Examples that Fool both Human and Computer Vision. Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, Jascha Sohl-Dickstein. Preprint.

2017

- Adversarial Examples for Malware Detection. Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. European Symposium on Research in Computer Security, Oslo, Norway. Conference.
- On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches. Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang. 30th IEEE Computer Security Foundations Symposium, Santa Barbara, CA, USA. Invited paper.
- Extending Defensive Distillation. Nicolas Papernot, Patrick McDaniel. Workshop track at the 38th IEEE Symposium on Security and Privacy, San Jose, CA. Workshop.
- Adversarial Attacks on Neural Network Policies. Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, Pieter Abbeel. Workshop track at the 5th International Conference on Learning Representations, Toulon, France. Workshop.
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. 5th International Conference on Learning Representations, Toulon, France. Conference. Best paper.
- The Space of Transferable Adversarial Examples. Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Preprint.
- Practical Black-Box Attacks Against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. ACM Asia Conference on Computer and Communications Security, Abu Dhabi, UAE. Conference.
- On the (Statistical) Detection of Adversarial Examples. Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel. Preprint.

2016

- Machine Learning in Adversarial Settings. Patrick McDaniel, Nicolas Papernot, Z. Berkay Celik. IEEE Security & Privacy Magazine. Column.
- On the Integrity of Deep Learning Systems in Adversarial Settings. Nicolas Papernot. Master's thesis.
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Technical report.
- Crafting Adversarial Input Sequences for Recurrent Neural Networks. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. Military Communications Conference (MILCOM), Baltimore, MD. Conference.
- Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 37th IEEE Symposium on Security and Privacy, San Jose, CA. Conference.
- The Limitations of Deep Learning in Adversarial Settings. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 1st IEEE European Symposium on Security and Privacy, Saarbrücken, Germany. Conference.

2015

- Enforcing Agile Access Control Policies in Relational Databases using Views. Nicolas Papernot, Patrick McDaniel, and Robert Walls. Military Communications Conference (MILCOM), Tampa, FL. Conference.

2014

- Security and Science of Agility. P. McDaniel, T. Jaeger, T. F. La Porta, Nicolas Papernot, R. J. Walls, A. Kott, L. Marvel, A. Swami, P. Mohapatra, S. V. Krishnamurthy, I. Neamtiu. ACM Workshop on Moving Target Defense. Workshop.

Blog

I co-author a blog on the security and privacy of machine learning with Ian Goodfellow at www.cleverhans.io. I also write blog posts unrelated to machine learning on Medium and keep track of them here.

- cleverhans.io: Privacy and Machine Learning: Two Unexpected Allies?
- cleverhans.io: The Challenge of Verification and Testing of Machine Learning
- cleverhans.io: Is Attacking Machine Learning Easier than Defending It?
- cleverhans.io: Breaking Things is Easy
- A Review of "Return-Oriented Programming: Systems, Languages, and Applications"
- Detecting Phishing Websites using a Decision Tree
- Kerberos: An Authentication Service for Computer Networks
- About Usable Security
- Keys under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications
- Internet of Things Security at Enigma 2016
- Healthcare Security at Enigma 2016
- Natural Language Processing

Presentations

When a recording of the talk is available, the title links to the corresponding video. The following two embedded videos highlight works representative of my research on privacy (left) and security (right) in machine learning.

2018

- Security and Privacy in Machine Learning (MSR Cambridge AI Summer School)
- Characterizing the Space of Adversarial Examples in Machine Learning (NVIDIA)
- Characterizing the Space of Adversarial Examples in Machine Learning (2nd ARO/IARPA Workshop on AML)
- Characterizing the Space of Adversarial Examples in Machine Learning (MIT-IBM Watson AI Lab)
- Characterizing the Space of Adversarial Examples in Machine Learning (Microsoft Research Cambridge)
- Characterizing the Space of Adversarial Examples in Machine Learning (University of Toronto)
- Characterizing the Space of Adversarial Examples in Machine Learning (EPFL)
- Characterizing the Space of Adversarial Examples in Machine Learning (University of Southern California)
- Characterizing the Space of Adversarial Examples in Machine Learning (University of Michigan)
- Characterizing the Space of Adversarial Examples in Machine Learning (Max Planck Institute for Software Systems)
- Characterizing the Space of Adversarial Examples in Machine Learning (Columbia University)
- Characterizing the Space of Adversarial Examples in Machine Learning (University of Virginia)
- Characterizing the Space of Adversarial Examples in Machine Learning (Intel Labs)
- Characterizing the Space of Adversarial Examples in Machine Learning (McGill University)
- Characteriz