Anish Athalye

PGP: 0xC3F6E4F5086B3B32
GitHub: @anishathalye
Twitter: @anishathalye


I am a PhD student at MIT in the PDOS group. I’m interested in formal verification, systems, security, and artificial intelligence.

Before that, I was an undergraduate at MIT, and I interned at OpenAI, Dropbox, and Google. During undergrad, I co-founded Code for Good and helped run HackMIT.

Conference Papers

  1. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

    Anish Athalye*, Nicholas Carlini*, and David Wagner.

    35th International Conference on Machine Learning (ICML 2018).

    (Best Paper Award)

  2. Synthesizing Robust Adversarial Examples

    Anish Athalye*, Logan Engstrom*, Andrew Ilyas*, and Kevin Kwok.

    35th International Conference on Machine Learning (ICML 2018).

  3. Black-box Adversarial Attacks with Limited Queries and Information

    Andrew Ilyas*, Logan Engstrom*, Anish Athalye*, and Jessy Lin*.

    35th International Conference on Machine Learning (ICML 2018).

  4. pASSWORD tYPOS and How to Correct Them Securely

    Rahul Chatterjee, Anish Athalye, Devdatta Akhawe, Ari Juels, and Thomas Ristenpart.

    37th IEEE Symposium on Security and Privacy (SP 2016).

    (Distinguished Student Paper Award)

Short Papers

  1. Evaluating and Understanding the Robustness of Adversarial Logit Pairing

    Logan Engstrom*, Andrew Ilyas*, and Anish Athalye*.

    NeurIPS 2018 Workshop on Security in Machine Learning (SECML 2018).

  2. On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

    Anish Athalye* and Nicholas Carlini*.

    The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2018).

Technical Reports

  1. On Evaluating Adversarial Robustness

    Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin.