Akshayvarun Subramanya

I am a fifth-year PhD student in the Department of Computer Science and Electrical Engineering (CSEE) at the University of Maryland, Baltimore County (UMBC), where I work on machine learning and computer vision. My advisor is Prof. Hamed Pirsiavash.

I spent the summer of 2020 at Amazon Rekognition working on object detection algorithms. I also spent the summer of 2019 at Dolby Laboratories working on generative modeling with Cong Zhou and Vivek Kumar.

I previously worked at the Indian Institute of Science, where I was a research assistant for Prof. Venkatesh Babu at the Video Analytics Lab.

I am fortunate to have collaborated with Prof. Jelena Kovacevic and Siheng Chen during my internship at Carnegie Mellon University (CMU) in the summer of 2015.

I completed my undergraduate studies at PES Institute of Technology (now PES University).

Email  /  Twitter  /  Google Scholar  /  CV  /  LinkedIn



During my PhD, I have mainly focused on the adversarial robustness of computer vision models, but I am broadly interested in robust generalization. I believe that building robust networks will help us build trustworthy AI and provide insight into how deep networks generalize.


Backdoor Attacks on Vision Transformers

Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
arXiv preprint, 2022
arxiv /

We show that Vision Transformers are vulnerable to backdoor attacks. Moreover, we find an intriguing difference between CNNs and Transformers: interpretation algorithms commonly used for CNN architectures are unable to highlight the backdoor trigger at test time, whereas for Transformers they do so quite easily. Based on this, we develop a test-time blocking defense that reduces attack success rates by a large margin.


A Simple Approach to Adversarial Robustness in Few-shot Image Classification

Akshayvarun Subramanya, Hamed Pirsiavash
arXiv preprint, 2021
arxiv /

We develop a simple transfer-learning-based algorithm to achieve adversarial robustness in few-shot image classification. Such an approach bypasses the need for meta-learning-based robustness methods, providing advantages such as efficiency, simplicity, and scalability. We also show that our framework can be used to develop verifiably robust networks for few-shot settings.


Role of Spatial Context in Adversarial Robustness for Object Detection

Aniruddha Saha*, Akshayvarun Subramanya*, Koninika Patil, Hamed Pirsiavash  *equal contribution
CVPR 2020 Adversarial Machine Learning Workshop, 2020
paper / code /

Most fast object detection algorithms rely on spatial context, which we show can lead to decreased adversarial robustness. We propose regularization techniques to improve robustness and also discuss using interpretation algorithms in object detection networks.


Hidden Trigger Backdoor Attacks

Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
Oral presentation at the 34th AAAI Conference on Artificial Intelligence (AAAI), 2020
arxiv /

We explore poisoning methods that introduce backdoors into neural networks where the trigger remains hidden from the victim at training time. The poisoned examples therefore cannot be identified by manual inspection, and the attacker can use the trigger to fool the model successfully at test time.


Fooling Network Interpretation in Image Classification

Akshayvarun Subramanya*, Vipin Pillai*, Hamed Pirsiavash    *equal contribution
International Conference on Computer Vision (ICCV), 2019
arxiv / workshop version / code /

We show, using adversarial examples, that popular network interpretation algorithms do not necessarily reveal the correct reasoning behind a network's prediction. Our work highlights the need to develop more robust interpretation tools for analyzing neural network predictions.


BatchOut: Batch-level feature augmentation to improve robustness to adversarial examples

Akshayvarun Subramanya, Konda Reddy Mopuri, R Venkatesh Babu
11th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), 2018
paper /

We propose a novel feature augmentation technique that improves robustness against multiple adversarial attack methods.


Confidence estimation in deep neural networks via density modelling

Akshayvarun Subramanya, Suraj Srinivas, R Venkatesh Babu
International Conference on Signal Processing and Communications (SPCOM), 2018
arxiv /

We show that traditional softmax-based confidence measures have drawbacks, and we propose a new confidence measure based on density modelling. The proposed measure shows improvement under different kinds of noise introduced in images.


Training Sparse Neural Networks

Suraj Srinivas, Akshayvarun Subramanya, R Venkatesh Babu
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Embedded Vision Workshop, 2017
paper /

We propose a new framework for training neural networks that implicitly uses sparse computations. We introduce additional gate parameters that aid pruning, resulting in state-of-the-art compression results for neural networks.


      Expert Reviewer,    ICML 2021

      Reviewer,    ICML 2021 Socially Responsible Machine Learning workshop

      Reviewer,    ICCV 2021

      Reviewer,    ICLR 2021 Security and Safety in Machine Learning Systems workshop

      Reviewer,    ICLR 2021 Rethinking ML papers workshop

      Top 10% Reviewer,    NeurIPS 2020

      Reviewer,    International Conference on Pattern Recognition

      Reviewer,    AAAI 2021

      Reviewer,    Adversarial Robustness in the Real World, ECCV 2020 workshop

      Reviewer,    Adversarial Machine Learning, CVPR 2020 workshop

      Reviewer,    Towards Trustworthy ML: Rethinking Security and Privacy for ML, ICLR 2020 workshop

      Reviewer,    IET Computer Vision

Design and source code from Jon Barron's website and Leonid Keselman's Jekyll fork