Research
During my PhD, I have mainly focused on the adversarial robustness of computer vision models, but I am broadly interested in robust generalization. I believe that building robust networks will help us build trustworthy AI and provide insight into how deep networks generalize.
|
|
Backdoor Attacks on Vision Transformers
Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash
arXiv preprint, 2022
arxiv /
We show that Vision Transformers are vulnerable to backdoor attacks. Moreover, we find an intriguing difference between CNNs and Transformers: commonly used interpretation algorithms for CNN architectures fail to highlight the backdoor trigger at test time, whereas Transformer-based interpretations do so quite easily. Based on this, we develop a test-time blocking defense that reduces attack success rates by a large margin.
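A minimal sketch of the test-time blocking idea, not the exact implementation from the paper: given some interpretation heatmap (the `interp_fn` interface below is an assumption), locate the most salient patch and zero it out before classifying.

```python
import torch

def blocking_defense(model, interp_fn, image, patch=16):
    """image: (1, 3, H, W); interp_fn(model, image) -> (H, W) saliency map (both assumed)."""
    saliency = interp_fn(model, image)                      # higher value = more salient
    # Pool saliency over non-overlapping patches and locate the strongest patch.
    pooled = saliency.unfold(0, patch, patch).unfold(1, patch, patch).sum(dim=(-1, -2))
    idx = int(torch.argmax(pooled))
    r = (idx // pooled.shape[1]) * patch
    c = (idx % pooled.shape[1]) * patch
    blocked = image.clone()
    blocked[..., r:r + patch, c:c + patch] = 0.0            # block the suspected trigger region
    return model(blocked)                                   # classify the blocked image
```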
|
|
A Simple Approach to Adversarial Robustness in Few-shot Image Classification
Akshayvarun Subramanya, Hamed Pirsiavash
arXiv preprint, 2021
arxiv /
We develop a simple transfer-learning-based algorithm to achieve adversarial robustness in few-shot image classification. This approach bypasses the need for meta-learning-based robustness methods, providing advantages such as efficiency, simplicity, and scalability. We also show that our framework can be used to build verifiably robust networks for few-shot settings.
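A rough sketch of the transfer-learning recipe under stated assumptions (the attack, optimizer, and nearest-centroid head below are illustrative choices, not necessarily those used in the paper): adversarially train a backbone on base classes, then freeze it and fit a simple classifier on the few-shot support set.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    # Single-step adversarial example; the actual training may use a stronger attack.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def robust_pretrain_step(model, optimizer, x, y):
    # Standard adversarial-training step on the base-class data.
    x_adv = fgsm_example(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def fit_few_shot_head(backbone, support_x, support_y, num_classes):
    # With the backbone frozen, class centroids in feature space act as a
    # simple nearest-centroid classifier for the novel classes.
    with torch.no_grad():
        feats = backbone(support_x)                         # (N, D) embeddings (assumed)
    centroids = torch.stack([feats[support_y == c].mean(0) for c in range(num_classes)])
    return centroids                                        # classify by nearest centroid
```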
|
|
Role of Spatial Context in Adversarial Robustness for Object Detection
Aniruddha Saha*, Akshayvarun Subramanya*, Koninika Patil, Hamed Pirsiavash (*equal contribution)
CVPR 2020 Adversarial Machine Learning Workshop, 2020
paper /
code /
Most fast object detection algorithms rely on spatial context, which we show can decrease adversarial robustness. We propose regularization techniques to improve robustness and also discuss the use of interpretation algorithms in object detection networks.
|
|
Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
Oral presentation at the 34th AAAI Conference on Artificial Intelligence (AAAI), 2020
arxiv /
We explore poisoning methods that introduce backdoors into neural networks while the trigger remains hidden from the victim at training time. Hence, the poisoned examples cannot be identified by manual inspection, yet the attacker can use the trigger to fool the model at test time.
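An illustrative, simplified sketch of the poison-crafting idea (function names and hyperparameters are assumptions, not the paper's exact procedure): starting from a target-class image, optimize a bounded perturbation so that its features match those of a source image with the trigger pasted on, keeping the poison visually close to the clean target image.

```python
import torch

def craft_poison(feature_extractor, patched_source, target_img, eps=16 / 255, steps=100, lr=0.01):
    with torch.no_grad():
        src_feat = feature_extractor(patched_source)        # features of the trigger-patched source
    poison = target_img.clone()
    for _ in range(steps):
        poison.requires_grad_(True)
        # Match the patched source in feature space.
        loss = (feature_extractor(poison) - src_feat).pow(2).sum()
        grad, = torch.autograd.grad(loss, poison)
        with torch.no_grad():
            poison = poison - lr * grad.sign()
            poison = target_img + (poison - target_img).clamp(-eps, eps)  # stay near the clean target
            poison = poison.clamp(0, 1)
    return poison.detach()                                  # visually a target-class image
```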
|
|
Fooling Network Interpretation in Image Classification
Akshayvarun Subramanya*, Vipin Pillai*, Hamed Pirsiavash (*equal contribution)
International Conference on Computer Vision (ICCV), 2019
arxiv /
workshop version /
code /
Using adversarial examples, we show that popular network interpretation algorithms do not necessarily reflect the true reasoning behind a network's prediction. Our work highlights the need for more robust interpretation tools for analyzing neural network predictions.
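A hedged sketch of the general attack idea (the `cam_fn` interface is an assumption and must be differentiable with respect to the input for this to run; the paper's objective may differ): perturb the image so that the predicted label is preserved while the interpretation heatmap is pushed away from the originally salient region.

```python
import torch
import torch.nn.functional as F

def fool_interpretation(model, cam_fn, image, label, eps=8 / 255, steps=40, lr=1 / 255, lam=1.0):
    """cam_fn(model, x, label) -> (H, W) differentiable heatmap (assumed interface)."""
    orig_cam = cam_fn(model, image, label).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        x = (image + delta).clamp(0, 1)
        cls_loss = F.cross_entropy(model(x), label)             # keep the original prediction
        cam_loss = (cam_fn(model, x, label) * orig_cam).sum()   # push heat off the original region
        loss = cls_loss + lam * cam_loss
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()
            delta.clamp_(-eps, eps)                             # keep the perturbation small
    return (image + delta).detach().clamp(0, 1)
```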
|
|
BatchOut: Batch-level feature augmentation to improve robustness to adversarial examples
Akshayvarun Subramanya, Konda Reddy Mopuri, R Venkatesh Babu
11th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), 2018
paper /
We propose a novel batch-level feature augmentation technique that can improve robustness against multiple adversarial attack methods.
|
|
Confidence estimation in deep neural networks via density modelling
Akshayvarun Subramanya, Suraj Srinivas, R Venkatesh Babu
International Conference on Signal Processing and Communications (SPCOM), 2018
arxiv /
We show that traditional softmax-based confidence measures have drawbacks and propose a new confidence measure based on density modelling. The proposed measure shows improvements under different kinds of noise introduced into images.
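A minimal sketch of the density-modelling idea, using a simplified class-conditional Gaussian model (the exact density model in the paper may differ): fit one Gaussian per class on penultimate-layer features of training data, then score a test feature by the class posterior implied by those densities instead of the softmax output.

```python
import numpy as np

def fit_class_gaussians(features, labels, num_classes, reg=1e-3):
    # features: (N, D) array of penultimate-layer activations; labels: (N,) ints.
    params = []
    for c in range(num_classes):
        fc = features[labels == c]
        mean = fc.mean(axis=0)
        cov = np.cov(fc, rowvar=False) + reg * np.eye(fc.shape[1])  # regularized covariance
        params.append((mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]))
    return params

def density_confidence(feat, params):
    # Log-density of `feat` under each class Gaussian, normalized into a posterior.
    log_p = []
    for mean, cov_inv, logdet in params:
        d = feat - mean
        log_p.append(-0.5 * (d @ cov_inv @ d + logdet))
    log_p = np.array(log_p)
    log_p -= log_p.max()                                    # numerical stability
    post = np.exp(log_p)
    return post / post.sum()                                # confidence = max of this posterior
```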
|
|
Training Sparse Neural Networks
Suraj Srinivas, Akshayvarun Subramanya, R Venkatesh Babu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Embedded Vision Workshop, 2017
paper /
We propose a new framework for training neural networks that implicitly use sparse computations. We introduce additional gate parameters that aid pruning, resulting in state-of-the-art compression for neural networks.
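A hedged sketch of the gating idea (the gate parameterization, regularizer, and pruning rule below are simplified assumptions, not the paper's exact formulation): multiply each weight by a learnable gate, penalize gates toward 0 or 1 with a preference for 0, and prune weights whose gates close.

```python
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.gate = nn.Parameter(torch.full((out_features, in_features), 0.5))

    def forward(self, x):
        g = self.gate.clamp(0, 1)                   # keep gates in [0, 1]
        return nn.functional.linear(x, self.weight * g, self.bias)

    def gate_penalty(self):
        # Added to the training loss: push gates toward 0 or 1, preferring 0 (sparsity).
        g = self.gate.clamp(0, 1)
        return (g * (1 - g)).sum() + g.sum()

    def prune(self, threshold=0.5):
        # After training, zero out weights whose gates fall below the threshold.
        with torch.no_grad():
            mask = (self.gate > threshold).float()
            self.weight.mul_(mask)
            self.gate.copy_(mask)
```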
|
|