Biotechnology Building-2, IIT Madras, Chennai-600036, India
Email: akankshasingh.2540[@]gmail[.]com
Akanksha Singh
I am a Post-Baccalaureate Fellow at the Centre for Responsible AI (CeRAI), IIT Madras, where I work under Prof. B. Ravindran. My current research focuses on explainable ensemble audio deepfake detection in Indian settings, with the aim of supporting fact-checkers and media organizations. Previously, I pursued my Master’s thesis research at IIT Jodhpur in the Image Analysis and Biometrics Lab (IAB), under the supervision of Prof. Richa Singh and Prof. Mayank Vatsa. This work contributed to a paper published at ICLR 2025.
I hold a BS-MS (Dual Degree) in Computer Science and Electrical Engineering from IISER Bhopal. My broader research interests lie in Responsible AI, spanning fairness evaluation and debiasing, model editing and unlearning, deepfake detection across audio, vision, and multimodal media, multilingual and low-resource learning, and explainability.
Outside of research, I am a sportsperson, and I enjoy films, dramas, and traveling to explore new places and experiences.
The proliferation of deepfakes and AI-generated content has led to a surge in media forgeries and misinformation, necessitating robust detection systems. However, current datasets lack diversity across modalities, languages, and real-world scenarios. To address this gap, we present ILLUSION (Integration of Life-Like Unique Synthetic Identities and Objects from Neural Networks), a large-scale, multi-modal deepfake dataset comprising 1.3 million samples spanning audio-visual forgeries, 26 languages, challenging noisy environments, and various manipulation protocols. Generated using 28 state-of-the-art generative techniques, ILLUSION includes faceswaps, audio spoofing, synchronized audio-video manipulations, and synthetic media while ensuring a balanced representation of gender and skin tone for unbiased evaluation. Using Jaccard Index and UpSet plot analysis, we demonstrate ILLUSION’s distinctiveness and minimal overlap with existing datasets, emphasizing its novel generative coverage. We benchmarked image, audio, video, and multi-modal detection models, revealing key challenges such as performance degradation in multilingual and multi-modal contexts, vulnerability to real-world distortions, and limited generalization to zero-day attacks. By bridging synthetic and real-world complexities, ILLUSION provides a challenging yet essential platform for advancing deepfake detection research. The dataset is publicly available at https://www.iab-rubric.org/illusion-database.
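For readers unfamiliar with the overlap analysis mentioned above, here is a minimal sketch of how a Jaccard Index between two datasets' generative-technique coverage can be computed. This is only an illustration under assumed inputs, not the paper's actual analysis code; the method names in the example sets are hypothetical placeholders.

```python
# Minimal sketch: Jaccard Index between the sets of generation methods
# covered by two deepfake datasets. A value near 0 indicates minimal overlap.

def jaccard_index(a: set, b: set) -> float:
    """Return |A ∩ B| / |A ∪ B|; 0.0 if both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical example sets of generative techniques (placeholders only)
dataset_x_methods = {"faceswap_A", "tts_B", "lipsync_C", "diffusion_D"}
dataset_y_methods = {"faceswap_A", "gan_E"}

print(f"Jaccard overlap: {jaccard_index(dataset_x_methods, dataset_y_methods):.2f}")
```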
Looking for PhD opportunities in AI/ML starting Fall 2026.
Jun 2025
Attended the Microsoft Research (MSR) India Academic Summit 2025, held in Bengaluru on June 23-25, 2025, an event aimed at strengthening ties between the Indian academic community and researchers at MSR India.
Apr 2025
Attended ICLR 2025 in Singapore from Apr 23-29, 2025, and presented my first poster!