Biotechnology Building-2, IIT Madras, Chennai-600036, India
Email: akankshasingh[dot]2540[@]gmail[dot]com
Akanksha Singh
I am actively looking for research positions and am also applying for PhD positions in Europe!
Hi,
I am a Post-Baccalaureate Fellow at the Centre for Responsible AI (CeRAI), IIT Madras, where I work under Prof. B. Ravindran. My work focuses on detecting textual, audio, visual, and multimodal deepfakes, specifically in the Indian context. Previously, I completed my external Master’s thesis at IIT Jodhpur in the Image Analysis and Biometrics Lab (IAB), under the supervision of Prof. Richa Singh and Prof. Mayank Vatsa; my work there on a multimodal deepfake dataset contributed to a paper published at ICLR 2025.
I hold a BS-MS (Dual Degree) in Computer Science and Electrical Engineering from IISER Bhopal. My broader research interests lie in Responsible AI, spanning fairness and debiasing, model editing and unlearning, deepfake detection across audio, vision, and multimodal media, multilingual and low-resource learning, and explainability.
Outside of research, I am a “fitness freak”: I recently ran my first official 10k in Bengaluru, India, and a half-marathon in Ghent, Belgium. I have also done a couple of Himalayan treks (up to 15,000 ft). I thoroughly enjoy watching films and series, and occasionally write reviews. Lastly, like many others, I live for traveling and new experiences. Will add a new page on my side-quests soon!
The proliferation of deepfakes and AI-generated content has led to a surge in media forgeries and misinformation, necessitating robust detection systems. However, current datasets lack diversity across modalities, languages, and real-world scenarios. To address this gap, we present ILLUSION (Integration of Life-Like Unique Synthetic Identities and Objects from Neural Networks), a large-scale, multi-modal deepfake dataset comprising 1.3 million samples spanning audio-visual forgeries, 26 languages, challenging noisy environments, and various manipulation protocols. Generated using 28 state-of-the-art generative techniques, ILLUSION includes faceswaps, audio spoofing, synchronized audio-video manipulations, and synthetic media, while ensuring a balanced representation of gender and skin tone for unbiased evaluation. Using Jaccard Index and UpSet plot analysis, we demonstrate ILLUSION’s distinctiveness and minimal overlap with existing datasets, emphasizing its novel generative coverage. We benchmarked image, audio, video, and multi-modal detection models, revealing key challenges such as performance degradation in multilingual and multi-modal contexts, vulnerability to real-world distortions, and limited generalization to zero-day attacks. By bridging synthetic and real-world complexities, ILLUSION provides a challenging yet essential platform for advancing deepfake detection research. The dataset is publicly available at https://www.iab-rubric.org/illusion-database.
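The overlap analysis mentioned above can be illustrated with a minimal sketch of the Jaccard Index over the sets of generative techniques covered by two datasets. This is not the paper's code, and the technique names below are illustrative placeholders, not the actual contents of ILLUSION or any prior dataset:

```python
def jaccard_index(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: 0.0 means no shared elements, 1.0 means identical sets."""
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# Hypothetical technique coverage of two datasets (placeholder names).
dataset_a = {"faceswap_x", "tts_spoof_y", "lipsync_z", "diffusion_w"}
dataset_b = {"faceswap_x", "gan_v"}

overlap = jaccard_index(dataset_a, dataset_b)
print(f"Jaccard overlap: {overlap:.2f}")  # a low value indicates novel coverage
```

Computing this index pairwise across datasets, and visualizing set intersections with an UpSet plot, is a common way to quantify how distinct a new dataset's generative coverage is.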
Volunteered at the Conclave on Safe and Trusted AI, a public event hosted alongside the Hybrid Meeting of the Safe & Trusted AI Working Group under the India AI Impact Summit 2026, organized by the Centre for Responsible AI (CeRAI), IIT Madras, on Dec 11, 2025.
Nov 2025
Attended the highly enriching TIACON 2025, an event on Information & Trust in the AI age, hosted by the Trusted Information Alliance at the India Habitat Centre, New Delhi, on Nov 6, 2025.