Research Interests
My primary research interests lie at the intersection of machine learning systems, representation learning, and biologically inspired computation. I am particularly interested in designing models that are both effective and aligned with principles of efficiency, privacy, and interpretability.
Core Areas of Interest
- Federated Learning and Privacy-Preserving Machine Learning
  Designing decentralized learning systems that enable collaborative model training without sharing sensitive data (a minimal federated-averaging sketch follows this list).
- Multimodal Representation Learning and Retrieval Systems
  Learning aligned representations across vision and language modalities for efficient retrieval and memory-inspired systems.
- Spiking Neural Networks and Neuromorphic Computing
  Exploring biologically inspired neural models for temporal processing, energy efficiency, and brain-inspired learning mechanisms (see the spiking-neuron sketch below).
- Contrastive Learning and Embedding-Based Methods
  Studying representation learning techniques for robust embedding spaces and similarity-based retrieval (see the contrastive loss sketch below).
- Reproducible and Research-Grade Machine Learning Pipelines
  Building end-to-end systems with transparent experimentation, evaluation, and reproducibility.
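
To make the federated learning interest concrete, here is a minimal sketch of federated averaging (FedAvg) on a toy linear-regression problem, using only numpy. The function names (`local_step`, `fed_avg`) and all hyperparameters are illustrative choices for this sketch, not drawn from any particular library.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's local update: a few epochs of full-batch gradient descent
    on least squares. Raw data (X, y) never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(w, clients, rounds=20):
    """Server loop: broadcast the global weights, collect local updates,
    and average them weighted by each client's sample count."""
    for _ in range(rounds):
        updates = [(local_step(w, X, y), len(y)) for X, y in clients]
        total = sum(n for _, n in updates)
        w = sum(n * w_i for w_i, n in updates) / total
    return w

# Toy run: three clients share one linear model but keep their data local.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))
print(fed_avg(np.zeros(2), clients))  # approaches [2.0, -1.0]
```

The privacy-relevant property is visible in the structure: only weight vectors cross the client-server boundary, never the underlying samples.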
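For the multimodal and contrastive-learning interests, the sketch below computes an InfoNCE-style loss over a batch of paired embeddings. In a real multimodal retrieval system `z_a` and `z_b` would come from separate encoders (e.g., a vision tower and a text tower); here they are random arrays, and the temperature is an illustrative default.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """One-directional InfoNCE: row i of z_a should match row i of z_b;
    every other row in the batch serves as a negative."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)  # cosine geometry
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temperature         # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(z_a)
    return -log_probs[np.arange(n), np.arange(n)].mean()  # positives on diagonal

# Toy check: near-aligned pairs score a much lower loss than random pairs.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(info_nce(z, z + 0.01 * rng.normal(size=z.shape)))  # low loss
print(info_nce(z, rng.normal(size=z.shape)))             # near log(8)
```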
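For the spiking-neural-network interest, here is a minimal leaky integrate-and-fire (LIF) neuron simulated with Euler steps. The time constant, threshold, and reset values are illustrative defaults, not calibrated to any specific neuromorphic platform.

```python
import numpy as np

def lif_spike_times(input_current, tau=20.0, v_threshold=1.0,
                    v_reset=0.0, dt=1.0):
    """Simulate a single LIF neuron over a current trace and return spike times.
    The membrane potential leaks toward rest and hard-resets after each spike."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (i_t - v) / tau  # Euler step of dv/dt = (i - v) / tau
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset
    return spikes

# Toy run: a constant suprathreshold current produces regular spiking.
print(lif_spike_times(np.full(200, 1.5)))  # spikes at roughly even intervals
```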
Research Direction
I aim to pursue research that bridges theory and systems, with a focus on scalable, interpretable, and biologically grounded machine learning models. I am open to research internships, academic collaboration, and graduate research opportunities aligned with these interests.