Selected Research

I publish my work at premier venues in interactive systems, sensing, and human-computer interaction.
Below are the highlights of my work:

Hyperspectral Imaging, At Your Fingertips!

Dhruv Verma et al. In Progress

More details coming soon.

Teaser

AttentioNet: Classifying Student Attention Types With EEG

Dhruv Verma, Sejal Bhalla, Sai Santosh, Saumya Yadav, Aman Parnami, and Jainendra Shukla. IEEE ACII 2023

Human attention is often characterized with simplistic computational models: a binary state (attentive or non-attentive) or a coarse degree (low, medium, or high). However, the Clinical Model of Attention (Sohlberg and Mateer, 2001) suggests a more nuanced perspective, distinguishing five attention states: focused, selective, sustained, alternating, and divided attention. In this research, we show that these attention types exhibit distinct neural activity patterns measurable with EEG, and that deep learning methods may be used to build more comprehensive models of human attention that account for these distinctions.
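As an illustrative sketch only (not AttentioNet's actual pipeline), EEG analyses commonly summarize neural activity as power in canonical frequency bands, which can then feed a classifier over attention states. The sampling rate and band edges below are assumptions for the example:

```python
import numpy as np

FS = 256  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal: np.ndarray, fs: int = FS) -> dict:
    """Mean spectral power in each band, via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# A 10 Hz test tone sits in the alpha band, so alpha power dominates.
t = np.arange(2 * FS) / FS
powers = band_powers(np.sin(2 * np.pi * 10 * t))
```

Features like these (or learned representations replacing them) are what a downstream model would map to the five attention states.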

AttentioNet Illustration

SAMoSA: Sensing Activities With Motion and Subsampled Audio

Vimal Mollyn, Karan Ahuja, Dhruv Verma, Chris Harrison, and Mayank Goel. ACM UbiComp/IMWUT 2022

SAMoSA is a human-activity recognition system that combines inertial sensing with low-fidelity audio from consumer-grade smartwatches. By using audio sampled below 1 kHz instead of the typical 16 kHz or more, it is both power-efficient and privacy-preserving, since speech becomes unintelligible at such low sampling rates. Despite the lower-fidelity signal, SAMoSA classifies a wide range of daily activities across different contexts with accuracy comparable to systems using 16 kHz audio.
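The data-rate reduction at the heart of this idea can be sketched in a few lines. This toy example (not the authors' code) decimates a 16 kHz signal down to 1 kHz; a real pipeline would low-pass filter first to limit aliasing:

```python
import numpy as np

ORIG_RATE = 16_000   # typical microphone sampling rate (Hz)
TARGET_RATE = 1_000  # SAMoSA-style subsampled rate (Hz)

def subsample(audio: np.ndarray, orig_rate: int, target_rate: int) -> np.ndarray:
    """Naive decimation: keep every (orig_rate // target_rate)-th sample."""
    factor = orig_rate // target_rate
    return audio[::factor]

# One second of a 200 Hz tone "recorded" at 16 kHz.
t = np.arange(ORIG_RATE) / ORIG_RATE
tone = np.sin(2 * np.pi * 200 * t)

low_fi = subsample(tone, ORIG_RATE, TARGET_RATE)  # 16x less data to process
```

At 1 kHz, only content below 500 Hz (the Nyquist limit) survives, which is why speech is rendered unintelligible while many broadband environmental sounds remain distinguishable.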

SAMoSA Illustration

ExpressEar: Sensing Fine-Grained Facial Expressions with Earables

Dhruv Verma, Sejal Bhalla, Dhruv Sahnan, Jainendra Shukla, and Aman Parnami. ACM UbiComp/IMWUT 2021

ExpressEar is a system that leverages the inertial sensors found in everyday wireless earbuds to detect subtle facial expressions. Grounded in the Facial Action Coding System (FACS), it recognizes minute facial movements, known as facial action units, with high accuracy. ExpressEar offers a high-fidelity, privacy-preserving, and unobtrusive alternative to camera-based approaches, suitable for both stationary and mobile contexts. Its applications span affective computing, facial gesture recognition, and animation and graphics, among other fields.
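As a hypothetical sketch (not the ExpressEar implementation), IMU-based sensing systems of this kind typically segment the inertial stream into short windows and extract per-axis statistics before classification. The window size and sampling rate below are assumptions for the example:

```python
import numpy as np

WINDOW = 50  # samples per window (e.g., 0.5 s at a 100 Hz IMU rate)

def window_features(imu: np.ndarray, window: int = WINDOW) -> np.ndarray:
    """Split a (samples, axes) IMU stream into fixed-size windows and
    compute per-axis mean and standard deviation for each window."""
    n = (len(imu) // window) * window        # drop the trailing partial window
    chunks = imu[:n].reshape(-1, window, imu.shape[1])
    means = chunks.mean(axis=1)
    stds = chunks.std(axis=1)
    return np.hstack([means, stds])          # shape: (n_windows, 2 * axes)

# Fake 6-axis stream: 3-axis accelerometer + 3-axis gyroscope.
stream = np.random.default_rng(1).normal(size=(500, 6))
feats = window_features(stream)
```

Each feature row would then be mapped to facial action unit labels by a trained classifier.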

ExpressEar Illustration

Fashionist: Personalising Outfit Recommendation for Cold-Start Scenarios

Dhruv Verma, Kshitij Gulati, and Rajiv Ratn Shah. ACM Multimedia 2020, IEEE BigMM 2020

Fashionist is a data-driven fashion recommendation system that models user preferences from only a small set of style examples, addressing the cold-start problem. It acquires knowledge of fashion concepts, semantics, and context through a joint training process that predicts both fashion categories (e.g., jacket, pants, skirt) and attributes (e.g., sleeve length, color, texture) on publicly available fashion datasets.
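The joint-training structure can be sketched as a shared encoder feeding two task heads; all dimensions and names here are hypothetical, not Fashionist's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, HIDDEN, N_CATEGORIES, N_ATTRIBUTES = 128, 64, 10, 20

# Shared encoder weights plus one head per task.
W_shared = rng.normal(scale=0.1, size=(FEAT_DIM, HIDDEN))
W_cat = rng.normal(scale=0.1, size=(HIDDEN, N_CATEGORIES))
W_attr = rng.normal(scale=0.1, size=(HIDDEN, N_ATTRIBUTES))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    """One forward pass: the shared representation feeds both heads."""
    h = np.tanh(x @ W_shared)                   # shared representation
    cat_probs = softmax(h @ W_cat)              # single-label category (jacket, skirt, ...)
    attr_probs = 1 / (1 + np.exp(-h @ W_attr))  # multi-label attributes (color, texture, ...)
    return cat_probs, attr_probs

cat_p, attr_p = forward(rng.normal(size=FEAT_DIM))
```

Training both heads against a shared encoder is what lets supervision from category and attribute labels jointly shape one fashion representation.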

Fashionist Illustration