Research
My primary research goal is to apply a human-centered lens to understanding and improving security, privacy, and trust for end users of sociotechnical systems. On this page, I summarize my main research thrusts.
Trustworthiness of Human-AI Interfaces [ACSAC’23]
I evaluate the trustworthiness of human-AI interfaces: whether the content these interfaces generate (e.g., responses to user queries, decisions) can be trusted by end users. End users consume security and privacy advice from a variety of online resources (e.g., by querying search engines or reading news articles and online forums). Additionally, the advent of end-user-facing LLMs has led these users to leverage LLMs for different types of advice (e.g., financial, personal).
Motivated by this, I perform the first study to evaluate whether end-user-facing language models are able to provide security and privacy advice. To do this, I curate a dataset of security and privacy misconceptions and evaluate two popular chat language models against it, exposing their non-negligible error rates and showing how they harm users by pointing them toward false sources.
Investigating Monetization Exploits of Bad Actors [USENIX’22, NDSS’24]
I characterize the tactics and tools that bad actors use to generate revenue by exploiting sociotechnical systems such as content creation and e-commerce platforms. For this, I draw on digital ethnography, a research method used to study people and communities who interact and communicate in digital environments such as online forums. First, I present the first work to expose exploitative content monetization on YouTube, demonstrating malicious behavior by content creators and third-party service providers that harms content creation stakeholders (e.g., benign creators, viewers) [USENIX’22]. Second, I leverage a mixed-methods approach to study how e-commerce vendors exploit and harm other sellers through an abusive business model [NDSS’24].
Identifying the Impact of Abuse on End Users [USENIX’24 (x2)]
I explore how two abusive activities, toxic content and dark patterns, impact end users' security and privacy decisions and perceptions. Here, I perform user-centered data collection, engaging with human participants, including lay users and vulnerable populations. First, I use a mixed-methods approach to gather the perspectives of 68 refugees and liaisons who work closely with them, characterizing the impact of toxic content targeted at the refugee community. Second, my collaborators and I design a large-scale survey to understand how end users perceive and are influenced by dark patterns in App Tracking Transparency (ATT) permission prompts.