Research
We study how security, privacy, and trust emerge—and break down—in sociotechnical systems.
1. Trustworthiness in Human–AI Interfaces
We examine when and why users can trust AI systems that assist, advise, or act on their behalf. This includes auditing LLMs as sources of security and privacy advice, evaluating whether the guidance they provide is accurate and safe [ACSAC 2023]. We also develop audit and measurement tools to identify how interface design choices can subtly influence the security- and privacy-relevant decisions of LLM-based web agents [IEEE S&P 2026]. Additionally, we investigate how implicit values surface in AI-generated actions during routine tasks (for example, budgeting or communication tone), revealing where AI systems align with or diverge from human expectations [EMNLP 2025, Main].
2. Monetization Abuse in Platform Ecosystems
We investigate how malicious actors exploit sociotechnical platforms such as content creation systems [USENIX 2022] and e-commerce marketplaces [NDSS 2024, IEEE S&P Magazine 2025] to generate profit while harming legitimate users. Using digital ethnography and mixed-methods analysis, we uncover the tools, tactics, and incentive structures that enable abusive behavior, ranging from exploitative content monetization to deceptive business models that target unsuspecting users.
3. Impact of Abuse on End Users
We study how toxic content [USENIX 2024], manipulative dark patterns [USENIX 2024, USENIX 2025], and online scams [IEEE S&P 2026] affect the security and privacy decisions of different user groups, including both everyday users and vulnerable communities. Through user-centered data collection and qualitative inquiry, we examine how users' experiences, risk exposure, and resource constraints influence whether protective mechanisms are usable, trusted, or feasible.
