Yash Maurya

Privacy Engineer

I am a graduate student in the Privacy Engineering program at Carnegie Mellon University, where I pursue my passion for building privacy-conscious AI systems and ensuring the ethical use of data. My mission is to design robust privacy systems for the greater good of society.

This summer, as an AI Governance intern at the BNY AI Hub, I helped develop Eliza, BNY's AI platform serving 15,000+ users. I implemented LLM safety measures, built benchmarking pipelines for LLMs and RAG agents, and contributed to governance validation. This work contributed to Eliza being featured in Fortune magazine.

My key interests are Privacy-Preserving Machine Learning, Fairness, Federated Learning, Differential Privacy, and Responsible AI.

Here's my resume in case you need it.

Want to chat? Send me an email or message me on LinkedIn!

Privacy is not something that I'm merely entitled to, it's an absolute prerequisite.


Research

* indicates equal contribution

2024

Beyond the Accept Button: How Information and Control Shape Data Sharing and AI Engagement
Ibrahim Chhaya*, Yash Maurya*, Zuofei Hong*, Limin Ge*
Sponsored by Meta - MSIT-PE Capstone Report 2024
TL;DR: Studied how consent flow design impacts AI engagement and data sharing, examining length and control options in social media data sharing.
Position: LLM Unlearning Benchmarks are Weak Measures of Progress
Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, Virginia Smith
IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2024
TL;DR: Discussion of the current state and limitations of LLM unlearning benchmarks, with a focus on forget/retain set methods.
Designing a Benefit Assessment Protocol for AI Systems
Rachel Kim*, Yash Maurya*, Goutam Mukku*
Course Project for the Responsible AI course (10-735) at CMU
TL;DR: A structured protocol for systematically assessing AI benefits to enable more comprehensive AI evaluation.
Unified Locational Differential Privacy Framework
Aman Priyanshu*, Yash Maurya*, Suriya Ganesh*, Vy Tran*
arXiv preprint arXiv:2405.03903
TL;DR: A privacy framework for aggregating sensitive location-based data while protecting individual privacy through differential privacy mechanisms.
AI Governance and Accountability: An Analysis of Anthropic's Claude
Aman Priyanshu*, Yash Maurya*, Zuofei Hong*
arXiv preprint arXiv:2407.01557
TL;DR: Case study on Anthropic's Claude using AI governance and accountability frameworks, examining compliance with NIST and EU AI Act standards.
Guardrail baselines for unlearning in LLMs
Pratiksha Thaker, Yash Maurya, Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
ICLR 2024 Workshop on Secure and Trustworthy Large Language Models
TL;DR: Simple guardrails (prompting and filtering) match finetuning's effectiveness for unlearning in LLMs, challenging current evaluation metrics.
Xinran Alexandra Li, Yu-Ju Yang, Yash Maurya, Tian Wang, Hana Habib, Norman Sadeh, Lorrie Faith Cranor
Twentieth Symposium on Usable Privacy and Security (SOUPS 2024 Posters) & SOUPS 2024 Societal & User-Centered Privacy in AI Workshop (SUPA 2024)
TL;DR: UsersFirst taxonomy outperforms LINDDUN PRO in detecting privacy notice and choice threats in user study.
Tian Wang, Xinran Alexandra Li, Miguel Rivera-Lanas, Yash Maurya, Hana Habib, Lorrie Faith Cranor, Norman Sadeh
Twentieth Symposium on Usable Privacy and Security (SOUPS 2024 Posters) & SOUPS 2024 Workshop on Privacy Threat Modeling (WPTM 2024)
TL;DR: UsersFirst: a user-centric framework for identifying and mitigating privacy notice and choice threats, extending beyond LINDDUN.
Through the Lens of LLMs: Unveiling Differential Privacy Challenges
Aman Priyanshu*, Yash Maurya*, Vy Tran*
2024 USENIX Conference on Privacy Engineering Practice and Respect (PEPR '24)
TL;DR: LLMs demonstrate stronger privacy attacks on Google's Topics API, bypassing differential privacy safeguards.
Is it Worth Storing Historical Gradients?
Joong Ho Choi*, Yingxin Liu*, Yash Maurya*
Course Project for the Federated and Collaborative Learning course (10-719) at CMU
TL;DR: Current weights beat historical gradients for detecting FL attacks, saving storage and enhancing privacy.

2022

Federated Learning for Colorectal Cancer Prediction
Yash Maurya*, Prahaladh Chandrahasan*, G Poornalatha
2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), 1-5
TL;DR: Federated learning enables privacy-preserving colorectal cancer prediction across hospitals with centralized-level accuracy.

2021

Rakshit Naidu, Haofan Wang, Soumya Snigdha Kundu, Ankita Ghosh, Yash Maurya, Shamanth R Nayak K, Joy Michael
Responsible Computer Vision (RCV) Workshop at CVPR 2021
TL;DR: A slightly modified version of IS-CAM (described below).

2020

IS-CAM: Integrated Score-CAM for axiomatic-based explanations
Rakshit Naidu, Ankita Ghosh, Yash Maurya, Shamanth R Nayak K, Soumya Snigdha Kundu
arXiv preprint arXiv:2010.03023
TL;DR: Enhanced CNN interpretability through IS-CAM, integrating Score-CAM to produce sharper attribution maps.