How can we design robots to collaborate with people in messy human environments? When they drive alongside us, help cook our food, and rescue our loved ones during disasters, robots must understand the implicit human contracts within our interactions. Robots must build trust through explainable AI: they must be able to quickly and clearly communicate their decision-making policies to a human collaborator. Toward this goal, my current research explores how we can make logical robot policies quick and easy to interpret.
I am a PhD student in EECS at MIT, where I joined in Fall 2018. Before starting at MIT, I spent two years at Google as an Associate Product Manager working on Augmented Reality and Search. I graduated from Harvard College in 2016 with an A.B. in Computer Science. I am a recipient of the NSF Graduate Research Fellowship and the MIT Jacobs Presidential Fellowship.