Andy is an interdisciplinary Ph.D. candidate at the Schwarzman College of Computing. He uses tools from microeconomic theory to understand multi-agent systems, recommendation engines, and automated pricing tools in deployment, and to propose ways to mitigate undesirable consequences of seemingly innocuous algorithmic choices.
Mehul’s research broadly aims to improve multi-agent reinforcement learning systems using techniques and ideas from model-based RL, intrinsic motivation, curriculum learning, and reward design. Prior to joining MIT, he worked on developing general-purpose curriculum learning methods for reinforcement learning agents and on applying reinforcement learning to domains such as multi-agent pathfinding and multi-agent traffic signal control. His hobbies include reading and playing soccer.
Phillip is broadly interested in reinforcement learning topics including AI alignment, neurosymbolic AI, and multi-agent RL. Before the Algorithmic Alignment Group, Phillip was an undergraduate researcher at the University of Toronto, advised by Prof. Sheila McIlraith. His main hobbies include reading, playing piano, and composing music.
Cas works on tools for trustworthy, safe AI. His research emphasizes interpretability, adversaries, and automated diagnostic tools. Before his Ph.D., he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. He is also a member of the Effective Altruism community. His hobbies include biking, growing plants, and keeping insects.
Stewart’s research interests center on aligning AI systems — such as reinforcement learning systems and large language models — with human goals. He is also interested in the theoretical foundations of alignment and the intersection of machine learning and causality. Outside of work, he enjoys playing piano, backpacking, salsa dancing, and philosophy.
Rui-Jie is an S.M. student in Technology and Policy. Her research interests are in the human-centered and legal aspects of computation, both in the design of regulation for emerging technologies and in the operationalization of legal values for technical systems. In 2021, she received a joint B.A. in computer science and mathematics from Scripps College as an off-campus student at Harvey Mudd College.
Taylor Lynn is an S.M. candidate in Technology and Policy. Prior to her current degree, she obtained a Bachelor of Software Engineering with a minor in political science from McGill University in Montréal, Canada. Her research interests include the effectiveness of governance surrounding large generative models, quantifying the societal impact of AI, and, more generally, the regulatory frameworks that operate in the technology space.
Julian is an undergraduate student majoring in both Computer Science and Philosophy. His interests include evaluating language models' understanding of theories of human conversation, reinforcement learning from qualitative, linguistic human feedback, and the potential for AI research to inform how we think about morality.