Dylan is an assistant professor in the faculty of Artificial Intelligence and Decision-Making in the EECS Department and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). His research focuses on the problem of agent alignment: the challenge of identifying behaviors that are consistent with the goals of another actor or group of actors. His work aims to identify algorithmic solutions to alignment problems that arise from groups of AI systems, principal-agent pairs (i.e., human-robot teams), and societal oversight of ML systems.
Andy is an interdisciplinary Ph.D. candidate with the Schwarzman College of Computing. He uses tools from microeconomic theory to understand multi-agent systems, recommendation engines, and automatic pricing tools in deployment, and to propose ways to mitigate the undesirable consequences of seemingly innocuous algorithmic choices.
Jovana's interests lie broadly at the intersection of probabilistic inference, social cognition, and human-robot interaction. Her research focuses on building interactive AI agents that 1) effectively learn from human input, and 2) understand and act in accordance with human preferences, intentions, and values.
Mehul’s research broadly aims to improve multi-agent reinforcement learning systems using techniques and ideas from model-based RL, intrinsic motivation, curriculum learning, and reward design. Prior to joining MIT, he worked on developing general-purpose curriculum learning methods for reinforcement learning agents and on applying reinforcement learning to domains such as multi-agent pathfinding and multi-agent traffic signal control. His hobbies include reading and playing soccer.
Phillip is broadly interested in reinforcement learning topics including AI alignment, neurosymbolic AI, and multi-agent RL. Before the Algorithmic Alignment Group, Phillip was an undergraduate researcher at the University of Toronto, advised by Prof. Sheila McIlraith. His main hobbies include reading, playing piano, and composing music.
Cas works on tools for trustworthy, safe AI. His research emphasizes interpretability, adversaries, and automated diagnostic/debugging tools. Before his Ph.D., he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His hobbies include biking, growing plants, and keeping insects.
Stewart’s research interests center on aligning AI systems — such as reinforcement learning systems and large language models — with human goals. He is also interested in the theoretical foundations of alignment and the intersection of machine learning and causality. Outside of work, he enjoys playing piano, backpacking, salsa dancing, and philosophy.
Dana is broadly interested in understanding how the human normative system works and what enables cooperation. Through reverse-engineering the mechanisms of human collective intelligence, she hopes to contribute to efforts in designing and facilitating desirable interactions in our society. She draws insights from economics, anthropology, cognitive science, reinforcement learning, and social computing.
Julian is a Master of Engineering (MEng) student working on alignment for large language models. Julian's interests include reward modeling for reasoning, evaluating language models' understanding of human conversation and intent, and the potential for AI research to help inform the way we think about morality.
Olivia is a masters student interested in AI and robotics who gets most excited seeing algorithms come to life on physical robots. Prior to joining the Algorithmic Alignment group, she did her undergraduate at MIT in EECS and worked on soft robots in the Distributed Robotics Lab. Like any good New Englander, her hobbies include shellfishing, cycling, and maple syrup making.
Rui-Jie is an S.M. student in Technology and Policy. Her research interests are in the human-centered and legal aspects of computation, both in the design of regulation for emerging technologies as well as in the operationalization of legal values for technical systems. In 2021, she received a joint B.A. in computer science and mathematics from Scripps College as an off-campus student at Harvey Mudd College.
Prajna is an S.M. student in the Technology and Policy Program and EECS. Her research interests are broadly in the evaluation of algorithmic systems from both technical and regulatory perspectives, algorithmic fairness, and tools that facilitate the development of safer and more trustworthy AI. Prior to MIT, Prajna graduated from NYU Abu Dhabi in 2020 and was awarded a post-graduate research fellowship at NYU Abu Dhabi, where she investigated bias propagation in recommender systems.
Taylor Lynn is an S.M. candidate in technology and policy. Prior to her current degree, she obtained a bachelor of software engineering with a minor in political science from McGill University in Montréal, Canada. Her research interests include the effectiveness of governance surrounding large generative models, quantifying the societal impact of AI, and, more generally, the regulatory frameworks that function in the technology space.
Lena is a first-year MEng student whose research focuses on human-computer interfaces and machine learning. Her current project is focused on making effective data preparation scalable and inexpensive, in the hopes of mitigating the biased model results commonly seen in big-data predictions.
Max is researching AI governance. His current focus is on open source machine learning software and how it shapes AI research. He is especially inspired by the work of economist Elinor Ostrom and draws from fields ranging from the philosophy of science to the economics of innovation. Previously, he has researched computational cognitive science, worked at the White House Office of Science and Technology Policy, and published on AI policy at the Center for Security and Emerging Technology. He has a B.A. from MIT in computer science and loves bossa nova, Tibetan mythology and the history of technology.
_site/feed.xml: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.2">Jekyll</generator><link href="https://thestephencasper.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://thestephencasper.github.io/" rel="alternate" type="text/html" /><updated>2023-02-23T15:48:38-05:00</updated><id>https://thestephencasper.github.io/feed.xml</id><title type="html">Algorithmic Alignment Group</title><subtitle>Researching frameworks for human-aligned AI @ MIT CSAIL.</subtitle><entry><title type="html">Welcome to Jekyll!</title><link href="https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll.html" rel="alternate" type="text/html" title="Welcome to Jekyll!" /><published>2022-02-19T23:49:21-05:00</published><updated>2022-02-19T23:49:21-05:00</updated><id>https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll</id><content type="html" xml:base="https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll.html"><![CDATA[<p>You’ll find this post in your <code class="language-plaintext highlighter-rouge">_posts</code> directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run <code class="language-plaintext highlighter-rouge">jekyll serve</code>, which launches a web server and auto-regenerates your site when a file is updated.</p>
+<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.2">Jekyll</generator><link href="https://thestephencasper.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://thestephencasper.github.io/" rel="alternate" type="text/html" /><updated>2023-03-21T01:14:40-04:00</updated><id>https://thestephencasper.github.io/feed.xml</id><title type="html">Algorithmic Alignment Group</title><subtitle>Researching frameworks for human-aligned AI @ MIT CSAIL.</subtitle><entry><title type="html">Welcome to Jekyll!</title><link href="https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll.html" rel="alternate" type="text/html" title="Welcome to Jekyll!" /><published>2022-02-19T23:49:21-05:00</published><updated>2022-02-19T23:49:21-05:00</updated><id>https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll</id><content type="html" xml:base="https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll.html"><![CDATA[<p>You’ll find this post in your <code class="language-plaintext highlighter-rouge">_posts</code> directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run <code class="language-plaintext highlighter-rouge">jekyll serve</code>, which launches a web server and auto-regenerates your site when a file is updated.</p>
<p>Jekyll requires blog post files to be named according to the following format:</p>
_site/research/index.html: 3 additions & 1 deletion
@@ -79,7 +79,9 @@ <h2 id="research">Research</h2>
<h3 id="2023">2023</h3>
-<p>Casper, S.<em>, Li, Y.</em>, Li, J.<em>, Bu, T.</em>, Zhang, K.*, Hadfield-Menell, D., (2022). <a href="https://arxiv.org/abs/2302.10894">Benchmarking Interpretability Tools for Deep Neural Networks.</a> arXiv preprint arXiv:2302.10894</p>
+<p>Casper, S., Li, Y., Li, J., Bu, T., Zhang, K., Hadfield-Menell, D., (2023). <a href="https://arxiv.org/abs/2302.10894">Benchmarking Interpretability Tools for Deep Neural Networks.</a> arXiv preprint arXiv:2302.10894</p>
+
+<p>Haupt, A., Hadfield-Menell, D., & Podimata, C. (2023). <a href="https://arxiv.org/abs/2302.06559">Recommending to Strategic Users.</a> arXiv preprint arXiv:2302.06559.</p>