Jovana's interests lie broadly at the intersection of probabilistic inference, social cognition, and human-robot interaction. Her research focuses on building interactive AI agents that 1) effectively learn from human input, and 2) understand and act in accordance with human preferences, intentions, and values.
- He is broadly interested in reinforcement learning topics including AI alignment, neurosymbolic AI, and multi-agent RL. Before the Algorithmic Alignment Group, Phillip was an undergraduate researcher at the University of Toronto, advised by Prof. Sheila McIlraith. His main hobbies include reading, playing piano, and composing music.
+ Phillip is broadly interested in reinforcement learning topics including AI alignment, neurosymbolic AI, and multi-agent RL. Before the Algorithmic Alignment Group, Phillip was an undergraduate researcher at the University of Toronto, advised by Prof. Sheila McIlraith. His main hobbies include reading, playing piano, and composing music.
Dana is broadly interested in understanding how the human normative system works and what enables cooperation. Through reverse-engineering the mechanisms of human collective intelligence, she hopes to contribute to efforts in designing and facilitating desirable interactions in our society. She draws insights from economics, anthropology, cognitive science, reinforcement learning, and social computing.
Olivia is a master's student interested in AI and robotics who gets most excited seeing algorithms come to life on physical robots. Prior to joining the Algorithmic Alignment Group, she completed her undergraduate degree at MIT in EECS and worked on soft robots in the Distributed Robotics Lab. Like any good New Englander, her hobbies include shellfishing, cycling, and maple syrup making.
Rui-Jie is an S.M. student in Technology and Policy doing research on regulatory mechanisms for AI. She is broadly interested in the societal impacts of AI and the organizational and policy incentives surrounding its development. Previously, she completed a joint BA in computer science and math from Scripps College as an off-campus major at Harvey Mudd.
- {"headline":"Algorithmic Alignment Group","@type":"WebPage","url":"http://localhost:4000/404.html","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
+ {"headline":"Algorithmic Alignment Group","@type":"WebPage","url":"https://thestephencasper.github.io/404.html","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  <!-- start custom head snippets, customize with your own _includes/head-custom.html file -->
- {"headline":"Algorithmic Alignment Group","@type":"WebPage","url":"http://localhost:4000/contact/","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
+ {"headline":"Algorithmic Alignment Group","@type":"WebPage","url":"https://thestephencasper.github.io/contact/","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  <!-- start custom head snippets, customize with your own _includes/head-custom.html file -->
@@ -78,7 +78,7 @@ <h2 id="contact">Contact</h2>
  <p>Find us at the <a href="https://www.csail.mit.edu/about/stata-center">Stata Center</a> (32 Vassar St, Cambridge, MA 02139) in workspace 32-33X.</p>
- <p>For individuals’ contact, see the team page. If you’d like to work with us, please reach out! For potential UROP or Ph.D. students, feel free to email Dylan.</p>
+ <p>For individuals’ contact, see the team page.</p>
_site/feed.xml: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
- <?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="http://localhost:4000/feed.xml" rel="self" type="application/atom+xml" /><link href="http://localhost:4000/" rel="alternate" type="text/html" /><updated>2022-05-31T00:05:48-04:00</updated><id>http://localhost:4000/feed.xml</id><title type="html">Algorithmic Alignment Group</title><subtitle>Researching frameworks for human-aligned AI @ MIT CSAIL.</subtitle><entry><title type="html">Welcome to Jekyll!</title><link href="http://localhost:4000/jekyll/update/2022/02/19/welcome-to-jekyll.html" rel="alternate" type="text/html" title="Welcome to Jekyll!" /><published>2022-02-19T23:49:21-05:00</published><updated>2022-02-19T23:49:21-05:00</updated><id>http://localhost:4000/jekyll/update/2022/02/19/welcome-to-jekyll</id><content type="html" xml:base="http://localhost:4000/jekyll/update/2022/02/19/welcome-to-jekyll.html"><p>You’ll find this post in your <code class="language-plaintext highlighter-rouge">_posts</code> directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run <code class="language-plaintext highlighter-rouge">jekyll serve</code>, which launches a web server and auto-regenerates your site when a file is updated.</p>
+ <?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="https://thestephencasper.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://thestephencasper.github.io/" rel="alternate" type="text/html" /><updated>2022-08-23T20:40:43-04:00</updated><id>https://thestephencasper.github.io/feed.xml</id><title type="html">Algorithmic Alignment Group</title><subtitle>Researching frameworks for human-aligned AI @ MIT CSAIL.</subtitle><entry><title type="html">Welcome to Jekyll!</title><link href="https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll.html" rel="alternate" type="text/html" title="Welcome to Jekyll!" /><published>2022-02-19T23:49:21-05:00</published><updated>2022-02-19T23:49:21-05:00</updated><id>https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll</id><content type="html" xml:base="https://thestephencasper.github.io/jekyll/update/2022/02/19/welcome-to-jekyll.html"><p>You’ll find this post in your <code class="language-plaintext highlighter-rouge">_posts</code> directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run <code class="language-plaintext highlighter-rouge">jekyll serve</code>, which launches a web server and auto-regenerates your site when a file is updated.</p>

  <p>Jekyll requires blog post files to be named according to the following format:</p>
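The diff's context line above is cut off before the format itself; Jekyll's documented post-naming convention is `YEAR-MONTH-DAY-title.MARKUP` (e.g. `2022-02-19-welcome-to-jekyll.md`). A minimal sketch checking a filename against that convention — the regex here is an illustrative assumption, not Jekyll's own validator:

```shell
# Jekyll posts live in _posts/ and are named YEAR-MONTH-DAY-title.MARKUP.
fname="2022-02-19-welcome-to-jekyll.md"

# Check for a zero-padded date prefix and a Markdown extension.
if echo "$fname" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}-.+\.(md|markdown)$'; then
  echo "valid post filename"
else
  echo "invalid post filename"
fi
```

This prints `valid post filename` for the example above; a file like `welcome.md` would fail the check because it lacks the date prefix.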
- {"name":"Algorithmic Alignment Group","headline":"Algorithmic Alignment Group","@type":"WebSite","url":"http://localhost:4000/","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
+ {"name":"Algorithmic Alignment Group","headline":"Algorithmic Alignment Group","@type":"WebSite","url":"https://thestephencasper.github.io/","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  <!-- start custom head snippets, customize with your own _includes/head-custom.html file -->
@@ -76,7 +76,7 @@ <h3 id="project_tagline">Researching frameworks for human-aligned AI @ MIT CSAIL
  <h2 id="home">Home</h2>

- <p>Our focus is to better align the development of AI with human interests and values headed by <a href="https://engineering.mit.edu/faculty/dylan-hadfield-menell/">Dylan Hadfield-Menell</a>. We are working in the <a href="https://ei.csail.mit.edu/">Embodied Entelligence Group</a> at <a href="https://www.csail.mit.edu/">MIT CSAIL</a> to build the conceptual understanding and algorithmic techniques that will be needed for more trustworthy AI. Our research is interdisciplinary but emphasizes how humans and AI systems interact in the contexts of value learning, incentives, recommendation, and debugging.</p>
+ <p>Our group, headed by <a href="http://people.csail.mit.edu/dhm/">Dylan Hadfield-Menell</a>, works to better align the development of AI with human interests and values. We are working in the <a href="https://ei.csail.mit.edu/">Embodied Intelligence Group</a> at <a href="https://www.csail.mit.edu/">MIT CSAIL</a> to build the conceptual understanding and algorithmic techniques that will be needed for more trustworthy AI. Our research is interdisciplinary but emphasizes how humans and AI systems interact in the contexts of value learning, incentives, recommendation, and debugging.</p>
- {"headline":"Algorithmic Alignment Group","@type":"WebPage","url":"http://localhost:4000/research/","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
+ {"headline":"Algorithmic Alignment Group","@type":"WebPage","url":"https://thestephencasper.github.io/research/","description":"Researching frameworks for human-aligned AI @ MIT CSAIL.","@context":"https://schema.org"}</script>
  <!-- End Jekyll SEO tag -->

  <!-- start custom head snippets, customize with your own _includes/head-custom.html file -->
  <p>Find us on <a href="https://github.com/Algorithmic-Alignment-Lab">GitHub</a>.</p>

- <!--- #### 2022
+ <h3 id="2022">2022</h3>

- Sutton, R. S., & Barto, A. G. (2018). [Reinforcement learning: An introduction.](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf) MIT press. [BibTeX](https://scholar.googleusercontent.com/scholar.bib?q=info:t8N5xiW9bXoJ:scholar.google.com/&output=citation&scisdr=CgWTY31kEPyMgqtR1ow:AAGBfm0AAAAAYiBXzowXsFVJNPzBvJ5wFyFoi8IN8GkG&scisig=AAGBfm0AAAAAYiBXztuv9gUZtgxBLqD3ECitmd9rQZAc&scisf=4&ct=citation&cd=-1&hl=en) --->
+ <p>Christoffersen, P. J. K., Haupt, A. A., & Hadfield-Menell, D. (2022). <a href="https://arxiv.org/abs/2208.10469">Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL</a>.</p>
+
+ <p>Yew, R. J. and Hadfield-Menell, D. (2022). <a href="https://dl.acm.org/doi/10.1145/3514094.3534130">A Penalty Default Approach to Preemptive Harm Disclosure and Mitigation for AI Systems</a>. In Proceedings of the 5th AAAI/ACM Conference on AI, Ethics, and Society. <a href="https://scholar.googleusercontent.com/scholar.bib?q=info:Zy8cJGbw9QUJ:scholar.google.com/&output=citation&scisdr=CgWTYX5AEPyMg45o47g:AAGBfm0AAAAAYwVu-7hfL7sgjbex8wF3U-g2nDKsY20o&scisig=AAGBfm0AAAAAYwVu-y80HvtCEX2eXNg2NM7Ki7kE-BiC&scisf=4&ct=citation&cd=-1&hl=en">BibTeX</a></p>
+
+ <p>Casper, S., Nadeau, M., Hadfield-Menell, D., & Kreiman, G. (2022). <a href="https://arxiv.org/abs/2110.03605">Robust Feature-Level Adversaries are Interpretability Tools</a>. <a href="https://dblp.uni-trier.de/rec/journals/corr/abs-2110-03605.html?view=bibtex">BibTeX</a></p>