Commit 376c96f

paper update
1 parent 218f07c commit 376c96f

File tree: 3 files changed (+5 −5 lines)

_includes/team.html
Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ <h3>Principal Investigator</h3>
 
 <p>
 <img style="padding-right: 15px;" src="/docs/assets/dylan.png" width="160" height="160" alt="Dylan Hadfield-Menell">
-<strong>Dylan Hadfield-Menell</strong>>, <a href="dhm@csail.mit.edu">dhm@csail.mit.edu</a>, <a href="http://people.csail.mit.edu/dhm/">Website</a>
+<strong>Dylan Hadfield-Menell</strong>, <a href="dhm@csail.mit.edu">dhm@csail.mit.edu</a>, <a href="http://people.csail.mit.edu/dhm/">Website</a>
 <br>
 Dylan is an assistant professor on the faculty of Artificial Intelligence and Decision-Making in the EECS Department and Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). His research focuses on the problem of agent alignment: the challenge of identifying behaviors that are consistent with the goals of another actor or group of actors. His work aims to identify algorithmic solutions to alignment problems that arise from groups of AI systems, principal-agent pairs (i.e., human-robot teams), and societal oversight of ML systems.
 </p>

index.markdown
Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ permalink: /
 
 ## Home
 
-Our focus is to better align the development of AI with human interests and values headed by [Dylan Hadfield-Menell](https://engineering.mit.edu/faculty/dylan-hadfield-menell/). We are working in the [Embodied Entelligence Group](https://ei.csail.mit.edu/) at [MIT CSAIL](https://www.csail.mit.edu/) to build the conceptual understanding and algorithmic techniques that will be needed for more trustworthy AI. Our research is interdisciplinary but emphasizes how humans and AI systems interact in the contexts of value learning, incentives, recommendation, and debugging.
+Our focus is to better align the development of AI with human interests and values headed by [Dylan Hadfield-Menell](http://people.csail.mit.edu/dhm/). We are working in the [Embodied Entelligence Group](https://ei.csail.mit.edu/) at [MIT CSAIL](https://www.csail.mit.edu/) to build the conceptual understanding and algorithmic techniques that will be needed for more trustworthy AI. Our research is interdisciplinary but emphasizes how humans and AI systems interact in the contexts of value learning, incentives, recommendation, and debugging.
 
 
 
research.markdown
Lines changed: 3 additions & 3 deletions

@@ -7,10 +7,10 @@ permalink: /research/
 
 ## Research
 
-Find us on [Github](https://github.com/Algorithmic-Alignment-Lab).
+Find us on [Github](https://github.com/Algorithmic-Alignment-Lab).
 
-<!--- #### 2022
+###2022
 
-Sutton, R. S., & Barto, A. G. (2018). [Reinforcement learning: An introduction.](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf) MIT press. [BibTeX](https://scholar.googleusercontent.com/scholar.bib?q=info:t8N5xiW9bXoJ:scholar.google.com/&output=citation&scisdr=CgWTY31kEPyMgqtR1ow:AAGBfm0AAAAAYiBXzowXsFVJNPzBvJ5wFyFoi8IN8GkG&scisig=AAGBfm0AAAAAYiBXztuv9gUZtgxBLqD3ECitmd9rQZAc&scisf=4&ct=citation&cd=-1&hl=en) --->
+Casper, S., Nadeau, M., Hadfield-Menell, D, & Kreiman, G (2022). [Robust Feature-Level Adversaries are Interpretability Tools](https://arxiv.org/abs/2110.03605). [BibTeX](https://dblp.uni-trier.de/rec/journals/corr/abs-2110-03605.html?view=bibtex)
 
 