From 16a0cc3063ec3daf97d64fc7ba4b459bc51b97f1 Mon Sep 17 00:00:00 2001
From: Maximilian Puelma Touzel
Date: Wed, 18 Feb 2026 14:33:16 -0500
Subject: [PATCH] Update index.html
---
stamina/index.html | 212 +++++++++++++++++++++++++++++++++++----------
1 file changed, 165 insertions(+), 47 deletions(-)
diff --git a/stamina/index.html b/stamina/index.html
index f61c5d5..f6c435f 100644
--- a/stamina/index.html
+++ b/stamina/index.html
@@ -50,7 +50,7 @@
Upcoming Talks
Past Talks
Organizers
- Back
+ ComplexDataLab
@@ -125,7 +125,7 @@ About STAMINA
- If you're interested in how platforms shape behavior, how AI extends platform affordances, or how data-driven approaches can improve agent/platform design and governance, STAMINA offers an interdisciplinary forum. We welcome participants from computer science, social sciences, psychology, policy, and related fields.
+ If you're interested in how platforms shape behavior, how AI extends platform affordances, or how data-driven approaches can improve agent/platform design and governance, STAMINA offers an interdisciplinary forum. We welcome participants from computer science, social sciences, psychology, policy, and related fields, as well as industry and start-up researchers.
@@ -133,10 +133,7 @@
About STAMINA
- This Working Group emerged from the LLM-based Social Simulation Workshop at COLM in Montreal.
-
-
- It is part of vision for a vibrant research community, alongside norms around research practise outlined in the pre-print, Time to Close The Validation Gap in LLM Social Simulations.
+ This Working Group emerged from discussions at the LLM-based Social Simulation Workshop at COLM as a way to grow a vibrant research community with productive research norms, e.g. as outlined in the pre-print, Time to Close The Validation Gap in LLM Social Simulations, by members of the Complex Data Lab.
@@ -157,50 +154,134 @@ About STAMINA
Upcoming Talks
-
- 2026/02/17
+
-
+ Presenter: [PRESENTER NAME], [PRESENTER AFFILIATION]
+
Speaker Bio
-
+
- Gian Marco Orlando is currently a PhD Student at the University of Naples Federico II. He earned his Master’s Degree in Computer Engineering from the University of Naples Federico II, graduating with honors. His thesis highlights his expertise in the intersection of Artificial Intelligence and Social Network Analysis. His research interests lie in Social Network Analysis, Agent-Based Modeling and Big Data Analytics.
-
-Jinyi is a second-year CS PhD student co-advised by Dr. Emilio Ferrara and Dr. Luca Luceri. Her research lies in the intersection of computer science and social science, recently focusing on large-scale agentic simulations of human behavior, measuring and modeling collective dynamics in social networks, and empirical studies on AI and the future of work.
-
-Mahdi Saeedi is currently exploring large-scale LLM simulations because he is fascinated by understanding how these models actually work under the hood. His background spans both the theoretical foundations and hands-on engineering of generative AI systems, which has been great preparation for this research. He have also spent time thinking about how people interact with AI through thoughtful interface design, since he believes making these tools intuitive and accessible is just as important as the underlying technology.
+ [BIO TEXT]
+
+
+
+

+

+

+

+
+ Abstract
+
+
+
+ -->
+
2026/03/03
+
+ AI and the Future of Science
+
+ Presenter: Martin Weiss, Tiptree Systems
+
+ Speaker Bio
+
+
+
+ Martin Weiss is Co-Founder of Tiptree Systems, a startup building AI agents that help ML researchers find, create, and share knowledge more efficiently. Tiptree is deployed to researchers across top-tier institutes including Mila, ELLIS, MIT, and others. Martin holds a PhD in AI from Mila, where he studied under Hugo Larochelle and Chris Pal. Before his PhD, he was an early employee at YesGraph, a social graph startup acquired by Lyft.
+
+
+
+
+
+
+
+ Abstract
+
+
+
+ This talk examines three converging crises. First, the decoupling of control from comprehension — we can increasingly predict and manipulate systems without understanding why they work. Second, the collapse of the generator-verifier gap — AI makes it trivial to produce the aesthetics of deep thought. This makes peer review more difficult because we can no longer rely on easy-to-verify signals of work quality. Third, the credit assignment gap — our academic reward systems optimize for publication metrics, not the increase in understanding that a new paper produces.
+
+
+
-
-
-

-

+
+
+
2026/03/10
+
+ Testing and Improving Multi-Agent LLM Cooperation
+
+ Presenter: Zhijing Jin, University of Toronto
+
+ Speaker Bio
+
+
+
+ Zhijing Jin (she/her) is an Assistant Professor at the University of Toronto and a Research Scientist at the Max Planck Institute. She serves as a CIFAR AI Chair, an ELLIS advisor, and a faculty member at the Vector Institute and the Schwartz Reisman Institute. She co-chairs the ACL Ethics Committee and the ACL Year-Round Mentorship program. Her research focuses on Causal Reasoning with LLMs and AI Safety in Multi-Agent LLMs. She has published over 80 papers and has received the ELLIS PhD Award, three Rising Star awards, and two Best Paper awards at NeurIPS 2024 Workshops.
+
+
+
+
+
+
+
+ Abstract
+
+
+
+ While progress has been made in evaluating single-agent LLMs for persona modeling, the behavior of these models within multi-agent groups remains underexplored. This presentation outlines a research series dedicated to closing this gap by testing LLM cooperation through autonomous social simulations. Specifically, we ask: what happens when personas are tasked to interact and cooperate?
+
+ To answer this, we introduce a suite of simulation environments (GovSim, MoralSim, and SanctSim) designed to stress-test persona interaction. These environments simulate high-stakes scenarios, such as the tragedy of the commons and ethical trade-offs, allowing us to investigate whether simulated societies can autonomously negotiate social order and how personas with differing ethical constraints navigate social dilemmas.
+
+ Our findings highlight implications for persona modeling. We show that agents exhibit a functional "theory of mind," capable of inferring the identities of their interlocutors and strategically adapting their behavior, sometimes exploiting specific model vulnerabilities. Furthermore, we discuss a counterintuitive phenomenon where advanced reasoning capabilities lead to exploitative behaviors that humans typically avoid, highlighting a significant misalignment between agent optimization and human social norms.
+
+
+
-
-
+
+
+ 2026/03/24
+
+ AI and the Knowledge Commons
+
+ Presenter: Marc-Antoine Parent, Solutions Conversence inc.
+
+ Speaker Bio
+
+
+
+ Marc-Antoine has worked in computational linguistics, knowledge representation, and more recently has focused on tools for augmented collective intelligence. He's especially interested in how to represent emergent and disputed knowledge.
+
https://conversence.com
+
https://hyperknowledge.org
+
+
+
+
+
+
Abstract
-
+
- Generative agents are rapidly advancing in sophistication, raising urgent questions about how they might coordinate when deployed in online ecosystems. This is particularly consequential in information operations (IOs), influence campaigns that aim to manipulate public opinion on social media. While traditional IOs have been orchestrated by human operators and relied on manually crafted tactics, agentic AI promises to make campaigns more automated, adaptive, and difficult to detect. This work presents the first systematic study of emergent coordination among generative agents in simulated IO campaigns. Using generative agent-based modeling, we instantiate IO and organic agents in a simulated environment and evaluate coordination across operational regimes, from simple goal alignment to team knowledge and collective decision-making. As operational regimes become more structured, IO networks become denser and more clustered, interactions more reciprocal and positive, narratives more homogeneous, amplification more synchronized, and hashtag adoption faster and more sustained.
-Remarkably, simply revealing to agents which other agents share their goals can produce coordination levels nearly equivalent to those achieved through explicit deliberation and collective voting.
-Overall, we show that generative agents, even without human guidance, can reproduce coordination strategies characteristic of real-world IOs, underscoring the societal risks posed by increasingly automated, self-organizing IOs.
+ We present a model of the formation of a knowledge commons for democratic, collective decision-making in society, and explain how generative AI disrupts the formation of this knowledge commons. We will also present ways to reinforce the collective processes around a knowledge commons, including the possible contributions of hybrid AI.
+
+ This is based on a paper that was presented at the IJCAI'25 democracy and AI workshop.
-
@@ -213,7 +294,6 @@ 2026/02/17
SECTION: PAST TALKS (currently commented out)
Uncomment this section when you have past talks to list
============================================================ -->
-
+
+
+ Abstract
+
+
+
+ Generative agents are rapidly advancing in sophistication, raising urgent questions about how they might coordinate when deployed in online ecosystems. This is particularly consequential in information operations (IOs), influence campaigns that aim to manipulate public opinion on social media. While traditional IOs have been orchestrated by human operators and relied on manually crafted tactics, agentic AI promises to make campaigns more automated, adaptive, and difficult to detect. This work presents the first systematic study of emergent coordination among generative agents in simulated IO campaigns. Using generative agent-based modeling, we instantiate IO and organic agents in a simulated environment and evaluate coordination across operational regimes, from simple goal alignment to team knowledge and collective decision-making. As operational regimes become more structured, IO networks become denser and more clustered, interactions more reciprocal and positive, narratives more homogeneous, amplification more synchronized, and hashtag adoption faster and more sustained.
+ Remarkably, simply revealing to agents which other agents share their goals can produce coordination levels nearly equivalent to those achieved through explicit deliberation and collective voting.
+ Overall, we show that generative agents, even without human guidance, can reproduce coordination strategies characteristic of real-world IOs, underscoring the societal risks posed by increasingly automated, self-organizing IOs.
+
+
@@ -235,13 +344,13 @@ [Date]
- -->
+
-
+
-
+
-
+
@@ -318,9 +429,16 @@
+
+
Interested in helping STAMINA grow?
+
+ Please spread the word by sharing our socials and events with your networks!
+
+ Want to join the STAMINA organization team? Send us an email.
+
+
--->