118 changes: 94 additions & 24 deletions stamina/index.html
@@ -186,65 +186,97 @@ <h4>[DATE Y/M/D]</h4>
</div>
</li>
--><!-- END TALK TEMPLATE -->
<h4>2026/03/10</h4>


<br>

<h4>2026/03/24</h4>
<li>
<b><a href="[PAPER LINK]">AI and the knowledge commons</a></b>
<br>
Presenter: <u><a href="https://conversence.com" target="_blank" rel="noopener noreferrer">Marc-Antoine Parent</a></u>, Solutions Conversence inc.
<a class="btn btn-info btn-xs" data-toggle="collapse" href="#20260324-bio" role="button" aria-expanded="false">
Speaker Bio
</a>
<div class="collapse" id="20260324-bio">
<div class="card card-body">
Marc-Antoine has worked in computational linguistics and knowledge representation, and has more recently focused on tools for augmented collective intelligence. He is especially interested in how to represent emergent and disputed knowledge.
<br><a href="https://conversence.com">https://conversence.com</a>
<br><a href="https://hyperknowledge.org">https://hyperknowledge.org</a>
</div>
</div>
<br>
<!-- <a href="[RECORDING LINK - ADD AFTER TALK]"><img src="https://img.shields.io/badge/Youtube-Recording-orange"></a>
<a href="[PAPER LINK]"><img src="https://img.shields.io/badge/Paper-link-important"></a>
<a href="[GITHUB_LINK]"><img src="https://img.shields.io/badge/Github-link-lightgrey"></a> -->
<!-- <a href="https://www.conversence.com/presentations/2025-08-18-ijcai.pptx"><img src="https://img.shields.io/badge/Talk-Slides-blue"></a> -->
<a class="btn btn-primary btn-xs" data-toggle="collapse" href="#20260324-abstract" role="button" aria-expanded="false">
Abstract
</a>
<div class="collapse" id="20260324-abstract">
<div class="card card-body">
We present a model of the formation of a knowledge commons for democratic, collective decision-making in society, and explain how generative AI disrupts that formation. We will also present ways to reinforce the collective processes around a knowledge commons, including possible contributions from hybrid AI.
<br>
This is based on a paper that was presented at the IJCAI'25 democracy and AI workshop.
</div>
</div>
</li>

<br>

<h4>2026/04/14</h4>
<li>
<b><a href="https://drive.google.com/file/d/1UNVlGqzhnh2BNpviwctUlvue33MjN34k/view">Evaluating Cooperation in LLM Social Groups through Self-Organizing Leadership</a></b>
<br>
Presenter: <u><a href="https://www.cs.toronto.edu/~rfaulk/" target="_blank" rel="noopener noreferrer">Ryan Faulkner</a></u>, University of Toronto/DeepMind
<a class="btn btn-info btn-xs" data-toggle="collapse" href="#20260414-bio" role="button" aria-expanded="false">
Speaker Bio
</a>
<div class="collapse" id="20260310-bio">
<div class="collapse" id="20260414-bio">
<div class="card card-body">
Ryan is a computer scientist and machine learning researcher with a background in reinforcement learning and foundation models. He has worked as a Research Engineer at Google DeepMind for the past decade and is also a PhD student at the University of Toronto, advised by Zhijing Jin. At GDM he works in the Concordia group led by Joel Leibo. His current research focuses on multi-agent systems, LLMs, and social learning, with particular interest in memory mechanisms, agent theory of mind, collective decision making, and simulating political systems.

</div>
</div>
<br>
<!-- <a href="[RECORDING LINK - ADD AFTER TALK]"><img src="https://img.shields.io/badge/Youtube-Recording-orange"></a> -->
<!-- <a href="[PAPER LINK]"><img src="https://img.shields.io/badge/Paper-link-important"></a> -->
<!-- <a href="[PAPERLINK]"><img src="https://img.shields.io/badge/Paper-link-important"></a> -->
<!-- <a href="[GITHUB_LINK]"><img src="https://img.shields.io/badge/Github-link-lightgrey"></a> -->
<!-- <a href="[SLIDES_LINK]"><img src="https://img.shields.io/badge/Talk-Slides-blue"></a> -->
<a class="btn btn-primary btn-xs" data-toggle="collapse" href="#20260310-abstract" role="button" aria-expanded="false">
<a class="btn btn-primary btn-xs" data-toggle="collapse" href="#20260414-abstract" role="button" aria-expanded="false">
Abstract
</a>
<div class="collapse" id="20260310-abstract">
<div class="collapse" id="20260414-abstract">
<div class="card card-body">
Governing common-pool resources requires agents to develop enduring strategies through cooperation and self-governance to avoid collective failure. While foundation models have shown potential for cooperation in these settings, existing multi-agent research provides little insight into whether structured leadership and election mechanisms can improve collective decision making. The lack of such a critical organizational feature, ubiquitous in human society, is a significant shortcoming of current methods. In this work we directly address whether leadership and elections can support improved social welfare and cooperation through multi-agent simulation with LLMs. We present a new framework that simulates leadership through elected personas and candidate-driven agendas, and carry out an empirical study of LLMs under controlled governance conditions. Our experiments demonstrate that structured leadership can improve social welfare scores by 55.4% and survival time by 128.6% across a range of high-performing LLMs. Through the construction of an agent social graph we compute centrality metrics to assess the social influence of leader personas, and we also analyze rhetorical and cooperative tendencies revealed through sentiment analysis of leader utterances. This work lays the foundation for developing prosocial, self-governing multi-agent systems capable of navigating complex resource dilemmas.
</div>
</div>
</li>

<br>

<h4>2026/03/24</h4>
<h4>2026/04/28</h4>
<li>
<b><a href="[PAPER LINK]">AI and the knowledge commons</a></b>
<b><a href="[PAPER LINK]">From Social Networks to Sensemaking Networks</a></b>
<br>
Presenter: <u><a href="https://conversence.com" target="_blank" rel="noopener noreferrer">Marc-Antoine Parent</a></u>, <a href="https://www.conversence.com/">Solutions Conversence inc.</a>
<a class="btn btn-info btn-xs" data-toggle="collapse" href="#20260324-bio" role="button" aria-expanded="false">
Presenter: <u><a href="https://cosmik.network/" target="_blank" rel="noopener noreferrer">Ronen Tamari</a></u>, Cosmik Network
<a class="btn btn-info btn-xs" data-toggle="collapse" href="#20260428-bio" role="button" aria-expanded="false">
Speaker Bio
</a>
<div class="collapse" id="20260324-bio">
<div class="collapse" id="20260428-bio">
<div class="card card-body">
Marc-Antoine has worked in computational linguistics, knowledge representation, and more recently has focused on tools for augmented collective intelligence. He's especially interested in how to represent emergent and disputed knowledge.
<br><a href="https://conversence.com">https://conversence.com</a>
<br><a href="https://hyperknowledge.org">https://hyperknowledge.org</a>
Ronen is a researcher and entrepreneur working on collective intelligence systems to help us think better, together. He recently completed an Open Science fellowship at the Astera Institute, where he co-founded Cosmik, a mission-driven R&D lab working on new kinds of social networks for collective sensemaking. He also holds a PhD in computer science, with a focus on cognitively inspired AI models for natural language comprehension. His current research interests center on cooperative human-AI systems, institutional design for collective intelligence, and the role of epistemic environments in shaping human and machine intelligence.
</div>
</div>
<br>
<!-- <a href="[RECORDING LINK - ADD AFTER TALK]"><img src="https://img.shields.io/badge/Youtube-Recording-orange"></a>
<a href="[PAPER LINK]"><img src="https://img.shields.io/badge/Paper-link-important"></a>
<a href="[GITHUB_LINK]"><img src="https://img.shields.io/badge/Github-link-lightgrey"></a> -->
<!-- <a href="https://www.conversence.com/presentations/2025-08-18-ijcai.pptx"><img src="https://img.shields.io/badge/Talk-Slides-blue"></a> -->
<a class="btn btn-primary btn-xs" data-toggle="collapse" href="#20260324-abstract" role="button" aria-expanded="false">
<!-- <a href="[RECORDING LINK - ADD AFTER TALK]"><img src="https://img.shields.io/badge/Youtube-Recording-orange"></a> -->
<!-- <a href="[PAPER LINK]"><img src="https://img.shields.io/badge/Paper-link-important"></a> -->
<!-- <a href="[GITHUB_LINK]"><img src="https://img.shields.io/badge/Github-link-lightgrey"></a> -->
<!-- <a href="[SLIDES_LINK]"><img src="https://img.shields.io/badge/Talk-Slides-blue"></a> -->
<a class="btn btn-primary btn-xs" data-toggle="collapse" href="#20260428-abstract" role="button" aria-expanded="false">
Abstract
</a>
<div class="collapse" id="20260324-abstract">
<div class="collapse" id="20260428-abstract">
<div class="card card-body">
We present a model of the formation of a knowledge commons for democratic, collective decision-making in society, and explain how generative AI disrupts the formation of this knowledge commons. We will also present ways in which to reinforce the collective processes around a knowledge commons, including the possible contributions of hybrid AI.
<br>
This is based on a paper that was presented at the IJCAI'25 democracy and AI workshop.
What would social media look like if it were designed for sensemaking rather than engagement? We're exploring this question with Semble, a platform where researchers curate shareable collections, create knowledge trails that others can build on, and discover relevant work through their network's collective attention. Built on the AT Protocol, the open social networking protocol behind Bluesky, Semble offers researchers data portability and an open API designed for extension. We'll discuss how Semble enables new kinds of research tooling, from living semantic citation graphs to collaborative review and annotation. We'll also share how ATProto's open data layer creates unique opportunities for studying and designing epistemic infrastructure, from observing how knowledge trails form across a network to experimenting with platform affordances that support collective sensemaking.
</div>
</div>
</li>
@@ -332,6 +364,8 @@ <h4>2026/03/03</h4>
</div>
</div>
</li>

<br>

<br>

@@ -372,6 +406,42 @@ <h4>2026/02/17</h4>

<br>

<h4>2026/02/17</h4>
<li>
<b><a href="https://arxiv.org/abs/2510.25003">Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations</a></b>
<br>
Presenter: <u>Gian Marco Orlando, Jinyi Ye, Mahdi Saeedi</u>, University of Naples Federico II and University of Southern California (ISI)
<a class="btn btn-info btn-xs" data-toggle="collapse" href="#20260217-bio" role="button" aria-expanded="false">
Speaker Bio
</a>
<div class="collapse" id="20260217-bio">
<div class="card card-body">
Gian Marco Orlando is currently a PhD student at the University of Naples Federico II. He earned his Master's degree in Computer Engineering from the University of Naples Federico II, graduating with honors, with a thesis at the intersection of Artificial Intelligence and Social Network Analysis. His research interests lie in Social Network Analysis, Agent-Based Modeling, and Big Data Analytics.
<br><br>
Jinyi is a second-year CS PhD student co-advised by Dr. Emilio Ferrara and Dr. Luca Luceri. Her research lies in the intersection of computer science and social science, recently focusing on large-scale agentic simulations of human behavior, measuring and modeling collective dynamics in social networks, and empirical studies on AI and the future of work.
<br><br>
Mahdi Saeedi is currently exploring large-scale LLM simulations, driven by a fascination with understanding how these models actually work under the hood. His background spans both the theoretical foundations and hands-on engineering of generative AI systems, which has been great preparation for this research. He has also spent time thinking about how people interact with AI through thoughtful interface design, since he believes making these tools intuitive and accessible is just as important as the underlying technology.
</div>
</div>
<br>
<a href="https://www.youtube.com/watch?v=BH9_gyYyvdw" target="_blank" rel="noopener noreferrer"><img src="https://img.shields.io/badge/Youtube-Recording-orange"></a>
<a href="https://arxiv.org/abs/2510.25003"><img src="https://img.shields.io/badge/Paper-link-important"></a>
<!-- <a href="[GITHUB_LINK]"><img src="https://img.shields.io/badge/Github-link-lightgrey"></a> -->
<!-- <a href="[SLIDES_LINK]"><img src="https://img.shields.io/badge/Talk-Slides-blue"></a> -->
<a class="btn btn-primary btn-xs" data-toggle="collapse" href="#20260217-abstract" role="button" aria-expanded="false">
Abstract
</a>
<div class="collapse" id="20260217-abstract">
<div class="card card-body">
Generative agents are rapidly advancing in sophistication, raising urgent questions about how they might coordinate when deployed in online ecosystems. This is particularly consequential in information operations (IOs), influence campaigns that aim to manipulate public opinion on social media. While traditional IOs have been orchestrated by human operators and relied on manually crafted tactics, agentic AI promises to make campaigns more automated, adaptive, and difficult to detect. This work presents the first systematic study of emergent coordination among generative agents in simulated IO campaigns. Using generative agent-based modeling, we instantiate IO and organic agents in a simulated environment and evaluate coordination across operational regimes, from simple goal alignment to team knowledge and collective decision-making. As operational regimes become more structured, IO networks become denser and more clustered, interactions more reciprocal and positive, narratives more homogeneous, amplification more synchronized, and hashtag adoption faster and more sustained.
Remarkably, simply revealing to agents which other agents share their goals can produce coordination levels nearly equivalent to those achieved through explicit deliberation and collective voting.
Overall, we show that generative agents, even without human guidance, can reproduce coordination strategies characteristic of real-world IOs, underscoring the societal risks posed by increasingly automated, self-organizing IOs.
</div>
</div>
</li>

<br>

</div>
</div>
</div>