Below, we outline the ELBICA lab’s trajectory. Visit here to see our frameworks, and here for occasional podcasts that summarize our research.

  • In previous work [1 – 3], Eliott designed and developed a computational architecture that models moral reasoning and empathy in multi-agent tasks. In 2021, Eliott began her efforts to enhance and expand the architecture. The summer 2021 research led the ELBICA lab to reflect on the differences between moral reasoning and moral intuition, as well as their influence on a group’s reputation and moral behavior [4, 5].
  • In 2022, Eliott was named a Scialog fellow for the “Molecular Basis of Cognition” group, demonstrating the relevance of the ELBICA lab’s cognitively inspired work. During 2022’s research cycle, the ELBICA lab investigated how to test and model moral reasoning and moral intuition in our cognitively inspired computational architecture, called EDA (Empathy-Driven Architecture) [5]. The ELBICA lab: 1) investigated the differences between moral reasoning and moral intuition, and the importance of emotions and feelings, such as empathy, as an attentional mechanism to facilitate decision-making; 2) searched for clues of moral intuition, semantic meaning, and common sense in different images, inspiring the creation of 3-D scenes and a dataset; and 3) used robot simulators to investigate which robot tasks are the most appropriate to test and assess EDA in the future, as well as the distinction between coordination and cooperation. Additionally, Eliott’s first-year Tutorial course inspired us to investigate how anthropomorphism affects our interactions with machines [6]. The ELBICA lab’s efforts left Eliott intrigued by a dichotomy: cooperation vs. coordination.
  • In 2023, outcomes from our 3-D project [7] led us to tools for people with print disabilities. Additionally, the ELBICA lab developed MAS approaches that implement traditional RL techniques to drive insights into the distinction between cooperation and coordination; our testbeds included networked agents [8] and the Smart Surface benchmark [9].
  • In 2024, we continued investigating tools for individuals with a humor comprehension deficit and began exploring the applicability of the cockroach’s escape system to robotic tasks, as well as the role of concepts in the design and explainability of AI systems. AI’s ubiquity highlights the importance of a well-thought-out use of concepts, which should hold even for systems that are too simple or not intended for human interaction. For instance, in the MAS literature, it is common to compare agents’ performances across different settings. It is far less common, however, to reflect on the meaning of the concepts applied, and on whether those meanings still hold in the experimental results; this is a gap that the ELBICA lab has been working to address, particularly in terms of coordination versus cooperation [10]. Finally, Eliott compiled the outcomes of our 3D project, wrote an extensive multidisciplinary literature review, and submitted it to a journal [11], to which we were invited as a result of our best paper award [7].
  • In 2025, the ELBICA lab continued to explore the distinction between cooperation and coordination in MAS, identifying the relevance of sacrificial cooperation, and we submitted a preprint of this work [10]. These experiences have sharpened Dr. Eliott’s research interest in how emerging AI systems interact with human cultural and cognitive complexity, especially in educational contexts. They also inform her design of tools that help students reason about alignment, safety, and ethics while doing real technical work. Thus, in 2026, the ELBICA lab will launch its new arm: The Cognitive Robustness Research Studio: Untraining Predictability in the Face of GenAI systems.
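To give a flavor of the kind of testbed mentioned above, where independent reinforcement learners must arrive at a shared convention, here is a minimal sketch of two independent Q-learners repeatedly playing a toy coordination game. All names, payoffs, and hyperparameters are illustrative assumptions for this sketch, not the lab’s actual benchmarks or architecture.

```python
import random

# Toy payoff (hypothetical): both agents choose the same action -> reward 1, else 0.
def payoff(a, b):
    return 1.0 if a == b else 0.0

class IndependentQLearner:
    """A stateless Q-learner that treats the other agent as part of the environment."""
    def __init__(self, n_actions=2, alpha=0.1, epsilon=0.1):
        self.q = [0.0] * n_actions
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration rate

    def act(self):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Single-state Q-update: move the action's estimate toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])

random.seed(0)
p1, p2 = IndependentQLearner(), IndependentQLearner()
for _ in range(2000):
    a, b = p1.act(), p2.act()
    r = payoff(a, b)
    p1.update(a, r)
    p2.update(b, r)

greedy = lambda agent: max(range(2), key=lambda a: agent.q[a])
print(greedy(p1) == greedy(p2))
```

In this setup the learners typically settle on the same action, i.e., a shared convention, even though neither models the other. Whether that counts as “cooperation” or merely “coordination” is exactly the kind of conceptual question the lab has been probing [10].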

REFERENCES
[1] Eliott, F., and Ribeiro, C. Emergence of cooperation through simulation of moral behavior. HAIS 2015. Springer International Publishing, 2015.
[2] Eliott, F., and Ribeiro, C. Moral behavior and empathy modeling through the premise of reciprocity. In Procs. of the 1st International Conference on Human and Social Analytics (HUSO 2015). St. Julians, Malta, 2015.
[3] Eliott, F., and Ribeiro, C. A computational model for simulation of moral behavior. In Procs. of the International Conference on Neural Computation Theory and Applications (NCTA 2014). SCITEPRESS (Science and Technology Publications), 2014.
[4] He, M., Gao, M., Gao, Y., and Eliott, F. Cascading Failures and the Robustness of Cooperation in a Unified Scale-Free Network Model. In: International Conference on Complex Networks and Their Applications. Springer, 2021.
[5] Yu, X., Morri, R. and Eliott, F. EDA, An Empathy-Driven Computational Architecture. 9th Workshop on Goal Reasoning, ACS 2021.
[6] Swaim, E. and Eliott, F. Complex Behavior Vs. Design – Interpreting AI: Reminders from Synthetic Psychology. The Ninth International Conference on Human and Social Analytics HUSO 2023, Barcelona, Spain.
[7] Chen, J., Berman, E., Noda, M., Shermak, K., Ye, Z., Rothfusz, D., and Eliott, F. How do Abstraction and Emotions Travel Different Spaces? Proc. of the Tenth International Conference on Human and Social Analytics (HUSO 2024). Awarded as a top paper.
[8] Xu, Z. and Chen, J. and Eliott, F. Networked Independent Reinforcement Learners Playing an Evolutionary Game. In: Quintián, H., et al. Hybrid Artificial Intelligent Systems. HAIS 2024. Lecture Notes in Computer Science, vol 14858. Springer, Cham.
[9] Kasimov, T., Takei, S., Lu, H., Lee, M., and Eliott, F. Situated Learners in a Sequential Decision-Making Setting. The Fourteenth International Conference on Intelligent Systems and Applications (INTELLI 2025). Awarded best paper.
[10] Shayak, N., and Eliott, F. Cooperation as Black Box: Conceptual Fluctuation and Diagnostic Tools for Misalignment in MAS. Preprint, 2025.
[11] Berman, Noda, Shermak, Ye, Rothfusz, Chen, Leungpathomaram, Shibue, Liu, and Eliott. Connotation and 3D Modeling from Limited, Raw Textual Descriptions. International Journal on Advances in Life Sciences, vol. 16, no. 3&4, 2024.