bio

This page contains a short-form and a long-form biography of Bálint Gyevnár.

You are welcome to use either bio, verbatim or adapted, if you need some background about me for a presentation or talk. You are also welcome to use the photo at the bottom of the page if you need a picture of me for publicity purposes.

(Last updated: 17/01/2025)


short version

Bálint Gyevnár is a final-year PhD student at the University of Edinburgh, supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He focuses on building human-aligned explainable AI technologies for autonomous systems, drawing on research from multiple disciplines, including reinforcement learning, cognitive science, natural language processing, and human-computer interaction. His goal is to create safer and more trustworthy autonomous systems that everyone can use and trust appropriately, regardless of their knowledge of AI. He has authored or co-authored several works at high-profile conferences such as CHI, AAAI, AAMAS, ECAI, ICRA, and IROS, and has received multiple early-career research awards from Stanford University, UK Research and Innovation, and the IEEE Intelligent Transportation Systems Society.


long version

Bálint Gyevnár is a final-year PhD student in the School of Informatics at the University of Edinburgh, supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He is a member of the Autonomous Agents Research Group; the Centre for Doctoral Training in Natural Language Processing; and the Institute for Language, Cognition and Computation. He focuses on building human-aligned explainable AI technologies for autonomous systems, drawing on research from multiple disciplines, including reinforcement learning, cognitive science, natural language processing, and human-computer interaction. His goal is to create safer and more trustworthy autonomous systems that everyone can use and trust appropriately, regardless of their knowledge of AI.

Bálint’s previous work explored the interaction between the law and explainable AI in a publication accepted at ECAI 2023, titled “Bridging the transparency gap: What can explainable AI learn from the AI Act?”. His more recent work, accepted at AAMAS 2024, combines counterfactual and causal reasoning to understand and explain the behaviour of autonomous agents in stochastic multi-agent systems. The resulting causal information is integrated into a conversational framework that delivers intelligible natural language explanations to end users in the form of a dialogue. His most recent work, accepted at CHI 2025 and titled “People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI”, explores the different ways explanations can be given from the perspective of cognitive psychology and presents a framework of explanatory modes that can inform the design and evaluation of XAI tools.

He has authored or co-authored several works at high-profile conferences such as CHI, AAAI, AAMAS, ECAI, ICRA, and IROS, and has received multiple early-career research awards from Stanford University, UK Research and Innovation, and the IEEE Intelligent Transportation Systems Society.


photo

Portrait of Bálint

Portrait photo with dimensions 2507px by 3134px.