Bálint Gyevnár
PhD student in AI safety and explainable AI

Hi, I am Bálint. Thanks for checking out my home page!
(My name is pronounced BAH-lint [baːlint])
I research trustworthy explainable autonomous agency in multi-agent systems for AI safety, with applications to autonomous vehicles. I like to describe this as giving AI agents the ability to explain themselves.
I am primarily interested in exploring better ways to create intelligible explanations that calibrate trust in AI agents and help people understand their reasoning.
I also work on bridging the epistemic foundations and research problems of AI ethics and AI safety to foster cross-disciplinary collaboration.
I am a member of the Autonomous Agents Research Group, supervised by Shay Cohen and Chris Lucas. I was previously supervised by Stefano Albrecht.
news
Mar 21, 2025 | I am co-organising a workshop on “Evaluating Explainable AI and Complex Decision-Making”, co-located with ECAI ‘25. The call for papers can be found here.
Feb 25, 2025 | New journal paper in Nature Machine Intelligence: AI Safety for Everyone.
Feb 10, 2025 | I attended IASEAI ‘25, the inaugural conference of the International Association for Safe and Ethical AI. The programme is available here.
Feb 03, 2025 | New conference paper at RLDM: Objective Metrics for Human-Subjects Evaluation in Explainable Reinforcement Learning.
Jan 17, 2025 | New conference paper at CHI 2025: People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI.
Dec 06, 2024 | New survey paper in IEEE T-ITS: Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review.
Dec 06, 2024 | Gave invited talks at Charles University in Prague and the Czech Technical University on the fundamental problems of classical XAI. [slides]