Bálint Gyevnár
Hi, I am Bálint. Thanks for checking out my home page!
(My name is pronounced BAH-lint [baːlint])
I research trustworthy explainable autonomous agency in multi-agent systems for AI safety, with applications to autonomous vehicles. I like to describe this as giving AI agents the ability to explain themselves.
I am lucky to be supervised by three amazing scientists: Stefano Albrecht, Shay Cohen, and Chris Lucas.
Early on in my PhD, I realised that the current state of explainable AI (XAI) is unsustainable. Existing methods work only for certain models under particular assumptions, ignore out-of-distribution data, assume the AI agent is always correct, and so on…
The biggest issue, however, is that explanations are simply not designed to help the users who interact with AI agents.
I am interested in exploring better ways to create intelligible explanations to calibrate trust according to the abilities of the AI agents.
I also research the epistemic backgrounds of AI ethics and AI safety to understand how explanations can help create more transparent systems considering both the short- and long-term risks of AI.
View of a corrie from Beinn Bhan near Applecross, Scotland shot by me.
news
Jan 17, 2025 | New paper accepted at CHI 2025: People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI
Dec 06, 2024 | New survey paper accepted in IEEE T-ITS: Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
Dec 06, 2024 | Gave invited talks at Charles University in Prague and the Czech Technical University in Prague on the fundamental problems of classical XAI. [slides]
Dec 06, 2024 | Heyo! I have just set up a new home page, so there is still content to be added.