Bálint Gyevnár
Hi, I am Bálint. Thanks for checking out my home page!
(My name is pronounced BAH-lint [baːlint ɟɛvnaːɾ].)
I research trustworthy explainable autonomous agency in multi-agent systems for AI safety, with applications to autonomous vehicles. I like to explain this as giving AI agents the ability to explain themselves.
I am lucky to be supervised by three amazing scientists: Stefano Albrecht, Shay Cohen, and Chris Lucas.
Early on in my PhD, I realised that the current state of explainable AI (XAI) is unsustainable: existing methods work only for certain models under particular assumptions, ignore out-of-distribution data, always assume that the AI agent is correct, and so on…
The biggest issue, however, is that explanations are simply not designed to help the users who interact with AI agents.
I am interested in exploring better ways to create intelligible explanations that calibrate users' trust to the actual abilities of AI agents.
I also research the epistemic foundations of AI ethics and AI safety to understand how explanations can help create more transparent systems that address both the short- and long-term risks of AI.
Image shot by me on Beinn Bhan near Applecross, Scotland.
news
Dec 06, 2024 | New survey paper accepted in IEEE T-ITS: Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review |
Dec 06, 2024 | Gave invited talks at Charles University in Prague and the Czech Technical University on the fundamental problems of classical XAI. [slides] |
Dec 06, 2024 | Heyo! I have just set up a new home page, so there is still content to be added. |