people

members of the lab or group



Autonomous Agents Research Group (AARG)

Centre for Doctoral Training in Natural Language Processing

University of Edinburgh, UK

(My name is pronounced BAH-lint [baːlint ɟɛvnaːɾ])

I research trustworthy explainable autonomous agency in multi-agent systems for AI safety, with applications to autonomous vehicles. I like to describe this as giving AI agents the ability to explain themselves.

I am lucky to be supervised by three amazing scientists: Stefano Albrecht, Shay Cohen, and Chris Lucas.

Early in my PhD, I quickly realised that the current state of explainable AI (XAI) is unsustainable. Existing methods work only for certain models under particular assumptions; they ignore out-of-distribution data, always assume that the AI agent is correct, and so on…

The biggest issue, however, is that explanations are simply not designed to help the users who interact with AI agents.

I am interested in exploring better ways to create intelligible explanations to calibrate trust according to the abilities of the AI agents.

I also research the epistemic backgrounds of AI ethics and AI safety to understand how explanations can help create more transparent systems considering both the short- and long-term risks of AI.


about me

Hi, I am Bálint. Thanks for checking out my home page!
(My name is pronounced BAH-lint [baːlint])

I research trustworthy explainable autonomous agency in multi-agent systems for AI safety, with applications to autonomous vehicles. I like to describe this as giving AI agents the ability to explain themselves.

I am primarily interested in exploring better ways to create intelligible explanations that calibrate trust in, and help users understand the reasoning of, AI agents.

I also work on bridging the epistemic foundations and research problems of AI ethics and AI safety to foster cross-disciplinary collaboration.

I am a member of the Autonomous Agents Research Group, supervised by Shay Cohen and Chris Lucas. I was previously supervised by Stefano Albrecht.