Biography
This page contains a short-form and a long-form biography.
You are welcome to use either of them, verbatim or adapted, if you need some background about me for a presentation, talk, etc. You are also welcome to use the photo at the bottom of the page if you need a picture of me.
Short bio
Bálint Gyevnár is a penultimate-year PhD student at the University of Edinburgh, supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He focuses on building socially aligned explainable AI technologies for end users, drawing on research from multiple disciplines including autonomous systems, cognitive science, natural language processing, and the law. His goal is to create safer and more trustworthy autonomous systems that everyone can use and trust appropriately, regardless of their knowledge of AI. He has authored or co-authored several papers at high-profile conferences such as ECAI, AAMAS, ICRA, and IROS, and has received multiple early career research awards from Stanford, UK Research and Innovation, and the IEEE Intelligent Transportation Systems Society.
Long bio
Bálint Gyevnár is a penultimate-year PhD student at the School of Informatics at the University of Edinburgh, supervised by Stefano Albrecht, Shay Cohen, and Chris Lucas. He is a member of the Centre for Doctoral Training in Natural Language Processing; the Institute for Language, Cognition and Computation; and the Autonomous Agents Research Group. He focuses on building socially aligned explainable AI technologies for end users, drawing on research from multiple disciplines including autonomous systems, cognitive science, natural language processing, and the law. His goal is to create safer and more trustworthy autonomous systems that everyone can use and trust appropriately, regardless of their knowledge of AI.
Bálint’s previous work focused on interpretable goal-based prediction and planning, as well as accurate, interpretable, fast, and verifiable goal recognition, for autonomous vehicles. He explored the interaction of the law and explainable AI in a publication accepted at ECAI 2023, titled “Bridging the transparency gap: What can explainable AI learn from the AI Act?”, which outlines four conclusions for better aligning the motivations of explainable AI and the law.
His most recent work, accepted at AAMAS 2024, combines counterfactual and causal reasoning to understand and explain the behaviour of autonomous agents in stochastic multi-agent systems. The resulting causal information is integrated into a conversational framework that delivers intelligible natural language explanations to end users in the form of a dialogue.
He has authored or co-authored several papers at high-profile conferences such as ECAI, AAMAS, ICRA, and IROS, and has received multiple early career research awards from Stanford, UK Research and Innovation, and the IEEE Intelligent Transportation Systems Society.
Photo
Portrait photo, 2507 × 3134 px.