Bálint Gyevnár

Research Postgraduate Student
Centre for Doctoral Training in Natural Language Processing
Institute for Language, Cognition and Computation
University of Edinburgh

I am a PhD student interested in building explainable technologies for legal, ethical, and social AI, with applications to autonomous vehicles and the goal of achieving trustworthy AI.

My supervisors are Stefano Albrecht, Shay Cohen, and Chris Lucas.

Agents Group · Twitter · Google Scholar · GitHub

About my research

While AI methods have shown impressive results in recent years, they are yet to be widely adopted by the public, especially in high-risk domains such as health care or transportation. I am interested in combining technologies from explainable AI (XAI), causal reasoning, and natural language processing to support the creation of trustworthy AI systems, focusing especially on the domain of autonomous driving. In my view, there are four main criteria that trustworthy AI should fulfil:

  1. Be lawful. There is now a heightened interest among lawmakers in regulating AI technologies, and AI systems will have to adhere to the requirements set out in these laws.

  2. Be ethical. Novel technologies are often plagued by side effects — AI is no different. Biased and discriminatory decisions, subversive manipulation of people, and violations of privacy are some of the major concerns that need to be addressed urgently.

  3. Be social. The design of AI systems should consider human interaction as a core part of their workflow. Conversations with people, and an understanding of their cognitive models, should help AI systems make relevant and targeted decisions.

  4. Be correct and robust. All the above considerations are pointless if the AI systems produce garbage or cannot be deployed under real-life circumstances. Therefore, the testing and validation of AI systems are essential.

My research focuses on building trustworthy AI for autonomous vehicles to support their wider public adoption. Using XAI, we can reduce the opacity of our systems, enabling accountability, demonstrating legality, and improving testability. In addition, cognitive modelling and NLP technologies allow us to address the social aspects of trustworthy AI. I gave a detailed outline of my vision for this project in an award-winning essay and a blog post.


I am originally from the small suburban town of Göd, located some 30 minutes north of Budapest, Hungary. I completed my undergraduate studies at the University of Edinburgh, graduating with a first-class integrated master’s degree in informatics (MInf). I also studied abroad for a year at the Nanyang Technological University in Singapore. My thesis supervisor was Maria Wolters, with whom I worked on understanding how and why users deleted or hid their social media accounts during the early days of the COVID-19 pandemic.

I also completed a research internship at Five, where I helped evaluate IGP2, an interpretable goal-based motion planner for autonomous vehicles.

About me

I often spend my free time learning languages. Currently, I speak five. In decreasing order of fluency, these are: Hungarian, English, German, Japanese, and Russian.

I like playing volleyball and I am currently the vice president of the Edinburgh University Volleyball Club. I play setter. I also enjoy walking with people among the stark landscapes of the Scottish Highlands and taking some breathtaking photos while enduring harsh weather.

Occasionally, I sit down to practise the piano. At the moment, I am working through the second movement of Schubert’s piano sonata in B-flat major (D 960). Currently, I am reading “Gödel, Escher, Bach: An Eternal Golden Braid” by Douglas R. Hofstadter. A list of books I have read since I began keeping records is here.