Biography
Hasan Abu-Rasheed
Hasan Abu-Rasheed is a researcher in the field of artificial intelligence in education, with a particular focus on explainable AI (XAI), knowledge graphs, and semantic technologies. He works at Goethe University Frankfurt, Germany, where he contributes to the development of AI-driven tools for higher education, ranging from intelligent dialogue systems (chatbots) to agent-based workflows for semantic information extraction, explainable learning analytics, and feedback.
Hasan completed a Ph.D. in computer science and a Master's degree in mechatronics, both at the University of Siegen, Germany. His doctoral research focused on the development of context-aware, explainable recommendation systems for vocational education and training. This work integrated technical and pedagogical perspectives, involving domain experts from education and psychology, and explored methods for incorporating expert knowledge into the design of knowledge graphs and explainable AI systems.
He also helped organize the Joint European Summer School for Technology Enhanced Learning in 2024 and 2025, as well as the first and second workshops on XAI in Education, held at the EC-TEL 2024 and AIED 2025 conferences.
His current research at Goethe University Frankfurt investigates the strategic and operational integration of AI technologies into teaching and learning processes, the development of knowledge graphs as semantic infrastructures for AI systems, and the explainability of decision-support tools in education. Through this work, he aims to develop neuro-symbolic AI approaches that not only learn from data but also reason with domain knowledge, ensuring that AI systems are not merely black boxes but collaborative and interpretable partners in education.
Designing Transparent Decision Making in Education: Knowledge Graphs, Explainability, and Human-AI Complementarity
Abstract
Educational systems are increasingly powered by complex AI models. Recommendation engines guide learners, information retrieval systems support curriculum construction, and automated assessment influences institutional evaluations. At the core of these systems lies knowledge representation: the formal structures that define how educational content, competencies, and learners themselves are modeled.
This invited talk will explore how knowledge graphs serve as foundational architectures for such systems, enabling both semantic richness and user-centered automation. We will dive into technical strategies for building knowledge graphs and examine how they can be developed to support complex models, such as LLMs, or the broader systems built around them, in order to improve transparency, user agency, and explainability. We will show how structured representations, such as curriculum-aligned graphs of concepts, skills, and outcomes, can be combined with neural models to support adaptive and explainable decision-making. This neuro-symbolic approach brings together the strengths of data-driven learning and symbolic reasoning, making AI systems more interpretable and complementary to human intelligence rather than mere black-box predictors. In turn, this supports stakeholders' understanding of algorithms' predictions and their ability to make informed decisions based on them.