• J5-60084 - Managing AI: Integrating Explainable and Generative AI – Challenges and Opportunities for Knowledge Management in Organizations
The Client: (J5-60084)
Project type: ARRS research project
Project duration: 2025–2027
  • Description

This research project investigates the integration of Explainable AI (XAI) and Generative AI (GAI) in organizational settings against the backdrop of the EU's regulatory and ethical framework for AI. The project is driven by the imperative to demystify AI systems, particularly in high-stakes areas where transparency and trust are paramount. The proposal outlines a comprehensive approach to AI, tackling the "black box" problem inherent in current AI systems: it aims to develop AI solutions that perform well while remaining transparent and understandable to users. This is critical for AI applications in sensitive sectors such as healthcare, finance, and public administration, where decisions significantly affect human lives.

An innovative aspect of the proposal is the synthesis of XAI and GAI. While XAI focuses on making AI's decision-making process transparent, GAI explores AI's creative potential in generating new, human-like content. The project seeks to harmonize these two paradigms, thereby enhancing the functionality and acceptance of AI systems in various organizational contexts. The research adopts a transdisciplinary methodology, incorporating insights from computer science, ethics, psychology, and the social sciences. This broad perspective is crucial for understanding AI's multifaceted impact and for ensuring that AI systems are ethically sound and socially beneficial. The project follows a structured research approach: theoretical modeling to conceptualize the integration of XAI and GAI, empirical studies to test these concepts in real-world scenarios, and the development of practical frameworks for implementation in organizations.

The key objectives are organized into five work packages (WPs):

• WP1 addresses the trade-off between the accuracy of AI predictions and the level of explainability desired in different AI applications. The objective is to define acceptable thresholds for prediction accuracy and explainability, examining how deep learning techniques perform in conjunction with various XAI approaches (a minimal sketch of this trade-off follows the list). This work will contribute to optimizing AI systems so that they are both effective and understandable.

• WP2 develops a framework for creating AI systems with customizable levels of explainability and transparency, tailored to different user groups. It seeks to balance technical detail with user accessibility, thereby enhancing user understanding of, and satisfaction with, AI systems.

• WP3 aims to enhance trust in AI by exploring how different XAI approaches influence user trust. It also addresses AI fairness, integrating XAI into AI development to prevent and detect algorithmic biases (see the fairness sketch below), and explores the role of XAI in shaping privacy concerns, particularly how explanations affect users' willingness to share personal information.

• WP4 investigates the impact of GAI on organizational knowledge processes, including the creation, storage, and retrieval of knowledge (illustrated by the retrieval sketch below). It aims to understand how GAI influences knowledge exchange and teamwork within organizations and to develop best practices for integrating human-sourced tacit knowledge with GAI-created explicit knowledge.

• WP5 focuses on optimizing the use of GAI in knowledge transfer and application within organizations. It explores the balance between internal and external data for GAI training, the impact of GAI on knowledge sharing, and how GAI can be used to enhance organizational productivity and innovation.

In summary, the project aims to support the development of AI systems that are not only technically robust but also ethically sound, user-friendly, and conducive to better organizational knowledge management. This ensures that AI technologies are aligned with human needs and societal values, contributing to the responsible advancement of AI.
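To make the accuracy/explainability trade-off studied in WP1 concrete, here is a minimal sketch in Python. It is not the project's methodology: the dataset (scikit-learn's built-in breast-cancer data) and the interpretability proxy (the number of leaves in a decision tree) are illustrative assumptions only.

```python
# Minimal sketch: accuracy vs. explainability trade-off (WP1 theme).
# Assumptions: a built-in dataset as a stand-in task, and decision-tree
# leaf count as a crude proxy for how explainable the model is.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Shallower trees are easier to explain but usually less accurate;
# sweeping max_depth makes the trade-off explicit.
for depth in (1, 2, 4, 8, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_tr, y_tr)
    print(f"max_depth={depth!s:>4}  "
          f"accuracy={tree.score(X_te, y_te):.3f}  "
          f"leaves={tree.get_n_leaves()}")
```

In a real study, the leaf-count proxy would be replaced by user-facing measures of explanation quality, which is precisely the kind of threshold WP1 sets out to define.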
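WP3's bias-detection goal can likewise be illustrated with a toy fairness check. The sketch below uses entirely synthetic data and a hypothetical protected attribute; demographic parity difference is just one of many fairness metrics a project like this might examine.

```python
# Minimal sketch: detecting one form of algorithmic bias (WP3 theme).
# Assumptions: synthetic predictions and a synthetic binary protected
# attribute; demographic parity is only one possible fairness notion.
import numpy as np

rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)  # hypothetical protected attribute
# Hypothetical model that favours group 1 with positive predictions.
y_pred = rng.random(1000) < np.where(group == 1, 0.6, 0.4)

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    rate1 = y_pred[group == 1].mean()
    rate0 = y_pred[group == 0].mean()
    return rate1 - rate0

gap = demographic_parity_difference(y_pred, group)
print(f"positive-rate gap between groups: {gap:.3f}")  # ~0.2 here
```

A nonzero gap does not by itself prove unfairness, which is why WP3 pairs such metrics with XAI methods that can explain where a disparity comes from.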
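Finally, the knowledge storage and retrieval processes studied in WP4 and WP5 are often operationalized by grounding a generative model in an organization's internal documents. The following sketch is a stand-in for that retrieval step: the in-memory document store is hypothetical, and TF-IDF similarity substitutes for the embedding search a production retrieval-augmented GAI pipeline would use.

```python
# Minimal sketch: retrieving internal knowledge to ground a generative
# model (WP4/WP5 theme). Assumptions: a toy in-memory document store and
# TF-IDF similarity in place of a real embedding-based search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # hypothetical snippets of explicit organizational knowledge
    "Onboarding checklist for new engineering hires.",
    "Incident post-mortem template and escalation contacts.",
    "Quarterly knowledge-sharing workshop guidelines.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

query = "How do we run knowledge-sharing sessions?"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
best = scores.argmax()
# The retrieved passage would be passed to the generative model as context.
print(f"best match (score {scores[best]:.2f}): {documents[best]}")
```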