Abstract
Autonomous decision-making systems increasingly rely on complex artificial intelligence models to operate in dynamic, safety-critical environments. While these models offer strong predictive capabilities, their black-box nature limits transparency, trust, and accountability. This study proposes a structured research methodology for integrating Explainable Artificial Intelligence (XAI) into autonomous decision-making systems. The research adopts a conceptual–analytical approach to develop an explainability-oriented framework that embeds transparency across the perception, decision-making, and action-execution stages. The methodology comprises literature-driven problem identification, conceptual framework construction, classification and mapping of XAI methods, and formulation of explainability evaluation criteria. The analysis shows that effective explainability in autonomous systems requires a hybrid integration strategy that combines in-model transparency with post-hoc explanation mechanisms. A structured mapping of XAI techniques to autonomous system components and a conceptual decision-flow diagram illustrate how explainability can be integrated. The findings indicate that layered, context-aware explainability enhances system interpretability, supports human oversight, and strengthens safety assurance without compromising autonomous operation. This study contributes a reusable methodological foundation for designing and evaluating explainable autonomous systems, offering practical guidance for future empirical validation and real-world deployment in safety-critical applications.