Abstract
Explainable Artificial Intelligence (XAI) has become a critical area of research within artificial intelligence, focusing on improving the transparency and interpretability of machine learning (ML) models that often operate as "black boxes." The need for XAI techniques arises from the inherent complexity of these models, which can make their decision-making processes difficult for users to understand. This study investigates several XAI techniques, including LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), to assess whether they improve model interpretability without significantly compromising predictive performance. A comparative experimental design was used, applying these XAI methods to different ML models, including deep neural networks and ensemble methods, within large-scale enterprise data analytics systems. The results indicate that XAI methods significantly enhance model transparency and decision traceability, allowing users to understand how individual features influence predictions. Although a slight reduction in predictive accuracy was observed, especially with simpler models, the trade-off between interpretability and performance was deemed acceptable, particularly in fields that require transparency, such as healthcare, finance, and autonomous systems. The use of XAI in enterprise data systems has practical implications for fostering trust and enabling informed decision-making among stakeholders. The study also discusses the challenges and limitations of applying XAI techniques, including computational complexity, scalability, and model-specific constraints. Future research should focus on developing more scalable and efficient XAI methods, extending their applicability across model types, and addressing the challenges of real-time applications. These advances will be crucial for the widespread adoption of XAI in critical domains, promoting the ethical use of AI while maintaining predictive accuracy.