Abstract
This research examines how human-algorithm interaction mechanisms enhance trust, reliability, and user confidence in Decision Support Systems (DSS). Traditional DSS models often focus solely on algorithmic accuracy and performance, neglecting transparency and user engagement, factors essential for building trust. By incorporating explainable AI (XAI) techniques such as SHAP and LIME, real-time feedback mechanisms, and user-friendly interfaces, the study develops structured interaction models that improve the interpretability of AI-driven decisions. The results show that transparent decision-making processes and interactive features significantly enhance user trust, making DSS more reliable and easier to adopt. Users of systems that provided clear, understandable explanations of decisions, together with real-time updates on the system’s confidence, reported greater decision-making confidence, especially in high-stakes scenarios. These improvements led to stronger user engagement and wider adoption across domains, including healthcare and finance. The study also highlights the importance of balancing interpretability with efficiency in interface design to ensure both trust and usability. The findings inform the design of user-centric DSS that prioritize trust, interpretability, and cognitive factors, and provide a framework for integrating intelligent decision support into complex decision-making environments. Future research should refine these interaction models and explore their applicability across further sectors.
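As an illustration of the XAI techniques named above, the following is a minimal sketch that pairs a tree-based classifier with SHAP to attribute a single decision to its input features and to surface the model's confidence alongside the explanation. The synthetic data, model choice, and feature names are illustrative assumptions and do not reproduce the study's actual DSS pipeline.

```python
# Minimal sketch: per-decision feature attributions with SHAP.
# The synthetic data, model, and feature names below are illustrative
# assumptions, not the study's actual DSS pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "cholesterol", "glucose"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic ground truth

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:1])  # explain a single decision

# Older shap versions return a list per class; newer ones a 3-D array.
contrib = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# Per-feature contributions toward the positive class, plus the model's
# confidence, i.e. the kind of signal a DSS interface could show in real time.
for name, c in zip(feature_names, contrib):
    print(f"{name}: {c:+.3f}")
print(f"model confidence: {model.predict_proba(X[:1])[0].max():.2f}")
```

In a deployed DSS, such per-feature contributions and the accompanying confidence score would be rendered in the user interface at decision time, which is the interaction pattern the abstract associates with higher user trust.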