

Explainable End-to-End Autonomous Driving Using Vision-Based Deep Learning in Safety-Critical Scenarios

Journal of Information Technology and Computer Science
International Forum of Researchers and Lecturers (IFREL)

📄 Abstract

End-to-end autonomous driving has emerged as a promising paradigm in which deep neural networks directly map raw visual inputs to continuous control actions. Despite its effectiveness, this approach suffers from limited transparency, posing significant challenges for deployment in safety-critical driving scenarios. This study addresses the lack of interpretability in vision-based end-to-end autonomous driving systems and aims to analyze model decision-making behavior under critical conditions such as sharp steering maneuvers and abrupt control transitions. To this end, an explainable end-to-end autonomous driving framework is proposed, combining a convolutional neural network trained via imitation learning with gradient-based visual attribution techniques, including Grad-CAM. The model predicts continuous steering, throttle, and braking commands directly from front-facing camera images, while explainability mechanisms are applied to reveal input regions influencing each control decision. Model performance is evaluated using both prediction accuracy and safety-oriented behavioral metrics. Experimental results show that the proposed explainable model achieves lower control prediction errors compared to a baseline end-to-end CNN, reducing steering mean squared error from 0.034 to 0.031, throttle error from 0.021 to 0.019, and brake error from 0.018 to 0.016. Moreover, safety-oriented analysis indicates improved driving stability, with steering variance reduced from 0.087 to 0.072 and abrupt control changes decreased from 14.6 to 10.3 events. Visual explanations consistently highlight road surfaces and lane-related structures during complex maneuvers, indicating reliance on semantically meaningful cues. In conclusion, the results demonstrate that integrating explainability into end-to-end autonomous driving not only preserves predictive performance but also correlates with smoother and more stable driving behavior. This framework contributes to the development of transparent and trustworthy autonomous driving systems suitable for safety-critical applications.
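The abstract names Grad-CAM as the attribution technique used to reveal which image regions influence each control output, but gives no implementation details. As a minimal illustrative sketch (not the paper's code), the core Grad-CAM computation can be written with plain NumPy: given the activations of a chosen convolutional layer and the gradients of a control output (e.g., steering) with respect to those activations, channel weights are obtained by global-average-pooling the gradients, and the heat map is the ReLU of the weighted sum of feature maps. The function name and array shapes here are assumptions for illustration.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from conv-layer activations and output gradients.

    feature_maps: (K, H, W) activations A^k of the chosen conv layer.
    gradients:    (K, H, W) gradients of a control output w.r.t. A^k.
    Returns a (H, W) heat map normalised to [0, 1].
    """
    # Channel importance weights alpha_k: global average pool of gradients.
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted combination of feature maps; ReLU keeps positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise for overlay on the input camera frame.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In a full pipeline the gradients would come from backpropagating the predicted steering (or throttle/brake) value through the CNN; upsampling the resulting map to the camera-image resolution yields the road- and lane-highlighting visualisations described in the abstract.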

🔖 Keywords

End-to-End Driving; Explainable AI; Vision-Based Control; Safety-Critical Systems; Grad-CAM Analysis

ℹ️ Publication Information

Publication Date
29 December 2025
Volume / Number / Year
Volume 1, Number 4, 2025

📝 HOW TO CITE

Sasmoko, Dani; Adi Supriyono, Lawrence; Wijanarko Adi Putra, Toni, "Explainable End-to-End Autonomous Driving Using Vision-Based Deep Learning in Safety-Critical Scenarios," Journal of Information Technology and Computer Science, vol. 1, no. 4, Dec. 2025.

