Toward Explainable AI for Cybersecurity: A NIST-Based Knowledge Graph for Transparent Semantic Reasoning

Journal of Information Technology and Computer Science
International Forum of Researchers and Lecturers (IFREL)

📄 Abstract

Explainable artificial intelligence (XAI) has become a critical requirement in cybersecurity due to the high-stakes nature of security decision-making and the limitations of black-box learning models. This study investigates the construction of an explainable cybersecurity knowledge representation by leveraging standardized terminology from the NIST cybersecurity glossary. The primary problem addressed is the lack of transparent and semantically grounded reasoning mechanisms in existing AI-driven cybersecurity systems, which limits trust, accountability, and analyst adoption. To address this challenge, we propose a NIST-based semantic knowledge graph that embeds explainability directly into its ontology structure and reasoning process. The proposed framework systematically extracts definitional entities and relations from NIST glossary entries to construct a domain ontology and a multi-relational knowledge graph. A rule-based semantic relation extraction method is employed to ensure faithful, interpretable, and reproducible reasoning paths. The resulting knowledge graph contains over 3,000 cybersecurity concepts and approximately 27,000 semantic relations, covering hierarchical, associative, dependency, and mitigation semantics. Experimental evaluation demonstrates that the proposed approach achieves a high level of explainability: 92.4% of reasoning outcomes are fully traceable, and only 1.4% are classified as non-traceable. Most explainable reasoning paths are limited to two or three hops, indicating an effective balance between inferential depth and human interpretability. Structural analysis further confirms the presence of meaningful hub concepts that support multi-hop semantic inference. These results confirm that ontology-driven, standard-based knowledge graphs provide a robust foundation for explainable cybersecurity intelligence. The study concludes that explainability-by-design, grounded in authoritative standards, offers a viable and trustworthy alternative to opaque AI models for cybersecurity applications.
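The pipeline summarized above (rule-based relation extraction from glossary definitions, followed by traceable multi-hop reasoning over the resulting graph) can be illustrated with a minimal sketch. All cue-phrase patterns, glossary entries, relation names, and function names below are hypothetical placeholders for illustration only; they do not reproduce the paper's actual rule set.

```python
import re
from collections import defaultdict

# Hypothetical lexical patterns mapping cue phrases in glossary-style
# definitions to semantic relation types (not the paper's actual rules).
RELATION_PATTERNS = [
    (r"is a type of ([\w ]+)", "is_a"),
    (r"mitigates ([\w ]+)", "mitigates"),
    (r"depends on ([\w ]+)", "depends_on"),
    (r"related to ([\w ]+)", "related_to"),
]

def extract_relations(term, definition):
    """Apply each rule pattern to one definition, yielding
    (head, relation, tail) triples for the knowledge graph."""
    triples = []
    for pattern, relation in RELATION_PATTERNS:
        for match in re.finditer(pattern, definition.lower()):
            triples.append((term, relation, match.group(1).strip()))
    return triples

def build_graph(triples):
    """Index triples as an adjacency list: head -> [(relation, tail), ...]."""
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
    return graph

def trace_paths(graph, start, goal, max_hops=3):
    """Depth-limited search returning every relation path from start to
    goal, so each inference can be shown as an explicit, auditable chain."""
    paths = []
    def dfs(node, path):
        if node == goal and path:
            paths.append(list(path))
            return
        if len(path) >= max_hops:
            return
        for rel, tail in graph.get(node, []):
            path.append((node, rel, tail))
            dfs(tail, path)
            path.pop()
    dfs(start, [])
    return paths
```

With toy entries such as `{"firewall": "is a type of boundary protection. mitigates network attack", "boundary protection": "depends on access control"}`, tracing from `firewall` to `access control` yields a single two-hop path (`firewall is_a boundary protection`, `boundary protection depends_on access control`), mirroring the short, human-readable reasoning chains the study reports.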

🔖 Keywords

Explainable AI; Cybersecurity Ontology; Knowledge Graph; Semantic Reasoning; NIST Standards

ℹ️ Publication Information

Publication Date
08 March 2026
Volume / Number / Year
Volume 2, Number 1, 2026

📝 HOW TO CITE

Pratama, Firman; Dahil, Irlon; Dien, Marion Erwin; Lase, Dewantoro, "Toward Explainable AI for Cybersecurity: A NIST-Based Knowledge Graph for Transparent Semantic Reasoning," Journal of Information Technology and Computer Science, vol. 2, no. 1, Mar. 2026.

