SciRepID - Scientific Publication Search


Publication Search

Complete collection of scientific articles — 15,551 publications available

15,551
Publications
385
Journals
1,447
Total Citations
33
Universities


Simarmata, Simon; Boru, Meiton

Journal of Information Technology and Computer Science 2026 Vol. 2 (1) International Forum of Researchers and Lecturers

Inconsistent terminology across cybersecurity frameworks undermines global governance and interoperability. The National Institute of Standards and Technology Cybersecurity Framework (NIST CSF 2.0) and ISO/IEC 27001:2022 share similar objectives but diverge semantically in defining risk, control, and resilience. This semantic gap causes difficulties in compliance mapping and automated policy translation. Research Objectives: This study aims to analyze the semantic similarity and divergence between NIST and ISO/IEC 27000 terminologies, identify conceptual structures influencing interoperability, and propose an AI-assisted foundation for harmonizing cybersecurity language globally. Methodology: A mixed-method semantic comparative design integrates Natural Language Processing (NLP) and ontology mapping. Using the nist_glossary.csv dataset and ISO vocabularies, terms were normalized and analyzed via cosine similarity using sentence-transformer embeddings. Ontological alignment was visualized through the Semantic Threat Graph (STG) and validated by certified experts using Cohen’s Kappa reliability tests. Results: From 672 term pairs, results show 40.9% high semantic equivalence, 38.8% partial overlap, and 20.3% semantic divergence. Strongest alignment appears in “Protect” and “Identify” domains, while divergences occur in governance and recovery-related terms. Ontology mapping revealed three conceptual clusters—Risk Governance, Technical Safeguards, and Organizational Readiness. Conclusions: Findings confirm a 79.7% total semantic alignment, indicating strong potential for harmonizing global cybersecurity standards. The study contributes an empirical model combining computational linguistics and AI-based ontology mapping to establish semantic interoperability, enabling unified cybersecurity governance and AI-driven compliance automation. Keywords: Semantic Interoperability; Ontology Mapping; Cybersecurity Frameworks; Terminology Alignment; AI Harmonization
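The similarity-and-bucketing step behind the reported 40.9% / 38.8% / 20.3% split can be sketched as follows. The embeddings and thresholds below are hypothetical stand-ins for the paper's sentence-transformer vectors and cut-offs, chosen only to illustrate the mechanism:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bucket(score, high=0.8, partial=0.5):
    """Bucket a similarity score into the three alignment classes."""
    if score >= high:
        return "high equivalence"
    if score >= partial:
        return "partial overlap"
    return "semantic divergence"

# Hypothetical 4-dimensional embeddings standing in for
# sentence-transformer vectors of NIST vs. ISO term definitions.
pairs = {
    ("risk", "risk"):           ([0.9, 0.1, 0.2, 0.4], [0.8, 0.2, 0.3, 0.5]),
    ("control", "safeguard"):   ([0.2, 0.9, 0.1, 0.3], [0.5, 0.5, 0.5, 0.0]),
    ("resilience", "recovery"): ([0.1, 0.2, 0.9, 0.1], [0.8, 0.1, 0.2, 0.6]),
}
for terms, (u, v) in pairs.items():
    print(terms, round(cosine(u, v), 3), bucket(cosine(u, v)))
```

The real pipeline would replace the toy vectors with model-generated embeddings of glossary definitions and calibrate the thresholds against expert judgments.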

Pratama, Firman; Dahil, Irlon; Dien, Marion Erwin; Lase, Dewantoro

Journal of Information Technology and Computer Science 2026 Vol. 2 (1) International Forum of Researchers and Lecturers

Explainable artificial intelligence (XAI) has become a critical requirement in cybersecurity due to the high-stakes nature of security decision-making and the limitations of black-box learning models. This study investigates the construction of an explainable cybersecurity knowledge representation by leveraging standardized terminology from the NIST cybersecurity glossary. The primary problem addressed is the lack of transparent and semantically grounded reasoning mechanisms in existing AI-driven cybersecurity systems, which limits trust, accountability, and analyst adoption. To address this challenge, we propose a NIST-based semantic knowledge graph that embeds explainability directly into its ontology structure and reasoning process. The proposed framework systematically extracts definitional entities and relations from NIST glossary entries to construct a domain ontology and a multi-relational knowledge graph. A rule-based semantic relation extraction method is employed to ensure faithful, interpretable, and reproducible reasoning paths. The resulting knowledge graph contains over 3,000 cybersecurity concepts and approximately 27,000 semantic relations, covering hierarchical, associative, dependency, and mitigation semantics. Experimental evaluation demonstrates that the proposed approach achieves a high level of explainability, with 92.4% of reasoning outcomes being fully traceable and only 1.4% classified as non-traceable. Most explainable reasoning paths are limited to two or three hops, indicating an effective balance between inferential depth and human interpretability. Structural analysis further confirms the presence of meaningful hub concepts that support multi-hop semantic inference. These results confirm that ontology-driven, standard-based knowledge graphs provide a robust foundation for explainable cybersecurity intelligence. 
The study concludes that explainability-by-design, grounded in authoritative standards, offers a viable and trustworthy alternative to opaque AI models for cybersecurity applications.
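The traceability property described above — every inference reducible to a short chain of typed edges — can be illustrated with a toy multi-relational graph. The concepts and edge types here are hypothetical, not drawn from the actual 3,000-concept graph:

```python
from collections import deque

# Toy multi-relational graph: invented NIST-style concepts with
# typed edges; each triple along a path is one explainable hop.
graph = {
    "phishing":           [("is_a", "social engineering")],
    "social engineering": [("exploits", "human factor")],
    "awareness training": [("mitigates", "social engineering")],
    "human factor":       [],
}

def reasoning_path(start, goal):
    """Breadth-first search returning the hop-labelled path, or None
    when no traceable reasoning path exists."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

path = reasoning_path("phishing", "human factor")
print(path, "->", len(path), "hops")
```

A two-hop result like this one sits inside the 2-3 hop band the paper reports as the sweet spot between inferential depth and interpretability.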

Sutrisno, Sutrisno; Winny, Purbaratri

Journal of Information Technology and Computer Science 2026 Vol. 2 (1) International Forum of Researchers and Lecturers

This study examines the application of Transparent Artificial Intelligence (AI) for fraud detection in public welfare programs using publicly available administrative data. Persistent challenges in welfare governance, such as misallocation, fraud, and data inaccuracy, necessitate analytical frameworks that are both effective and explainable. The research aims to design and evaluate an interpretable anomaly detection system capable of identifying irregularities in welfare distribution while maintaining transparency and accountability. Methodologically, the study employs two unsupervised models, Isolation Forest and Local Outlier Factor (LOF), to detect anomalies in sub-district-level welfare data, incorporating features such as population size, number of beneficiaries, and coverage ratio. An Explainable AI (XAI) framework integrating surrogate Random Forests, Permutation Feature Importance (PFI), and local linear surrogates (LIME-like) is applied to ensure interpretability of both global and local model behaviors. Findings reveal that receivers per 1000 population and percentage coverage are dominant determinants of anomaly scores. Fifteen administrative units were flagged for potential inconsistencies, suggesting over- or under-reporting of beneficiaries. Cross-validation between IF and LOF models confirmed consistency in identifying anomalous regions. The integrated XAI explanations enhance transparency, enabling policymakers and auditors to trace the rationale behind detected anomalies. In conclusion, the proposed Transparent AI framework demonstrates that combining anomaly detection with interpretability tools can strengthen accountability and fairness in welfare administration. It offers a reproducible, ethical, and data-driven approach to social program monitoring, reinforcing public trust and supporting responsible AI governance.
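The feature construction described above (receivers per 1000 population, percentage coverage) can be sketched with a simple robust-outlier flag. The records and threshold are invented, and the median/MAD rule stands in for, rather than reproduces, the paper's Isolation Forest and LOF pipeline:

```python
import statistics

# Hypothetical sub-district records: (name, population, beneficiaries).
records = [
    ("A", 10_000, 900), ("B", 12_000, 1_100), ("C", 8_000, 700),
    ("D", 11_000, 1_000), ("E", 9_500, 2_600),  # suspiciously high coverage
]

def features(pop, recv):
    """Receivers per 1000 population and percentage coverage."""
    return recv / pop * 1000, recv / pop * 100

rates = [features(p, r)[0] for _, p, r in records]
med = statistics.median(rates)
mad = statistics.median(abs(x - med) for x in rates)

def flagged(rate, k=5.0):
    """Robust outlier flag: deviation from the median beyond k * MAD.
    The multiplier k is illustrative, not taken from the paper."""
    return abs(rate - med) > k * mad

anomalies = [name for (name, p, r), rate in zip(records, rates) if flagged(rate)]
print(anomalies)
```

In the full framework the unsupervised detectors replace this hand-set rule, and the surrogate models then explain which features drove each flag.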

Widiastuti, Tiwuk; Richard, Berlien; Maryo Indra, Manjaruni

Journal of Information Technology and Computer Science 2026 Vol. 2 (1) International Forum of Researchers and Lecturers

High-dimensional clinical data exhibit complex and non-linear relationships among patient attributes, where outcomes are often influenced by feature interactions rather than isolated variables. However, many existing machine learning models prioritize predictive performance while providing limited interpretability and insufficient insight into interaction structures. This study aims to address this limitation by developing an interpretable and robust framework for feature interaction mining in clinical data. We propose a hybrid tree–neural modeling framework that explicitly captures and ranks feature interactions while maintaining stable predictive performance. Tree-based ensemble models are employed to identify non-linear interaction patterns, while neural representations enhance learning flexibility and generalization. The framework integrates interaction importance analysis, cross-validation–based stability assessment, and evaluation across multiple data splits to ensure robustness and interpretability. Experiments conducted on a real-world high-dimensional clinical dataset demonstrate that the proposed approach achieves consistent predictive performance, with AUC values ranging from 0.628 to 0.641 across five cross-validation folds (mean AUC ≈ 0.633). Performance remains stable under varying train–test splits, indicating strong generalizability. Interaction analysis reveals that a small number of dominant feature interactions—such as age combined with length of hospital stay and medication count combined with diagnostic information—consistently contribute to model predictions, appearing in over 80% of validation folds. Ablation studies further confirm that removing interaction-aware components leads to noticeable performance degradation, highlighting their importance.  In conclusion, this study demonstrates that explicit feature interaction modeling enhances interpretability, stability, and generalization in clinical prediction tasks. 
The proposed hybrid framework provides a reliable foundation for developing trustworthy and transparent clinical decision-support systems.
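The cross-fold stability criterion above — retaining interactions that appear in over 80% of validation folds — can be computed directly. The fold contents below are hypothetical; the real framework derives them from tree-ensemble interaction rankings:

```python
from collections import Counter

# Hypothetical per-fold sets of top-ranked feature interactions.
folds = [
    {("age", "length_of_stay"), ("med_count", "diagnosis"), ("age", "bmi")},
    {("age", "length_of_stay"), ("med_count", "diagnosis")},
    {("age", "length_of_stay"), ("med_count", "diagnosis"), ("sex", "bmi")},
    {("age", "length_of_stay"), ("med_count", "diagnosis")},
    {("age", "length_of_stay"), ("med_count", "diagnosis"), ("age", "bmi")},
]

counts = Counter(pair for fold in folds for pair in fold)

def stable(pair, threshold=0.8):
    """Keep interactions appearing in more than 80% of folds."""
    return counts[pair] / len(folds) > threshold

stable_pairs = sorted(p for p in counts if stable(p))
print(stable_pairs)
```

Filtering on fold frequency rather than single-run importance is what gives the reported interaction rankings their stability claim.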

Devianto, Yudo; Saragih, Rusmin; Cahyana, Yana

Journal of Information Technology and Computer Science 2026 Vol. 2 (1) International Forum of Researchers and Lecturers

This research benchmarks multiple machine learning (ML) algorithms for large-scale loan default prediction using a real-world dataset of 255,000 borrower records, where default cases represent only ~9–12% of total observations. The study addresses the persistent gap in comparative analyses of ML models that balance predictive accuracy, interpretability, and computational efficiency for credit risk assessment. Six algorithmic families were evaluated: Logistic Regression, Random Forest, XGBoost, LightGBM, CatBoost, Artificial Neural Networks (ANN), and Stacked Ensemble, using standardized preprocessing, hybrid imbalance handling (SMOTE, class weighting, under-sampling), and comprehensive evaluation metrics (AUC, F1, Recall, Precision, PR-AUC, and Brier Score). Empirical results show Logistic Regression achieved the highest AUC of 0.732, outperforming nonlinear models under the baseline configuration, while LightGBM attained perfect recall (1.0) but low precision (0.116), indicating over-prediction of defaults. Gradient boosting models demonstrated robust calibration (Brier ≈ 0.114–0.116) and the best computational efficiency, with LightGBM showing the fastest training and lowest memory use. CatBoost exhibited strong recall but the slowest computation, and ANN underperformed on tabular data (AUC ≈ 0.56). The Stacked Ensemble delivered balanced results with AUC = 0.664 and improved overall stability. These findings confirm that boosting-based models, particularly LightGBM and CatBoost, offer superior scalability and calibration, whereas Logistic Regression remains a valuable interpretable baseline. The study concludes that effective default prediction requires integrating rebalancing, calibration, and threshold optimization to enhance recall and operational deployment reliability in large-scale credit ecosystems.
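The calibration and thresholding metrics discussed above (Brier score, precision and recall at a decision threshold) reduce to a few lines. The probabilities and labels here are toy values that mirror only the class imbalance, not the study's data:

```python
def brier(probs, labels):
    """Brier score: mean squared error of predicted default probabilities."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def precision_recall(probs, labels, threshold=0.5):
    """Precision and recall after thresholding predicted probabilities."""
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy imbalanced sample (~10% defaults), mirroring the class ratio only.
labels = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
probs  = [0.05, 0.1, 0.2, 0.15, 0.6, 0.1, 0.05, 0.3, 0.2, 0.7]

print(round(brier(probs, labels), 4))
print(precision_recall(probs, labels, threshold=0.5))
```

Sweeping the threshold here reproduces the trade-off the study highlights: a low cut-off drives recall toward 1.0 at the cost of precision, which is exactly the LightGBM behaviour reported.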

Noe'man, Achmad; Samsinar; Wibowo, Agung

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

Recommender systems play a critical role in shaping user decisions across digital platforms; however, the increasing complexity of recommendation algorithms has raised serious concerns regarding transparency, trust, and accountability. This study focuses on enhancing the transparency of recommender systems by integrating Explainable Artificial Intelligence (XAI) techniques within a MovieLens-based recommendation framework. The primary problem addressed is the opacity of conventional recommendation models, which limits user understanding of why certain items are recommended and may reduce trust, perceived fairness, and system acceptance. Accordingly, the main objective of this research is to design and evaluate a hybrid explainable recommender system that balances predictive accuracy with human-understandable explanations. The proposed approach combines Matrix Factorization, feature-importance-aware neural networks, and knowledge graph embeddings to construct a robust recommendation model. To enhance explainability, multiple XAI strategies are integrated, including model-agnostic methods (LIME, SHAP, and CLIME), argumentation-based explanations, and context-aware personalized explanations. A comprehensive evaluation framework is employed, incorporating algorithmic metrics (accuracy, fidelity, robustness, counterfactual consistency, and fairness) alongside human-centered evaluations measuring trust, transparency, cognitive load, and perceived usefulness. Experimental results demonstrate that the knowledge graph–enhanced hybrid model achieves superior recommendation accuracy compared to baseline approaches. Moreover, context-aware explanations consistently outperform other methods in terms of fidelity, robustness, and user-perceived transparency, while argumentation-based explanations are found to be the most persuasive. CLIME offers a strong balance between technical stability and interpretability. 
The findings indicate that no single explainability technique is universally optimal; instead, hybrid and adaptive explanation strategies are most effective. In conclusion, this study confirms that human-centered, context-adaptive XAI significantly improves transparency and user trust in recommender systems, highlighting explainability as a fundamental component rather than an optional enhancement.

Rachmatika, Rinna; Desyani, Teti; Khoirudin

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

Diseases in primary health services exhibit complex spatial-temporal dynamics due to urbanization and population mobility. Conventional surveillance approaches struggle to capture these patterns adaptively. Machine learning (ML) based on spatio-temporal modeling offers a solution with the ability to detect disease clusters automatically and with high precision. Research Objectives: This research aims to develop a machine learning model to detect disease hotspots from primary service data in Indonesia, with a focus on improving prediction accuracy, interpretability, and relevance of health policies. Methodology: The primary service dataset for 2024 (5,343 entries) was analyzed using three ML models: Gradient Boosting Machine (GBM), Temporal Random Forest (TRF), and Multi-EigenSpot, with spatial (village) and temporal (week, month) features. Performance evaluation includes predictive (AUC, F1-score) and spatial (Moran's I, Spatio-Temporal Correlation Index) metrics. Results: The results showed that Multi-EigenSpot achieved the best performance (AUC=0.91; F1=0.86), with the detection of dominant hotspots in Sungai Asam and Beringin Villages. Moran's I value of 0.63 indicates a strong spatial autocorrelation, while STCI=0.57 indicates moderate temporal stability. Conclusions: ML-based spatio-temporal models are effective in identifying hidden disease patterns and have the potential to be integrated into national digital surveillance systems. This approach supports precision public health by providing a scientific basis for real-time location- and time-based intervention policies.
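Moran's I, the spatial-autocorrelation statistic reported above, can be computed from case counts and a neighbour-weight matrix. The villages, counts, and weights below are hypothetical:

```python
def morans_i(values, weights):
    """Moran's I for spatial autocorrelation.

    values  : attribute per region (e.g. weekly case counts)
    weights : dict (i, j) -> spatial weight for neighbouring regions
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Four hypothetical villages on a line: 0-1, 1-2, 2-3 are neighbours
# (symmetric binary weights); case counts cluster at one end.
cases = [30, 28, 5, 4]
w = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1, (2, 3): 1, (3, 2): 1}
print(round(morans_i(cases, w), 3))
```

A clearly positive value, as here, indicates that high-count villages neighbour other high-count villages, which is the clustering signal behind the study's reported I = 0.63.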

Sasmoko, Dani; Adi Supriyono, Lawrence; Wijanarko Adi Putra, Toni

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

End-to-end autonomous driving has emerged as a promising paradigm in which deep neural networks directly map raw visual inputs to continuous control actions. Despite its effectiveness, this approach suffers from limited transparency, posing significant challenges for deployment in safety-critical driving scenarios. This study addresses the lack of interpretability in vision-based end-to-end autonomous driving systems and aims to analyze model decision-making behavior under critical conditions such as sharp steering maneuvers and abrupt control transitions. To this end, an explainable end-to-end autonomous driving framework is proposed, combining a convolutional neural network trained via imitation learning with gradient-based visual attribution techniques, including Grad-CAM. The model predicts continuous steering, throttle, and braking commands directly from front-facing camera images, while explainability mechanisms are applied to reveal input regions influencing each control decision. Model performance is evaluated using both prediction accuracy and safety-oriented behavioral metrics. Experimental results show that the proposed explainable model achieves lower control prediction errors compared to a baseline end-to-end CNN, reducing steering mean squared error from 0.034 to 0.031, throttle error from 0.021 to 0.019, and brake error from 0.018 to 0.016. Moreover, safety-oriented analysis indicates improved driving stability, with steering variance reduced from 0.087 to 0.072 and abrupt control changes decreased from 14.6 to 10.3 events. Visual explanations consistently highlight road surfaces and lane-related structures during complex maneuvers, indicating reliance on semantically meaningful cues. In conclusion, the results demonstrate that integrating explainability into end-to-end autonomous driving not only preserves predictive performance but also correlates with smoother and more stable driving behavior. 
This framework contributes to the development of transparent and trustworthy autonomous driving systems suitable for safety-critical applications.
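The safety-oriented behavioural metrics above (steering variance, abrupt-control-change counts) reduce to simple statistics over the control signal. The steering trace and jump threshold below are invented for illustration:

```python
import statistics

# Hypothetical per-frame steering commands in [-1, 1].
steering = [0.00, 0.02, 0.05, 0.30, 0.28, 0.01, -0.02, -0.35, -0.30, 0.00]

def abrupt_changes(signal, threshold=0.2):
    """Count frame-to-frame jumps larger than the threshold."""
    return sum(
        1 for a, b in zip(signal, signal[1:]) if abs(b - a) > threshold
    )

print(statistics.pvariance(steering))  # driving-stability proxy
print(abrupt_changes(steering))        # abrupt-control-change events
```

Lower variance and fewer threshold-crossing jumps correspond to the smoother behaviour the paper reports (variance 0.087 to 0.072; abrupt events 14.6 to 10.3).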

Noronha, Marcelino Caetano; Dwiasnati, Saruni; Helena P Panjaitan, Cherlina

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

The rapid diffusion of Generative Artificial Intelligence (AI) has intensified public debate regarding its benefits, risks, and societal implications. This study investigates public sentiment and thematic structures surrounding Generative AI by analyzing Twitter discourse as a representation of large-scale, real-time public perception. The research addresses two main problems: how public sentiment toward Generative AI is distributed and what dominant themes shape this perception. Accordingly, the objective is to map both emotional polarity and thematic narratives embedded in social media conversations. A computational mixed-methods approach was employed using a dataset of 12,470 tweets collected on 17 December 2024. Sentiment classification was conducted using a transformer-based DistilBERT model, while semantic representations were generated with Sentence-BERT. Topic modeling was performed using BERTopic, integrating HDBSCAN clustering and class-based TF-IDF to extract coherent and interpretable topics. Human-in-the-loop validation supported the interpretive robustness of topic labeling. The findings reveal that public sentiment toward Generative AI is predominantly positive (41.8%), particularly in relation to productivity enhancement, education, and creative applications. Neutral sentiment (31.4%) reflects informational discourse, while negative sentiment (26.8%) centers on ethical concerns, privacy risks, misinformation, and AI hallucinations. Seven dominant topics were identified, with clear topic–sentiment alignment showing optimism in utility-driven themes and skepticism in ethics- and risk-related discussions. In conclusion, public perception of Generative AI is dualistic—characterized by strong enthusiasm alongside persistent caution. These results provide empirical insights for AI governance, responsible innovation, and future research on socio-technical impacts of Generative AI.

Sinaga, Rudolf; Frangky

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

The rapid expansion of cybersecurity standards and threat intelligence frameworks has led to significant semantic fragmentation among security terminologies, hindering effective information retrieval and interoperability across systems. Traditional keyword-based search approaches are inadequate for capturing the contextual meaning of security terms, particularly within formal frameworks such as NIST, MITRE ATT&CK, and CWE. This study addresses this challenge by proposing CyberBERT, a transformer-based semantic search framework designed to align cybersecurity terminologies through deep contextual representation and ontology-driven reasoning. Research Objectives: The primary objective of this research is to develop a semantic retrieval model capable of understanding conceptual relationships between security terms beyond lexical similarity. Methodology: The proposed methodology fine-tunes a BERT-based model on the NIST Glossary corpus using a combination of masked language modeling and triplet loss objectives to generate discriminative semantic embeddings. These embeddings are further aligned with cybersecurity ontologies, including MITRE ATT&CK and CWE, to enhance semantic consistency and explainability. Semantic retrieval is performed using cosine similarity within a 768-dimensional embedding space and evaluated using Mean Reciprocal Rank (MRR) and Precision@K metrics. Results: Experimental results demonstrate that CyberBERT achieves an MRR of 0.832, outperforming domain-adapted baselines such as SecureBERT and CyBERT. The integration of ontology alignment improves semantic accuracy by over 6%, while robustness evaluations confirm resilience against adversarial linguistic perturbations. Visualization using t-SNE reveals coherent semantic clustering aligned with the five core NIST Cybersecurity Framework functions.
Conclusions: In conclusion, CyberBERT effectively bridges semantic gaps across cybersecurity terminologies by combining transformer-based contextual learning with ontological reasoning. The framework offers a robust, interpretable, and scalable solution for semantic search, supporting improved interoperability and knowledge discovery in cybersecurity operations and standards harmonization.
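The retrieval metrics used in this evaluation, MRR and Precision@K, can be sketched in a few lines. The queries and gold items below are hypothetical, not taken from the NIST Glossary experiments:

```python
def mrr(ranked_results, relevant):
    """Mean Reciprocal Rank over queries.

    ranked_results : list of ranked result lists, one per query
    relevant       : list of the single relevant item per query
    """
    total = 0.0
    for results, gold in zip(ranked_results, relevant):
        for rank, item in enumerate(results, start=1):
            if item == gold:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

def precision_at_k(results, relevant_set, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for r in results[:k] if r in relevant_set) / k

# Hypothetical retrieval runs for three glossary queries.
runs = [
    ["firewall", "gateway", "proxy"],   # gold at rank 1
    ["malware", "virus", "worm"],       # gold at rank 2
    ["audit", "log", "event"],          # gold at rank 3
]
gold = ["firewall", "virus", "event"]
print(mrr(runs, gold))                  # (1 + 1/2 + 1/3) / 3
print(precision_at_k(runs[0], {"firewall", "proxy"}, 3))
```

In the paper's setting the ranked lists come from cosine similarity over the 768-dimensional embedding space; the metric code itself is model-agnostic.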

Ricky Imanuel Ndaumanu; Suprayuandi Pratama; Gulay Yusifli Elshad

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The increasing demand for cloud computing services has led to the rapid expansion of cloud data centers, which consume significant amounts of energy and contribute substantially to global CO2 emissions. As the IT industry grows, the environmental impact of these data centers becomes an urgent concern. Green Cloud Computing (GCC) has emerged as a solution to mitigate this impact by focusing on energy efficiency and reducing carbon footprints while maintaining the necessary functionality and performance of cloud infrastructures. However, traditional blockchain consensus algorithms such as Proof of Work (PoW) and Proof of Stake (PoS) face limitations regarding energy consumption and scalability, which exacerbates the environmental burden. This study proposes a quantum-inspired blockchain consensus algorithm designed to optimize energy consumption and reduce latency in cloud data centers. By integrating quantum principles such as superposition and entanglement, the algorithm enhances task scheduling and resource utilization, enabling more energy-efficient operations without sacrificing performance. Simulations in a green cloud environment showed that the quantum-inspired algorithm resulted in up to a 30% reduction in energy usage compared to traditional consensus methods, with a 40% improvement in consensus processing time. These results suggest that quantum-inspired algorithms hold significant potential for enhancing the sustainability of cloud infrastructures by improving energy efficiency and scalability. Furthermore, this study discusses the feasibility of implementing quantum-inspired algorithms on classical hardware, addressing challenges in scalability and integration into existing blockchain frameworks. The findings provide valuable insights into the potential of quantum-inspired technologies to drive energy-efficient solutions in cloud computing.

Anjun Dermawan; Efan Efan; Elay Yusifli Elshad

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The integration of Augmented Reality (AR) and Explainable AI (XAI) within Cyber-Physical Systems (CPS) design is transforming the industrial automation landscape. This study explores how combining AR’s immersive visualization with XAI’s decision transparency enhances collaborative design processes in CPS. The AR-XAI platform developed in this research improves team collaboration by offering real-time visual feedback and enabling interactive decision-making. The platform provides interpretable insights into AI-driven decisions, fostering trust among engineers and decision-makers. Key features of the platform include the ability to visualize complex CPS models, facilitating faster iterations, reducing design errors, and improving design accuracy. The integration of XAI ensures transparency in decision-making by offering clear explanations of AI predictions, which is essential for ensuring accountability and building trust in automated systems. Testing with industrial engineers confirmed that the AR-XAI platform significantly improved design outcomes, with a reduction in errors and enhanced team performance compared to traditional design methods. Moreover, the platform enabled faster decision-making and improved collaboration across diverse teams, demonstrating its potential to optimize CPS design workflows. This research provides valuable insights into the role of AR and XAI in advancing Industry 4.0 and paves the way for more advanced integrations of these technologies in industrial settings. Future research should focus on developing scalable solutions for various industrial applications and exploring more sophisticated AR-XAI integrations for emerging fields like smart cities and autonomous manufacturing.

Danang Danang; Febri Adi Prasetya; Rashad Huseynaga Asgarov

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The increasing integration and digitization of smart grid systems have exposed them to a variety of security threats, necessitating robust security measures to ensure their reliability and efficiency. This paper proposes a novel Digital Twin-Based Cyber-Physical Security Framework, incorporating AI-driven predictive maintenance and zero-trust architecture to address the evolving challenges of securing smart grids. By leveraging digital twin technology, this framework creates a real-time virtual representation of physical systems, enabling continuous monitoring and simulation for enhanced security and operational performance. Zero-trust security principles are integrated to ensure that no entity, whether inside or outside the network, is trusted by default, thus significantly reducing the risk of cyber-attacks. Additionally, AI-driven predictive maintenance enhances the framework’s reliability by proactively identifying potential failures before they occur, reducing downtime and improving system resilience. Through the development and simulation of this framework, including attack and failure scenarios, the paper demonstrates that the proposed system outperforms traditional methods in terms of anomaly detection, system downtime, and response times. The integration of predictive maintenance allows for early identification of component failures, thus enhancing the overall resilience of the grid. The zero-trust architecture further strengthens the cybersecurity posture, preventing unauthorized access and attacks. The study also identifies challenges, such as data synchronization and scalability, which must be addressed for broader implementation in large-scale smart grid systems. The findings suggest that the proposed framework could play a critical role in the future evolution of smart grid security, offering valuable insights for researchers and practitioners.  

Agustinus Suradi; Muhamad Aris Sunandar; Umna Iftikhar

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The integration of blockchain technology with Multi-Agent Reinforcement Learning (MARL) presents a promising solution for optimizing resource allocation and ensuring security in decentralized network environments, particularly in 5G and 6G network slicing. This research proposes a model that combines the security features of blockchain with the adaptive, decentralized decision-making capabilities of MARL. Blockchain ensures the integrity and transparency of resource allocation by providing a secure, tamper-proof ledger for transaction validation, while MARL allows agents to dynamically allocate resources based on real-time network conditions. The simulation results demonstrate significant improvements in resource allocation efficiency, fairness among users, and resilience to cyberattacks. By combining these two technologies, the proposed model overcomes many of the challenges posed by traditional centralized systems and offers an enhanced, secure, and fair solution for resource distribution in future mobile networks. However, scalability remains a challenge, especially in large-scale networks where transaction processing and consensus overhead can create bottlenecks. Additionally, training complexity in MARL models presents computational challenges, particularly in highly dynamic network environments. The model's performance trade-offs, including the balance between high security and system overhead, are also discussed. Future research should focus on optimizing blockchain consensus mechanisms to improve scalability and enhancing MARL model training techniques to reduce computational costs and improve real-time decision-making. This integration holds significant potential for revolutionizing resource allocation in 5G and 6G networks, enabling more efficient, secure, and fair management of network resources in the increasingly complex and decentralized digital ecosystem.

Genrawan Hoendarto; Thommy Willay; Pavan Kumar

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The rapid advancement of intelligent systems has accelerated the adoption of data-driven solutions across diverse industries, creating an increasing need for models that are both efficient and privacy-preserving. While traditional centralized machine learning approaches offer strong predictive capabilities, they often struggle with challenges related to data privacy, network latency, and computational inefficiency, especially in distributed environments with heterogeneous devices. To address these limitations, recent research has explored hybrid learning frameworks that integrate federated learning, edge computing, and dynamic model optimization techniques. These hybrid approaches enable models to process and learn from data closer to the source while maintaining stringent privacy requirements by keeping raw data localized. Additionally, the incorporation of pruning strategies, adaptive model compression, or multimodal data fusion contributes to improved speed, scalability, and accuracy in real-time inference tasks. Such frameworks have demonstrated notable promise in settings characterized by high data volume, operational complexity, and the necessity for fast anomaly detection or decision-making. However, despite these advancements, several challenges remain, including synchronization delays across edge nodes, variability in hardware capabilities, and the need for more efficient aggregation algorithms. Future developments may involve leveraging next-generation pruning techniques, energy-aware edge scheduling, decentralized orchestration protocols, or the integration of digital twin technologies to further enhance performance. Overall, hybrid distributed learning frameworks represent an important evolution toward more intelligent, secure, and autonomous computational ecosystems capable of supporting the next wave of smart applications.
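Federated averaging, the aggregation step at the heart of the frameworks surveyed above, can be sketched in a few lines: each edge node trains locally and only parameter vectors leave the device, never raw data. The node counts, parameters, and dataset sizes below are hypothetical:

```python
def fed_avg(client_params, client_sizes):
    """Aggregate client parameter vectors, weighted by local data size.
    This is the basic FedAvg rule; real systems add compression,
    stragglers handling, and secure aggregation on top."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(params[i] * size for params, size in zip(client_params, client_sizes))
        / total
        for i in range(dim)
    ]

# Three hypothetical edge nodes with differently sized local datasets.
params = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 300, 600]
print(fed_avg(params, sizes))
```

Weighting by local dataset size is what lets heterogeneous devices contribute proportionally; the aggregation-efficiency challenges the text mentions arise when this step must run over many unreliable nodes.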

Jarot Dian Susatyono; Sofiansyah Fadli; G Thippanna

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

The integration of autonomous systems in traffic management has become increasingly important as urban populations and vehicle numbers continue to rise, leading to significant congestion. Traditional traffic signal control systems, which rely on fixed timing, are no longer sufficient to handle the dynamic and complex nature of urban traffic. To address these challenges, the proposed explainable Deep Reinforcement Learning (DRL) framework aims to optimize traffic signal control by dynamically adjusting traffic signals based on real-time data. This approach enhances traffic flow efficiency, reduces congestion, and improves overall system performance. The framework leverages Vehicle-to-Everything (V2X) communication, which enables real-time data exchange between vehicles, infrastructure, and other road users, extending the perception range of autonomous vehicles and providing valuable insights for traffic signal optimization. Additionally, the integration of smart infrastructure, such as smart intersections, plays a crucial role in enabling adaptive traffic management and facilitating better coordination across multiple intersections. One of the key advantages of the proposed system is its transparency, achieved through the implementation of explainable AI (XAI) techniques. These mechanisms provide clear insights into the decision-making processes, ensuring that traffic management authorities and system users can understand the rationale behind the system’s decisions. Although challenges such as data accuracy, scalability, and cybersecurity risks remain, the proposed DRL framework shows great promise in revolutionizing traffic management systems. Future research directions include enhancing data collection methods, improving the scalability of the system for larger cities, and further developing explainability features to improve trust and adoption in real-world applications.
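The core idea of learning signal timings from feedback, rather than fixed schedules, can be sketched with tabular Q-learning on a toy two-approach intersection. The state and action names, reward values, and learning rates below are illustrative assumptions; the framework in the abstract uses deep RL over real-time V2X data, which this sketch does not attempt to reproduce.

```python
import random

random.seed(0)
states = ("ns_longer", "ew_longer")   # which approach currently has the longer queue
actions = ("green_ns", "green_ew")    # which direction receives the green phase
q_table = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(state, action):
    # Serving the longer queue discharges more vehicles, so it earns +1.
    good = {("ns_longer", "green_ns"), ("ew_longer", "green_ew")}
    return 1.0 if (state, action) in good else -1.0

for _ in range(2000):
    state = random.choice(states)
    if random.random() < epsilon:                     # explore
        action = random.choice(actions)
    else:                                             # exploit best known action
        action = max(actions, key=lambda a: q_table[(state, a)])
    next_state = random.choice(states)                # queues evolve randomly here
    target = reward(state, action) + gamma * max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (target - q_table[(state, action)])

policy = {s: max(actions, key=lambda a: q_table[(s, a)]) for s in states}
print(policy)
```

Because the Q-table is small and explicit, the learned policy can be inspected directly, a property that motivates the explainability goal the abstract raises for deep variants.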

Benny Martha Dinata; Ahmad Budi Trisnawan; Eram Abbasi

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This research focuses on the development and evaluation of an Adaptive Edge-AI framework designed to optimize real-time data processing and decision-making in resource-constrained environments, specifically within smart city infrastructures. The primary problem addressed is the challenge of minimizing latency, reducing energy consumption, and ensuring the reliability of Cyber-Physical Systems (CPS) when using Internet of Things (IoT) devices. The objective of the study is to assess the effectiveness of this framework in real-world smart city applications such as traffic monitoring, environmental sensing, and smart utilities management. The proposed method integrates lightweight AI models, edge computing, and adaptive resource management techniques, including Federated Learning and Neural Architecture Search, to ensure optimal performance while addressing hardware constraints. The main findings reveal that the framework significantly improves real-time inference speed, reduces energy consumption of IoT devices, and enhances CPS reliability by minimizing communication delays and ensuring continuous system operation despite network disruptions. The application of this framework to smart transportation and urban utilities further demonstrates its potential to optimize city management processes. The study concludes that the Adaptive Edge-AI framework offers a promising solution for smart cities, enhancing operational efficiency, sustainability, and resilience. It is recommended for integration into smart city infrastructures to enable better resource management and decision-making in real-time applications.
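One recurring decision in the edge-versus-cloud trade-off the abstract describes is where to run an inference task. A minimal latency-based placement rule can be sketched as follows; the function name, latency model, and all numeric parameters are illustrative assumptions, and a real framework would also weigh energy budgets and device load.

```python
def place_task(input_bytes, edge_ms_per_kb, cloud_ms, uplink_kbps):
    """Pick the execution site with the lower estimated end-to-end latency."""
    edge_latency = (input_bytes / 1024) * edge_ms_per_kb   # slower local inference
    transfer_ms = input_bytes * 8 / uplink_kbps            # bits / (kbit/s) = ms
    cloud_latency = transfer_ms + cloud_ms                 # upload + fast remote inference
    return "edge" if edge_latency <= cloud_latency else "cloud"

# Modest payload over a slow uplink: keeping the task local avoids transfer cost.
print(place_task(10_240, 5, 10, 1_000))
# Heavy model on a weak device with a fast uplink: offloading to the cloud wins.
print(place_task(10_240, 50, 10, 100_000))
```

Extending the rule with per-device energy estimates would move the sketch closer to the adaptive resource management the study evaluates.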

Eka Prasetya Adhy Sugara; Nurul Azwanti; Ivy Derla

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This paper explores the application of quantum-inspired optimization algorithms in the training of large-scale Graph Neural Networks (GNNs) within distributed cloud-edge environments. GNNs have gained significant attention due to their ability to model complex relationships in graph-structured data, yet their training presents challenges such as high computational demand, inefficient resource allocation, and slow convergence, especially for large datasets. Traditional meta-heuristic algorithms, while useful, often face scalability and performance issues when applied to such large-scale tasks. To address these challenges, we propose a quantum-inspired meta-heuristic algorithm that leverages quantum principles, such as superposition and entanglement, to enhance optimization processes. The algorithm was integrated into a hybrid cloud-edge system, where computational tasks are dynamically distributed between edge nodes and the cloud, optimizing resource utilization and reducing latency. Our experimental results demonstrate significant improvements in training speed, resource efficiency, and convergence rate when compared to traditional optimization methods such as Genetic Algorithms and Simulated Annealing. The quantum-inspired algorithm not only accelerates the training process but also reduces memory usage, making it well-suited for large-scale GNN applications. Furthermore, the system's scalability was enhanced by the hybrid cloud-edge architecture, which balances computational load and enables real-time data processing. The findings suggest that quantum-inspired optimization algorithms can significantly improve the training of GNNs in distributed systems, opening new avenues for real-time applications in areas such as social network analysis, anomaly detection, and recommendation systems. Future work will focus on refining these algorithms to handle even larger datasets and more complex GNN architectures, with potential integration into edge devices for enhanced real-time decision-making.
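The "superposition" idea behind quantum-inspired meta-heuristics can be illustrated with a classical quantum-inspired evolutionary algorithm: each bit is encoded as an angle whose sine-squared gives the probability of observing a 1, and a rotation gate nudges the population toward the best solution found. The OneMax objective, population size, and rotation step below are illustrative assumptions standing in for the (much harder) GNN hyperparameter search described in the abstract.

```python
import math
import random

random.seed(1)
n, pop, gens = 12, 8, 60

def fitness(bits):
    """Toy objective: maximize the number of 1-bits (OneMax)."""
    return sum(bits)

# Each "qubit" is an angle theta with P(bit = 1) = sin^2(theta); starting at
# 45 degrees puts every bit in an equal superposition of 0 and 1.
theta = [math.pi / 4] * n
best_bits, best_fit = None, -1
for _ in range(gens):
    for _ in range(pop):
        # "Observe" the qubits, collapsing them into a concrete bitstring.
        bits = [1 if random.random() < math.sin(t) ** 2 else 0 for t in theta]
        f = fitness(bits)
        if f > best_fit:
            best_bits, best_fit = bits, f
    # Rotation gate: shift each qubit's amplitude toward the best solution,
    # clamped away from 0 and pi/2 to keep a little exploration alive.
    delta = 0.05 * math.pi
    theta = [min(max(t + (delta if b == 1 else -delta), 0.01), math.pi / 2 - 0.01)
             for t, b in zip(theta, best_bits)]
print(best_fit)
```

In the distributed setting the abstract targets, the observation step (sampling and evaluating candidates) is what would be fanned out across edge nodes, with the cloud applying the rotation update.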

Mutiara S. Simanjuntak; Aji Priyambodo; Elshad Yusifov

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This study explores the integration of blockchain technology with federated learning (FL) to enhance cross-organizational healthcare analytics while ensuring privacy and data security. Federated learning allows multiple institutions to collaboratively train machine learning models without sharing sensitive patient data. Instead, local data is used to train models, and only model parameters are exchanged. However, privacy concerns and data sharing inefficiencies have hindered broader healthcare collaboration. Blockchain, a decentralized ledger technology, addresses these concerns by ensuring data integrity and transparency, providing an immutable and tamper-proof record of all transactions. This study investigates how the combination of blockchain and federated learning can overcome these challenges, facilitating secure and efficient data sharing between healthcare institutions. The study uses synthetic multi-institution healthcare datasets to simulate real-world collaboration scenarios. The blockchain-enabled federated learning system ensures that no raw patient data is shared, significantly reducing the risk of privacy breaches while still allowing healthcare institutions to collaborate on predictive model development. The results show that while there is a slight decrease in model accuracy compared to centralized methods, the trade-off is outweighed by the privacy and security benefits. Blockchain's integration ensures that model updates are transparent, enhancing trust between institutions and reducing concerns about data integrity. Moreover, the use of blockchain's smart contracts automates and enforces compliance, further streamlining collaboration. This research contributes to the field by demonstrating how blockchain-integrated federated learning can create a secure, scalable, and privacy-preserving framework for collaborative healthcare analytics. The findings underscore the potential for this approach to enhance healthcare outcomes and improve decision-making across institutions while ensuring patient data protection.
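The tamper-evidence property the abstract attributes to blockchain can be shown with a minimal hash-chained ledger of model-update digests. The `UpdateLedger` class, hospital identifiers, and parameter lists are illustrative assumptions; only a digest of each update is recorded, never raw patient data, mirroring the privacy argument in the study.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class UpdateLedger:
    """Append-only chain recording hashes of federated model updates."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "update_digest": "genesis"}]

    def record(self, institution, model_params):
        digest = hashlib.sha256(repr(model_params).encode()).hexdigest()
        self.chain.append({"index": len(self.chain),
                           "prev": block_hash(self.chain[-1]),
                           "institution": institution,
                           "update_digest": digest})

    def verify(self):
        """True iff every block still points at the hash of its predecessor."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = UpdateLedger()
ledger.record("hospital_a", [0.12, -0.40])
ledger.record("hospital_b", [0.10, -0.38])
print(ledger.verify())                     # chain is intact
ledger.chain[1]["institution"] = "tampered"
print(ledger.verify())                     # retroactive edit breaks the chain
```

Because any retroactive edit invalidates every later block's `prev` pointer, institutions can audit exactly which updates entered the global model, which is the trust mechanism the abstract relies on.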

Atika Mutiarachim; Royke Lantupa Kumowal; Nigar Aliyeva

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This study explores the development and application of a digital twin-driven cybersecurity risk assessment model for Industrial Internet of Things (IIoT) networks. The increasing complexity and interconnectivity of IIoT systems have expanded the attack surface, making them vulnerable to a wide range of cyber threats. The digital twin model addresses this challenge by creating real-time virtual replicas of physical systems, which can simulate and predict network vulnerabilities and attack vectors. The model uses machine learning algorithms and real-time data to simulate cyberattacks, including Distributed Denial of Service (DDoS), malware, and data breaches. By providing continuous monitoring and dynamic risk predictions, the digital twin model enhances the resilience of IIoT networks compared to traditional cybersecurity frameworks. The findings indicate that the model's ability to predict potential cyber threats and simulate various attack scenarios provides a more proactive and accurate approach to cybersecurity in IIoT environments. Additionally, the study highlights key mitigation strategies, including adaptive security mechanisms, real-time anomaly detection, and the use of lightweight encryption for resource-constrained devices. Despite its effectiveness, challenges such as computational requirements, integration with legacy systems, and scalability were identified. This research underscores the strategic importance of digital twin models in securing IIoT systems and advancing Manufacturing 4.0 ecosystems. Future research should focus on enhancing model accuracy, expanding its application to diverse industrial sectors, and improving interoperability with legacy systems to further strengthen the security posture of IIoT networks.
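A digital twin's value for security monitoring comes from comparing live telemetry against the twin's baseline behavior. A minimal sketch of that comparison is a z-score check on a traffic metric; the baseline readings, the 3-sigma threshold, and the packets-per-second framing are illustrative assumptions standing in for the ML-driven DDoS simulation described in the abstract.

```python
import statistics

# Baseline packets/s observed by the twin under normal IIoT operation.
baseline = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_anomalous(reading, k=3.0):
    """Flag a reading that deviates more than k standard deviations from baseline."""
    return abs(reading - mu) > k * sigma

print(is_anomalous(104))   # within normal fluctuation
print(is_anomalous(450))   # simulated DDoS-style traffic spike
```

A production twin would replace the static baseline with a continuously updated model of the physical system, but the detection principle of flagging divergence between twin and reality is the same.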