SciRepID - Scientific Publication Search


Publication Search

Complete collection of scientific articles — 15,551 publications available

15,551 Publications · 385 Journals · 1,447 Total Citations · 33 Universities



Noe'man, Achmad; Samsinar; Wibowo, Agung

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

Recommender systems play a critical role in shaping user decisions across digital platforms; however, the increasing complexity of recommendation algorithms has raised serious concerns regarding transparency, trust, and accountability. This study focuses on enhancing the transparency of recommender systems by integrating Explainable Artificial Intelligence (XAI) techniques within a MovieLens-based recommendation framework. The primary problem addressed is the opacity of conventional recommendation models, which limits user understanding of why certain items are recommended and may reduce trust, perceived fairness, and system acceptance. Accordingly, the main objective of this research is to design and evaluate a hybrid explainable recommender system that balances predictive accuracy with human-understandable explanations. The proposed approach combines Matrix Factorization, feature-importance-aware neural networks, and knowledge graph embeddings to construct a robust recommendation model. To enhance explainability, multiple XAI strategies are integrated, including model-agnostic methods (LIME, SHAP, and CLIME), argumentation-based explanations, and context-aware personalized explanations. A comprehensive evaluation framework is employed, incorporating algorithmic metrics (accuracy, fidelity, robustness, counterfactual consistency, and fairness) alongside human-centered evaluations measuring trust, transparency, cognitive load, and perceived usefulness. Experimental results demonstrate that the knowledge graph–enhanced hybrid model achieves superior recommendation accuracy compared to baseline approaches. Moreover, context-aware explanations consistently outperform other methods in terms of fidelity, robustness, and user-perceived transparency, while argumentation-based explanations are found to be the most persuasive. CLIME offers a strong balance between technical stability and interpretability. 
The findings indicate that no single explainability technique is universally optimal; instead, hybrid and adaptive explanation strategies are most effective. In conclusion, this study confirms that human-centered, context-adaptive XAI significantly improves transparency and user trust in recommender systems, highlighting explainability as a fundamental component rather than an optional enhancement.
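The Matrix Factorization component of such a hybrid can be sketched in a few lines of NumPy. This is a minimal illustration of the technique named in the abstract, not the authors' implementation; the toy ratings matrix, latent rank, and learning rate are assumptions:

```python
import numpy as np

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Matrix factorization by SGD on observed entries (NaN = unobserved)."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors
    obs = [(u, i) for u in range(n_users) for i in range(n_items)
           if not np.isnan(R[u, i])]
    for _ in range(steps):
        for u, i in obs:
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy MovieLens-style ratings (rows: users, cols: movies; NaN = unrated).
R = np.array([[5, 4, np.nan, 1],
              [4, np.nan, 1, 1],
              [1, 1, 5, 4],
              [np.nan, 1, 4, 5]], dtype=float)
P, Q = factorize(R)
pred = P @ Q.T  # dense prediction matrix, including the unrated cells
```

The learned factor vectors `P[u]` and `Q[i]` are also what model-agnostic explainers such as LIME or SHAP would probe to attribute a recommendation to latent features.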

Rachmatika, Rinna; Desyani, Teti; Khoirudin

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

Diseases in primary health services exhibit complex spatial-temporal dynamics due to urbanization and population mobility. Conventional surveillance approaches struggle to capture these patterns adaptively. Machine learning (ML) based on spatio-temporal modeling offers a solution with the ability to detect disease clusters automatically and with high precision. Research Objectives: This research aims to develop a machine learning model to detect disease hotspots from primary service data in Indonesia, with a focus on improving prediction accuracy, interpretability, and relevance of health policies. Methodology: The primary service dataset for 2024 (5,343 entries) was analyzed using three ML models, Gradient Boosting Machine (GBM), Temporal Random Forest (TRF), and Multi-EigenSpot, with spatial (village) and temporal (week, month) features. Performance evaluation includes predictive (AUC, F1-score) and spatial (Moran's I, Spatio-Temporal Correlation Index) metrics. Results: The results showed that Multi-EigenSpot achieved the best performance (AUC=0.91; F1=0.86), with the detection of dominant hotspots in Sungai Asam and Beringin Villages. Moran's I value of 0.63 indicates a strong spatial autocorrelation, while STCI=0.57 indicates moderate temporal stability. Conclusions: ML-based spatio-temporal models are effective in identifying hidden disease patterns and have the potential to be integrated into national digital surveillance systems. This approach supports precision public health by providing a scientific basis for real-time location- and time-based intervention policies.
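Global Moran's I, used above to quantify spatial autocorrelation, is straightforward to compute from a spatial weight matrix; a minimal NumPy sketch, where the toy village adjacency and values are assumptions for illustration:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: I = n * sum_ij w_ij z_i z_j / (W * sum_i z_i^2)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()               # deviations from the mean
    n = len(x)
    num = n * (z @ w @ z)          # n * sum_ij w_ij z_i z_j
    den = w.sum() * (z @ z)        # W * sum_i z_i^2
    return num / den

# Four villages along a road; binary adjacency weights (shared border = 1).
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
clustered = [10, 9, 1, 0]      # similar values adjacent -> I > 0 (hotspot pattern)
alternating = [10, 0, 10, 0]   # dissimilar neighbors   -> I < 0
```

A value near 0.63, as reported for the study area, would indicate strongly clustered case counts rather than a random spatial arrangement.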

Sasmoko, Dani; Adi Supriyono, Lawrence; Wijanarko Adi Putra, Toni

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

End-to-end autonomous driving has emerged as a promising paradigm in which deep neural networks directly map raw visual inputs to continuous control actions. Despite its effectiveness, this approach suffers from limited transparency, posing significant challenges for deployment in safety-critical driving scenarios. This study addresses the lack of interpretability in vision-based end-to-end autonomous driving systems and aims to analyze model decision-making behavior under critical conditions such as sharp steering maneuvers and abrupt control transitions. To this end, an explainable end-to-end autonomous driving framework is proposed, combining a convolutional neural network trained via imitation learning with gradient-based visual attribution techniques, including Grad-CAM. The model predicts continuous steering, throttle, and braking commands directly from front-facing camera images, while explainability mechanisms are applied to reveal input regions influencing each control decision. Model performance is evaluated using both prediction accuracy and safety-oriented behavioral metrics. Experimental results show that the proposed explainable model achieves lower control prediction errors compared to a baseline end-to-end CNN, reducing steering mean squared error from 0.034 to 0.031, throttle error from 0.021 to 0.019, and brake error from 0.018 to 0.016. Moreover, safety-oriented analysis indicates improved driving stability, with steering variance reduced from 0.087 to 0.072 and abrupt control changes decreased from 14.6 to 10.3 events. Visual explanations consistently highlight road surfaces and lane-related structures during complex maneuvers, indicating reliance on semantically meaningful cues. In conclusion, the results demonstrate that integrating explainability into end-to-end autonomous driving not only preserves predictive performance but also correlates with smoother and more stable driving behavior. 
This framework contributes to the development of transparent and trustworthy autonomous driving systems suitable for safety-critical applications.
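The behavioral metrics reported above (control MSE, steering variance, abrupt-change counts) can be computed directly from command sequences; a minimal sketch in plain Python, with an assumed jerk threshold for what counts as an "abrupt" change:

```python
def control_metrics(pred, target, jerk_threshold=0.3):
    """MSE vs. target commands, variance of predictions, and count of
    abrupt changes between consecutive commands (threshold is an assumption)."""
    n = len(pred)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    mean = sum(pred) / n
    var = sum((p - mean) ** 2 for p in pred) / n
    abrupt = sum(1 for a, b in zip(pred, pred[1:]) if abs(b - a) > jerk_threshold)
    return mse, var, abrupt

# Hypothetical steering traces for one maneuver.
mse, var, abrupt = control_metrics([0.0, 0.1, 0.5, 0.4], [0.0, 0.0, 0.4, 0.4])
```

The same three quantities correspond to the paper's reported improvements (steering MSE 0.034 to 0.031, variance 0.087 to 0.072, abrupt events 14.6 to 10.3).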

Noronha, Marcelino Caetano; Dwiasnati, Saruni; Helena P Panjaitan, Cherlina

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

The rapid diffusion of Generative Artificial Intelligence (AI) has intensified public debate regarding its benefits, risks, and societal implications. This study investigates public sentiment and thematic structures surrounding Generative AI by analyzing Twitter discourse as a representation of large-scale, real-time public perception. The research addresses two main problems: how public sentiment toward Generative AI is distributed and what dominant themes shape this perception. Accordingly, the objective is to map both emotional polarity and thematic narratives embedded in social media conversations. A computational mixed-methods approach was employed using a dataset of 12,470 tweets collected on 17 December 2024. Sentiment classification was conducted using a transformer-based DistilBERT model, while semantic representations were generated with Sentence-BERT. Topic modeling was performed using BERTopic, integrating HDBSCAN clustering and class-based TF-IDF to extract coherent and interpretable topics. Human-in-the-loop validation supported the interpretive robustness of topic labeling. The findings reveal that public sentiment toward Generative AI is predominantly positive (41.8%), particularly in relation to productivity enhancement, education, and creative applications. Neutral sentiment (31.4%) reflects informational discourse, while negative sentiment (26.8%) centers on ethical concerns, privacy risks, misinformation, and AI hallucinations. Seven dominant topics were identified, with clear topic–sentiment alignment showing optimism in utility-driven themes and skepticism in ethics- and risk-related discussions. In conclusion, public perception of Generative AI is dualistic—characterized by strong enthusiasm alongside persistent caution. These results provide empirical insights for AI governance, responsible innovation, and future research on socio-technical impacts of Generative AI.
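The class-based TF-IDF used by BERTopic reweights each term's in-class frequency by log(1 + A / f_t), where A is the average word count per class and f_t the term's frequency across all classes; a minimal sketch with toy tweet classes (the example texts are assumptions, not the study's data):

```python
import math
from collections import Counter

def c_tf_idf(class_docs):
    """Class-based TF-IDF as in BERTopic: weight(t, c) = tf_{t,c} * log(1 + A / f_t)."""
    class_tf = {c: Counter(" ".join(docs).split()) for c, docs in class_docs.items()}
    total_tf = Counter()
    for tf in class_tf.values():
        total_tf.update(tf)
    A = sum(total_tf.values()) / len(class_tf)  # average words per class
    return {c: {t: n * math.log(1 + A / total_tf[t]) for t, n in tf.items()}
            for c, tf in class_tf.items()}

# Two hypothetical topic clusters of tweets about Generative AI.
topics = {
    "utility": ["ai boosts productivity", "ai helps education"],
    "risk": ["ai privacy risk", "ai misinformation risk"],
}
scores = c_tf_idf(topics)
```

Terms distinctive to a class (here "risk") outscore terms shared across classes (here "ai"), which is what makes the extracted topic labels interpretable.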

Sinaga, Rudolf; Frangky

Journal of Information Technology and Computer Science 2025 Vol. 1 (4) International Forum of Researchers and Lecturers

The rapid expansion of cybersecurity standards and threat intelligence frameworks has led to significant semantic fragmentation among security terminologies, hindering effective information retrieval and interoperability across systems. Traditional keyword-based search approaches are inadequate for capturing the contextual meaning of security terms, particularly within formal frameworks such as NIST, MITRE ATT&CK, and CWE. This study addresses this challenge by proposing CyberBERT, a transformer-based semantic search framework designed to align cybersecurity terminologies through deep contextual representation and ontology-driven reasoning. Research Objectives: The primary objective of this research is to develop a semantic retrieval model capable of understanding conceptual relationships between security terms beyond lexical similarity. Methodology: The proposed methodology fine-tunes a BERT-based model on the NIST Glossary corpus using a combination of masked language modeling and triplet loss objectives to generate discriminative semantic embeddings. These embeddings are further aligned with cybersecurity ontologies, including MITRE ATT&CK and CWE, to enhance semantic consistency and explainability. Semantic retrieval is performed using cosine similarity within a 768-dimensional embedding space and evaluated using Mean Reciprocal Rank (MRR) and Precision@K metrics. Results: Experimental results demonstrate that CyberBERT achieves an MRR of 0.832, outperforming domain-adapted baselines such as SecureBERT and CyBERT. The integration of ontology alignment improves semantic accuracy by over 6%, while robustness evaluations confirm resilience against adversarial linguistic perturbations. Visualization using t-SNE reveals coherent semantic clustering aligned with the five core NIST Cybersecurity Framework functions.
Conclusions: In conclusion, CyberBERT effectively bridges semantic gaps across cybersecurity terminologies by combining transformer-based contextual learning with ontological reasoning. The framework offers a robust, interpretable, and scalable solution for semantic search, supporting improved interoperability and knowledge discovery in cybersecurity operations and standards harmonization.
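The retrieval metrics used here, MRR and Precision@K, have simple definitions; a minimal sketch with a hypothetical two-query evaluation (the query and document identifiers are assumptions):

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR over queries: average of 1/rank of the first relevant hit (0 if none)."""
    total = 0.0
    for qid, ranking in ranked_lists.items():
        rr = 0.0
        for rank, doc in enumerate(ranking, start=1):
            if doc in relevant[qid]:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)

def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for d in ranking[:k] if d in relevant) / k

# First relevant document at rank 2 -> reciprocal rank 0.5 for this query.
mrr = mean_reciprocal_rank({"q1": ["a", "b", "c"]}, {"q1": {"b"}})
```

In the paper's setting, each "document" would be a glossary term embedding ranked by cosine similarity to the query embedding.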

Ricky Imanuel Ndaumanu; Suprayuandi Pratama; Gulay Yusifli Elshad

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The increasing demand for cloud computing services has led to the rapid expansion of cloud data centers, which consume significant amounts of energy and contribute substantially to global CO2 emissions. As the IT industry grows, the environmental impact of these data centers becomes an urgent concern. Green Cloud Computing (GCC) has emerged as a solution to mitigate this impact by focusing on energy efficiency and reducing carbon footprints while maintaining the necessary functionality and performance of cloud infrastructures. However, traditional blockchain consensus algorithms such as Proof of Work (PoW) and Proof of Stake (PoS) face limitations regarding energy consumption and scalability, which exacerbates the environmental burden. This study proposes a quantum-inspired blockchain consensus algorithm designed to optimize energy consumption and reduce latency in cloud data centers. By integrating quantum principles such as superposition and entanglement, the algorithm enhances task scheduling and resource utilization, enabling more energy-efficient operations without sacrificing performance. Simulations in a green cloud environment showed that the quantum-inspired algorithm resulted in up to a 30% reduction in energy usage compared to traditional consensus methods, with a 40% improvement in consensus processing time. These results suggest that quantum-inspired algorithms hold significant potential for enhancing the sustainability of cloud infrastructures by improving energy efficiency and scalability. Furthermore, this study discusses the feasibility of implementing quantum-inspired algorithms on classical hardware, addressing challenges in scalability and integration into existing blockchain frameworks. The findings provide valuable insights into the potential of quantum-inspired technologies to drive energy-efficient solutions in cloud computing.

Anjun Dermawan; Efan Efan; Elay Yusifli Elshad

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The integration of Augmented Reality (AR) and Explainable AI (XAI) within Cyber-Physical Systems (CPS) design is transforming the industrial automation landscape. This study explores how combining AR’s immersive visualization with XAI’s decision transparency enhances collaborative design processes in CPS. The AR-XAI platform developed in this research improves team collaboration by offering real-time visual feedback and enabling interactive decision-making. The platform provides interpretable insights into AI-driven decisions, fostering trust among engineers and decision-makers. Key features of the platform include the ability to visualize complex CPS models, facilitating faster iterations, reducing design errors, and improving design accuracy. The integration of XAI ensures transparency in decision-making by offering clear explanations of AI predictions, which is essential for ensuring accountability and building trust in automated systems. Testing with industrial engineers confirmed that the AR-XAI platform significantly improved design outcomes, with a reduction in errors and enhanced team performance compared to traditional design methods. Moreover, the platform enabled faster decision-making and improved collaboration across diverse teams, demonstrating its potential to optimize CPS design workflows. This research provides valuable insights into the role of AR and XAI in advancing Industry 4.0 and paves the way for more advanced integrations of these technologies in industrial settings. Future research should focus on developing scalable solutions for various industrial applications and exploring more sophisticated AR-XAI integrations for emerging fields like smart cities and autonomous manufacturing.

Danang Danang; Febri Adi Prasetya; Rashad Huseynaga Asgarov

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The increasing integration and digitization of smart grid systems have exposed them to a variety of security threats, necessitating robust security measures to ensure their reliability and efficiency. This paper proposes a novel Digital Twin-Based Cyber-Physical Security Framework, incorporating AI-driven predictive maintenance and zero-trust architecture to address the evolving challenges of securing smart grids. By leveraging digital twin technology, this framework creates a real-time virtual representation of physical systems, enabling continuous monitoring and simulation for enhanced security and operational performance. Zero-trust security principles are integrated to ensure that no entity, whether inside or outside the network, is trusted by default, thus significantly reducing the risk of cyber-attacks. Additionally, AI-driven predictive maintenance enhances the framework’s reliability by proactively identifying potential failures before they occur, reducing downtime and improving system resilience. Through the development and simulation of this framework, including attack and failure scenarios, the paper demonstrates that the proposed system outperforms traditional methods in terms of anomaly detection, system downtime, and response times. The integration of predictive maintenance allows for early identification of component failures, thus enhancing the overall resilience of the grid. The zero-trust architecture further strengthens the cybersecurity posture, preventing unauthorized access and attacks. The study also identifies challenges, such as data synchronization and scalability, which must be addressed for broader implementation in large-scale smart grid systems. The findings suggest that the proposed framework could play a critical role in the future evolution of smart grid security, offering valuable insights for researchers and practitioners.  

Agustinus Suradi; Muhamad Aris Sunandar; Umna Iftikhar

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The integration of blockchain technology with Multi-Agent Reinforcement Learning (MARL) presents a promising solution for optimizing resource allocation and ensuring security in decentralized network environments, particularly in 5G and 6G network slicing. This research proposes a model that combines the security features of blockchain with the adaptive, decentralized decision-making capabilities of MARL. Blockchain ensures the integrity and transparency of resource allocation by providing a secure, tamper-proof ledger for transaction validation, while MARL allows agents to dynamically allocate resources based on real-time network conditions. The simulation results demonstrate significant improvements in resource allocation efficiency, fairness among users, and resilience to cyberattacks. By combining these two technologies, the proposed model overcomes many of the challenges posed by traditional centralized systems and offers an enhanced, secure, and fair solution for resource distribution in future mobile networks. However, scalability remains a challenge, especially in large-scale networks where transaction processing and consensus overhead can create bottlenecks. Additionally, training complexity in MARL models presents computational challenges, particularly in highly dynamic network environments. The model's performance trade-offs, including the balance between high security and system overhead, are also discussed. Future research should focus on optimizing blockchain consensus mechanisms to improve scalability and enhancing MARL model training techniques to reduce computational costs and improve real-time decision-making. This integration holds significant potential for revolutionizing resource allocation in 5G and 6G networks, enabling more efficient, secure, and fair management of network resources in the increasingly complex and decentralized digital ecosystem.

Genrawan Hoendarto; Thommy Willay; Pavan Kumar

Journal of Information Technology and Computer Science 2025 Vol. 1 (3) International Forum of Researchers and Lecturers

The rapid advancement of intelligent systems has accelerated the adoption of data-driven solutions across diverse industries, creating an increasing need for models that are both efficient and privacy-preserving. While traditional centralized machine learning approaches offer strong predictive capabilities, they often struggle with challenges related to data privacy, network latency, and computational inefficiency, especially in distributed environments with heterogeneous devices. To address these limitations, recent research has explored hybrid learning frameworks that integrate federated learning, edge computing, and dynamic model optimization techniques. These hybrid approaches enable models to process and learn from data closer to the source while maintaining stringent privacy requirements by keeping raw data localized. Additionally, the incorporation of pruning strategies, adaptive model compression, or multimodal data fusion contributes to improved speed, scalability, and accuracy in real-time inference tasks. Such frameworks have demonstrated notable promise in settings characterized by high data volume, operational complexity, and the necessity for fast anomaly detection or decision-making. However, despite these advancements, several challenges remain, including synchronization delays across edge nodes, variability in hardware capabilities, and the need for more efficient aggregation algorithms. Future developments may involve leveraging next-generation pruning techniques, energy-aware edge scheduling, decentralized orchestration protocols, or the integration of digital twin technologies to further enhance performance. Overall, hybrid distributed learning frameworks represent an important evolution toward more intelligent, secure, and autonomous computational ecosystems capable of supporting the next wave of smart applications.

Jarot Dian Susatyono; Sofiansyah Fadli; G Thippanna

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

The integration of autonomous systems in traffic management has become increasingly important as urban populations and vehicle numbers continue to rise, leading to significant congestion. Traditional traffic signal control systems, which rely on fixed timing, are no longer sufficient to handle the dynamic and complex nature of urban traffic. To address these challenges, the proposed explainable Deep Reinforcement Learning (DRL) framework aims to optimize traffic signal control by dynamically adjusting traffic signals based on real-time data. This approach enhances traffic flow efficiency, reduces congestion, and improves overall system performance. The framework leverages Vehicle-to-Everything (V2X) communication, which enables real-time data exchange between vehicles, infrastructure, and other road users, extending the perception range of autonomous vehicles and providing valuable insights for traffic signal optimization. Additionally, the integration of smart infrastructure, such as smart intersections, plays a crucial role in enabling adaptive traffic management and facilitating better coordination across multiple intersections. One of the key advantages of the proposed system is its transparency, achieved through the implementation of explainable AI (XAI) techniques. These mechanisms provide clear insights into the decision-making processes, ensuring that traffic management authorities and system users can understand the rationale behind the system’s decisions. Although challenges such as data accuracy, scalability, and cybersecurity risks remain, the proposed DRL framework shows great promise in revolutionizing traffic management systems. Future research directions include enhancing data collection methods, improving the scalability of the system for larger cities, and further developing explainability features to improve trust and adoption in real-world applications.
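The abstract's DRL controller is far richer than this, but the underlying value-update idea can be illustrated with tabular Q-learning on a toy two-phase intersection; the queue dynamics, arrival rates, and reward shaping below are assumptions for illustration only:

```python
import random

def train_signal_agent(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Toy Q-learning: state = (queue_NS, queue_EW) capped at 5,
    action 0 = green for north-south, action 1 = green for east-west."""
    random.seed(seed)
    Q = {}
    def clip(s):
        return (min(s[0], 5), min(s[1], 5))
    for _ in range(episodes):
        ns, ew = random.randint(0, 5), random.randint(0, 5)
        for _ in range(20):
            s = clip((ns, ew))
            Q.setdefault(s, [0.0, 0.0])
            if random.random() < eps:
                a = random.randrange(2)           # explore
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1  # exploit
            if a == 0:
                ns = max(0, ns - 3)  # green serves up to 3 queued cars
            else:
                ew = max(0, ew - 3)
            ns += random.randint(0, 1)  # random arrivals
            ew += random.randint(0, 1)
            reward = -(ns + ew)         # objective: minimize total queue length
            s2 = clip((ns, ew))
            Q.setdefault(s2, [0.0, 0.0])
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
    return Q

Q = train_signal_agent()
```

After training, the greedy policy gives green to whichever approach has the longer queue, which is the adaptive behavior a fixed-timing controller cannot produce.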

Mutiara S. Simanjuntak; Aji Priyambodo; Elshad Yusifov

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This study explores the integration of blockchain technology with federated learning (FL) to enhance cross-organizational healthcare analytics while ensuring privacy and data security. Federated learning allows multiple institutions to collaboratively train machine learning models without sharing sensitive patient data. Instead, local data is used to train models, and only model parameters are exchanged. However, privacy concerns and data sharing inefficiencies have hindered broader healthcare collaboration. Blockchain, a decentralized ledger technology, addresses these concerns by ensuring data integrity and transparency, providing an immutable and tamper-proof record of all transactions. This study investigates how the combination of blockchain and federated learning can overcome these challenges, facilitating secure and efficient data sharing between healthcare institutions. The study uses synthetic multi-institution healthcare datasets to simulate real-world collaboration scenarios. The blockchain-enabled federated learning system ensures that no raw patient data is shared, significantly reducing the risk of privacy breaches while still allowing healthcare institutions to collaborate on predictive model development. The results show that while there is a slight decrease in model accuracy compared to centralized methods, the trade-off is outweighed by the privacy and security benefits. Blockchain’s integration ensures that model updates are transparent, enhancing trust between institutions and reducing concerns about data integrity. Moreover, the use of blockchain’s smart contracts automates and enforces compliance, further streamlining collaboration. This research contributes to the field by demonstrating how blockchain-integrated federated learning can create a secure, scalable, and privacy-preserving framework for collaborative healthcare analytics. 
The findings underscore the potential for this approach to enhance healthcare outcomes and improve decision-making across institutions while ensuring patient data protection.
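The aggregation step at the heart of federated learning, FedAvg, is a dataset-size-weighted mean of client parameters; a minimal sketch with three hypothetical hospitals (in the blockchain-integrated variant described above, a hash of each update would additionally be committed to the ledger for tamper evidence):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors weighted by local dataset size.
    Only parameters are exchanged; raw patient data never leaves the client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hospitals with different cohort sizes; parameters as flat vectors.
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 0.0])
w_c = np.array([2.0, 2.0])
global_w = fed_avg([w_a, w_b, w_c], [100, 300, 100])
```

The larger hospital's update dominates in proportion to its data, which is why the global model can approach centralized accuracy while each institution keeps its records local.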

Atika Mutiarachim; Royke Lantupa Kumowal; Nigar Aliyeva

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This study explores the development and application of a digital twin-driven cybersecurity risk assessment model for Industrial Internet of Things (IIoT) networks. The increasing complexity and interconnectivity of IIoT systems have expanded the attack surface, making them vulnerable to a wide range of cyber threats. The digital twin model addresses this challenge by creating real-time virtual replicas of physical systems, which can simulate and predict network vulnerabilities and attack vectors. The model uses machine learning algorithms and real-time data to simulate cyberattacks, including Distributed Denial of Service (DDoS), malware, and data breaches. By providing continuous monitoring and dynamic risk predictions, the digital twin model enhances the resilience of IIoT networks compared to traditional cybersecurity frameworks. The findings indicate that the model's ability to predict potential cyber threats and simulate various attack scenarios provides a more proactive and accurate approach to cybersecurity in IIoT environments. Additionally, the study highlights key mitigation strategies, including adaptive security mechanisms, real-time anomaly detection, and the use of lightweight encryption for resource-constrained devices. Despite its effectiveness, challenges such as computational requirements, integration with legacy systems, and scalability were identified. This research underscores the strategic importance of digital twin models in securing IIoT systems and advancing Manufacturing 4.0 ecosystems. Future research should focus on enhancing model accuracy, expanding its application to diverse industrial sectors, and improving interoperability with legacy systems to further strengthen the security posture of IIoT networks.

Eka Prasetya Adhy Sugara; Nurul Azwanti; Ivy Derla

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This paper explores the application of quantum-inspired optimization algorithms in the training of large-scale Graph Neural Networks (GNNs) within distributed cloud-edge environments. GNNs have gained significant attention due to their ability to model complex relationships in graph-structured data, yet their training presents challenges such as high computational demand, inefficient resource allocation, and slow convergence, especially for large datasets. Traditional meta-heuristic algorithms, while useful, often face scalability and performance issues when applied to such large-scale tasks. To address these challenges, we propose a quantum-inspired meta-heuristic algorithm that leverages quantum principles, such as superposition and entanglement, to enhance optimization processes. The algorithm was integrated into a hybrid cloud-edge system, where computational tasks are dynamically distributed between edge nodes and the cloud, optimizing resource utilization and reducing latency. Our experimental results demonstrate significant improvements in training speed, resource efficiency, and convergence rate when compared to traditional optimization methods such as Genetic Algorithms and Simulated Annealing. The quantum-inspired algorithm not only accelerates the training process but also reduces memory usage, making it well-suited for large-scale GNN applications. Furthermore, the system's scalability was enhanced by the hybrid cloud-edge architecture, which balances computational load and enables real-time data processing. The findings suggest that quantum-inspired optimization algorithms can significantly improve the training of GNNs in distributed systems, opening new avenues for real-time applications in areas such as social network analysis, anomaly detection, and recommendation systems. 
Future work will focus on refining these algorithms to handle even larger datasets and more complex GNN architectures, with potential integration into edge devices for enhanced real-time decision-making.

Benny Martha Dinata; Ahmad Budi Trisnawan; Eram Abbasi

Journal of Information Technology and Computer Science 2025 Vol. 1 (2) International Forum of Researchers and Lecturers

This research focuses on the development and evaluation of an Adaptive Edge-AI framework designed to optimize real-time data processing and decision-making in resource-constrained environments, specifically within smart city infrastructures. The primary problem addressed is the challenge of minimizing latency, reducing energy consumption, and ensuring the reliability of Cyber-Physical Systems (CPS) when using Internet of Things (IoT) devices. The objective of the study is to assess the effectiveness of this framework in real-world smart city applications such as traffic monitoring, environmental sensing, and smart utilities management. The proposed method integrates lightweight AI models, edge computing, and adaptive resource management techniques, including Federated Learning and Neural Architecture Search, to ensure optimal performance while addressing hardware constraints. The main findings reveal that the framework significantly improves real-time inference speed, reduces energy consumption of IoT devices, and enhances CPS reliability by minimizing communication delays and ensuring continuous system operation despite network disruptions. The application of this framework to smart transportation and urban utilities further demonstrates its potential to optimize city management processes. The study concludes that the Adaptive Edge-AI framework offers a promising solution for smart cities, enhancing operational efficiency, sustainability, and resilience. It is recommended for integration into smart city infrastructures to enable better resource management and decision-making in real-time applications.

Gopinda Tri Anda Gurusinga; Arisfi Alma Ashofi; Rifqi Rahman Abdillah

Journal of Information Technology and Computer Science 2025 Vol. 1 (1) International Forum of Researchers and Lecturers

Ambarawa Regional General Hospital (RSUD) is a Regional Public Service Agency (BLUD) belonging to Semarang Regency that operates in the field of public health services; it is located at Jalan Kartini No. 101 Ambarawa. To date, inventory data at the hospital has been processed manually, which gives rise to several problems: the monthly depreciation value of goods is not known, no database system has been applied, and inventory data security is low. The authors used three research methods for data collection: observation, interviews, and literature study. The observation method consisted of engaging directly with the problems as they occur; interviews were conducted with the employees concerned; and in the literature study the authors sought literature related to the research to serve as a theoretical basis. From the analysis and research results, the proposed solution is to create an Inventory Data Recording Information System using the Microsoft Visual Basic 6.0 programming language and the Microsoft SQL Server 2000 database system. The procedures consist of five main parts: inventory data collection, room data collection, inventory item placement transactions, inventory item mutation transactions, and straight-line depreciation. This system is expected to increase the effectiveness of inventory data processing at Ambarawa Regional General Hospital (RSUD).
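The straight-line depreciation the proposed system computes each month follows a simple formula: (cost − salvage value) / useful life in months; a sketch in Python for illustration (the system itself is written in Visual Basic 6.0, and the example asset values are assumptions):

```python
def straight_line_depreciation(cost, salvage, useful_life_years):
    """Monthly straight-line depreciation and the resulting book-value schedule."""
    months = useful_life_years * 12
    monthly = (cost - salvage) / months
    book, schedule = cost, []
    for _ in range(months):
        book -= monthly
        schedule.append(round(book, 2))
    return monthly, schedule

# Hypothetical asset: cost 1200, no salvage value, one-year useful life.
monthly, schedule = straight_line_depreciation(1200, 0, 1)
```

Each month the book value drops by the same amount, which is exactly the per-month figure the hospital staff currently cannot obtain from manual records.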

Sifa Alfiana; Putri Amalia Syafitri; Rina Setiana

Journal of Information Technology and Computer Science 2025 Vol. 1 (1) International Forum of Researchers and Lecturers

KUD Plongkowati Timur, located at Jalan Godong-Karangrayung, Grobogan Regency, operates a savings and loan business unit and also distributes fertilizer products produced by PT. Petrokimia Gresik and PT. Pupuk Sriwidjaja Palembang. In the fertilizer distribution unit, purchase and sales transactions are not recorded through an integrated database system: all data is still recorded conventionally by the administration section, whose computer is used only to produce reports copied from notes provided by the cashier, so data security is not guaranteed. Based on these problems, the authors propose the design of a Fertilizer Inventory application system to help speed up the recording of purchase and sales transactions and to provide the information needed by KUD Plongkowati Timur, Grobogan Regency, in the form of purchase reports, sales reports, supplier reports, and customer reports. The fertilizer inventory information system is designed using the Microsoft Visual Basic 6.0 programming language with a Microsoft Access 2007 database, and its reports are built with Crystal Reports 7. It is hoped that this application will help overcome the problems that frequently occur, increasing work effectiveness and thereby improving the quality of service to customers.

Hermawan Prayoga; Rama Deddy Irawan; Achsan Edi Winata

Journal of Information Technology and Computer Science 2025 Vol. 1 (1) International Forum of Researchers and Lecturers

The scholarship selection system at Bina Negara Gubug Vocational School, Jl. KH. Hasan Anwar No. 9 Gubug, currently processes each student's criteria data for every type of scholarship. It does not yet have a database system; instead it relies on a computerized workflow in Microsoft Excel, so delays often occur in the selection process when preparing the report of scholarship recipients. This research uses the Research and Development (R&D) model by Borg and Gall with six development steps: Research and Information Collecting, Planning, Develop Preliminary Form of Product, Preliminary Field Testing, Main Product Revision, and Main Field Testing. The resulting decision support system application for scholarship selection uses the SAW (Simple Additive Weighting) method, built with Visual Basic 6.0 and a Microsoft Access database. This system provides a useful decision-making tool for selecting scholarship recipients, enabling better and faster selection.
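The SAW (Simple Additive Weighting) method named above ranks alternatives by a weighted sum of normalized criterion scores: benefit criteria are normalized against the column maximum, cost criteria against the column minimum. A minimal sketch (the criteria, weights, and scores below are hypothetical, not the school's actual data):

```python
def saw_rank(alternatives, weights, benefit):
    """Simple Additive Weighting.
    alternatives: {name: [criterion scores]}
    weights: criterion weights (summing to 1)
    benefit: per-criterion flag, True = benefit (higher is better),
             False = cost (lower is better)."""
    cols = list(zip(*alternatives.values()))  # scores grouped by criterion
    scores = {}
    for name, row in alternatives.items():
        total = 0.0
        for j, w in enumerate(weights):
            # Normalize each score, then add its weighted contribution.
            norm = row[j] / max(cols[j]) if benefit[j] else min(cols[j]) / row[j]
            total += w * norm
        scores[name] = total
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical students scored on grade average (benefit criterion) and
# parental income (cost criterion), weighted 0.6 / 0.4.
students = {"A": [85, 2_000_000], "B": [90, 3_000_000], "C": [80, 1_500_000]}
ranking = saw_rank(students, [0.6, 0.4], [True, False])
print(ranking[0][0])  # name of the best-ranked candidate
```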

Aditama Surya Permana; Eri Yanto; Evanti Andriani

Journal of Information Technology and Computer Science 2025 Vol. 1 (1) International Forum of Researchers and Lecturers

In today's business world, reliance on information technology has become essential. The high volume of transaction data and information exchange over online communication media and the internet makes it a key technology for handling everyday activities in the current era of globalization. CV. TYTUS FURNITURE SEMARANG is a company engaged in furniture sales, such as guest-room tables and chairs. The company's sales problems are as follows: consumers have limited access to information about products, product types, and product prices, because they must come directly to the company or wait for a salesperson to visit, which wastes time and money; sales turnover falls short of expectations; and there is not yet fast, efficient processing of sales and customer data reports. An online sales system is therefore needed to develop the company, because such a system offers a real solution: online sales transactions can be conducted easily and quickly using a MySQL database with the PHP programming language. This information system can deliver product information to consumers online and report on products sold. Based on the background above, the authors chose the title "WEB-BASED GOODS ORDERING INFORMATION SYSTEM AT CV. TYTUS FURNITURE SEMARANG."

Zulnazri Zulnazri; Rozanna Dewi; Agam Muarif; Ahmad Fikri

The International Conference on Education, Social Sciences and Technology 2025 Vol. 3 (2) International Forum of Researchers and Lecturers

This study describes the manufacture of modified composites from recycled HDPE plastic waste and oil palm empty fruit bunch (OPEFB, Indonesian: TKKS) fibers. Matrix-to-fiber compositions of (70:30), (80:20), and (90:10) % were compared. HDPE was mixed with OPEFB using the melt blending method and extruded to obtain composite pellets. Characterization gave the highest tensile strength for the HDPE:OPEFB (70:30) % composite, at 36.50 MPa, while the (80:20) % and (90:10) % ratios yielded 23.50 MPa and 26.50 MPa, respectively. SEM examination of the surface structure showed bonding between the HDPE matrix and the OPEFB fibers, marked by a brittle fracture shape on the composite surface and non-uniform fiber breakage. Impact testing gave the HDPE:OPEFB (80:20) % composite the highest impact value, 0.363168 J. The FT-IR results show deformation of C-H vibrations and pyranose C-O-C bands typical of OPEFB at wavenumbers of 1070.49 and 1151.50 cm⁻¹, and intermolecular O-H bending in OPEFB at 3672.47 cm⁻¹. TGA of the HDPE:OPEFB (70:30) and (80:20) composites gave degradation temperature values (Te) of 535.57 °C and 529.74 °C, respectively.