Memory Hierarchy Optimization and Cache-Aware Signal Processing Pipelines for Next-Generation High-Throughput Computing Architectures

📅 20 January 2026


Computer Architecture and Signal Processing
ASOSIASI PENGELOLA JURNAL INFORMATIKA DAN KOMPUTER INDONESIA

📄 Abstract

This research explores the impact of cache-aware optimizations on signal processing pipelines in high-throughput computing systems. The growing demand for efficient memory management in modern computing, especially in data-intensive applications such as artificial intelligence (AI) and multimedia processing, necessitates the development of optimized memory hierarchies. Traditional memory systems often suffer from memory bottlenecks that significantly reduce overall performance. This study investigates how memory hierarchy optimizations, particularly cache-line-aware optimization, dependency-aware caching, and adaptive cache replacement algorithms, can mitigate these challenges and improve system performance. Through analytical modeling and experimental benchmarking, this work evaluates various memory hierarchy configurations, including processing-in-memory (PIM) and three-dimensional integrated circuits (3D ICs), comparing them against conventional systems. The results demonstrate that cache-aware optimizations reduce memory access latency by up to 30% and improve throughput by up to 40%. Additionally, cache hit rates increased by 25% and energy consumption was reduced by up to 20%, highlighting the effectiveness of optimized memory management. The research contributes to the field by providing insights into the design and implementation of efficient signal processing pipelines. It also identifies key challenges, including the need for dynamic occupancy mechanisms and DAG-aware scheduling algorithms, and suggests directions for future research, such as collaborative caching approaches and further refinement of cache-adaptive algorithms. This work lays the foundation for more efficient, high-performance computing systems capable of handling large datasets and complex tasks in real-time applications.

🔖 Keywords

Signal Processing; Memory Hierarchies; Throughput Improvement; Pipeline Design; Cache-Aware Optimizations

ℹ️ Publication Information

Publication Date
20 January 2026
Volume / Number / Year
Volume 1, Number 1, 2026

📝 HOW TO CITE

Hari Imbrani; Achmad Subagdja, "Memory Hierarchy Optimization and Cache Aware Signal Processing Pipelines for Next Generation High Throughput Computing Architectures," Computer Architecture and Signal Processing, vol. 1, no. 1, Jan. 2026.

