Extended performance accounting using Valgrind tool

D.V. Rahozin, A.Yu. Doroshenko


Modern workloads, parallel or sequential, usually suffer from insufficient memory and computing performance. Common trends for improving workload performance include the use of complex functional units or coprocessors, which not only provide accelerated computation but can also fetch data from memory independently, generating complex address patterns, with or without support for control-flow operations. Such coprocessors are usually not targeted by optimizing compilers and must be exploited by hand through special application interfaces. On the other hand, memory bottlenecks can be avoided by proper use of processor prefetch capabilities, which load the necessary data ahead of its actual use; prefetching, too, is automated only for simple cases, so programmers usually have to apply it by hand. As workloads rapidly migrate to embedded applications, the problem arises of how to exploit all hardware capabilities to speed up a workload with moderate effort. This requires precise analysis of memory access patterns at program run time and the marking of hot spots where the bulk of memory accesses is issued. A precise memory access model can be analyzed with simulators such as Valgrind, which is capable of running genuinely large workloads, for example neural network inference, in reasonable time. However, simulators and hardware performance analyzers fail to attribute the full set of memory references and cache misses to particular modules, since that requires analysis of the program call graph. We extend the Valgrind cache simulator to account for memory accesses per software module and to render a realistic distribution of hot spots in a program. Additionally, analysis of address sequences in the simulator makes it possible to recover array access patterns and propose effective prefetching schemes. Motivating examples illustrate the use of the Valgrind tool.
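The manual prefetching described above can be sketched with the GCC/Clang `__builtin_prefetch` intrinsic. This is a minimal illustrative kernel, not code from the paper; the function name and the prefetch distance of 16 elements are assumptions chosen for illustration and would need tuning per machine.

```c
#include <stddef.h>

/* Sum an array while prefetching data several iterations ahead.
 * __builtin_prefetch (GCC/Clang) issues a non-faulting cache-prefetch
 * hint; the second argument selects read (0) vs. write (1) access and
 * the third is a temporal-locality hint (0..3). The distance of 16
 * elements is an illustrative tuning parameter, not a value taken
 * from the paper. */
long sum_with_prefetch(const long *a, size_t n)
{
    const size_t dist = 16;   /* prefetch distance, machine-dependent */
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)
            __builtin_prefetch(&a[i + dist], 0, 1);
        s += a[i];
    }
    return s;
}
```

Such a kernel can then be run under Valgrind's cache simulator (for example `valgrind --tool=cachegrind ./app`, followed by `cg_annotate` on the output file) to check whether the prefetching actually reduces simulated data-cache misses in the hot spot.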

Problems in Programming 2021; 2: 54-62


Keywords: workload; performance analysis; coprocessors; prefetch; computer system simulator





DOI: https://doi.org/10.15407/pp2021.02.054

