I want to count the number of disk accesses during a complete run of my script.
My bash script runs two other executables locally and two other executables remotely over SSH, something like this (the executables may themselves invoke other tools):
#!/bin/bash
# run two executables locally in the background
./executable1 DATA1 &
./executable2 DATA2 &
# run two more on a remote host over SSH, also in the background
ssh remote_host './executable3 DATA3' &
ssh remote_host './executable4 DATA4' &
# wait for all four background jobs to finish
wait
Now I'm running my bash script with perf like this:
perf stat -e page-faults,page-faults:u ./myBashScript.sh
but the results are always practically the same, no matter whether I change the DATA* files, the order of the commands, the number of commands, and so on. Something like this:
128,470 page-faults
127,641 page-faults:u
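As far as I understand, the page-faults event lumps together minor faults (resolved without touching the disk) and major faults (which actually have to read from disk), which might be why the totals barely change. Something along these lines might be closer to what I'm after, assuming the major-faults and minor-faults software events are available on my kernel:
perf stat -e major-faults,minor-faults ./myBashScript.sh
But I'm not sure this captures all the disk I/O either, since ordinary read() calls that miss the page cache cause disk reads without showing up as page faults.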
So my question is: how can I count the number of these disk accesses for the whole script?
P.S.:
- As you know, Linux tries to reduce the number of disk accesses by using free RAM as a page cache, and by "counting the number of disk accesses" I mean exactly how many times the OS has to bring data from the hard disk into main memory (i.e., page cache misses rather than hits); a rough sketch of what I mean follows this list.
- I only need to count the disk accesses on the local machine, not on the remote one.
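To illustrate what I mean by a "disk access", something like the sketch below would count block-layer reads around the run. It is only a sketch: "sda" is a placeholder for whatever my local disk is called, field 4 of /proc/diskstats is the number of reads completed for that device, and it would also pick up I/O from unrelated processes running at the same time.
# reads completed on the local disk before and after the run
before=$(awk '$3 == "sda" {print $4}' /proc/diskstats)
./myBashScript.sh
after=$(awk '$3 == "sda" {print $4}' /proc/diskstats)
echo "reads completed during the run: $((after - before))"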