Tidy profile data

Here's a bash/Octave script for tidying up the aforementioned profiling output. The script pulls the relevant metadata, collecting the time and frequency for repeated calls and producing a neat, sorted *.csv that can be used for generating nice graphs. Here again the example is the lovely Quantum ESPRESSO.

#cleanup and sort profile data (the profiler log is read from stdin)
gawk '($1~"meta"){print $0}' | \
sed 's/ //g' | \
sed 's/meta//g' | \
sed 's/<//g' | \
sed 's/\///g' | \
sed 's/>//g' | \
tee prof_data_raw.txt | \
gawk 'BEGIN{FS=":"}{print NR, $1, $2$3}' > foo.txt
gawk 'BEGIN{FS=":"}{print $4,",",$2,"#",$3,","}' < prof_data_raw.txt > names.txt
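To see what the cleanup stage actually does, here's the same chain run over a single made-up "meta" line. The exact field layout of the real profiler output is an assumption here (time : file number : line number : call name), and plain awk is enough for the demo:

```shell
# run one hypothetical "meta" record through the cleanup pipeline;
# the sed pass strips spaces and the <meta ... /> decoration,
# leaving a bare colon-separated record
printf '<meta 0.125 : 3 : 120 : dgemm />\n' | \
awk '($1~"meta"){print $0}' | \
sed 's/ //g; s/meta//g; s/<//g; s/\///g; s/>//g'
# -> 0.125:3:120:dgemm
```

From that normalized record the two gawk passes then split out the numeric columns (for foo.txt) and the call name plus file#line hash (for names.txt).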

cat > temp.m << %end

load foo.txt;

% collect the profile data for unique calls;
% call column 3 is a unique cat of call line + file number
[dat, ind] = sort(foo(:,3));
out = foo(ind,:);

% determine the total number of unique call keys (col 3)
uniq = unique(out(:,3));
num = length(uniq);

% output profile data matrix:
% col 1 line number, col 2 frequency, col 3 total time, col 4 average time
profl = zeros(num,4);

% collect call frequency and timing information
for i = 1:num
  ind = find(out(:,3) == uniq(i));
  % line number (first occurrence; used below to look up the call name)
  ln = out(ind,1);
  profl(i,1) = ln(1);
  % call frequency
  profl(i,2) = length(ind);
  % total call time
  profl(i,3) = sum(out(ind,2));
  % average call time
  profl(i,4) = profl(i,3)/profl(i,2);
end

% sort by total time
[dat, ind] = sort(profl(:,3));
out = profl(ind,:);

% read back the call names and hashes collected in names.txt;
% fscanf with %s concatenates the tokens, so split on the commas
fid = fopen('names.txt','r');
[a, b] = fscanf(fid,'%s');
fclose(fid);
str = strsplit(a,",");

fid = fopen('profile_output.csv','w');
fprintf(fid,"call, hash, total time (ms), average time (ms), frequency\n");

for i = 1:num
  index = out(i,1);
  fprintf(fid,"%s, %s, %g, %g, %d\n", str{2*index-1}, str{2*index}, out(i,3), out(i,4), out(i,2));
end
fclose(fid);
%end

octave --silent --eval temp

#rm temp.m prof_data_raw.txt foo.txt names.txt 
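Since profile_output.csv is plain text, the heaviest calls can be pulled straight out of it with standard tools before any graphing. A quick sketch, with made-up rows standing in for real profile data:

```shell
# fabricate a small profile_output.csv in the format the script emits
cat > profile_output.csv << 'EOF'
call, hash, total time (ms), average time (ms), frequency
dgemm, 3#120, 950.2, 4.7, 202
fft_scatter, 7#88, 310.5, 1.2, 258
EOF

# skip the header, sort numerically on the "total time" column
# (largest first), and show the single most expensive call
tail -n +2 profile_output.csv | sort -t, -k3 -nr | head -1
# -> dgemm, 3#120, 950.2, 4.7, 202
```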

