For a seemingly large AES code, when I profile it using gprof with the following commands:
cc file1.c file2.c -pg
./a.out
gprof a.out gmon.out > analysis.txt
cat analysis.txt
the output file shows the time as 0 for all function calls:
Flat profile:
Each sample counts as 0.01 seconds.
no time accumulated
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
0.00 0.00 0.00 576 0.00 0.00 galois_multiply
0.00 0.00 0.00 40 0.00 0.00 getSBoxValue
0.00 0.00 0.00 33 0.00 0.00 PrintArr
0.00 0.00 0.00 11 0.00 0.00 AddRoundKey
0.00 0.00 0.00 10 0.00 0.00 core
0.00 0.00 0.00 10 0.00 0.00 getRconValue
0.00 0.00 0.00 10 0.00 0.00 myrotate
0.00 0.00 0.00 10 0.00 0.00 shiftRow
0.00 0.00 0.00 10 0.00 0.00 subByte
0.00 0.00 0.00 9 0.00 0.00 mixColumn
0.00 0.00 0.00 1 0.00 0.00 ReadInput
0.00 0.00 0.00 1 0.00 0.00 expandKey
Am I missing something? Kindly advise.
I tried using Eclipse TPTP, but couldn't figure out a way to profile C code using Eclipse; any ideas in that direction would also be appreciated.
Is there any online tool to which I can upload my code and get a detailed analysis report?
Related
I have data for a multilabel classification task. I used a KNN model to classify it. There are 15 labels; I got accuracy results for each label and averaged them to get the accuracy of the model, which is 93%.
The confusion matrix is showing bad numbers.
Could you tell me what this means? Is it overfitting? How can I solve my problem?
Accuracy and mean absolute error (mae) code
Input:
from sklearn.metrics import mean_absolute_error  # import needed for the MAE call below

# Getting the accuracy of the model
y_pred1 = level_1_knn_model.predict(X_val1)
accuracy = (sum(y_val1==y_pred1)/y_val1.shape[0])*100
accuracy = sum(accuracy)/len(accuracy)
print("Accuracy: "+str(accuracy)+"%\n")
# Getting the mean absolute error
mae1 = mean_absolute_error(y_val1, y_pred1)
print("Mean Absolute Error: "+str(mae1))
Output:
Accuracy: [96.55462575 97.82146336 99.23207908 95.39247451 98.69340807 74.22793801
78.67975909 97.47825108 99.80189098 77.67264969 91.69399776 99.97084683
99.42621267 99.32682688 99.74159693]%
Accuracy: 93.71426804569977%
Mean Absolute Error: 9.703818402273944
Confusion Matrix and classification report code
Input:
import matplotlib.pyplot as plt  # imports needed for the plotting and metrics below
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report

# Calculate the confusion matrix
cMatrix1 = confusion_matrix(y_val1.argmax(axis=1), y_pred1.argmax(axis=1))
# Plot the confusion matrix
plt.figure(figsize=(11,10))
sns.heatmap(cMatrix1, annot=True, fmt='g')
# Calculate the classification report
classReport1 = classification_report(y_val1, y_pred1)
print("\nClassification Report:")
print(classReport1)
Output:
Classification Report:
precision recall f1-score support
0 0.08 0.00 0.01 5053
1 0.03 0.00 0.01 3017
2 0.00 0.00 0.00 1159
3 0.07 0.00 0.01 6644
4 0.00 0.00 0.00 1971
5 0.58 0.65 0.61 47222
6 0.39 0.33 0.36 27302
7 0.02 0.00 0.00 3767
8 0.00 0.00 0.00 299
9 0.58 0.61 0.60 40823
10 0.13 0.02 0.03 11354
11 0.00 0.00 0.00 44
12 0.00 0.00 0.00 866
13 0.00 0.00 0.00 1016
14 0.00 0.00 0.00 390
micro avg 0.54 0.43 0.48 150927
macro avg 0.13 0.11 0.11 150927
weighted avg 0.43 0.43 0.42 150927
samples avg 0.43 0.43 0.43 150927
So, I've been writing a language interpreter as a side project for a year now. Today I finally decided to test its performance for the first time! Maybe I should have done that sooner... it turns out that running a Fibonacci function in the language takes 600x the time of the equivalent Python program. Whoopsy daisy.
Anyway... I'm off to profiling. In the call graph, gprof regards a few functions (namely critical ones) as called from <spontaneous>. It's a problem, because knowing what calls these functions most frequently would help me.
I compile the project as a whole like so:
gcc *.c -o app.exe -g -pg -O2 -Wall -Wno-unused -LC:/msys64_new/mingw64/lib -lShlwapi
I use gprof like so:
gprof app.exe > gprofoutput.txt
Since it's a language interpreter, many of these functions (all of them?) might be called as part of a mutual recursion chain. Is it possible that this is the problem? If so, is gprof to be trusted at all with this program?
The functions shown as called from <spontaneous> are compiled as part of the *.c files of the project, and are not called by an external library or anything else that I know of.
Because I have checked this, the other answers here on SO about <spontaneous> haven't solved my issue. What can be causing these functions to appear as called from <spontaneous>, and how can I fix this?
Example gprof output (_mcount_private and __fentry__ are of course irrelevant; including them here in case they grant any clues):
index % time self children called name
<spontaneous>
[1] 46.9 1.38 0.00 _mcount_private [1]
-----------------------------------------------
<spontaneous>
[2] 23.1 0.68 0.00 __fentry__ [2]
-----------------------------------------------
<spontaneous>
[3] 18.7 0.06 0.49 object_string_new [3]
0.17 0.24 5687901/5687901 cell_table_set_value [4]
0.00 0.08 5687901/7583875 make_native_function_with_params [7]
0.00 0.00 13271769/30578281 parser_parse [80]
-----------------------------------------------
0.17 0.24 5687901/5687901 object_string_new [3]
[4] 14.1 0.17 0.24 5687901 cell_table_set_value [4]
0.12 0.05 5687901/5930697 table_set_value_directly [6]
0.02 0.04 5687901/7341054 table_get_value_directly [9]
0.01 0.00 5687901/5930694 object_cell_new [31]
-----------------------------------------------
<spontaneous>
[5] 7.0 0.07 0.14 vm_interpret_frame [5]
0.01 0.05 1410341/1410345 cell_table_get_value_cstring_key [13]
0.01 0.02 242786/242794 cell_table_set_value_cstring_key [19]
0.02 0.00 3259885/3502670 object_thread_pop_eval_stack [22]
0.01 0.00 242785/242786 value_array_free [28]
0.00 0.01 242785/242785 vm_call_object [34]
0.00 0.00 681987/1849546 value_compare [32]
0.00 0.00 485570/31306651 table_init [20]
0.00 0.00 242785/242788 cell_table_free [38]
0.00 0.00 242785/25375951 cell_table_init [29]
0.00 0.00 1/1 object_load_attribute [50]
0.00 0.00 1/1 object_load_attribute_cstring_key [52]
0.00 0.00 1/2 object_user_function_new [56]
0.00 0.00 2/33884613 copy_cstring [17]
0.00 0.00 1/5687909 object_function_set_name [25]
0.00 0.00 1/17063722 copy_null_terminated_cstring [23]
0.00 0.00 1/72532402 allocate [21]
0.00 0.00 3502671/3502671 object_thread_push_eval_stack [81]
0.00 0.00 1167557/1167557 object_as_string [85]
0.00 0.00 681988/681995 two_bytes_to_short [86]
0.00 0.00 485572/485578 value_array_make [88]
0.00 0.00 242786/242786 object_thread_push_frame [96]
0.00 0.00 242786/242786 object_thread_peek_frame [95]
0.00 0.00 242785/242785 object_thread_pop_frame [97]
0.00 0.00 242785/485571 vm_import_module [89]
0.00 0.00 2/1167575 object_value_is [83]
-----------------------------------------------
..... etc .........
I'm running Mingw-w64 GCC on Windows 7.
From the gprof manual:
If the identity of the callers of a function cannot be determined, a dummy caller-line is printed which has `<spontaneous>' as the "caller's name" and all other fields blank. This can happen for signal handlers.
Looks like your callers' names are unknown to gprof. If any potential caller (including async dispatch, if you're using such) is compiled without symbols, the callers' names would not be known. What third-party libraries are you using? Can you get debugging symbols for them?
You can obtain Windows symbol packages, though I don't know which libraries are covered. That page also discusses using Microsoft's Symbol Server instead of downloading (potentially out-of-date) symbol packages.
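As a concrete illustration of the signal-handler case the manual mentions, here is a minimal, hypothetical C sketch (the names on_signal and hits are made up; this is not taken from the interpreter in question). The handler is entered from the signal-delivery machinery rather than from any function instrumented with -pg, so gprof has no profiled caller to attribute it to and lists it under <spontaneous> in the call graph:

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t hits = 0;

/* The handler's caller is the C runtime / OS signal dispatcher, not a
   function compiled with -pg, so gprof cannot name a caller here and
   typically reports this function as called from <spontaneous>. */
static void on_signal(int sig)
{
    signal(sig, on_signal);   /* re-install; some C runtimes reset the handler */
    hits++;
}

int main(void)
{
    signal(SIGINT, on_signal);

    for (int i = 0; i < 5; i++)
        raise(SIGINT);        /* deliver the signal from inside the program */

    printf("handler ran %d times\n", (int)hits);
    return 0;
}

Built with gcc -pg and inspected with gprof, on_signal should appear with <spontaneous> as its caller; functions that are only ever reached through non-instrumented libraries typically behave the same way.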
I used gprof to get a profile of a C program which is running too slowly. Here is what I get:
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
100.05 0.16 0.16 etext
0.00 0.16 0.00 90993 0.00 0.00 Nel_wind
0.00 0.16 0.00 27344 0.00 0.00 calc_crab_dens
0.00 0.16 0.00 17472 0.00 0.00 Nel_radio
0.00 0.16 0.00 1786 0.00 0.00 sync
0.00 0.16 0.00 1 0.00 0.00 _fini
0.00 0.16 0.00 1 0.00 0.00 calc_ele
0.00 0.16 0.00 1 0.00 0.00 ic
0.00 0.16 0.00 1 0.00 0.00 initialize
0.00 0.16 0.00 1 0.00 0.00 make_table
I don't know what "etext" means or why it is taking 100.05% of the running time. Thanks for your help!
I was having a similar issue and it was caused by me calling gprof with a different executable.
The accident occurred because I was recompiling with different options and naively called gprof with the same executable name on two different gmon.out files that were generated with different executables.
gprof exec1 exec1.gmon.out # Good, expected output
gprof exec1 exec2.gmon.out # Weird etext function with 0 calls, but lots of time consumed
Make sure you're not doing something similar.
I have the following output from gprof for my program:
Flat profile:
Each sample counts as 0.01 seconds.
no time accumulated
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
0.00 0.00 0.00 30002 0.00 0.00 insert
0.00 0.00 0.00 10124 0.00 0.00 getNode
0.00 0.00 0.00 3000 0.00 0.00 search
0.00 0.00 0.00 1 0.00 0.00 initialize
I have done optimizations, and the run time I have is 0.01 seconds (this is being calculated on a server where I'm uploading my code), which is the least I am getting at the moment. I am not able to reduce it further, though I want to. Does the 0.01 second run time of my program have anything to do with the sampling time I see above in the gprof output?
Call graph is as below:
gprof -q ./a.out gmon.out
Call graph (explanation follows)
granularity: each sample hit covers 2 byte(s) no time propagated
index % time self children called name
0.00 0.00 30002/30002 main [10]
[1] 0.0 0.00 0.00 30002 insert [1]
0.00 0.00 10124/10124 getNode [2]
-----------------------------------------------
0.00 0.00 10124/10124 insert [1]
[2] 0.0 0.00 0.00 10124 getNode [2]
-----------------------------------------------
0.00 0.00 3000/3000 main [10]
[3] 0.0 0.00 0.00 3000 search [3]
-----------------------------------------------
0.00 0.00 1/1 main [10]
[4] 0.0 0.00 0.00 1 initialize [4]
-----------------------------------------------
While using `time /bin/sh -c ' ./a.out < inp.in '` on my machine, I get the output below, which varies slightly on every run:
real 0m0.024s
user 0m0.016s
sys 0m0.004s
real 0m0.017s
user 0m0.008s
sys 0m0.004s
I am a bit confused about how to correlate the time output and the gprof output.
According to your other question, you got it from 8 seconds down to 0.01 seconds.
That's pretty good.
Now if you want to go further, first do as #Peter suggested in his comment.
Run the code many times inside main() so it runs long enough to get samples.
Then you could try my favorite technique.
It will be much more informative than gprof.
P.S. Don't worry about CPU percent.
All it tells you is whether your machine is busy and not doing much I/O.
It does not tell you anything about your program.
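A minimal sketch of the "run it many times inside main()" idea, under the assumption that the whole workload can simply be repeated (workload() below is a placeholder for the asker's initialize/insert/search sequence, not their actual code). The point is only to make the process consume enough CPU time that gprof's 0.01-second samples can land somewhere:

#include <stdio.h>

/* Placeholder for the real work; in the real program this would be the
   initialize()/insert()/search() calls driven by the input data. */
static void workload(void)
{
    volatile long sink = 0;
    for (long i = 0; i < 100000; i++)
        sink += i % 7;          /* stand-in for actual work */
}

int main(void)
{
    /* Repeat the workload until total CPU time is well above the 10 ms
       sampling period; aim for at least a few seconds. */
    const int repetitions = 10000;
    for (int r = 0; r < repetitions; r++)
        workload();

    puts("done");
    return 0;
}

Call counts in the flat profile are then inflated by the repetition factor, but the relative self/children times become meaningful; divide by the repetition count if you want per-run figures.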
I thought of learning gprof, so I started with a simple program.
I have written the small C program below:
#include <stdio.h>
#include <unistd.h>

void hello(void);

int main()
{
    hello();
    return 0;
}

void hello()
{
    int i;
    for (i = 0; i < 60; i++)
    {
        sleep(1);
        printf("hello world\n");
    }
}
I compiled my program using the -pg option and executed it to make sure it was working fine.
Then I did:
gprof -f hello a.out > gout
This gives me a gout file.
Inside the gout file I can see the information below:
% cumulative self self total
time seconds seconds calls ms/call ms/call name
-nan 0.00 0.00 3 0.00 0.00 __1cH__CimplWnew_atexit_implemented6F_b_ (1561)
-nan 0.00 0.00 1 0.00 0.00 __1cFhello6F_v_ (1562)
-nan 0.00 0.00 1 0.00 0.00 __1cG__CrunMdo_exit_code6F_v_ (1563)
-nan 0.00 0.00 1 0.00 0.00 __1cG__CrunSregister_exit_code6FpG_v_v_ (1564)
-nan 0.00 0.00 1 0.00 0.00 __1cG__CrunVdo_exit_code_in_range6Fpv1_v_ (1565)
-nan 0.00 0.00 1 0.00 0.00 __1cH__CimplKcplus_fini6F_v_ (1566)
-nan 0.00 0.00 1 0.00 0.00 __1cH__CimplQ__type_info_hash2t5B6M_v_ (1567)
-nan 0.00 0.00 1 0.00 0.00 __1cU__STATIC_CONSTRUCTOR6F_v_ (1568)
-nan 0.00 0.00 1 0.00 0.00 __SLIP.FINAL__A (1569)
-nan 0.00 0.00 1 0.00 0.00 __SLIP.INIT_A (1570)
-nan 0.00 0.00 1 0.00 0.00 __cplus_fini_at_exit (1571)
-nan 0.00 0.00 1 0.00 0.00 _ex_deregister (1572)
-nan 0.00 0.00 1 0.00 0.00 main (1)
^L
Index by function name
(1562) __1cFhello6F_v_ (1567) __1cH__CimplQ__type(1571) __cplus_fini_at_exi
(1563) __1cG__CrunMdo_exit(1561) __1cH__CimplWnew_at(1572) _ex_deregister
(1564) __1cG__CrunSregiste(1568) __1cU__STATIC_CONST (1) main
(1565) __1cG__CrunVdo_exit(1569) __SLIP.FINAL__A
(1566) __1cH__CimplKcplus_(1570) __SLIP.INIT_A
I have given a sleep time of 60 seconds, and I am not seeing those 60 seconds in the gprof output.
I believe it's probably hidden inside the output.
Could anybody please help me understand the output of gprof?
gprof's sampling doesn't consider I/O, sleep, or other async or blocked OS syscalls, so you can't see the related time cost in gprof's report.
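To actually see hello() accumulate time in the flat profile, the sleeping has to be replaced with CPU-bound work, because the sampler only fires while the program is running on the CPU. A minimal sketch of such a variant (the busy loop is illustrative, not part of the original program):

#include <stdio.h>

/* CPU-bound variant of hello(): instead of sleep(1), burn CPU time so
   gprof's 0.01-second samples can land inside this function. */
void hello(void)
{
    volatile double x = 0.0;    /* volatile so the loop is not optimized away */
    int i;
    for (i = 0; i < 60; i++)
    {
        for (long k = 0; k < 20000000L; k++)
            x += k * 0.5;
        printf("hello world %f\n", x);
    }
}

int main(void)
{
    hello();
    return 0;
}

Compiled with -pg, this version should show nonzero self seconds for hello, whereas the original spends nearly all of its 60 seconds blocked in sleep(), which the sampler never sees.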