I compiled Dhrystone with the following command and got the executable:

gcc dhrystone_1.c dhrystone_2.c -DHZ=60 -o dhrystone

Now how do I use the executable (dhrystone) to calculate the number of CPU cycles my code consumes? Please let me know.
Here is how I ran the executable: I just copied it onto the SD card that contains the U-Boot image for the board I am using, and ran it to get the Dhrystones per second.
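For context, here is a sketch of the usual arithmetic (the score below is hypothetical): the number Dhrystone prints is Dhrystones per second, conventionally converted to DMIPS by dividing by 1757 (the score of the VAX 11/780 reference machine), and cycles per iteration follow from dividing your CPU clock frequency by the score:

```shell
# Hypothetical score: suppose the benchmark reported 1757000 Dhrystones/second.
DPS=1757000

# DMIPS = Dhrystones per second / 1757 (the VAX 11/780 baseline).
awk -v dps="$DPS" 'BEGIN { printf "%.1f DMIPS\n", dps / 1757 }'

# Cycles per Dhrystone iteration = CPU clock (Hz) / Dhrystones per second,
# here for a hypothetical 1 GHz core:
awk -v dps="$DPS" 'BEGIN { printf "%.1f cycles/iteration\n", 1e9 / dps }'
```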
I want to ask what could be going wrong here. I wrote a program in C, compiled it with

gcc -Wall -pedantic

and ran

valgrind --tool=callgrind --simulate-cache=yes ./a.out

which creates callgrind.out.[pid-number]. But if I run

callgrind_annotate callgrind.out.[pid] main.c

the output is

-- User-annotated source: main.c
No information has been collected for main.c

Is there any way to annotate the code and function calls for program-optimization tools?
Note:

gcc -g -pg program.c

doesn't work, because Apple's gcc doesn't support that debug option, gprof is unsupported too, and the KDE tools won't run on a Mac...
Thanks everyone for the helpful information on how to solve it.
You're compiling with gprof profiling information when you compile with -pg. valgrind doesn't actually need that data to do its profiling; what it does need is the debug information.
Using valgrind-HEAD, I took a simple piece of code and compiled it without -g and got the same result as you - i.e. No information has been collected for main.c.
When I compiled with -g, I got useful information about main.c, even when I compiled with optimization I got useful information.
Long and short of it is that you need to compile with -g, not with -pg to get it to work with callgrind.
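To make that concrete, here is a sketch of the whole flow on a Linux/ELF toolchain (on macOS you would check the debug info with dwarfdump instead of readelf; main.c stands in for your source file):

```shell
# Build with -g so the binary carries the line tables callgrind_annotate needs.
gcc -g -O2 -Wall -o a.out main.c

# Optional sanity check: the debug sections should be present in the binary.
readelf -S a.out | grep debug_line

# Profile, then annotate per source line.
valgrind --tool=callgrind --simulate-cache=yes ./a.out
callgrind_annotate callgrind.out.* main.c
```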
I'm running into a little problem and require some assistance. I would like to run gprof on some OpenMP and MPI hybrid code as part of my testing. I understand that Gprof relies on a binary file which is created when you compile with GCC (or mpicc) using the -pg switch.
I have tried adding this switch and my compilation succeeds (as in no errors are reported); however, the binary file is not created, though the executable is created as normal. I have also tried this on much simpler code, which uses pthreads, with the same result.
Please examine the below and let me hear your thoughts.
gcc -pg --std=gnu99 -pthread -Wall -o pthreadsv0 pthreads.c
GCC compiling with -pg doesn't produce binary needed for Gprof.
I suspect that the binary file you mention is in fact the profile data file (gmon.out); it is generated when you run your program (which has to be compiled with the -pg flag).
Just execute your program and see if a gmon.out file is there.
The Gprof information is created when you execute the program after you compile with the -pg option. Try running your program. (You're profiling (Gprof) the execution of the program.)
The -pg compile option adds the necessary logic to create the profiling information when the program is executed. Executing the program, several times if desired or needed, allows the 'instrumented' code to write the data describing the logic flow and timing to the gmon.out file.
I encountered the same problem. It arose because I was terminating the program with Ctrl+C instead of letting it exit properly (closing the GUI window, in my case).
When I run a compiled C program on my school's "hercules" server, it runs as I would normally expect. However, I've got an assignment that requires the use of forks, and running programs that fork on that server is forbidden; instead, I am to run them by remotely connecting to one of several Linux machines and running them there from the command line.
However, any attempt to do so gives me this error:
shell2: Exec format error. Binary file not executable.
Altogether, my command prompt looks like this:
a049403[8]% shell2
shell2: Exec format error. Binary file not executable.
I've got the shell2 file in the working directory; when I type "ls" it is shown with a * character, indicating it is notionally an executable. I've tried setting its permissions to 777. The same error occurs for other C programs I have been working with and running, and "hercules" can run the exact same file without any difficulties or complaints. My makefile for this particular program looks like this:
all: shell2

Basics.o: Basics.c Basics.h
	cc -c Basics.c

doublelinked.o: doublelinked.c doublelinked.h
	cc -c doublelinked.c

main.o: main.c Basics.h doublelinked.h
	cc -c main.c

shell2: main.o Basics.o doublelinked.o
	cc main.o Basics.o doublelinked.o -o shell2

clean:
	rm -f *.o shell2
...and if I re-run the makefile it seems to build the program with no difficulties.
So, any reason an environment that can compile C programs would be unable to run them? Any likely workarounds?
First, your Makefile is not very good; see this example and this one to improve it. Always pass the -Wall -g flags (to get all warnings and debug info) to gcc when compiling homework.
Then, a possible explanation for why your binary program doesn't run on the target machine could be a libc version mismatch. Or perhaps one system is 32-bit and the other is 64-bit. Use file shell2 to find out.
I suggest running make clean and then recompiling (e.g. with make all) on the target machine.
Learn how to use the gdb debugger.
If you're running a program compiled on a different (incompatible) architecture, you'll almost certainly run into troubles.
It's a little unclear from the question whether you're transferring and attempting to run the executable, or trying to build the executable from scratch on the new machine. It's also unclear whether your make (if you're running it on the new machine) is completely rebuilding everything.
However, given the "Exec format error", it's a safe bet that the former case is the one you're encountering, trying to run an incompatible executable file.
So, here's my advice. Transfer everything to the new machine and execute:
make clean all
to ensure everything is being rebuilt.
Recently, I found knn CUDA, which is a group of MEX files that implement kNN search based on brute force, but in the README.md I have not found how to compile these files in MATLAB on a Linux distribution. I would appreciate ideas about how to cope with this issue.
I'm the author of this kNN code :)
Back in 2008, the code was written using the Windows XP OS.
Since I provide the source code, you should be able to produce linux mex files.
In the ReadMe, I give the following command line for Windows:
nvmex -f nvmexopts.bat knn_cuda_with_indexes.cu -I'C:\CUDA\include' -L'C:\CUDA\lib' -lcufft -lcudart -lcuda -D_CRT_SECURE_NO_DEPRECATE
Adapt it for your Linux distribution to generate your mex file.
A lot of things may have changed in 5 years so you may have to modify a few things.
However, the feedback I got from users indicates that it works just fine.
Also try reading about how to compile CUDA code under Linux.
I believe NVIDIA provides a pretty nice tutorial.
You can also compile CUDA+MEX without nvmex. In the MATLAB command window, simply run the following two lines:
>> !nvcc -c yourfile.cu -Xcompiler -fPIC -I$matlabroot/extern/include -I$matlabroot/toolbox/distcomp/gpu/extern/include
>> mex yourfile.o -L/usr/local/cuda/lib64 -L$matlabroot/bin/glnxa64 -lcudart -lcufft -lmwgpu
Replace $matlabroot with the appropriate path. (Note that ! invokes a system command in MATLAB.)
The first line creates the object file, and then mex links the libraries.
You might have to adjust your CUDA path to /usr/local/cuda-6.0/ or /usr/local/cuda-YOUR_VERSION/. Likewise, the CUDA library directory may be /usr/local/cuda/lib64 or /usr/local/cuda/lib; please check.
If you want to optimize your code, simply add -O3 -DNDEBUG:
>> !nvcc -O3 -DNDEBUG -c yourfile.cu -Xcompiler -fPIC -I$matlabroot/extern/include -I$matlabroot/toolbox/distcomp/gpu/extern/include
The library-link command is the same.
Also note that additional include paths (-I$path), library paths (-L$path), or libraries (-l$library) might be required to suit your needs.
There is a difference in the size of the executable when I compile the code with gcc (from the terminal) versus Eclipse CDT: gcc produces 8 KB and Eclipse 27 KB. Why does this happen? Isn't Eclipse using the same preinstalled gcc compiler? The program is very simple, but would this make a significant size increase for larger code and compromise program performance?
Yes, Eclipse is using the same gcc. However, it's likely that Eclipse adds debugging information to the binary. For example, Eclipse probably runs gcc with the -g option, which embeds debug information (symbols and line tables that map the binary back to your source files) in the executable. This can make a big difference in executable size.
For example, try to compile this simple program:
#include <stdio.h>

int main() {
    int i;
    for (i = 0; i < 10; i++)
        printf("Hello, world!");
    return 0;
}
Try with:
$ gcc -o program program.c
$ gcc -g -o program_g program.c
$ ls -lh | grep program
-rwxr-xr-x 1 zagorax users 7,8K set 11 19:37 program
-rwxr-xr-x 1 zagorax users 8,4K set 11 19:37 program_g
-rw-r--r-- 1 zagorax users 105 set 11 19:35 program.c
Of course, different gcc options may result in different sizes.
Note that Eclipse CDT has two build configurations, named "Debug" and "Release". By default it builds "Debug", which results in a bigger executable due to fewer optimizations and the inclusion of debug information. You can reproduce this by passing gcc the -O0 and -g flags.
It is likely that a "Release" build will produce an executable of comparable size to what you get from the command line. Note that a "Release" build may still pass some flags that alter executable size (e.g. it can enable deeper optimization).
You can find the command line flags CDT passes to GCC in build console view and in the generated make files.
Note that, as the name implies, the "Debug" version of the executable is meant to be used for debugging and should not be distributed to users. As a rule, it may be noticeably slower and may produce debug output that is not meant for users' eyes. On the other hand, debugging a "Release" build can be a tough endeavour, as the compiler may optimize out code you would like to observe in the debugger, reshuffle code lines, and make it hard to link the source code to the program's execution.