I am now learning the C language, and my school puts all assignments on myth, so every time we have to log in via ssh and execute commands remotely.
I therefore want to download the files and run them on my own MacBook. However, when I use the make command to compile the files, I get errors and warnings such as:
gcc -g -O0 -std=gnu99 -Wall $warnflags -m32 -c -I. vectest.c -o vectest.o
warning: unknown warning option '-Wlogical-op'; did you mean '-Wlong-long'?
vectest.c:10:10: fatal error: 'error.h' file not found
#include <error.h>
I googled these problems but could not find a satisfactory answer. Can anyone help me solve this? Or do I have to use a Linux machine instead?
Indeed; compilers for various platforms (even if it's the "same" compiler, such as GCC) may have different flags and behaviors. You may be able to get it to work - you could remove the -Wlogical-op flag from $warnflags in your Makefile, but if the error.h file is a system-supplied header file, you're probably in trouble. Therefore, I suggest that you download e.g. VirtualBox and run Linux on it.
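For instance, if the Makefile sets warnflags along these lines (the exact contents here are a guess; on macOS the gcc command actually invokes clang, which is why the GCC-only flag is rejected):
# before (hypothetical contents):
#   warnflags = -Wall -Wfloat-equal -Wlogical-op
# after, with the GCC-only flag removed so clang accepts the rest:
warnflags = -Wall -Wfloat-equal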
See error(3) for what this header provides. It's not specific to Linux but to the GNU C library. What you COULD do is provide your own minimal implementation of these functions and write your own error.h.
You could even #define them to do nothing at all, but then you would probably lose some error reporting in the existing code. Maybe you could find a teacher who understands the problem and discuss the issue; it's probably better to learn standard C without relying on platform-specific extensions.
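For instance, a minimal drop-in error.h could look like the sketch below. It only covers the error() function described in error(3) (error_at_line() and the related globals are omitted), and it prints a fixed prefix where glibc would print the program name:
#ifndef ERROR_H
#define ERROR_H

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for glibc's error(3): print a formatted message,
   append strerror(errnum) if errnum is nonzero, and exit if status
   is nonzero. */
static void error(int status, int errnum, const char *format, ...)
{
    va_list ap;

    fflush(stdout);            /* keep stdout/stderr ordered, as glibc does */
    fputs("error: ", stderr);  /* glibc prints the program name here */
    va_start(ap, format);
    vfprintf(stderr, format, ap);
    va_end(ap);
    if (errnum != 0)
        fprintf(stderr, ": %s", strerror(errnum));
    fputc('\n', stderr);
    if (status != 0)
        exit(status);
}

#endif /* ERROR_H */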
TL;DR
Can you generate Clang debugging information (CFGs, PDGs) when the original source file has dependency errors from missing header files, causing compilation issues such as undeclared identifiers and unknown types? The files are syntactically correct. Is there a flag that, say, sets all undeclared identifiers to ints for debugging?
I am using Clang to analyze source code packages. Usually, I modify the makefile so Clang generates debugging information using the command below
clang -emit-llvm -g -S -ferror-limit=0 -I somefile some_c_file
However, this approach is very makefile-focused, and if the developers do not support Clang in a given build version, I have to figure out how to generate the debugging information myself.
This is not good for automation. For projects such as OpenSSL, which include dozens of headers and custom configurations for a given platform, it is not practical. I want to suppress or ignore the errors if possible, since I know the build's file under test is syntactically correct.
Thanks!
Recently I used clang-tidy for source code analysis of one of our projects. The project uses the GNU compiler and we didn't want to move away from that. The process I followed was:
1) Use bear to generate the compilation database, i.e. compile_commands.json, which is used by clang-tidy.
2) Bypass the include files that we don't want to analyze by treating them as system files, i.e. use --isystem for their inclusion and -I for project-specific files. (If you can't change the Makefiles, you could change compile_commands.json with a simple find and replace.) Example commands are sketched below.
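For example (bear 3.x uses the bear -- <command> form, older versions take bear make directly; src/foo.c is just a placeholder file name):
$ bear -- make
$ clang-tidy -p . src/foo.c
Here -p . points clang-tidy at the directory containing compile_commands.json.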
Hope this helps
I'm a beginner in programming embedded devices.
While cross-compiling a cryptography algorithm (using OpenSSL), it generates an error as shown below. The program doesn't have a problem, since it runs well on the host system (Ubuntu 14).
Did anyone come across this problem? I tried some of the already-posted related questions on cross-compilation, but they didn't solve my problem.
Thanks.
For the headers issue:
Locate the headers and include them using the -I switch during compilation.
For the linking issue:
$ locate libcrypto.so
You will get the directory where libcrypto resides. Let's say the library is at: target_usr/lib/libcrypto.so
Now use the following command to ensure correct linking:
$ arm-linux-gnueabi-gcc hashSHA.c -Ltarget_usr/lib -lcrypto
Also make sure to add the appropriate include flag, and prefer to use some warning and optimization flags (-W -Wall -O2, for example).
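Putting both together (target_usr/include is an assumed header location; substitute wherever your cross-compiled OpenSSL headers actually live):
$ arm-linux-gnueabi-gcc -W -Wall -O2 -Itarget_usr/include hashSHA.c -Ltarget_usr/lib -lcrypto -o hashSHA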
I would like to know how I can prevent gcc under Cygwin from automatically adding the .exe extension to compiled files, because I just caused myself a lot of confusion with "missing files". For context, I am working on a C project for university. I usually work in the labs, which run Ubuntu (dual-boot with Windows), but to work from home I prefer using my Windows machine, hence Cygwin. If I just remove the extension, the binary still works fine on either system, but it is rather frustrating to have to change the command to include the extension whenever I've just compiled under Cygwin.
I looked up the Cygwin FAQ and found that it is probably related to an environment variable in .bashrc or .bash_profile (see here), but I am no command-line ninja and am not very familiar with editing configuration files... I also found two related questions that show the same behaviour but have nothing to do with trying to change it:
Compiling with gcc (cygwin on windows)
Executable file generated using gcc under cygwin
Any ideas?
It is actually for an MPI-in-C project, so I have a Makefile that calls mpicc, but that is not really relevant to the problem, since I just tried with gcc as well and both do the same thing. For the purposes of this question, the commands and outputs I get are:
$ gcc -o hello hello.c
$ ls
hello.c hello.exe
$ ./hello
Hello, world!
$ ./hello.exe
Hello, world!
Note that running with or without the extension does the same thing in the shell, but it does not with mpirun, which is why I want to change this behaviour.
I eventually decided that Windows is not the programming environment for me. From now on all work that can be done in Linux will be.
7 years and no one to tell?
My answer: yes, it's possible to produce an executable without the .exe extension under Cygwin GCC, by telling the linker how to name its output.
$ echo -e "#include <stdio.h>\nint main(int nbargs, char *args[]) {
printf(\"Hello \\\n\");
}" | gcc -pipe -x c - -Wl,-oess2
This will produce an ess2 PE32 / PE32+ executable file, not an ess2.exe.
The -pipe option instructs the GCC build chain not to write temporary files but to use pipes between stages instead. The -Wl,-o option passes -o straight to the linker, which inhibits the default --force-exe-suffix.
And this way you can even discard Cygwin GCC's output entirely with -Wl,-o/dev/null: the linker will fail when trying to close the output, but you can trap the error message. If you get it, you can be assured that GCC reached the link stage and produced output, which means that GCC can build an executable from this code.
From the ld man page:
--noinhibit-exec
Retain the executable output file whenever it is still usable. Normally, the linker will not produce an output file if it encounters errors during the link process; it exits without writing an output file when it issues any error whatsoever.
DO NOT USE -Wl,-o/dev/stdout under Cygwin. Under Cygwin, /dev/stdout is a symlink, and if the linker fails it will DELETE /dev/stdout.
On the other hand, -Wl,-o/proc/self/fd/1 will do no harm, but the linker will fail and produce only an error message on stdout. Currently, there seems to be no direct way under Cygwin to pipe the linker output, even with named pipes.
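A sketch of that trap (the grep pattern is my assumption, not a verified message; check what your ld actually prints when it fails to close /dev/null before keying on it):
$ echo 'int main(void) { return 0; }' | gcc -pipe -x c - -Wl,-o/dev/null 2> link.log
$ grep -i 'close' link.log && echo 'link stage was reached'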
The automatic .exe extension for executables is there for a reason (Windows requires it). You should deconfuse (aka educate :-) yourself and accept the way Cygwin works. This is a feature rooted so deeply in the Cygwin/Windows guts that it is almost impossible to make it run without it.
For a "Unix feeling on Windows" with a different approach, you may want to check out AT&T's UWin.
I recently discovered the Tiny C Compiler. For the project that I'm currently working on, performance is not a real issue, but file size is, making TCC ideal. I'm using Autotools as a build manager, and I figured that using TCC would be as simple as ./configure CC=tcc.
However, this returns "checking whether the C compiler works... no". In config.log, it says "configure: exit 77".
Despite all of this, setting CC=clang works fine. Is there any way to get Autotools to use TCC?
The issue appears to have been the fault of my CFLAGS. While TCC was normally able to compile programs with them, Autotools seems to have thought otherwise. Setting CFLAGS="" resolved the issue.
For future reference, my CFLAGS are -march=native -mtune=native -O2 -pipe -fstack-protector --param=ssp-buffer-size=4.
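For reference, the invocation that then worked was simply:
$ ./configure CC=tcc CFLAGS=""
(My unverified guess is that TCC's test compile choked on one of the GCC-specific options above, such as -march=native or --param=ssp-buffer-size=4.)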
IIRC TCC does not produce tiny executables - it is TCC itself that is tiny. Perhaps you're looking for gcc -Os?
I am trying to run an MPI program written in C.
I have installed the GCC compiler and the Open MPI libraries. I am running Ubuntu Linux and the NetBeans IDE. My challenge is that after including 'mpi.h' in my header file and compiling the application, I still get 'fatal error: cannot find file mpi.c'. I have the files in home/user/lib/openmpi/include, but I can't get it to work.
Can anyone help?
You could try changing the compiler to /path/mpicc and the debugger to mpirun. This should work, although I haven't tested it; in any case, the best way to compile MPI code is probably via the terminal.
If you really depend on the IDE, you could try writing your code with it (to take advantage of auto-completion and such), then compile it in a terminal using mpicc -o main.exe main.cpp [other .cpp files] and run it with mpirun -np number_of_processes_to_use ./main.exe [args]. You could write a small script or a Makefile to do it all in one command.
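If you want to sanity-check the toolchain outside the IDE first, a minimal test program using only the standard MPI API could look like this (hello_mpi.c is just a placeholder name):
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut the runtime down */
    return 0;
}

Compile and run it with the wrapper:
$ mpicc -o hello_mpi hello_mpi.c
$ mpirun -np 4 ./hello_mpi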
Good luck!
To save yourself some sanity, I'd recommend opening up a terminal and going from there (at least until you figure out what's what).
Also, using the MPI compiler wrapper would simplify your life (and likely solve the missing-source issue automatically, since it knows where the MPI files are by default).
If you still can't locate them during compilation, then I'd look at adding the location of mpi.c and mpi.h to your C include path: How to add a default include path for gcc in linux?