GCC complaining about non-standard calling convention "ZEND_API"

In Zend engine code for PHP I see lines like the below in the header files.
ZEND_API char *zend_strndup(const char *s, unsigned int length) ZEND_ATTRIBUTE_MALLOC;
I am new to professional C/C++ programming.
When I try to compile the C files in this package using gcc, I get errors like
zend_alloc.h:55: error: expected =, ,, ;, asm or __attribute__ before char
I tried the command gcc -I./ -I../TSRM zend_language_*.c
It looks like gcc is complaining about ZEND_API. What does ZEND_API indicate? Can anyone help me figure out why this error happens?
SVN repository where the files are located.

These files are part of the PHP interpreter, and are not intended to be compiled separately from it. The configure script is supposed to define the ZEND_API macro to:
__attribute__ ((visibility("default")))
on systems that support it (e.g., GCC 4.0+), and to nothing on other compilers.
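For illustration, the definition selected by configure boils down to something like this (a simplified sketch based on the description above, not the literal PHP source, which also handles Windows and the ZEND_ATTRIBUTE_* macros):

#if defined(__GNUC__) && __GNUC__ >= 4
# define ZEND_API __attribute__ ((visibility("default")))
#else
# define ZEND_API
#endif

/* With the macro defined, the declaration you quoted expands cleanly: */
ZEND_API char *zend_strndup(const char *s, unsigned int length);

Without such a definition, the compiler sees the bare, unknown token ZEND_API in front of char, which is exactly the error you're getting.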
If you're just trying to build PHP, download the whole source bundle from php.net and use configure / make to build it. The build process is complex, and isn't intended to be obvious (or even possible) to run manually.

Related

Identify version of C file

For a project I need to determine whether a C file contains code that requires at least a C11 or C99 compiler. Can this be done with gcc or ctags?
Basically I need to identify the minimum compiler standard required to compile the file. I have tried different tools, including ctags.
Use grep -- -std= Makefile to see which standard the build system requests.
ctags: no way.
If you are looking for something smarter... bad luck.
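If you want a rough empirical check anyway, the usual trick is to test-compile with -pedantic-errors under each standard and see which one first succeeds. For example, this made-up file (not from the question) builds with gcc -std=c11 -pedantic-errors but is flagged under gcc -std=c99 -pedantic-errors, because _Static_assert is a C11 feature:

#include <stdio.h>

/* _Static_assert was added in C11, so -std=c99 -pedantic-errors rejects it. */
_Static_assert(sizeof(int) >= 4, "int must be at least 32 bits");

int main(void)
{
    printf("this file needs at least C11\n");
    return 0;
}

Repeating that for c89/c99/c11 tells you the minimum standard for the features you happen to exercise, but it is a manual experiment, not something gcc or ctags will report for you.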

autoconf configure results in C std lib header related compile errors

I am attempting to build a project that comes with an automake/autoconf build system. This is a well-used project, so I'm skeptical that the problem lies in the configure scripts, makefiles, or code as I received them. It is more likely some kind of environment, path, or flag problem, i.e. something on my end with simply running the right commands with the right parameters.
The configuration step seems to complete in a satisfactory way. When I run make, I'm shown a set of errors primarily of these types:
error: ‘TRUE’ undeclared here (not in a function)
error: ‘struct work’ has no member named ‘version’
error: expected ‘)’ before ‘PRIu64’
Let's focus on the last one, which I have spent time researching, and I suspect all the errors are related to missing definitions. Apparently the print-friendly format macros from the C standard library header inttypes.h are not being found. However, in the configure step everything is claimed to be in order:
configure:4930: checking for inttypes.h
configure:4930: /usr/bin/x86_64-linux-gnu-gcc -c -g -O2 conftest.c >&5
configure:4930: $? = 0
configure:4930: result: yes
All the INTTYPES flags are set correctly if I look in confdefs.h, config.h, config.log Output Variables, etc:
HAVE_INTTYPES_H='1'
#define HAVE_INTTYPES_H 1
The problem is the same whether doing a native build, or cross-compiling (for arm-linux-gnueabihf, aka armhf).
The source .c file in question does have config.h included, as you'd expect, which by my understanding of the m4 macro mechanism should be adding an
#include <inttypes.h>
line. Yes, as you may be inclined to ask, if I add this line myself to the .c file, it appears to work and the PRIu64 errors go away.
I'm left wondering how to debug this type of problem: essentially, everything I am aware of tells me I've done the configure step properly, yet I'm left with a bogus make process. Aside from trying every ./configure tweak and trick I can find, I've started looking at the auto-generated Makefile.in itself, but have found nothing so far. I'm also looking into how to get the C preprocessor to tell me which header files it is actually inserting.
EDIT: I've confirmed that the -DHAVE_CONFIG_H mechanic looks good through configure, config.log, Makefile, etc.
autoconf does not automatically produce #include directives. You need to do that on your own based on the HAVE_* macros. So you'll have to add something like this:
#ifdef HAVE_INTTYPES_H
# include <inttypes.h>
#endif
Even if these lines show up in confdefs.h, a temporary header file used by configure scripts, that does not excuse your application from performing these #includes: configure writes them to confdefs.h solely for the benefit of other configure tests, not for application use.
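Putting that together, a minimal sketch of how the top of the failing .c file should look (assuming config.h is the one generated by your configure run and HAVE_CONFIG_H is passed on the command line, as you confirmed; the function and variable names here are illustrative only):

#ifdef HAVE_CONFIG_H
# include "config.h"    /* must come first so the HAVE_* macros are visible */
#endif

#ifdef HAVE_INTTYPES_H
# include <inttypes.h>  /* provides PRIu64 and friends */
#endif

#include <stdio.h>
#include <stdint.h>

void print_version(uint64_t version)
{
    printf("version = %" PRIu64 "\n", version);
}

The point is the ordering: the guarded include only works if config.h has already defined HAVE_INTTYPES_H.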
First, run make -n for the target that failed. This is probably some .o file; you may need some tweaking to get its path right.
Now you have the command used to compile your file. If you don't find the problem by meditating on that command, try running it with -E added, which produces preprocessed text instead of invoking the compiler proper.
Note that the .o file will now contain text, and you must rebuild it without -E later.
You may find some preprocessor flags useful to get more details: -dM or -dD, or others.

CLion fails to index C preprocessor macros when -std=gnuXX is set (Linux Kernel Headers)

I am trying to write a Linux kernel module with CLion. This is the cmake file:
cmake_minimum_required(VERSION 3.5)
project(labs)
set(KERNEL_HEADERS
    /home/alex/Developer/linux/include
    /home/alex/Developer/linux/arch/x86/include
    /home/alex/Developer/linux/arch/x86/include/generated
    /home/alex/Developer/linux/include/uapi
    /home/alex/Developer/linux/include/generated/uapi
    /home/alex/Developer/linux/arch/x86/include/uapi
    /home/alex/Developer/linux/arch/x86/include/generated/uapi
)
set(MY_MODULE_SOURCES
    chapter_03/lab_01/hello.c
)
add_definitions(-imacros /home/alex/Developer/linux/include/linux/kconfig.h)
add_definitions(-D__KERNEL__)
add_definitions(-DMODULE)
add_definitions(-std=gnu89)
include_directories(${KERNEL_HEADERS})
add_custom_target(labs COMMAND $(MAKE) -C ${labs_SOURCE_DIR}
                  PWD=${labs_SOURCE_DIR})
add_library(dummylib ${MY_MODULE_SOURCES})
The actual building of the kernel module is done by an external makefile invoked through add_custom_target. The dummylib target is only there so that CLion actually parses the header files and gives me auto-completion. With my supplied definitions it even compiles dummylib successfully (see the screenshot). It is not a real kernel module, but that does not matter ;)
My problem is the error you see in the screenshot. Somehow CLion says it cannot resolve macros defined in the kernel headers. Functions, structs and plain defines (e.g. MODULE_SIG_STRING) do work, as you can see. I do not understand why the editor says it cannot resolve the macro but can still build the target. Even stranger, I can jump to the declaration of the marked macros with Ctrl+B. Clearly something is going wrong; the macros really are defined in linux/module.h.
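For reference, hello.c itself is just a minimal module along these lines (reproduced from memory, so details may differ slightly from the actual lab file):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);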
Update
When I set -std=c89 instead of -std=gnu89, the editor recognizes the macros, but dummylib of course fails to build, since the kernel needs the GNU extensions. I guess this is a bug in CLion. I reported it on the JetBrains bug tracker: https://youtrack.jetbrains.com/issue/CPP-6875

Problems with linking a library with a c program in linux

I want to send serial commands from a BeagleBone to a 4Dsystems display. Therefore I copied the C library found here into a directory and created a test program, main.c:
#include "Picaso_const4D.h"
#include "Picaso_Serial_4DLibrary.h"
int main(int argc, char *argv[])
{
    OpenComm("/dev/ttyUSB0", B115200); // Matches with the display "Comms" rate
    gfx_BGcolour(0xFFFF);
    gfx_Cls();
    gfx_CircleFilled(120, 160, 80, BLUE);
    while (1) {}
}
Now when I do gcc -o main main.c, it says
main.c:2:37: fatal error: Picaso_Serial_4DLibrary.h: No such file or
directory
So I try linking it:
gcc main.c -L. -lPICASO_SERIAL_4DLIBRARY
which gives me the same error. Then I tried to create a static library:
gcc -Wall -g -c -o PICASO_SERIAL_4DLIBRARY PICASO_SERIAL_4DLIBRARY.C
which gives me this:
PICASO_SERIAL_4DLIBRARY.C:1:21: fatal error: windows.h: No such file
or directory compilation terminated.
What am I doing wrong? The git page clearly says this library was created for people who do not run Windows.
Thanks in advance!
You're not getting a linker error; you're getting a preprocessor error. Specifically, your preprocessor can't find Picaso_Serial_4DLibrary.h. Make sure that it's in your include path; you can add directories to your include path using the -I argument to gcc.
You've had two problems. The first was the picaso_whatever.h file that couldn't be found; you fixed that with the -I you added. But now that header wants windows.h.
What are you building on? WinX or BSD/Linux?
If you're compiling on WinX, you need to install the "platform sdk" for visual studio.
If you're using mingw or cygwin, you need to do something else.
If on WinX, cd to the C: directory. Do find . -type f -name windows.h and add a -I for the containing directory.
If under Linux, repeat the find at the source tree top level. Otherwise, there is probably some compatibility cross-build library that you need to install.
Or you'll have to find a WinX system that has it, as Picaso clearly includes it. You could try commenting out one or more of the #includes for it and see whether things get better or worse.
If you can't find a real one, create an empty windows.h, add a -I pointing to its directory, and see how bad [or good] things are.
You may need the mingw cross-compiler. See https://forums.wxwidgets.org/viewtopic.php?t=7729
UPDATE:
Okay ... Wow ... You are on the right track and close, but this is, IMO, ugly WinX stuff.
The primary need of Picaso is getting a serial comm port connection, so the need from within windows.h is [thankfully] minimal. It needs basic boilerplate definitions for WORD, DWORD, etc.
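If you try the empty-stub route suggested above, a guess at the minimum it would need looks something like this (hypothetical; the real library may also want HANDLE and the CreateFile prototype mentioned below, at which point the cygwin/mingw header is the sane choice):

/* stub windows.h -- just the basic integer typedefs, nothing more */
#ifndef STUB_WINDOWS_H
#define STUB_WINDOWS_H

typedef unsigned char  BYTE;
typedef unsigned short WORD;
typedef unsigned int   DWORD;  /* DWORD is 32 bits on Windows; unsigned int matches that on Linux */

#endif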
mingw or cygwin will provide their own copies of windows.h. These are "clean room" reimplementations, so no copyright issues.
mingw is a collection of compile/build tools that lets you use the gcc/ld/make build utilities.
cygwin is more like: I'd like a complete shell-like environment similar to BSD/Linux. You get bash, ls, gcc, tar, and just about any GNU utility you want.
Caveat: I use cygwin, but have never used mingw. The mingw version of windows.h [and a suite of .h files that it includes underneath], being open source, can be reused by other projects (e.g. cygwin, wine).
Under Linux, wine (windows emulator) is a program/suite that attempts to allow you to run WinX binaries under Linux (e.g. wine mywinpgm).
I git cloned the Picaso library and, after some fiddling, I was able to get it to compile after pointing it at wine's version of windows.h.
Picaso's OpenComm is doing CreateFile [a win32 API call]. So, you'll probably need cygwin. You're opening /dev/ttyUSB0. /dev/* implies cygwin. But, /dev/ttyUSB0 is a Linux-like name. You may need some WinX-style name like "COM:" or whatever. Under the cygwin terminal [which gives you a bash prompt], do ls /dev and see what's available.
You can get cygwin from http://cygwin.com/. If you have a 64-bit system, be sure to use the 64-bit version of the installer: setup-x86_64.exe. It's semi-graphical and will want two directories, one for the "root" FS and one to store packages. On my system I use C:\cygwin64 and C:\cygwin64_packages; YMMV.
Note that the installer won't install gcc by default. You can [graphically] select which packages to install. You may also need some "devel" packages; they have libraries and .h files that a non-developer wouldn't need. As the docs mention, you can rerun the installer as often as you need, to add packages that you forgot to specify or to remove ones you installed but no longer need.
Remember that you'll need to adjust the makefile's -I and/or -L options appropriately. Also, when building the Picaso library, gcc generated a ton of warnings about overflow of a "large integer". The code was doing:
#define control_code -279
unsigned char buf[2];
buf[0] = control_code >> 8;
buf[1] = control_code;
The code is okay, and the warning is correct [because the code is sloppy]. If the code had done:
#define control_code -279
unsigned char buf[2];
buf[0] = (unsigned) control_code >> 8;
buf[1] = (unsigned) control_code;
it probably would have been silent. Use -Wno-overflow in your Makefile to get rid of the warnings rather than editing 50 or so lines.

Old gcc compiler on matlab

I am using MATLAB on Linux Mint. I have a C program that I want to compile with the mex command as follows:
mex /home/.../binary.c -output binary_m
but I get the following error
Warning: You are using gcc version "4.8.1-10ubuntu9)". The version
currently supported with MEX is "4.4.6".
For a list of currently supported compilers see:
http://www.mathworks.com/support/compilers/current_release/
/home/.../binary.c:43:19: fatal error: binary.h: No such file or directory
#include "binary.h"
^
compilation terminated.
mex: compile of ' "/home/.../binary.c"' failed.
I think that I have to downgrade the gcc compiler that MATLAB uses, but I don't know how.
Any help is appreciated.
This has nothing to do with the warning about the compiler version; don't pay attention to that, you will be fine. You might have had problems trying to compile C++11 sources, depending on your MATLAB version, compiler version and mex command flags, but that is not your case.
Here is the problem: your C program binary.c contains an #include statement for the file binary.h, which is not found by MATLAB (although I trust you put it in the same directory as the C file?) because the directory that contains your C sources is not on the MATLAB path.
To fix the problem, simply change directory to where binary.c is and mex your file there. You can automate the process with something like:
source_dir = '/home/.../';
current_dir = fileparts(mfilename('fullpath'));
cd(source_dir);
mex binary.c -output binary_m   % compile next to binary.h
cd(current_dir);
