extract library version from binary with CMake - c

I am writing a FindXXX.cmake script for an external C library. I would like my script to provide information about the library version. However, the library only provides this information in the form of a function that returns the version number as a string.
I thought I could extract the version number by having FindXXX.cmake compile the following C program on the fly:
#include <stdio.h>
#include "library.h"

int main() {
    char version[256];
    get_version(version);
    puts(version);
    return 0;
}
In order for this to work, CMake should compile and run the program above at configure time, and use the information it prints as the version identifier. I know how to do the latter (execute_process), and I almost know how to do the former: CheckCSourceRuns comes to mind, but I do not know how to capture the stdout of the generated executable.
TL;DR: is there a way to compile a program, run it and capture its stdout from CMake at generation time?

You may use try_run for that purpose (it is assumed that your source file is named foo_get_version.c):
try_run(foo_run_result foo_compile_result
        ${CMAKE_CURRENT_BINARY_DIR}/foo_try_run ${CMAKE_CURRENT_LIST_DIR}/foo_get_version.c
        RUN_OUTPUT_VARIABLE foo_run_output)
if(NOT foo_compile_result)
    # ... Failed to compile
endif()
if(NOT foo_run_result EQUAL "0")
    # ... Failed to run
endif()
# Now the 'foo_run_output' variable contains the output of your program.
Note that try_run isn't executed when cross-compiling. Instead, CMake expects the user to set the cache variables foo_run_result and foo_run_result__TRYRUN_OUTPUT.
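Since RUN_OUTPUT_VARIABLE captures everything the probe program writes, it helps to keep the probe's output down to just the version string. A minimal sketch of such a probe, assuming the same get_version(char *) usage as in the question:
#include <stdio.h>
#include "library.h"

int main(void) {
    char version[256];

    /* Assumption from the question: get_version() fills the buffer
       with a NUL-terminated version string. */
    get_version(version);

    /* Print only the version itself, so the captured output contains
       nothing but the string CMake should use. */
    fputs(version, stdout);
    return 0;
}
In the FindXXX script you would then typically strip any surrounding whitespace from foo_run_output before exposing it as the package version variable.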

Related

TCL: call TCL package while embedded in C

From this question I learnt that we can embed TCL in C as easily as the following:
#include <stdio.h>
#include <tcl.h>
#include <conio.h>   /* for getch() on Windows/MinGW */

int main(void)
{
    Tcl_Interp *myinterp;
    char *action = "set a [expr 5 * 8]; puts $a";
    int status;

    printf("Your Program will run ... \n");
    myinterp = Tcl_CreateInterp();
    status = Tcl_Eval(myinterp, action);
    printf("Your Program has completed\n");
    getch();
    return 0;
}
And to compile it, we need to tell the compiler where the Tcl headers and library are:
gcc -o test.exe test.c -Ic:/tcl/include /mingw64/bin/tcl86.dll
My question is: if my Tcl script calls another package (for example: package require Img), how can I include this package (for example "Img") in the created test.exe?
I am using mingw64 on Windows to compile my C code, but when I run the resulting test.exe, it gives me the Tcl error {can't find package Img while executing "package require Img"}.
BTW, I have Img installed, and when I run my Tcl script using tclsh, I have no errors.
You should extend the list in the global auto_path variable with the path to the location (i.e., the directory) of the extra libraries you want to be able to access.
Tcl_SetVar(interp, "::auto_path", "/path/to/directory", TCL_APPEND_VALUE | TCL_LIST_ELEMENT);
Do this after you create the interpreter but before you evaluate any scripts in it. This is safe against characters like spaces in the pathname. On Windows, you can use \ as a separator if you prefer. If you have multiple locations, put several calls to Tcl_SetVar() in. (How you work out the correct directory or directories is up to you; the value gets copied immediately.)
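To tie this back to the question's embedding code, here is a minimal sketch that sets ::auto_path and then loads Img; the package directory below is only a placeholder for wherever Img is actually installed on your machine, and error handling is kept deliberately simple:
#include <stdio.h>
#include <tcl.h>

int main(void)
{
    Tcl_Interp *interp = Tcl_CreateInterp();

    /* Placeholder path: point this at the directory that contains the
       Img package (the directory holding its pkgIndex.tcl). */
    Tcl_SetVar(interp, "::auto_path", "c:/tcl/lib/Img1.4",
               TCL_APPEND_VALUE | TCL_LIST_ELEMENT);

    /* Initialise Tcl's script library, then load the package. */
    if (Tcl_Init(interp) != TCL_OK ||
        Tcl_Eval(interp, "package require Img") != TCL_OK) {
        fprintf(stderr, "%s\n", Tcl_GetStringResult(interp));
        return 1;
    }
    puts("Img loaded");

    Tcl_DeleteInterp(interp);
    return 0;
}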

YouCompleteMe suggests only "local" used code

I'm trying to use YCM for the first time, so in order to make it work I decided to give YCM-Generator a chance; it generates the .ycm_extra_conf.py file automatically based on the makefile.
So far my program is just a simple hello world.
#include <stdio.h>

int main()
{
    printf("Hello World!");
    return 0;
}
I'm using the CMakeLists.txt trick to generate the makefile.
file(GLOB sources *.h *.c)
add_executable(Foo ${sources})
then after executing the YCM-Generator script, I get this output
Running cmake in '/tmp/tmp_YknVy'...
$ cmake /home/pedro/Desktop/Projetos/teste
Running make...
$ make -i -j4
Cleaning up...
Build completed in 1.5 sec
Collected 2 relevant entries for C compilation (0 discarded).
Collected 0 relevant entries for C++ compilation (0 discarded).
Created YCM config file with 0 C flags
The YCM plugin does find the .ycm_extra_conf.py file, but the auto-completion doesn't work right. For example, if I type "floa", it doesn't suggest "float"; it only suggests things that I used before, like "int" or "printf".
Am I missing something, or is this working as intended?
So I fixed it.
For C it does require a .ycm_extra_conf.py, while a friend of mine could make it work without one in C++.
The auto-complete only automatically suggests identifiers that were previously used; if you don't remember a function name, you have to press <Ctrl-Space>.
YCM-Generator didn't do the job, so I modified the example file myself following the comments.
If you are used to Visual Assist, the auto-complete works but it's really weak compared to VA, which is a shame... I really hope someone ports that plugin to Linux.

Pro*C based batch, Out of Memory?

When trying to compile a Pro*C based batch file, the "proc" process gets stuck at 100% of one CPU core and its memory keeps growing until the system has to OOM-kill it (the machine has 16 GB of memory and the process grew up to 9 GB).
Has anyone seen this behavior before?
As additional information:
- The mk file is the one from the installation of the main package
- The .pc files are the original files (I've tried to compile several, such as dtesys.pc)
- The libs are correctly compiled
- The environment variables are correctly set
Yes, it is limits.h because it includes itself recursively on line 123:
/* Get the compiler's limits.h, which defines almost all the ISO constants.
We put this #include_next outside the double inclusion check because
it should be possible to include this file more than once and still get
the definitions from gcc's header. */
#if defined __GNUC__ && !defined _GCC_LIMITS_H_
/* `_GCC_LIMITS_H_' is what GCC's file defines. */
# include_next <limits.h>
#endif
So the solution is to pass the parse=none option to the Pro*C precompiler:
proc parse=none iname=filename.pc oname=filename.c
Or, as a second option, you may first run your source through the C preprocessor to get the .pc file:
cpp -P -E yourfile.someextension -o yourfile.pc
Then you will get limits.h parsed without recursion.
The -P option is needed because Pro*C is a program that can be confused by linemarkers.
The -E option is needed because Pro*C is a program that can be confused by non-traditional output.

Getting the GCC include path with GNU Autotools

I'm writing an implementation of the C preprocessor that, when running on Linux, needs to know the path on which to find header files. This can be obtained by running gcc -v. I want to compile the results into the binary of my preprocessor rather than having to invoke gcc -v on every run, so I'm currently thinking of writing a Python script to be run at compile time, that would obtain the path and write it into a small C source file to be included in the build.
On the other hand, I get the impression GNU Autotools is basically the specialist in obtaining system-specific information to be used at build time. Does Autotools have the ability to obtain the #include path in such a way that it can be incorporated as a string into the program being built (as opposed to being used for the build process)? If so, how?
If you want to get the internal include/ directory used by GCC, run the gcc -print-file-name=include command, e.g. in shell syntax
the_gcc_include_dir=$(gcc -print-file-name=include)
This $the_gcc_include_dir directory contains files like <stdarg.h> and <stddef.h> and many others.
You also want the include-fixed/ directory, so
the_gcc_include_fixed_dir=$(gcc -print-file-name=include-fixed)
This $the_gcc_include_fixed_dir directory contains files like <limits.h> and also a useful README.
You probably don't need autotools in your case.
I ended up parsing gcc's include path with a Python script:
import sys

print 'string gcc_include_path[] = {'
for s in sys.stdin:
    # The include directories in cpp's verbose output are indented with a space.
    if s[0] == ' ':
        s = s.strip()
        print '\t"' + s + '",'
print '};'
and calling it from Makefile:
echo | cpp -Wp,-v 2>&1 >/dev/null | python include_path.py >include_path
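For reference, here is a plain-C sketch of how the generated fragment might be consumed; the string type in the script's output is presumably a typedef from the asker's own project (a stand-in is used here), and the directories shown are just examples of what a local GCC might report:
#include <stdio.h>

/* Assumption: the project defines its own `string` type;
   a plain-C stand-in is used here. */
typedef const char *string;

/* Example of what the generated fragment looks like; in the real build
   this definition comes from the file produced by the script above. */
string gcc_include_path[] = {
    "/usr/lib/gcc/x86_64-linux-gnu/9/include",
    "/usr/local/include",
    "/usr/include",
};

int main(void)
{
    for (size_t i = 0; i < sizeof gcc_include_path / sizeof gcc_include_path[0]; i++)
        printf("search dir: %s\n", gcc_include_path[i]);
    return 0;
}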

run c program - stdio.h where do i get it?

Looking into learning C. As I understand it, when I say #include <stdio.h> it grabs stdio.h from the default location... usually a directory inside your working directory called include. How do I actually get the file stdio.h? Do I need to download a bunch of .h files and move them from project to project inside the include directory? I did the following in a test.c file. I then ran make test and it output a binary. When I ran ./test I did not see hello print onto my screen. I thought maybe I wasn't seeing output because it doesn't find the stdio.h library. But then again, if I remove the angle brackets around stdio.h the compiler gives me an error. Any ideas?
I'm on a Mac running this from the command line. I am using: GNU Make 3.81. This program built for i386-apple-darwin10.0
#include <stdio.h>

main()
{
    printf("hello");
}
Edit: I have updated my code to include a datatype for the main function and to return 0. I still get the same result... it compiles without error, and when I run the file with ./test it doesn't print anything on screen.
#include <stdio.h>

int main()
{
    printf("hello");
    return 0;
}
Update:
If I add a \n inside the printf it works! So this will work:
#include <stdio.h>

int main()
{
    printf("hello\n");
    return 0;
}
Your code should preferably have
printf("hello\n");
or
puts("hello");
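The likely reason the newline matters is buffering: stdout is typically line-buffered when attached to a terminal, so text without a trailing newline may not appear until the stream is flushed. A minimal sketch that flushes explicitly instead of printing a newline:
#include <stdio.h>

int main(void)
{
    printf("hello");
    /* Without a newline, explicitly flush so the text is written out
       immediately rather than sitting in the stdio buffer. */
    fflush(stdout);
    return 0;
}
Even then, the text ends up on the same line as the next shell prompt, which makes it easy to overlook.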
If you want to know where the standard header file <stdio.h> comes from, you could run your compiler with appropriate flags. If it is gcc, try compiling with
gcc -H -v -Wall hello.c -o hello
Pedantically, a standard header is not even required to exist as a file; the standard permits an implementation to process #include <stdio.h> without accessing the file system (e.g. by retrieving internal resources inside the compiler, or from a database...). Few compilers behave that way; most really access something in the file system.
If you didn't have the file, you'd get a compilation error.
My guess is the text was printed, but the console closed before you got the chance to see it.
Also, main returns an int, and you should return 0; to signal successful completion.
#include <header.h>, with angle brackets, searches in standard system locations known to the compiler, not in your project's subdirectories. In Unix systems (including your Mac, I believe), stdio.h is typically in /usr/include. If you use #include "header.h", the compiler searches the including file's own directory first and then the same places as with <header.h>.
But you don't need to find or copy the header to run your program. It is read at compilation time, so your ./test doesn't need it at all. Your program looks like it should have worked. Is it possible that you just typed "test", not "./test", and got the system command "test"? (Suggestion: Don't name your programs "test".)
Just going to leave this here: still, in December 2018, Linux Mint 18.3 ships without support for C development out of the box.
innocent / # cc ThoseSorts.c
ThoseSorts.c:1:19: fatal error: stdio.h: No such file or directory
compilation terminated.
innocent / # gcc ThoseSorts.c
ThoseSorts.c:1:19: fatal error: stdio.h: No such file or directory
compilation terminated.
innocent / # apt show libc6
(Abbreviated)::
Package: libc6
Version: 2.23-0ubuntu10
Priority: required
Section: libs
Source: glibc
Origin: Ubuntu
Installed-Size: 11.2 MB
Depends: libgcc1
Homepage: http://www.gnu.org/software/libc/libc.html
Description: GNU C Library: Shared libraries
Contains the standard libraries that are used by nearly all programs on
the system. This package includes shared versions of the standard C library
and the standard math library, as well as many others.
innocent / # apt-get install libc6-dev libc-dev
So, magic... and a minute later they are all installed on the
computer and then things work as they should.
Not all distros bundle up all the C support libs in each ISO.
Hunh.
hardlyinnocent / # gcc ThoseSorts.c
hardlyinnocent / # ./a.out
20
18
17
16
... ... ...
