Is it possible to make the C preprocessor (cpp) call external scripts? For example:
$ ./snmp
C3750
$ cat test
switch model:
#script snmp /* starts an external bash script named snmp */
$ cpp -P < test
switch model:
C3750
$
You are free to write a Makefile or other automated build system which invokes scripts; you can feed the information from the script into the source file using the -D option (for gcc/clang/etc.) or equivalent. But the preprocessor itself is independent of any operating system interface other than reading files for #include directives.
For example, you could compile your program with something like:
gcc -DSWITCH_MODEL="$(./snmp)" test.c -o test
(Of course, you would probably want other options.) That would result in the preprocessor macro SWITCH_MODEL being #define'd as C3750 (assuming that's the output of the snmp script).
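If the program then needs that value as a string (for example, to print it the way the question shows), the usual trick is a two-level stringizing macro. A minimal sketch, assuming the snmp script prints a bare token such as C3750 (the macro names here are just illustrative):

#include <stdio.h>

/* Two-level macro so SWITCH_MODEL is expanded before being stringized. */
#define STR_HELPER(x) #x
#define STR(x) STR_HELPER(x)

int main(void) {
    /* With -DSWITCH_MODEL=C3750 this prints "switch model: C3750". */
    printf("switch model: %s\n", STR(SWITCH_MODEL));
    return 0;
}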
I have a C file residing in the USS filesystem. My C file is pretty basic. It contains logic to print "Hello World". I execute: c89 [filename]
I get a return code (rc) of CCN0634(U). The same rc is thrown if I try the cc compiler. I Google the aforementioned rc. The IBM Knowledge Center tells me the following:
Check that the compiler is installed correctly. Make sure there is enough memory in the region to fetch the module. You may need to specify the runtime option HEAP(,,,FREE,,) to prevent the compiler from running out of memory.
The above explanation didn't make much sense. I Googled for some solutions. All of the search results led to compilation in batch using JCL. It was overwhelming as there were many different flavors.
Q1: What’s the simplest way to compile a C program under Unix System Services (USS)?
Q2: How do I check if a compiler is installed? cc --version doesn’t work.
Expected Result
Compilation of my C program
Actual Result
My C program doesn't compile.
The simplest way to compile a C program in USS on z/OS is by using the xlc utility. Here is an example with two command-line options, -v and -qphaseid, which show the version of the utility and of the compiler components involved in the compilation:
xlc -v -c -qphaseid a.c
FSUM0000I Utility(xlc) Level(D170323.1712)
exec: export(export,XL_CONFIG=/bin/../usr/lpp/cbclib/xlc/etc/xlc.cfg:xlc,NULL)
exec: /usr/lpp/cbclib/xlc/exe/ccndrvr(/usr/lpp/cbclib/xlc/exe/ccndrvr,./,./a.c,*.c,CMDOPTS(DEFINE(errno=(*__errno())),NOTEST,-qoe,-qargparse,-qexecops,-qflag=i,-qhalt=16,-qnodebug,-qnolsearch,-qredir,-qlocale=POSIX,-qlongname,-qmaxmem=*,-qmemory,-qnestinc=255,-qnoexpmac,-qnoexportall,-qnogonumber,-qtarget=le,-qnolibansi,-qlist=/dev/fd1,-qnolist,-qnomargins,-qnooffset,-qnosequence,-qnoshowinc,-qsource=/dev/fd1,-qnosource,-qnoxref,-qterminal,-qnooptimize,-qplist=host,-qspill=128,-qstart,-qnoipa,DEFINE(_OPEN_DEFAULT=1),-qansialias,-qcpluscmt,-qlanglvl=extended,-qnoupconv,-qnoalias,-qnoaggregate,-qnoinfo,-qnoevents,-qrent,-qinline=auto:noreport:100:1000,-qnoinline),object(./a.o),-qphaseid,NOPPONLY,NULL)
exec: export(export,STEPLIB=CBC.SCCNCMP:CEE.SCEERUN2:CEE.SCEERUN,NULL)
exec: export(export,_C89_ACCEPTABLE_RC=4,NULL)
CCN0000(I) Product(5650-ZOS) Phase(CCNEOPTP) Level(D190612.Z2R3)
CCN0000(I) Product(5650-ZOS) Phase(CCNDRVR ) Level(D190612.Z2R3)
CCN0000(I) Product(5650-ZOS) Phase(CCNEP ) Level(D190612.Z2R3)
CCN0000(I) Product(5650-ZOS) Phase(CCNETBY ) Level(D190612.Z2R3)
CCN0000(I) Product(5650-ZOS) Phase(CCNECWI ) Level(D190612.Z2R3)
The XL C/C++ compiler is a priced feature on z/OS, so your system may not have it installed. A properly installed compiler will have a valid configuration file in /bin/../usr/lpp/cbclib/xlc/etc/xlc.cfg.
Perhaps a clarification is in order as it may not be obvious that there are two utilities in USS that invoke the XL C/C++ compiler.
The c89 utility, whose operation is controlled by a number of environment variables (most users would run into difficulties figuring out how to define the required environment variables).
The xlc utility, whose operation is controlled by a configuration file, which is easier to set up. Most users will be fine with the default configuration file customized by the system programmer who installed the compiler, but a more sophisticated user who needs a different setup can copy the configuration file to a private file and modify it. Once modified, it can be used by passing the -F /path_to_modified_config_file option on the command line.
I am running on z/OS 2.4.
One common issue is that the libraries that host the compiler are not accessible. I have this setting in my shell environment:
export STEPLIB="none:CEE.SCEERUN:CBC.SCLBDLL:CBC.ACCNCMP"
The last dataset hosts the module that you indicate is not found.
In addition, here are some further settings (note: some of these libraries may differ on your system due to customization, but these are the defaults):
declare -x _C89_CLIB_PREFIX="CBC"
declare -x _C89_INCDIRS="/usr/include /usr/lpp/ioclib/include"
declare -x _C89_LIBDIRS="/lib /usr/lib"
declare -x _C89_PLIB_PREFIX="CEE"
declare -x _C89_SLIB_PREFIX="SYS1"
declare -x _C89_WORK_UNIT="SYSDA"
declare -x _CC_CLIB_PREFIX="CBC"
declare -x _CC_INCDIRS="/usr/include /usr/lpp/ioclib/include"
declare -x _CC_LIBDIRS="/lib /usr/lib"
declare -x _CC_PLIB_PREFIX="CEE"
declare -x _CC_SLIB_PREFIX="SYS1"
declare -x _CC_WORK_UNIT="SYSDA"
declare -x _CEE_RUNOPTS="FILETAG(AUTOCVT,AUTOTAG) POSIX(ON)"
declare -x _CXX_CLIB_PREFIX="CBC"
declare -x _CXX_INCDIRS="/usr/include /usr/lpp/ioclib/include"
declare -x _CXX_LIBDIRS="/lib /usr/lib"
declare -x _CXX_PLIB_PREFIX="CEE"
declare -x _CXX_SLIB_PREFIX="SYS1"
declare -x _CXX_WORK_UNIT="SYSDA"
In answer to Q1, here is the simplest way:
Below is the output from my shell session in USS:
IBMUSER:/u/ibmuser #>cat t.c
#include <stdio.h>
int main(int argc, char **argv) {
printf("hello world\n");
return(0);
}
IBMUSER:/u/ibmuser #>cc t.c
IBMUSER:/u/ibmuser #>./a.out
hello world
I'm writing an implementation of the C preprocessor that, when running on Linux, needs to know the path on which to find header files. This can be obtained by running gcc -v. I want to compile the results into the binary of my preprocessor rather than having to invoke gcc -v on every run, so I'm currently thinking of writing a Python script to be run at compile time, that would obtain the path and write it into a small C source file to be included in the build.
On the other hand, I get the impression GNU Autotools is basically the specialist in obtaining system-specific information to be used at build time. Does Autotools have the ability to obtain the #include path in such a way that it can be incorporated as a string into the program being built (as opposed to being used for the build process)? If so, how?
If you want to get the internal include/ directory used by GCC, run the gcc -print-file-name=include command, e.g. in shell syntax
the_gcc_include_dir=$(gcc -print-file-name=include)
This $the_gcc_include_dir directory contains files like <stdarg.h> and <stddef.h> and many others.
You also want the include-fixed/ directory, so
the_gcc_include_fixed_dir=$(gcc -print-file-name=include-fixed)
This $the_gcc_include_fixed_dir directory contains files like <limits.h> and also a useful README.
You probably don't need autotools in your case.
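If you only want those directories baked into your preprocessor's binary, one option is to pass them in with -D at build time so they arrive as string literals, and keep them in a search list. A minimal sketch; the macro names, fallback paths, and the search-list layout are all assumptions, not something gcc or autotools provides:

#include <stdio.h>

/* Fallbacks for illustration only; in a real build the values would come from
   the command line, e.g. -DGCC_INCLUDE_DIR="\"$(gcc -print-file-name=include)\""
   (the macro names are hypothetical). */
#ifndef GCC_INCLUDE_DIR
#define GCC_INCLUDE_DIR "/usr/lib/gcc/placeholder/include"
#endif
#ifndef GCC_INCLUDE_FIXED_DIR
#define GCC_INCLUDE_FIXED_DIR "/usr/lib/gcc/placeholder/include-fixed"
#endif

static const char *system_include_dirs[] = {
    GCC_INCLUDE_DIR,
    GCC_INCLUDE_FIXED_DIR,
    "/usr/local/include",
    "/usr/include",
};

int main(void) {
    /* Print the compiled-in search list. */
    for (size_t i = 0; i < sizeof system_include_dirs / sizeof system_include_dirs[0]; i++)
        printf("%s\n", system_include_dirs[i]);
    return 0;
}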
I ended up parsing gcc's include path with a Python script:
import sys

# Keep only the indented lines of cpp's search list and emit a C array initializer.
print 'string gcc_include_path[] = {'
for s in sys.stdin:
    if s[0] == ' ':
        s = s.strip()
        print '\t"' + s + '",'
print '};'
and calling it from Makefile:
echo | cpp -Wp,-v 2>&1 >/dev/null | python include_path.py >include_path
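For illustration, here is how the generated include_path file might be consumed from plain C. This is only a sketch under the assumption that string is a typedef for const char * (in the real project it may well be something else):

#include <stdio.h>

/* Assumption for this sketch: make the generated "string" type plain C. */
typedef const char *string;

#include "include_path"   /* the file produced by the Makefile rule above */

int main(void) {
    size_t n = sizeof gcc_include_path / sizeof gcc_include_path[0];
    for (size_t i = 0; i < n; i++)
        printf("search dir %zu: %s\n", i, gcc_include_path[i]);
    return 0;
}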
I have a library that calls C code. It's compiled with the -custom, -cclib, -l flags, and it works OK when I compile my code against the library with ocamlc,
but when I use the "ocaml" top level to run a script like:
ocaml -I /opt/godi/lib/ocaml/pkg-lib/xxxx xxx.cma myprog.ml
it says:
Error: The external function `caml_yyyy' is not available
Do I need additional parameters to tell the toplevel?
You should build your own toplevel using "ocamlmktop":
$ ocamlmktop -custom -I /opt/godi/lib/ocaml/pkg-lib/xxxx xxx.cma -o ocaml_with_xxx
Then, you can use it:
$ ./ocaml_with_xxx -I /opt/godi/lib/ocaml/pkg-lib/xxxx
Note that you still need the -I so that the toplevel can find the interface files of the library that it contains.
IIRC you cannot use libraries compiled with -custom in the toplevel. You should compile dynamically-loadable stubs so that the toplevel can pick them up. This is very easy to do with e.g. oasis and somewhat more involved if invoking the OCaml tools manually.
Hello everyone,
I am trying to debug a program that was installed via a makefile.
It has a binary file OpenDPI_demo.o and a shell script OpenDPI_demo.
When I run gdb OpenDPI_demo.o, I have a problem: I can't run it. The error is:
Starting program: /home/lx/ntop/test/opendpi/src/examples/OpenDPI_demo/OpenDPI_demo.o
/bin/bash: /home/lx/ntop/test/opendpi/src/examples/OpenDPI_demo/OpenDPI_demo.o: can't execute the binary file.
Please tell me why. I can actually run the program with ./OpenDPI_demo.
Thank you.
Based on the extension, the file is an object file. It is used by the linker (alongside other object files) to produce an executable. The real executable is the one you want to run/debug.
This is another example of difficulties encountered with programs using libtool.
The file OpenDPI_demo alongside OpenDPI_demo.o is actually, as you said, a shell script that wraps the execution of the real compiled file, probably in .libs/OpenDPI_demo.
libtool needs this wrapper to adjust the runtime library paths and such so that you can execute the program transparently, as if it was actually installed on your system.
The way to correctly debug this application is not
/home/lx/ntop/test/opendpi $ gdb src/examples/OpenDPI_demo/.libs/OpenDPI_demo
but rather using libtool --mode=execute on the shell script, like the following (it's an example):
/home/lx/ntop/test/opendpi $ ./libtool --mode=execute gdb --args \
src/examples/OpenDPI_demo/OpenDPI_demo -f capture.pcap
Suggest you use
gdb OpenDPI_demo
instead
In your makefile, if it depends on the object, make it depend on OpenDPI_demo, e.g.
I know that when I install a Linux app from source, I execute ./configure --sysconfdir=/etc, and then the app's conf file (such as httpd.conf) goes to /etc.
But from the point of view of the source code, how does the code know the conf file is under /etc when it parses it? I mean, code like fopen("/../../app.conf", "r"); is determined before we install it. Will configure change the source code, or does some other mechanism exist?
The configure script will generate the necessary Makefile that will use the C compiler's -DMACRO=content functionality to essentially inject C preprocessor #define MACRO content statements into the compilation units. So, sysconfdir could be used via Make rules:
foo.o: foo.c
	$(CC) -c -DCONFDIR='"$(sysconfdir)"' -o $@ $<
(That says to build the foo.o object file when foo.c is updated; to build it, use the $(CC) variable to run the C compiler, define CONFDIR as a string literal holding the contents of $(sysconfdir) (supplied via the ./configure script), put the output into the target file ($@), and give the source file ($<) as the lone input to the compiler.)
Then the C code in foo.c could use it like this:
FILE *conf;
if ((conf = fopen(CONFDIR "/foo", "r")) != NULL) {
    /* read config file */
} else {
    /* unable to open config, either error and die or supply defaults */
}
Note the C string concatenation is performed at compile time -- super convenient for exactly this kind of use.
More details here: http://www.gnu.org/software/hello/manual/autoconf/Installation-Directory-Variables.html#Installation-Directory-Variables
When you execute ./configure, it typically generates a makefile that includes the command options for the C compiler. These options will include -D... options that (in effect) "#define" various CPP symbols. One of these will have the "/etc" value that you supplied when you ran ./configure --sysconfdir=/etc.
From there, the "/etc" string gets compiled into the code anywhere that the source code uses the #defined symbol.
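To make that concrete, here is a minimal sketch (the macro name and paths are just examples): with something like -DCONFDIR='"/etc"' on the compile line, the code below opens /etc/app.conf, because adjacent string literals are concatenated at compile time.

#include <stdio.h>

#ifndef CONFDIR
#define CONFDIR "/etc"   /* normally injected by the makefile: -DCONFDIR='"/etc"' */
#endif

int main(void) {
    /* CONFDIR "/app.conf" is merged into the single literal "/etc/app.conf". */
    FILE *conf = fopen(CONFDIR "/app.conf", "r");
    if (conf == NULL) {
        perror("fopen");
        return 1;
    }
    fclose(conf);
    return 0;
}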