Undefined reference to extern int stm32 - c

I'm using Atrollic Studio (the problem also exists in Eclipse).
.h file
extern int i2cInitIO(uint channel, uint hz);
extern int i2cIO(uint device, byte *put, uint putlen, byte *get, uint getlen);
.c file
#include "tollosI2C.h"
int i2cGetReg(uint device, byte reg, byte *get) {
// write one byte address then read 1 byte data
return i2cIO(device, &reg, 1, get, 1);
} // i2cGetReg
I have a problem: undefined reference to `i2cIO'. This project needs to be compiled with an ARM toolchain.
The target is an STM32F103VET6 (a high-density device), and I'm using an ST-Link.
UPD: my .h file - http://pastebin.com/52ftBxR9
and c. file - http://pastebin.com/CcjpVZUP
Compiler invocation command: "gcc" (without the quotes).
Compiler invocation arguments: "-E -P -v -dD ${plugin_state_location}/specs.c" (without the quotes).

OK, your environment is called Atollic (note the spelling), but from the name of the header file I conclude you are using the Tollos supervisor from Mike Cowlishaw.
Secondly, your compilation options may not be correct: the -E option for GCC results in only preprocessed output being generated. The error you report, however, is a linker error.
Without more information, I would assume you're missing a library or object file containing the i2cIO implementation, probably a missing option on the linker command line.
Since you seem to be using a processor variant not directly supported by Tollos, I suppose you want to port Tollos to your processor. Check your makefile or Atollic project setup to include the correct libraries, and if appropriate, replace the -E option with -c.
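For illustration, a minimal sketch of what the compile and link steps could look like with the GNU ARM toolchain, assuming the i2cIO implementation lives in a file named tollosI2C.c (a hypothetical name) and that both object files reach the linker:
arm-none-eabi-gcc -c -mcpu=cortex-m3 -mthumb tollosI2C.c -o tollosI2C.o
arm-none-eabi-gcc -c -mcpu=cortex-m3 -mthumb main.c -o main.o
arm-none-eabi-gcc main.o tollosI2C.o -T stm32f103.ld -o firmware.elf
The undefined reference disappears once the object or library that defines i2cIO appears on that final linker line (the linker script name here is likewise an assumption).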

Related

Where are avr-gcc libraries stored?

I'm trying to locate the .c files that are related to the #include header files in avr.
I want to have a look at some of the standard libraries that are defined in the avr-gcc library, particularly the PORT definitions contained in <avr/io.h>. I searched through the library in /usr/lib/avr/include/avr and found the header file, however what I am looking for is the .c file. Does this file exist? If so, where can I find it? If not, what is the header file referencing?
The compiler-provided libraries are precompiled object code stored in static libraries. In gcc, libraries conventionally have the extension .a (for "archive", for largely historic reasons) and the prefix "lib".
At build time, the linker searches the library archives to find the object-code modules necessary to resolve references to library symbols. It extracts the required modules and links them into the binary image being built.
In gcc, a library libXXX.a is typically linked using the command-line switch -lXXX, so the libXXX.a naming convention is important in that case. For example, the standard C library libc.a is linked by the switch -lc.
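As an illustration (the library name and path here are invented), a hypothetical libfoo.a in /opt/libs would be linked as:
avr-gcc main.o -L/opt/libs -lfoo -o main.elf
The -L switch adds a search directory, and -lfoo expands to libfoo.a (or a shared library, where those apply).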
So to answer your question: there are normally no .c files for the compiler-provided libraries shipped with the toolchain. The libraries need not even have been written in C.
That said, being open source, the source files (.c or otherwise) will be available from the repositories of the various libraries. For example, for the standard C library: https://www.nongnu.org/avr-libc/.
For other AVR architecture and I/O support libraries, you might inspect the associated header files or documentation. The header files will typically have a boiler-plate comment with a project URL for example.
PORTB and other special function registers are usually defined as macros in headers provided by avr-libc. Find your include/avr directory (the one that contains io.h). In that directory, there should be many other header files. As an example, iom328p.h contains the following line that defines PORTB on the ATmega328P:
#define PORTB _SFR_IO8(0x05)
If you are also looking for the libraries that are distributed as .a files, you should run avr-gcc -print-search-dirs.
There are several ways to find out where the system headers are located and which are included:
avr-gcc -v -mmcu=atmega8 foo.c ...
With option -v, GCC will print (amongst other stuff) which include paths it is using. Check the output on a shell / console, where GCC prints the search paths:
#include "..." search starts here:
#include <...> search starts here:
/usr/lib/gcc/avr/5.4.0/include
/usr/lib/gcc/avr/5.4.0/include-fixed
/usr/lib/gcc/avr/5.4.0/../../../avr/include
The last location is for AVR-LibC, which provides avr/io.h. Resolving the ..s, that path is just /usr/lib/avr/include. These paths depend on how avr-gcc was configured and installed, hence you have to run that command with your installation of avr-gcc.
avr-gcc -H -mmcu=atmega8 foo.c ...
Suppose the C-file foo.c reads:
#include <avr/io.h>
int main (void)
{
PORTD = 0;
}
for an easy example. With -H, GCC will print out which files it is actually including:
. /usr/lib/avr/include/avr/io.h
.. /usr/lib/avr/include/avr/sfr_defs.h
... /usr/lib/avr/include/inttypes.h
.... /usr/lib/gcc/avr/5.4.0/include/stdint.h
..... /usr/lib/avr/include/stdint.h
.. /usr/lib/avr/include/avr/iom8.h
.. /usr/lib/avr/include/avr/portpins.h
.. /usr/lib/avr/include/avr/common.h
.. /usr/lib/avr/include/avr/version.h
.. /usr/lib/avr/include/avr/fuse.h
.. /usr/lib/avr/include/avr/lock.h
avr-gcc -save-temps -g3 -mmcu=atmega8 foo.c ...
With DWARF-3 debugging info, the macro definitions will be recorded in the debug info and are visible in the pre-processed file (*.i for C code, *.ii for C++, *.s for pre-processed assembly). Hence, in foo.i we can find the definition of PORTD as
#define PORTD _SFR_IO8(0x12)
Starting from the line which contains that definition, scroll up until you find the annotation that tells in which file the macro definition happened. For example
# 45 "/usr/lib/avr/include/avr/iom8.h" 3
in the case of my toolchain installation. This means that the lines following that annotation follow line 45 of /usr/lib/avr/include/avr/iom8.h.
If you want to see the resolution of PORTD, scroll down to the end of foo.i which contains the pre-processed source:
# 3 "foo.c"
int main (void)
{
(*(volatile uint8_t *)((0x12) + 0x20)) = 0;
}
0x12 is the I/O address of PORTD, and 0x20 is the offset between I/O addresses and RAM addresses for ATmega8. This means the compiler may implement PORTD = 0 by means of out 0x12, __zero_reg__.
avr-gcc -print-file-name=libc.a -mmcu=...
Finally, this command will print the location (absolute path) of libraries like libc.a, libm.a, libgcc.a or lib<mcu>.a. The location of the library depends on how the compiler was configured and installed, but also on command-line options like -mmcu=.
avr-gcc -Wl,-Map,foo.map -mmcu=atmega8 foo.c -o foo.elf
This directs the linker to dump a "map" file foo.map in which it reports which symbol drags which module from which library. This is a text file that contains lines like:
LOAD /usr/lib/gcc/avr/5.4.0/../../../avr/lib/avr4/crtatmega8.o
...
LOAD /usr/lib/gcc/avr/5.4.0/avr4/libgcc.a
LOAD /usr/lib/gcc/avr/5.4.0/../../../avr/lib/avr4/libm.a
LOAD /usr/lib/gcc/avr/5.4.0/../../../avr/lib/avr4/libc.a
LOAD /usr/lib/gcc/avr/5.4.0/../../../avr/lib/avr4/libatmega8.a
libgcc.a is from the compiler's C runtime, and all the others are provided by AVR-LibC. Resolving the ..s, the AVR-LibC files for ATmega8 are located in /usr/lib/avr/lib/avr4/.

Compile and Link to .com file with Turbo C

I'm trying to compile and link a simple program to a DOS .com file using the Turbo C compiler and linker. For that, I'm trying the simplest C program I can think of.
void main()
{}
Are there command line arguments to link to com files in the Turbo C Linker?
The Error Message I get from the Linker is the following:
"Fatal: Cannot generate COM file: invalid entry point address"
I know that com files need entry point to be at 100h. Does Turbo C have an option to set this address?
It has been a long time since I have genuinely tried to use Turbo C for this kind of thing. If you are compiling and linking on the command line separately with TCC.EXE and TLINK.EXE, then this may work for you.
To compile and link to a COM file you can do this for each one of your C source files creating an OBJ file for each:
tcc -IF:\TURBOC3\INCLUDE -c -mt file1.c
tcc -IF:\TURBOC3\INCLUDE -c -mt file2.c
tcc -IF:\TURBOC3\INCLUDE -c -mt file3.c
tlink -t -LF:\TURBOC3\LIB c0t.obj file1.obj file2.obj file3.obj,myprog.com,myprog.map,cs.lib
Each C file is compiled individually using -mt (tiny memory model) to a corresponding OBJ file. The -I option specifies the path of the INCLUDE directory in your environment (change accordingly). The -c option tells TCC to compile to an OBJ file only.
When linking, -t tells the linker to generate a COM program (and not an EXE), and -LF:\TURBOC3\LIB is the path to the library directory in your environment (change accordingly). C0T.OBJ is the C runtime file for the tiny memory model; it includes the main entry point that you are missing. You then list all the other OBJ files separated by spaces. After the first comma is the output file name; if using the -t option, name the program with a COM extension. After the second comma is the MAP file name (you can leave it blank if you don't want a MAP file). After the third comma is the list of libraries, separated by spaces. With the tiny model you want the small-model libraries; the C library for the small memory model is called CS.LIB.
As an example, suppose we have a single source file called TEST.C that looks like:
#include<stdio.h>
int main()
{
printf("Hello, world!\n");
return 0;
}
If we want to compile and link this, the commands would be:
tcc -IF:\TURBOC3\INCLUDE -c -mt test.c
tlink -t -LF:\TURBOC3\LIB c0t.obj test.obj,test.com,test.map,cs.lib
You will have to use the paths for your own environment. These commands should produce a program called TEST.COM. When run it should print:
Hello, world!
You can generate a COM file while still using the IDE to generate an EXE. The following worked on TC 2.01: change the memory model to Tiny in the options, compile the program to generate the EXE file, then go to the command prompt and run EXE2BIN PROG.EXE PROG.COM (replace PROG with your program name).
Your problem is about the "entry point".
Some compilers or linkers can recognize void main() as an entry point, omitting a return value, but not all of them do.
You should use the int main() entry point instead, for better control of the app; the compiler can then recognize the main function as the entry point.
example:
int main() {
/* some compilers return 0 from main when you don't;
   others may ask for an explicit return value */
}
From GeeksforGeeks:
A conforming implementation may provide more versions of main(), but they must all have return type int. The int returned by main() is a way for a program to return a value to “the system” that invokes it. On systems that doesn’t provide such a facility the return value is ignored, but that doesn’t make “void main()” legal C++ or legal C. Even if your compiler accepts “void main()” avoid it, or risk being considered ignorant by C and C++ programmers.
In C++, main() need not contain an explicit return statement. In that case, the value returned is 0, meaning successful execution.
source: https://www.geeksforgeeks.org/fine-write-void-main-cc/

Compiling a Linux program under Mac OS X

I am trying to use make under Mac OS X (El Capitan) to compile a program which I know to work under Linux. The program makes use of USB libraries. I had to modify the config.mk file for these libraries to be found, but now I end up with errors in the compilation (undeclared identifiers).
Link to source: https://github.com/pali/0xFFFF
It requires usb.h, which seems to be part of libusb-compat. I installed the latter with brew install libusb-compat. But usb.h still couldn't be found, although I knew where it was: specifically, symbolic links to usb.h and to the library can be found under /usr/local/include and /usr/local/lib, respectively.
After many trials, I made some progress. Namely, the file config.mk is clearly read during the make process, although I have to admit it is not clear to me how this is done; anyway, I noticed two commented-out lines:
CPPFLAGS += -I/usr/local/include
LDFLAGS += -L/usr/local/lib -Wl,-R/usr/local/lib
(For the sake of precision, in the original config.mk the local directory was a pkg directory; I replaced it in these lines.)
I uncommented them, and now something happens: usb.h is found. I think the first of these variable definitions tells the compiler where to look for header files, and the second tells the linker where to look for libraries, but again it is not completely clear to me.
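(Presumably the Makefile pulls config.mk in via an include config.mk directive, with make's implicit rules then passing CPPFLAGS to the compiler and LDFLAGS to the linker.)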
In any case, I still have problems. Namely, the make process outputs two warnings and an error, and then stops:
usb-device.c:90:57: warning: unused parameter 'udev' [-Wunused-parameter]
static void usb_reattach_kernel_driver(usb_dev_handle * udev, int interface) {
^
usb-device.c:90:67: warning: unused parameter 'interface' [-Wunused-parameter]
static void usb_reattach_kernel_driver(usb_dev_handle * udev, int interface) {
usb-device.c:324:13: error: use of undeclared identifier 'RTLD_DEFAULT'
if ( dlsym(RTLD_DEFAULT, "libusb_init") )
It seems this program is difficult to port from Linux to Mac, although I think it should be portable. If anyone has any idea about what to do (apart from running a Linux distribution...), it would be much appreciated.
EDIT
dlfcn.h has the following:
#if !defined(_POSIX_C_SOURCE) || defined(_DARWIN_C_SOURCE)
#define RTLD_NOLOAD 0x10
#define RTLD_NODELETE 0x80
#define RTLD_FIRST 0x100 /* Mac OS X 10.5 and later */
/*
* Special handle arguments for dlsym().
*/
#define RTLD_NEXT ((void *) -1) /* Search subsequent objects. */
#define RTLD_DEFAULT ((void *) -2) /* Use default search algorithm. */
#define RTLD_SELF ((void *) -3) /* Search this and subsequent objects (Mac OS X 10.5 and later) */
#endif /* not POSIX */
OK, finally I have been successful. I think it is worth publishing my solution; maybe others will find it useful.
So, the first point is: if I run make in the program's main folder, usb.h is not found. We therefore have to install the corresponding library.
There are two possibilities for this. The first and more obvious one would be to install, through Homebrew, libusb-1.0 and libusb-compat (the latter provides a compatibility interface for programs that use libusb-0.1, the first version of libusb, which is not compatible with libusb-1.0; usb.h is included in libusb-compat):
brew install libusb
brew install libusb-compat
However, this leads to other problems, as reported in the other answer. I had worked around them, but eventually found out that my program got angry when using libusb-compat (if I understand correctly, interfacing to the USB port through two layers of libraries is too slow for a flasher).
So, the other possibility: installing the actual libusb-0.1. This is not available through Homebrew. It is, however, available through MacPorts, under the name libusb-legacy. So I had to install MacPorts, install the Xcode command line utilities (which required first going to Apple's website to accept their legal terms...) and run
sudo port install libusb-legacy
OK, now calling make would not do the trick, since the compiler is not yet able to find the library. For that, I had to edit the config.mk file included in the main directory of the program, uncommenting the last two lines and editing them somewhat to point to the directory where libusb-legacy is stored:
CPPFLAGS += -I/opt/local/include/libusb-legacy -D_DARWIN_C_SOURCE
LDFLAGS += -L/opt/local/lib/libusb-legacy
(The -D_DARWIN_C_SOURCE defines the preprocessor macro required for the other symbols to be defined by the headers. In the Makefile in the src directory, in fact, _POSIX_C_SOURCE is defined.)
Do you think all this did the job? No. At this point I ended up with another error: the linker not being able to find some library called -lusb. I didn't know why this syntax, but after some thought I realised that -lusb is short for libusb. And the libusb I am using is actually called libusb-legacy... So I went into the Makefile in the src directory, where -lusb is introduced, and changed -lusb to -lusb-legacy. Ta-dah! Compiled. A few warnings about unused variables and a comparison between two different types of integers, but nothing more. And the program runs: after a few trials, I have been able to reflash my bricked phone, which is now alive again! Very happy!!! :)
Looking at the dlfcn.h source code, it seems that the identifier is defined only if _POSIX_C_SOURCE is not defined or _DARWIN_C_SOURCE is defined. Thus I'd just add #define _DARWIN_C_SOURCE before the offending include;
Or you could add the corresponding -D switch in the config.mk:
CPPFLAGS += -I/usr/local/include -D_DARWIN_C_SOURCE

How can execute a decrypted file residing in the memory? [duplicate]

Is it possible to compile a C++ (or the like) program without generating the executable file but writing it and executing it directly from memory?
For example with GCC and clang, something that has a similar effect to:
c++ hello.cpp -o hello.x && ./hello.x $@ && rm -f hello.x
on the command line, but without the burden of writing an executable to disk only to immediately load and rerun it.
(If possible, the procedure may not use disk space or at least not space in the current directory which might be read-only).
Possible? Not the way you seem to wish. The task has two parts:
1) How to get the binary into memory
When we specify /dev/stdout as the output file on Linux, we can pipe the compiler's output into our program x0, which reads an executable from stdin and executes it:
gcc -pipe YourFiles1.cpp YourFile2.cpp -o/dev/stdout -Wall | ./x0
In x0 we can just read from stdin until reaching the end of the file:
#include <stdbool.h> // true
#include <stdlib.h>  // realloc
#include <unistd.h>  // read, STDIN_FILENO

int memexec(void * exe, size_t exe_size, char *const argv[]); /* see below */

int main(int argc, char ** argv)
{
    size_t ntotal = 0;
    char * buf = NULL;
    while (true)
    {
        /* grow the buffer dynamically, since we do not know in advance how many bytes to read */
        buf = realloc(buf, ntotal + 4096);
        ssize_t nread = read(STDIN_FILENO, buf + ntotal, 4096);
        if (nread <= 0) break; /* 0 means end of input, negative means error */
        ntotal += nread;
    }
    memexec(buf, ntotal, argv);
    return 1; /* memexec only returns on failure */
}
It would also be possible for x0 to directly execute the compiler and read its output. This question has been answered here: Redirecting exec output to a buffer or file
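A minimal sketch of that variant, assuming popen(3) is acceptable in place of the raw exec-and-pipe approach from the linked answer (the command string is illustrative):
#include <stdio.h>  // popen, fread, pclose
#include <stdlib.h> // realloc

/* run a compiler command line and slurp the binary it writes to stdout */
size_t compile_to_buf(const char *cmd, char **out)
{
    FILE *p = popen(cmd, "r"); /* e.g. "gcc -pipe foo.c -o /dev/stdout" */
    size_t n, total = 0;
    char *buf = NULL;
    if (!p) { *out = NULL; return 0; }
    do {
        buf = realloc(buf, total + 4096);
        n = fread(buf + total, 1, 4096, p);
        total += n;
    } while (n > 0);
    pclose(p);
    *out = buf;
    return total;
}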
Caveat: I just figured out that for some strange reason this does not work when I use a pipe |, but works when I use x0 < foo.
Note: If you are willing to modify your compiler, or if you do JIT like LLVM, clang and other frameworks, you could directly generate executable code. However, for the rest of this discussion I assume you want to use an existing compiler.
Note: Execution via temporary file
Other programs such as UPX achieve a similar behavior by executing a temporary file. This is easier and more portable than the approach outlined below. On systems where /tmp is mapped to a RAM disk, for example typical servers, the temporary file will be memory-based anyway.
#define _GNU_SOURCE   /* for mkostemp and fexecve */
#include <fcntl.h>    // open, O_RDONLY, O_CLOEXEC
#include <stdio.h>    // perror
#include <stdlib.h>   // mkostemp
#include <sys/stat.h> // chmod
#include <unistd.h>   // write, close, unlink, fexecve

int memexec(void * exe, size_t exe_size, char *const argv[])
{
    /* random temporary file name in /tmp */
    char name[] = "/tmp/fooXXXXXX";
    /* creates the temporary file, returns a writeable file descriptor */
    int fd_wr = mkostemp(name, O_CLOEXEC);
    /* makes the file executable and read-only */
    chmod(name, S_IRUSR | S_IXUSR);
    /* creates a read-only file descriptor before deleting the file */
    int fd_ro = open(name, O_RDONLY);
    /* removes the file from the file system; the kernel keeps the content in memory until all fds are closed */
    unlink(name);
    /* writes the executable image to the file */
    write(fd_wr, exe, exe_size);
    /* fexecve will not work as long as there is an open writeable file descriptor */
    close(fd_wr);
    char *const newenviron[] = { NULL };
    fexecve(fd_ro, argv, newenviron);
    perror("fexecve failed");
    return -1;
}
Caveat: Error handling is left out for clarity's sake, and the includes are only briefly annotated for brevity's sake.
Note: By combining main() and memexec() into a single function and using splice(2) to copy directly between stdin and fd_wr, the program could be significantly optimized.
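For illustration, a sketch of that splice(2) variant (Linux-specific; it assumes stdin actually is a pipe, as it is when x0 is invoked as gcc ... | ./x0):
#define _GNU_SOURCE
#include <fcntl.h>  // splice, SPLICE_F_MOVE
#include <unistd.h> // STDIN_FILENO

/* copy stdin into the temporary file entirely inside the kernel, no userspace buffer */
ssize_t copy_stdin_to_fd(int fd_wr)
{
    ssize_t n, total = 0;
    while ((n = splice(STDIN_FILENO, NULL, fd_wr, NULL, 65536, SPLICE_F_MOVE)) > 0)
        total += n;
    return n < 0 ? -1 : total;
}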
2) Execution directly from memory
One does not simply load and execute an ELF binary from memory. Some preparation, mostly related to dynamic linking, has to happen. There is a lot of material explaining the various steps of the ELF linking process, and studying it makes me believe it is theoretically possible. See for example this closely related question on SO; however, there seems to be no working solution.
Update: UserModeExec seems to come very close.
Writing a working implementation would be very time-consuming, and would surely raise some interesting questions in its own right. I like to believe this is by design: for most applications it is strongly undesirable to (accidentally) execute their input data, because that allows code injection.
What happens exactly when an ELF is executed? Normally the kernel receives a file name, then creates a process, loads and maps the different sections of the executable into memory, performs a lot of sanity checks and marks it as executable before passing control and a file name to the run-time linker ld-linux.so (part of libc). This takes care of relocating functions, handling additional libraries, setting up global objects and jumping to the executable's entry point. As I understand it, this heavy lifting is done by dl_main() (implemented in libc/elf/rtld.c).
Even fexecve is implemented using a file in /proc, and it is this need for a file name that leads us to reimplement parts of this linking process.
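For illustration, my understanding of what glibc's fexecve boils down to on Linux (a sketch, not the verbatim implementation):
#include <stdio.h>  // snprintf
#include <unistd.h> // execve

int fexecve_sketch(int fd, char *const argv[], char *const envp[])
{
    char path[32];
    /* the open file is addressed through /proc by descriptor number */
    snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
    return execve(path, argv, envp); /* returns only on failure */
}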
Libraries
UserModeExec
libelf -- read, modify, create ELF files
eresi -- play with ELFs
OSKit (seems like a dead project though)
Reading
http://www.linuxjournal.com/article/1060?page=0,0 -- introduction
http://wiki.osdev.org/ELF -- good overview
http://s.eresi-project.org/inc/articles/elf-rtld.txt -- more detailed Linux-specific explanation
http://www.codeproject.com/Articles/33340/Code-Injection-into-Running-Linux-Application -- how to get to hello world
http://www.acsu.buffalo.edu/~charngda/elf.html -- nice reference of ELF structure
Loaders and Linkers by John Levine -- deeper explanation of linking
Related Questions at SO
Linux user-space ELF loader
ELF Dynamic loader symbol lookup ordering
load-time ELF relocation
How do global variables get initialized by the elf loader
So it seems possible; you decide whether it is also practical.
Yes, though doing it properly requires designing significant parts of the compiler with this in mind. The LLVM guys have done this, first with a kinda-separate JIT, and later with the MC subproject. I don't think there's a ready-made tool doing it, but in principle it's just a matter of linking to clang and llvm, passing the source to clang, and passing the IR it creates to MCJIT. Maybe a demo does this (I vaguely recall a basic C interpreter that worked like this, though I think it was based on the legacy JIT).
Edit: Found the demo I recalled. Also, there's cling, which seems to do basically what I described, but better.
Linux can create virtual file systems in RAM using tmpfs. For example, I have my tmp directory set up in my file system table like so:
tmpfs /tmp tmpfs nodev,nosuid 0 0
Using this, any files I put in /tmp are stored in my RAM.
Windows doesn't seem to have any "official" way of doing this, but has many third-party options.
Without this "RAM disk" concept, you would likely have to heavily modify a compiler and linker to operate completely in memory.
If you are not specifically tied to C++, you may also consider other JIT based solutions:
in Common Lisp SBCL is able to generate machine code on the fly
you could use TinyCC and its libtcc.a, which quickly emits poor (i.e. unoptimized) machine code from C code in memory.
consider also any JITing library, e.g. libjit, GNU Lightning, LLVM, GCCJIT, asmjit
of course emitting C++ code on some tmpfs and compiling it...
But if you want good machine code, you'll need it to be optimized, and that is not fast (so the time to write to a filesystem is negligible).
If you are tied to C++ generated code, you need a good optimizing C++ compiler (e.g. g++ or clang++); they take significant time to compile C++ code to an optimized binary. So you should generate to some file foo.cc (perhaps in a RAM file system like tmpfs, but that would give only a minor gain, since most of the time is spent inside the g++ or clang++ optimization passes, not reading from disk), then compile that foo.cc to foo.so (using perhaps make, or at least forking g++ -Wall -shared -O2 foo.cc -o foo.so, perhaps with additional libraries). At last, have your main program dlopen that generated foo.so. FWIW, MELT was doing exactly that, and on a Linux workstation the manydl.c program shows that a process can generate and then dlopen(3) many hundreds of thousands of temporary plugins, each one obtained by generating a temporary C file and compiling it. For C++, read the C++ dlopen mini HOWTO.
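A minimal sketch of that generate/compile/dlopen cycle (file name, function name and compiler flags here are illustrative assumptions):
#include <dlfcn.h>  // dlopen, dlsym
#include <stdio.h>  // fopen, fprintf, printf
#include <stdlib.h> // system

int main(void)
{
    /* 1. generate C code into a file, ideally on a tmpfs */
    FILE *f = fopen("/tmp/foo.c", "w");
    if (!f) return 1;
    fprintf(f, "int answer(void) { return 42; }\n");
    fclose(f);
    /* 2. compile it to a shared object */
    if (system("gcc -O2 -fPIC -shared /tmp/foo.c -o /tmp/foo.so") != 0)
        return 1;
    /* 3. load the plugin and call the freshly generated function */
    void *h = dlopen("/tmp/foo.so", RTLD_NOW);
    if (!h) return 1;
    int (*answer)(void) = (int (*)(void))dlsym(h, "answer");
    printf("%d\n", answer());
    return 0;
}
(On older glibc you may need to link the host program with -ldl.)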
Alternatively, generate a self-contained source program foobar.cc, compile it to an executable foobarbin, e.g. with g++ -O2 foobar.cc -o foobarbin, and execute that foobarbin binary with execve.
When generating C++ code, you may want to avoid generating tiny C++ source files (e.g. a dozen lines only); if possible, generate C++ files of a few hundred lines at least, unless lots of template expansion happens through extensive use of existing C++ containers, where generating a small C++ function combining them makes sense. For instance, try if possible to put several generated C++ functions in the same generated C++ file (but avoid very big generated C++ functions, e.g. 10KLOC in a single function; they take a lot of time to be compiled by GCC). You could consider, if relevant, having only one single #include in that generated C++ file, and pre-compiling that commonly included header.
Jacques Pitrat's book Artificial Beings: The Conscience of a Conscious Machine (ISBN 9781848211018) explains in detail why generating code at runtime is useful (in symbolic artificial intelligence systems like his CAIA system). The RefPerSys project is trying to follow that idea and generate some C++ code (and hopefully more and more of it) at runtime. Partial evaluation is a relevant concept here.
Your software is likely to spend more CPU time in generating C++ code than GCC in compiling it.
The tcc compiler's "-run" option allows for exactly this: compile into memory, run there, and finally discard the compiled stuff. No filesystem space is needed. "tcc -run" can be used in a shebang to allow for C scripts; from the tcc man page:
#!/usr/local/bin/tcc -run
#include <stdio.h>
int main()
{
printf("Hello World\n");
return 0;
}
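Assuming tcc is installed at the path given in the shebang, such a script can be marked executable and run directly, with no binary left behind:
chmod +x hello.c
./hello.c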
C scripts allow for mixed bash/C scripts, with "tcc -run" not needing any temporary space:
#!/bin/bash
echo "foo"
sed -n "/^\/\*\*$/,\$p" $0 | tcc -run -
exit
/**
*/
#include <stdio.h>
int main()
{
printf("bar\n");
return 0;
}
Execution output:
$ ./shtcc2
foo
bar
$
C scripts with gcc are possible as well, but need temporary space, as others mentioned, to store the executable. This script produces the same output as the previous one:
#!/bin/bash
exc=/tmp/`basename $0`
if [ $0 -nt $exc ]; then sed -n "/^\/\*\*$/,\$p" $0 | gcc -x c - -o $exc; fi
echo "foo"
$exc
exit
/**
*/
#include <stdio.h>
int main()
{
printf("bar\n");
return 0;
}
C scripts with the suffix ".c" are nice; headtail.c was my first ".c" file that needed to be executable:
$ echo -e "1\n2\n3\n4\n5\n6\n7" | ./headtail.c
1
2
3
6
7
$
I like C scripts because you have just one file that you can easily move around, and changes in the bash or C part require no further action; they just work on the next execution.
P.S:
The above "tcc -run" C script has a problem: the script's stdin is not available to the executed C code. The reason was that I passed the extracted C code via a pipe to "tcc -run". The new gist run_from_memory_stdin.c does it correctly:
...
echo "foo"
tcc -run <(sed -n "/^\/\*\*$/,\$p" $0) 42
...
"foo" is printed by bash part, "bar 42" from C part (42 is passed argv[⁠1]), and piped script input gets printed from C code then:
$ route -n | ./run_from_memory_stdin.c
foo
bar 42
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.29.58.98 0.0.0.0 UG 306 0 0 wlan1
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 303 0 0 wlan0
172.29.58.96 0.0.0.0 255.255.255.252 U 306 0 0 wlan1
$
One can easily modify the compiler itself. It sounds hard at first, but thinking about it, it seems obvious. Modifying the compiler sources to expose their internals as a shared library should not take that much effort (depending on the actual implementation).
Then just replace every file access with an access to a memory-mapped file.
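A minimal sketch of the memory-mapped-file idea (illustrative only; the function name is made up):
#include <fcntl.h>    // open
#include <sys/mman.h> // mmap, MAP_PRIVATE
#include <sys/stat.h> // fstat
#include <unistd.h>   // close

/* map a whole file read-only instead of read()ing it into a buffer */
void *map_file(const char *path, size_t *size)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    fstat(fd, &st);
    *size = (size_t)st.st_size;
    void *p = mmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); /* the mapping stays valid after close */
    return p == MAP_FAILED ? NULL : p;
}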
It is something I am about to do: compiling something transparently in the background to opcodes and executing those from within Java.
-
But thinking about your original question, it seems you want to speed up compilation and your edit-and-run cycle. First of all, get an SSD; you get almost memory speed (use a PCIe version). And let's say it's C we are talking about: C's linking step involves very complex operations that are likely to take more time than reading and writing from/to disk. So just put everything on the SSD and live with the lag.
Finally, the answer to the OP's question is yes!
I found the memrun repo from guitmz, which demoed running an (x86_64) ELF from memory, with Golang and assembler. I forked that and provided a C version of memrun, which runs ELF binaries (verified on x86_64 and armv7l), either from standard input or via process substitution as the first argument. The repo contains demos and documentation (memrun.c is only 47 lines of code):
https://github.com/Hermann-SW/memrun/tree/master/C#memrun
Here is the simplest example: with "-o /dev/fd/1" the gcc-compiled ELF gets sent to stdout and piped into memrun, which executes it:
pi#raspberrypi400:~/memrun/C $ gcc info.c -o /dev/fd/1 | ./memrun
My process ID : 20043
argv[0] : ./memrun
no argv[1]
evecve --> /usr/bin/ls -l /proc/20043/fd
total 0
lr-x------ 1 pi pi 64 Sep 18 22:27 0 -> 'pipe:[1601148]'
lrwx------ 1 pi pi 64 Sep 18 22:27 1 -> /dev/pts/4
lrwx------ 1 pi pi 64 Sep 18 22:27 2 -> /dev/pts/4
lr-x------ 1 pi pi 64 Sep 18 22:27 3 -> /proc/20043/fd
pi#raspberrypi400:~/memrun/C $
The reason I was interested in this topic was its use in "C scripts". run_from_memory_stdin.c demonstrates it all together:
pi#raspberrypi400:~/memrun/C $ wc memrun.c | ./run_from_memory_stdin.c
foo
bar 42
47 141 1005 memrun.c
pi#raspberrypi400:~/memrun/C $
The C script producing the shown output is this small...
#!/bin/bash
echo "foo"
./memrun <(gcc -o /dev/fd/1 -x c <(sed -n "/^\/\*\*$/,\$p" $0)) 42
exit
/**
*/
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("bar %s\n", argc>1 ? argv[1] : "(undef)");
for(int c=getchar(); EOF!=c; c=getchar()) { putchar(c); }
return 0;
}
P.S:
I added tcc's "-run" option to gcc and g++; for details see:
https://github.com/Hermann-SW/memrun/tree/master/C#adding-tcc--run-option-to-gcc-and-g
Just nice, and nothing gets stored in filesystem:
pi#raspberrypi400:~/memrun/C $ uname -a | g++ -O3 -Wall -run demo.cpp 42
bar 42
Linux raspberrypi400 5.10.60-v7l+ #1449 SMP Wed Aug 25 15:00:44 BST 2021 armv7l GNU/Linux
pi#raspberrypi400:~/memrun/C $

Pro*C based batch, Out of Memory?

When trying to compile a Pro*C based batch file, the "proc" process gets stuck at 100% of one CPU core, and its memory grows to the point where the system has to OOM-kill the process (the machine has 16GB of memory and the process grew to 9GB).
Has anyone seen this behavior before?
As additional information:
-The mk file is the one from the installation of the main package
-The .pc files are the original files (I've tried to compile several, such as dtesys.pc)
-The libs are correctly compiled
-The environment variables are correctly set
Yes, it is limits.h, because it includes itself recursively on line 123:
/* Get the compiler's limits.h, which defines almost all the ISO constants.
We put this #include_next outside the double inclusion check because
it should be possible to include this file more than once and still get
the definitions from gcc's header. */
#if defined __GNUC__ && !defined _GCC_LIMITS_H_
/* `_GCC_LIMITS_H_' is what GCC's file defines. */
# include_next <limits.h>
#endif
So the solution is to pass the parse=none option to the Pro*C precompiler:
proc parse=none iname=filename.pc oname=filename.c
Or, as a second option, you may first precompile your source with the C preprocessor to get the .pc file:
cpp -P -E yourfile.someextension -o yourfile.pc
Then you will get limits.h parsed without recursion.
The -P option is needed because Pro*C is a program that can be confused by linemarkers.
The -E option is needed because Pro*C is a program that can be confused by non-traditional output.
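If you want to wire the first option into a build, a make pattern rule along these lines could work (a sketch; the rule and variable names are assumptions, not Oracle's stock mk file):
%.c: %.pc
	proc parse=none iname=$< oname=$@

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@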
