Would enabling the noexec_user_stack parameter in Solaris prevent some genuine programs from running?
Has anyone tested this setting please?
Older versions of GCC in 32-bit mode can create code that relies on executable stacks (nested functions / trampolines).
See also Implementation of nested functions and Example of executable stack in Linux (i386 architecture) on StackOverflow.
This is known to be "broken" by noexec_user_stack on Solaris (just as it is by non-executable stacks on Linux), and yes, it's one way to test the effectiveness of the feature.
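For a concrete test case, here is a minimal sketch (GNU C only; it uses GCC's nested-function extension and won't build with Clang) that forces a stack trampoline. Built with e.g. gcc -m32 trampoline.c (file name and flags are just illustrative) and run with noexec_user_stack set, it should abort if the stack really is non-executable:

#include <stdio.h>

static void run(void (*fn)(void)) { fn(); }

int main(void)
{
    int x = 42;
    /* Nested function (GNU C extension). Taking its address while it
     * references 'x' makes GCC emit a trampoline on the stack, which
     * requires the stack to be executable. */
    void show(void) { printf("x = %d\n", x); }
    run(show);   /* crashes (SIGSEGV/SIGBUS) if the stack is non-executable */
    return 0;
}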
Java uses just-in-time (JIT) compilation, which means it generates code on the fly and runs it from writable data sections. This is most likely implemented in the heap with mprotect() on pages, or in an anonymous mapping via mmap(), either of which will probably be PROT_READ|PROT_WRITE|PROT_EXEC at a low level. However, I don't believe Java does JIT on the actual stack, so you may not have issues with Java on Solaris with this limited memory protection, though you would have problems on Linux systems with PaX (fixable with paxctl) or with the relatively newer W^X on OpenBSD. When it comes to Solaris, I suspect you probably won't, since Oracle owns both Sun and Java and strongly pushes their use together.
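As a rough sketch of what that looks like at the syscall level (not Java's actual implementation, just the general shape of a JIT code cache; the x86-64 machine code and page size are assumptions for illustration):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42 ; ret */
    static const unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Anonymous read/write/execute mapping, like a JIT code cache.
     * (Real JITs often map RW, emit code, then mprotect() to RX.)
     * Note: this lives in the mmap/heap area, not on the stack, which is
     * why noexec_user_stack doesn't interfere with it. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(buf, code, sizeof code);
    int (*fn)(void) = (int (*)(void))buf;   /* cast is non-standard but common */
    printf("generated code returned %d\n", fn());

    munmap(buf, 4096);
    return 0;
}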
So, if Java immediately and consistently crashes, no-exec stack is likely why. But you should be OK.
EDIT: I said "relatively newer" not to imply that W^X is "new", but to point out that it came along after PaX was already "a thing". W^X is just a small subset of the features of PaX, and it came along later.
Does GCC (or alternatively Clang) define any macro when compiling for the Arch Linux OS?
I need to check that my software restricts itself from compiling under anything but Arch Linux (the reason behind this is off-topic). I couldn't find any relevant resources on the internet.
Does anyone know how to guarantee through GCC preprocessor directives that my binaries are only compilable under Arch Linux?
Of course I can always
#ifdef __linux__
...
#endif
But this is not precise enough.
Edit: This must be done through C source code and not by any build system; for example, doing it through CMake is ruled out.
Edit 2: Users faking this behaviour is not a problem, since the software is distributed to selected clients and thus actively trying to "misuse" our source code is "their decision".
Does GCC (or alternatively Clang) define any macro when compiling for the Arch Linux OS?
No, because there's nothing inherently specific to Arch Linux at the binary level. For what it's worth, when compiling, the only things you/the compiler have to care about are the target architecture (i.e. what kind of CPU it's going to run on), data type sizes and alignments, and function calling conventions.
Then later on, when it's time to link the compiled translation-unit objects into the final executable, the runtime libraries present on the system are also of concern. Without taking special precautions, you're essentially locking yourself into the specific brand of runtime libraries (glibc vs. e.g. musl; libstdc++ vs. libc++) pulled in by the linker.
One can easily sidestep the latter problem by linking statically, but that limits the range of system and mid-level APIs available to the program. For example, on Linux a naively statically linked program can't use graphics acceleration APIs like OpenGL 3.x or Vulkan, since those rely on loading components of the GPU drivers into the process. You can, however, still use X11 and indirect GLX OpenGL, since those work using wire protocols going over sockets, which are implemented with direct syscalls to the kernel.
And those kernel syscalls are exactly the same at the binary level for each and every Linux kernel of every distribution out there. Although inside the kernel there's a lot of leeway when it comes to redefining interfaces, for the interfaces toward userland (i.e. regular programs) there's the holy, dogmatic, ironclad rule that YOU NEVER BREAK USERLAND! Kernel developers who break this rule, intentionally or not, are chewed out publicly by Linus Torvalds in his in-/famous rants.
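To illustrate the point (a minimal sketch, not something you'd need in practice): the syscall number behind a libc call is fixed per architecture by the kernel's userland ABI, not by the distribution, so this program behaves identically on Arch, Debian, Fedora, or anything else running a Linux kernel:

#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall() */

int main(void)
{
    /* write(2) invoked through the generic syscall(2) wrapper.
     * SYS_write resolves to the same number on every distribution
     * for a given architecture. */
    const char msg[] = "same syscall ABI on every distro\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}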
The bottom line is that there is no such thing as a "Linux distribution specific identifier at the binary level". At the end of the day, a Linux distribution is just that: a distribution of stuff. That means someone (or a group of someones) decided on a set of files that make up a working Linux system, wrapped it up somehow, and slapped a name on it. That's it. There's nothing inherently specific to "Arch" Linux other than that it's called "Arch" and (for the time being) relies on the pacman package manager. Everything else about "Arch", or any other Linux distribution, is just a matter of happenstance.
If you really want to sort different Linux distributions into certain bins regarding binary compatibility, then you'd have to pigeonhole the combinations of
Minimum required set of supported syscalls. This translates into minimum required kernel version.
Which libc variant is being used, and potentially which version, although it's perfectly possible to link against a minimal set of functions that has been around almost "forever".
Which variant of the C++ standard library the distribution decided upon. This actually also affects programs that might appear to be pure C, because certain system-level libraries (*cough* Mesa *cough*) will internally pull in a lot of C++ infrastructure (even compilers), triggering other "fun" problems¹
I need to check that my software restricts itself from running under anything but Arch Linux (the reason behind this is off-topic). I couldn't find any relevant resources on the internet.
You couldn't find resources on the Internet because there's nothing specific at the binary level that makes "Arch" Arch. For what it's worth, right this instant I could create a fork of Arch, change out its choice of default XDG skeleton – so that by default user directories are populated with subdirs called leech, flicks, beats, pics – and call it "l33tz" Linux. For all intents and purposes it's no longer Arch. It behaves significantly differently from default Arch behavior, which would also be of concern to you if you'd relied on any specific detail, however minute.
Your employer doesn't seem to understand what Linux is or what distinguishes distributions from each other.
Hint: it's not binary compatibility. As a matter of fact, as long as you stay within the boring old realm of boring old glibc + libstdc++, Linux distributions are shockingly compatible with each other. There might be slight differences in where they put libraries other than libc.so, libdl.so and ld-linux[-${arch}].so, but those can usually be found under /lib. And once ld-linux[-${arch}].so and libdl.so take over (that means pulling in all libraries loaded at runtime), all the specifics of where shared objects and libraries are to be found are abstracted away by the dynamic linker.
¹ Like becoming multithreaded only after global constructors were executed, while libstdc++ decided it wanted to be single-threaded because libpthread wasn't linked into a program that didn't create a single thread on its own. That was a really weird bug I unearthed, but yshui finally understood it: https://gitlab.freedesktop.org/mesa/mesa/-/issues/3199
You can list the predefined preprocessor macros with
gcc -dM -E - </dev/null
clang -dM -E - </dev/null
None of those indicate what operating system the compiler is running under. So not only can you not tell whether the program is compiled under Arch Linux, you can't even tell whether the program is compiled under Linux. The macros __linux__ and friends indicate that the program is being compiled for Linux. They are defined when cross-compiling from another system to Linux, and not defined when cross-compiling from Linux to another system.
You can artificially make your program more difficult to compile by specifying absolute paths for system headers and relying on non-portable headers (e.g. /usr/include/bits/foo.h). That can make cross-compilation or compilation for anything other than Linux practically impossible without modifying the source code. However, most Linux distributions install headers in the same location, so you're unlikely to pinpoint a specific distribution.
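A hedged sketch of that approach: the check below ties the build to a glibc-based system (it will fail under e.g. musl or MSVC), but note that it still can't distinguish Arch from any other glibc distribution:

#include <features.h>   /* glibc-specific header; absent on many other platforms */

#if !defined(__linux__) || !defined(__GLIBC__)
#error "This program is only intended to be built on a glibc-based Linux system."
#endif

int main(void)
{
    return 0;
}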
You're very likely asking the wrong question. Instead of asking how to restrict compilation to Arch Linux, start from why you want to restrict compilation to Arch Linux. If the answer is “because the resulting program wouldn't be what I want under another distribution”, then start from there and make sure that the difference results in a compilation error rather than incorrect execution. If the answer to “why” is something else, then you're probably looking for a technical solution to a social problem, and that rarely ends well.
No, it doesn't. And even if it did, it wouldn't stop anyone from compiling the code on an Arch Linux distro and then running it on a different Linux.
If you need to prevent your software from "running under anything but Arch Linux", you'll need to insert a run-time check. Although, to be honest, I have no idea what that check might consist of, since Linux distros are not monolithic products. The actual check would probably have to do with your reasons for imposing the restriction.
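For what it's worth, one common (and trivially bypassed) convention is to read /etc/os-release at start-up and check the ID field, which Arch sets to arch. This is only a sketch of that idea, not a recommendation, and it assumes the file exists and follows the usual format:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 if /etc/os-release reports ID=arch, 0 otherwise. */
static int running_on_arch(void)
{
    FILE *f = fopen("/etc/os-release", "r");
    char line[256];
    int found = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof line, f)) {
        line[strcspn(line, "\n")] = '\0';          /* strip trailing newline */
        if (strcmp(line, "ID=arch") == 0 ||
            strcmp(line, "ID=\"arch\"") == 0) {    /* quoted form is also valid */
            found = 1;
            break;
        }
    }
    fclose(f);
    return found;
}

int main(void)
{
    if (!running_on_arch()) {
        fprintf(stderr, "This software is only supported on Arch Linux.\n");
        return EXIT_FAILURE;
    }
    puts("Arch Linux detected (or at least claimed).");
    return 0;
}

Anyone can edit that file, of course, which is exactly the "not monolithic" problem: there is no tamper-proof marker of a distribution.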
Is it possible to write code in C, statically build it into a binary such as an ELF or PE, strip its header and all unnecessary metadata to create a raw binary, and then wrap that raw binary in another OS-specific format (ELF > PE or PE > ELF)?!
Have you done this before?
Is it possible?
What are the issues and concerns?
How would this be possible?
And if not, why not?!
What are my pitfalls in understanding static builds?
Doesn't a static build remove any need for third-party and standard libraries and headers, as well as OS ones?
Why can't we remove the metadata of, for example, an ELF and add the metadata and other specifics needed for a PE?
Note:
I said cross-OS, not cross-hardware.
[Read this after reading the answers below!]
As you can see, the best answer so far (!) is basically "just keep going and learn about cross-platform development issues"! How crazy is this?! Thanks to philosophy!
I would say that it's possible, but the process is complicated by many, many details.
ABI compatibility
The first thing to think of is Application Binary Interface (ABI) compatibility. Unless you're able to call your functions the same way, the code is broken. So I guess (though I can't check at the moment) that compiling code with gcc on Linux/OS X and MinGW gcc on Windows should give the same binary code, as long as no external functions are called. The problem here is that executable metadata may rely on some ABI assumptions.
Standard libraries
That seems to be the largest hurdle, partly because of the C preprocessor, which can inline some procedures on some platforms while leaving them as run-time calls on others. Also, cross-platform dynamic interoperation with standard libraries is close to impossible, though theoretically one can imagine code that uses a limited subset of the C standard library that is exposed through the same ABI on different platforms.
A static build mostly eliminates problems of interaction with other user-space code, but there is still the huge issue of interfacing with the kernel: it's int $0x80 calls on x86 Linux, with a platform-specific set of syscall numbers that does not map to Windows in any direct way.
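As a minimal sketch of that kernel dependence (32-bit x86 Linux only; the syscall number and build flags are assumptions for illustration), the following program bypasses libc entirely and still ends up welded to Linux's syscall interface:

/* Build: gcc -m32 -static -nostdlib -o exit42 exit42.c
 * Exits with status 42 by invoking Linux's 32-bit syscall interface directly;
 * the same instruction sequence means nothing to a Windows kernel. */
void _start(void)
{
    long nr = 1;        /* __NR_exit on 32-bit x86 Linux */
    long status = 42;
    __asm__ volatile ("int $0x80"
                      : /* no outputs */
                      : "a"(nr), "b"(status));
    __builtin_unreachable();   /* the syscall above does not return */
}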
OS-specific register use
As far as I know, Windows uses the %fs register for storing some OS-wide exception-handling state, so a binary compiled on Linux should avoid clobbering it. There might be other similar issues. Also, C++ exceptions on Windows are mostly implemented on top of OS exceptions.
Virtual addresses
Again, AFAIK Windows DLLs have a preferred base address they are expected to be loaded at in the virtual address space of a process, whereas Linux uses position-independent code for shared libraries. So there might be issues with overlapping areas of the executable and the ported code, unless the ported position-dependent code is recompiled to be position-independent.
So, while theoretically possible, such a transformation would be very fragile in real situations, and it's impossible to re-plant the whole statically built code: some parts may be transferred intact, but the rest must be relinked against system-specific code that interfaces with the other kernel properly.
P.S. I think Wine is a good example of running binary code on a quite different system. It tricks a Windows program into thinking it's running in a Windows environment and uses the same machine code; most of the time that works well (if the program does not use private low-level system routines or unavailable libraries).
Looking for tools (free or commercial) for static/runtime detection of memory leaks on the HP-UX Itanium platform.
Background, we:
Use HP-UX 11.31 ia64, but all our applications are still 32-bit only.
Have software with object files from C/Pro*C/COBOL and a very large application with a lot of files/programs.
C files are compiled with the standard C compiler (cc), Pro*C with Oracle's proc, and COBOL with Micro Focus's cob. Finally, all the object files are linked with the cob linker.
Facing core dumps due to memory leaks/invalid references (mostly from C/Pro*C code).
What was tried:
Used gdb and RTC (HP Run Time Check for memory analysis), but due to the mixed nature of COBOL and C, the tool is not able to give vital clues.
Planned to use Insure++, but found that it's not supported on HP Itanium.
Currently relying on static debugging and manual prints, which, as you can see, is very slow and ineffective.
Can anybody please suggest tools/software for effective memory leak detection in this scenario?
Thanks in advance.
PS:
While searching the web, I came across one commercial tool, but I have never used it: http://www.dynamic-memory.com/products_Overview_htm.php
HP WDB is the tool recognized by HP for these purposes: HP WDB
Our CheckPointer tool finds memory management mistakes in C programs. Even if you have not made any such errors, on exit it will tell you where unfreed memory was allocated.
Because it operates on source code, it isn't specifically dependent on the Itanium hardware, but it is compiler-dependent (it handles GCC 3/4 and Microsoft C dialects). You would handle Pro*C by preprocessing the Pro*C code to produce C and then applying CheckPointer to the generated C code.
You will likely have to build some wrappers for your COBOL code (to verify that the COBOL code doesn't do something bad with a pointer). COBOL doesn't really do a lot of dynamic allocation or pointer dereferencing (watch out for CALL variable statements), so such wrapper models shouldn't be complicated.
If I have code fully written in C, using only libraries also written in C, and I have a compiler like GCC that supports many platforms, can I be sure that this code will run on any architecture supported by the compiler? For example, can I take Flex or CPython, compile it, and use it, let's say, on AVR?
Edit:
compile and run, of course
no GUI
No, it's hard to say without seeing your code. For example, if you depend on long being 4 bytes, that won't hold on a 64-bit machine.
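A small illustration of that kind of assumption (the exact sizes printed depend on the target, which is the point):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* long is 4 bytes on 32-bit Linux and on AVR, but 8 bytes on x86-64
     * Linux. Code that assumes a particular size silently breaks when
     * recompiled elsewhere; fixed-width types make the intent explicit. */
    printf("sizeof(long)    = %zu\n", sizeof(long));
    printf("sizeof(int32_t) = %zu\n", sizeof(int32_t));
    return 0;
}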
"fully written in C" doesn't guarantee in any way that the code is portable. A portable compiler like GCC abstracts away the CPU architecture details, but the moment you use a system call specific to a particular OS, your code becomes unportable unless you surround the fragment in an #ifdef WHATEVER_OS. This is why standards like POSIX have emerged to unify the system call interface across different operating systems.
Limiting your code to POSIX-defined system calls and using a POSIX-compliant operating system should generally stop you from worrying, with few exceptions.
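For the unportable fragments, the usual pattern is exactly that kind of #ifdef guard on the compiler's predefined target macros; a minimal sketch:

#include <stdio.h>

int main(void)
{
    /* These macros describe the *target* platform the compiler builds for. */
#if defined(_WIN32)
    puts("built for Windows");
#elif defined(__APPLE__)
    puts("built for an Apple platform");
#elif defined(__linux__)
    puts("built for Linux");
#else
    puts("built for some other platform");
#endif
    return 0;
}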
Source code portability and compilability should be a given in your scenario.
Things might change if you used external libraries or GUI frameworks that depend on a specific OS, but that is not your case, so you should be good to go.
The answer is "they'll probably compile, but there is a possibility they won't run". The problem is the resources available to you. Let's say you program for your 2 GB machine. Will your program run on a 256 MB machine? Could a DOS machine run CPython? But they'll probably compile :-) (Technically, you could have a program whose code is too big to fit in the address space of the target machine. If your .exe/.out is 18 MB and the target machine has an address space of 16 MB, you can't even compile it for that target.)
Simply using C doesn't guarantee that the code is portable to any platform that supports C. There's a boat-load of traps to step into, like dependence on type sizes, endianness, or undefined behavior.
In reality, a non-trivial program is rarely portable to much except the platforms you have actually verified that it runs on. But you can certainly take measures to try and decrease the chance of problems, though.
On Mac OS X, you can combine 32-bit, PowerPC, and 64-bit binaries into one executable using "lipo". Is something like this possible on Linux?
I think FatELF (available at http://icculus.org/fatelf/) is what you are actually asking for, but it requires certain patches to the kernel, glibc, gdb, etc., so it's currently not for the faint of heart. It may be a reasonable burden for a developer to compile on a modified system, but it requires client-side systems to be modified too.