I'm programming OpenCL via PyOpenCL on Ubuntu 16.04.3 64-bit,
on NVIDIA's Tesla K10.G2.8GB.
So far, everything runs smoothly as long as I don't include header files in my OpenCL kernels. As soon as I put #include <stdlib.h> at the top of my kernel file, the compilation of my OpenCL kernels fails with various files reported missing, among them
gnu/stubs-32.h
sys/cdefs.h
Searching for that problem brings up answers like
Error "gnu/stubs-32.h: No such file or directory" while compiling Nachos source code
or
https://askubuntu.com/questions/470796/fatal-error-sys-cdefs-h-no-such-file-or-directory
basically suggesting to install libc6-dev-i386, or gcc-multilib and g++-multilib, on the assumption that the underlying problem is a 64-bit/32-bit mismatch. My question is: are my OpenCL binaries for the GPU compiled as 32-bit binaries (and how can I check)?
If yes:
Are there other caveats when compiling 32-bit binaries on a 64-bit OS?
Furthermore: can I use 64-bit floats when my kernel is compiled as 32-bit?
(e.g., will #pragma OPENCL EXTENSION cl_khr_fp64 : enable still work?)
If no:
Do I have to locate and copy all the needed header files manually and include them by hand?
Also: some of my co-workers doubt that including standard C headers into OpenCL kernels is possible at all, due to the missing linker. Any light on that is also appreciated.
Standard C library and other system headers cannot be included into OpenCL C code, basically because they are only compatible with the current system (the host), whereas OpenCL C code could run on a different device with a different architecture (a GPU in your case).

As a replacement for standard C functions, OpenCL C defines a set of built-in functions which are available without any #include: printf, a large number of math functions, atomics, image-related functions, etc. See "OpenCL Specification: 6.12 Built-in Functions" for the complete list: https://www.khronos.org/registry/OpenCL/specs/opencl-1.2.pdf
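For illustration, a minimal kernel sketch using only built-ins, with nothing to #include (the kernel name and argument are made up for this example; the fp64 pragma and the double math are only valid if the device actually reports cl_khr_fp64, and printf is core from OpenCL 1.2):

// demo.cl -- no headers needed; everything used here is an OpenCL C built-in
#pragma OPENCL EXTENSION cl_khr_fp64 : enable  // needed only for the double math below

__kernel void builtins_demo(__global double *out)
{
    size_t i = get_global_id(0);          // work-item built-in
    out[i] = sqrt((double)i) * M_PI;      // math built-in and built-in constant
    if (i == 0)
        printf("%u work-items\n", (uint)get_global_size(0)); // built-in printf
}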
That doesn't mean you can't create a header with OpenCL C code and #include it into an OpenCL C program. This works fine:
// foo.h
void foo() {
    printf("hello world!");
}

// kernel.cl
#include "foo.h"

__kernel void use_foo() {
    foo();
}
Related
Can I use #include <stdatomic.h> and atomic_thread_fence() with memory_order from C11 in a Linux driver (kernel space), or must I use the Linux memory-barrier functions:
http://lxr.free-electrons.com/source/Documentation/memory-barriers.txt
http://lxr.free-electrons.com/source/Documentation/atomic_ops.txt
Using:
Linux kernel 2.6.18 or greater
GCC 4.7.2 or greater
If you are writing kernel code, you should do it in C, and do it in the version of C required by the current kernel (shipping gcc). If you want to get it accepted into mainline (or write it as if it were going to get accepted), you should use the Linux functions. You will also find that they work without unexpected surprises, and you will get better debugging help.
Summary: use the Linux functions.
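By way of illustration, a minimal sketch of the in-kernel style (a sketch only: the function names are made up, and header locations vary by kernel version; modern trees use <linux/atomic.h>, while trees as old as 2.6.18 used <asm/atomic.h>):

/* Kernel-side code: Linux atomics and barriers instead of C11 stdatomic.h */
#include <linux/atomic.h>   /* atomic_t, ATOMIC_INIT, atomic_set, atomic_read */

static atomic_t ready = ATOMIC_INIT(0);
static int shared_data;

static void producer(void)
{
    shared_data = 42;
    smp_wmb();              /* order the data write before the flag write */
    atomic_set(&ready, 1);
}

static int consumer(void)
{
    if (atomic_read(&ready)) {
        smp_rmb();          /* order the flag read before the data read */
        return shared_data; /* guaranteed to see 42 */
    }
    return -1;
}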
EDIT:
It seems not to work:
with or without #include <stdatomic.h> makes no difference; the driver may compile, but the library will fall back to plain integers or no-ops.
It seems to work:
atomic_store() and atomic_load() provide the thread synchronization I need between the kernel-module driver and the userland program.
What is not certain is whether a fallback method is employed, i.e. whether the compiler ends up using standard integers and regular assembly instructions.
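For reference, the plain C11 shape of that store/load pairing looks roughly like this (a userland sketch with made-up variable names; full C11 atomics support needs a reasonably recent GCC):

#include <stdatomic.h>
#include <stdio.h>

atomic_int flag = ATOMIC_VAR_INIT(0);
int shared_data;

void writer(void)
{
    shared_data = 42;
    /* release: the data write above becomes visible before the flag */
    atomic_store_explicit(&flag, 1, memory_order_release);
}

void reader(void)
{
    /* acquire: pairs with the release store in writer() */
    if (atomic_load_explicit(&flag, memory_order_acquire))
        printf("%d\n", shared_data); /* guaranteed to print 42 */
}

int main(void)
{
    writer();
    reader();
    return 0;
}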
Feel free to have a look at the source code, in these functions:
intelfreq.c / Core_Cycle()
and
corefreqd.c / Core_Cycle()
MinGW doesn't have a BIOS.h file by default, and I'm doing system programming using the NetBeans IDE and a third-party tool, MinGW.
Can anyone help me: where do I get that file?
This is the code.

#include <stdio.h>
#include <BIOS.H>
#include <DOS.H>

char st[80] = {"Hello World$"};

void main()
{
    _DX = (unsigned int) st;
    _AH = 0x09;
    geninterrupt(0x21);
}
Nowhere, you don't.
Those header files (dos.h and bios.h) are from 16-bit DOS compilers such as Turbo C or Open Watcom C. MinGW is a 32-bit compiler for Windows. As such, even if you get these header files, they will be useless because:
they are incompatible with gcc
they also need counterpart libraries because the headers themselves do not contain definitions of things like geninterrupt()
DOS interrupt services (int 21h) are not available to Win32 programs
Further, gcc does not support variables aliasing to CPU registers (e.g. _DX, _AH).
You either need to use the appropriate 16-bit DOS compiler or write a Windows program using functionality available from gcc and Win32 API.
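For illustration, here is a rough Win32 equivalent of that DOS snippet, compilable with MinGW gcc (one possible sketch, not the only way; WriteConsoleA stands in for the DOS int 21h, AH=09h service):

#include <windows.h>

int main(void)
{
    const char st[] = "Hello World";  /* no '$' terminator needed on Win32 */
    DWORD written;
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    WriteConsoleA(out, st, sizeof st - 1, &written, NULL);
    return 0;
}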
Do you really need it? It's been obsoleted a hundred or so times. But from what I've heard, some older Turbo C versions might have it. You can also try out http://www.sandroid.org/TurboC/ , but they say the file might not have all the functions.
I am a Java programmer, but I have a few things that need to be done in C. So I started with the simple example below. If I have compiled it and generated an executable file (hello), can I run that executable on any Unix platform without the original source (hello.c)? And also, is there a way to read the data from the executable file, i.e. decompile the executable back to the original hello.c?
[oracle@oracleapps test]$ cat hello.c
#include <stdio.h>

int main(){
    int i, data = 0;
    for(i = 1; i <= 64; i += 1){
        data = i * 2;
        printf("data=%d\n", data);
    }
    return 0;
}
To compile
gcc -Wall -W -Werror hello.c -o hello
You can run the resulting executable on platforms that are ABI-compatible with the one you compiled the executable for. ABI compatibility basically means that the same physical processor architecture and OS interfaces (plus calling convention) are used on two (possibly different) OSes. For example, you can run binaries compiled for Linux on a FreeBSD system (with the same processor type), because FreeBSD includes Linux ABI compatibility. However, it may not be possible to run a binary on all other types of Unices unless some hackery is done. For example, you can't run Mac OS X applications on Linux; however, this guy has a solution with which it's possible to use some OS X command line tools (including the GCC compiler itself) on Linux.
Reverse engineering: there are indeed decompilers which aim to generate C code from machine code, but they're not (yet) very powerful. The reason is that they are by nature extremely hard to write: machine code patterns have to be recognized, and even then you can't recover all the original information. Types of loops, comments, non-static local variable names and most of the types are all gone during the compilation process. For example, if you have a C source file like this:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i;
    for (i = 0; i < 10; i++)
    {
        printf("I is: %d\n", i); /* Write the value of I */
    }
    return 0;
}
a C decompiler may be able to reconstruct the following code:
int main(int _var1, void *_var2)
{
    int _var3 = 0;
    while (_var3 < 10)
    {
        printf("I is: %d\n", _var3);
        _var3 = _var3 + 1;
    }
    return 0;
}
But this would be a rather advanced decompiler, such as this one.
You can't run the executable on just any platform.
You can run the executable on other machines (or this one) without the .c file, if they run the same OS/distro on the same hardware.
You can use a decompiler or disassembler to read the file and view it as assembly or C; it won't look much like the original .c file.
The compiled file is pure machine code (plus some metadata), so it is self-sufficient in that it does not require the source files to be present. The downside? Machine code is both OS- and platform-specific. By platform, we usually mean just roughly the CPU's instruction set, i.e. "x86" or "PowerPC", but code compiled with certain compiler flags may require specific instruction-set extensions. The OS dependence is caused not only by different formats for executable files (e.g. ELF as opposed to PE), but also by use of OS-specific services, or of common OS services in an OS-specific manner (e.g. system calls). In addition to that, almost all nontrivial code depends on some libraries (a C runtime library at least), so you probably won't be able to run an executable without having the right libraries in compatible versions. So no, your executable likely won't run on a 10-year-old proprietary UNIX, and it may not run on different Linux distributions (though with your program there's a good chance it will, because it likely only depends on glibc).
While machine code can be easily disassembled, the result is very low-level and useless to many people. Decompilation to C is almost always much harder, though there are attempts. The algorithms can be recovered, simply because they have to be encoded in the machine code somehow. Assuming you didn't compile for debugging, it will never recover comments, formatting, variable names, etc., so even a "perfect" decompiler would yield a different C file from the one you put in.
No ... each platform may have different executable format requirements, different hardware architectures, different executable memory layouts determined by the linker, and so on. A compiled executable is "native" to the platform it was compiled for, not to other platforms. You can cross-compile for another architecture on your current machine, though.
For instance, even though they may have many similarities, a compiled executable on Linux x86 is not guaranteed to run under BSD, depending on its flavor (i.e., you could probably run it under FreeBSD, but typically not under OS X's Darwin version of BSD, even though both machines may have the same underlying hardware architecture). You also couldn't compile something on an SGI MIPS machine running IRIX and run it on a Sun SPARC running Solaris.
With C programs, the program is tied to the environment it was compiled for (which is usually the same as the platform it was compiled on, unless you are cross-compiling). You could copy something built for one version of Linux (and a particular hardware architecture) to another machine with the same architecture running the same version of Linux, and you'll be fine. You can often get away with running it on a related version of Linux. But you won't get x86/64 code to run on an IA32 machine, nor on a PPC machine, nor on a SPARC machine. You can likely get IA32 code to run on an x86/64 machine, if the basic O/S is sufficiently similar. And you may or may not be able to get something compiled for Debian to run under Red Hat or vice versa; it depends on which libraries your program uses.
Java avoids this by having a platform-neutral byte code program that is compiled, and a platform-specific JVM (JRE) to run it on each platform. This "write once, run anywhere" (WORA) behaviour was a key selling point for Java.
Yes, you can run it on any Unix that QEMU runs on. This is pretty comparable to Java programs, which you can run on any Unix the JVM runs on...
I am trying to port some code I wrote from Mac OS X to Linux and am struggling to find a suitable replacement for the OSX only OSAtomic.h. I found the gcc __sync* family, but I am not sure it will be compatible with the older compiler/kernel I have. I need the code to run on GCC v4.1.2 and kernel 2.6.18.
The particular operations I need are:
Increment
Decrement
Compare and Swap
What is weird is that running locate stdatomic.h on the Linux machine finds the header file (in a C++ directory), whereas running the same command on my OS X machine (gcc v4.6.3) returns nothing. What do I have to install to get the stdatomic library, and will it work with GCC v4.1.2?
As a side note, I can't use any third party libraries.
Well, there is nothing to stop you from using the OSAtomic operations on other platforms. The sources for the OSAtomic operations for ARM, x86 and PPC are part of Apple's libc, which is open source. Just make sure you are not using OSSpinLock, as that is specific to Mac OS X, but it can easily be replaced by Linux futexes.
See these:
http://opensource.apple.com/source/Libc/Libc-594.1.4/i386/sys/OSAtomic.s
http://opensource.apple.com/source/Libc/Libc-594.1.4/ppc/sys/OSAtomic.s
http://opensource.apple.com/source/Libc/Libc-594.1.4/arm/sys/OSAtomic.s
Alternatively, you can use the __sync_* family, which I believe should work on most platforms; it is described here: http://gcc.gnu.org/wiki/Atomic
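A minimal sketch of the three operations you listed, using the GCC builtins (which your GCC 4.1.2 should already provide):

#include <stdio.h>

int counter = 0;

int main(void)
{
    __sync_fetch_and_add(&counter, 1);  /* atomic increment */
    __sync_fetch_and_sub(&counter, 1);  /* atomic decrement */

    /* compare and swap: set counter to 5 only if it is currently 0;
       returns the value that was there before the operation */
    int old = __sync_val_compare_and_swap(&counter, 0, 5);

    printf("counter=%d old=%d\n", counter, old);
    return 0;
}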
The OpenPA project provides a portable library of atomic operations under an MIT-style license. This is one I have used before and it is pretty straightforward. The code for your operations would look like
#include "opa_primitives.h"
OPA_int_t my_atomic_int = OPA_INT_T_INITIALIZER(0);
/* increment */
OPA_incr_int(&my_atomic_int);
/* decrement */
OPA_decr_int(&my_atomic_int);
/* compare and swap */
old = OPA_cas_int(&my_atomic_int, expected, new);
It also contains fine-grained memory barriers (i.e. read, write, and read/write) instead of just a full memory fence.
The main header file has a comment showing the operations that are available in the library.
GCC atomic intrinsics have been available since GCC 4.0.1.
There is nothing stopping you building GCC 4.7 or Clang with GCC 4.1.2 and then getting all the newer features such as C11 atomics.
There are many locations you can find BSD licensed assembler implementations of atomics as a last resort.
I have a project that I run on Linux (primarily), but sometimes on Darwin/Mac OS X. I use CMake to generate Makefiles on Linux and an Xcode project on Mac OS X. So far, this has worked well.
Now I want to use some Linux-specific functions (clock_gettime() and related functions). I get linker errors on Mac OS X when I try to use clock_gettime(), so I assume it is only available on Linux. I am prepared to introduce conditionally-compiled code in the .c files to use clock_gettime() on Linux and plain old clock() on Mac OS. (BTW I was planning to use #include <unistd.h> and #if _POSIX_TIMERS > 0 as the preprocessor expression, unless someone has a better alternative.)
Things get tricky when it comes to the CMakeLists.txt file. What is the preferred way of introducing linkage to Linux-specific APIs only under the Linux build in a cross-platform CMake project?
Note: An earlier revision of this question contained references to glibc, which was overly specific and confusing. The question is really about the right way to use Linux-specific APIs and libraries in a cross-platform CMake project.
Abstracting away from your examples, and answering only this question:
How to use Linux-specific APIs and libraries only on Linux builds with
CMake?
CMake provides several useful variables that you can check in order to determine which system you are building for:

if (APPLE)
  # Mac-specific includes or actions
elseif (UNIX)
  # other *nix-specific includes or actions
  # (APPLE also sets UNIX, so it must be checked first; to target Linux
  # precisely, you can test CMAKE_SYSTEM_NAME STREQUAL "Linux")
elseif (WIN32)
  # Windows-specific includes or actions
endif ()
(I know you're asking about glibc, but you really want to know whether clock_gettime is present, right? But nothing in your question is Linux-specific...)
If you want to check for clock_gettime, you can use the preprocessor. If clock_gettime is present, then _POSIX_TIMERS will be defined. The clock_gettime function is part of an optional POSIX extension (see spec), so it is not Linux-specific but not universal either. Mac OS X does not have clock_gettime: it is not declared in any header nor defined in any library.
#include <time.h>
#include <unistd.h> /* for _POSIX_TIMERS definition, if present */
#if _POSIX_TIMERS
...use clock_gettime()...
#else
...use something else...
#endif
This doesn't solve the problem that you still have to link with -lrt on Linux. This is typically solved with something like AC_CHECK_LIB in Autoconf, I'm sure there's an equivalent in CMake.
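For example, a minimal sketch using CMake's CheckLibraryExists module (the target name "myapp" is made up for this example):

include(CheckLibraryExists)
check_library_exists(rt clock_gettime "" HAVE_CLOCK_GETTIME_IN_RT)
if (HAVE_CLOCK_GETTIME_IN_RT)
  target_link_libraries(myapp rt)
endif ()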
From man 2 clock_gettime:
On POSIX systems on which these functions are available, the symbol _POSIX_TIMERS is defined in <unistd.h> to a value greater than 0. The symbols _POSIX_MONOTONIC_CLOCK, _POSIX_CPUTIME, _POSIX_THREAD_CPUTIME indicate that CLOCK_MONOTONIC, CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID are available. (See also sysconf(3).)
On Darwin you can use the mach_absolute_time function if you need a high-resolution monotonic clock. If you don't need the resolution or monotonicity, you should probably be using gettimeofday on both platforms.
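If you go that route, a small sketch of the Darwin side (mach_absolute_time and mach_timebase_info are the real Mach APIs; the timed section is just a placeholder):

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);              /* numer/denom convert ticks to ns */

    uint64_t start = mach_absolute_time();
    /* ... work to be timed ... */
    uint64_t end = mach_absolute_time();

    uint64_t ns = (end - start) * tb.numer / tb.denom;
    printf("elapsed: %llu ns\n", (unsigned long long)ns);
    return 0;
}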
There is also a built-in CMake macro for checking whether a symbol exists - CheckSymbolExists.
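For example (a minimal sketch; the result variable name is arbitrary):

include(CheckSymbolExists)
check_symbol_exists(clock_gettime "time.h" HAVE_CLOCK_GETTIME)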