I want to use the Intel library Embree in combination with OpenCL, but it doesn't work for me.
To show the problem, I wrote a small program that lists all OpenCL devices:
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main() {
    int i, j;
    char* value;
    size_t valueSize;
    cl_uint platformCount;
    cl_platform_id* platforms;
    cl_uint deviceCount;
    cl_device_id* devices;
    cl_uint maxComputeUnits;

    // get all platforms
    clGetPlatformIDs(0, NULL, &platformCount);
    platforms = (cl_platform_id*) malloc(sizeof(cl_platform_id) * platformCount);
    clGetPlatformIDs(platformCount, platforms, NULL);
    printf("Found platforms: %i\n", platformCount);

    for (i = 0; i < platformCount; i++) {
        // get all devices of this platform
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &deviceCount);
        devices = (cl_device_id*) malloc(sizeof(cl_device_id) * deviceCount);
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, deviceCount, devices, NULL);

        // for each device print its name
        for (j = 0; j < deviceCount; j++) {
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME, 0, NULL, &valueSize);
            value = (char*) malloc(valueSize);
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME, valueSize, value, NULL);
            printf("%i.%i Device: %s\n", i + 1, j + 1, value);
            free(value);
        }
        free(devices);
    }
    free(platforms);
    return 0;
}
If I compile this code with:
g++ --std=c++11 -march=native -o test test.c -lOpenCL
I get the following output:
Found platforms: 2
1.1 Device: GeForce GTX 670
2.1 Device: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
If I compile the same code with:
g++ --std=c++11 -march=native -o test test.c -lOpenCL -l:libembree.so.2
I can't find my CPU as an available OpenCL device anymore; I get the following output:
Found platforms: 1
1.1 Device: GeForce GTX 670
Can anyone help me get my CPU back as an OpenCL device?
I found the solution to this problem:
The program was linked against a symlink rather than the real library path. Linking against the real library path (and making that path visible at run time via LD_LIBRARY_PATH) solves the problem.
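For illustration only, a link step of roughly this shape names the real library file instead of the symlink; the directory /opt/embree/lib is an assumption and will differ on other systems:
g++ --std=c++11 -march=native -o test test.c -lOpenCL /opt/embree/lib/libembree.so.2
export LD_LIBRARY_PATH=/opt/embree/lib:$LD_LIBRARY_PATH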
I've tried to run the following C code on two different computers.
#include <string.h>

int a[100000];

int main() {
    for (int sz = 100000; sz > 1; sz--) {
        memmove(a, a + 1, 4 * (sz - 1));
    }
}
Computer A takes about 800 ms, while computer B takes about 6200 ms; B's running time is consistently far higher than A's.
Environment
The shell is bash. Compile and run commands (adding -O optimization options to gcc does not change the runtime):
gcc myfile.c -o mybin
time ./mybin
Computer A
gcc 9.3.0
glibc: ldd (Ubuntu GLIBC 2.31-0ubuntu9) 2.31
uname_result(system='Linux', release='5.4.0-100-generic', machine='x86_64')
CPU: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
Computer B
gcc 9.3.0
glibc: ldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31
uname_result(system='Linux', release='4.4.0-210-generic', machine='x86_64')
CPU: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz
Question
Then I ran the following file on identical virtual machines with different kernel versions (4.4.210-0404210-generic x86_64 and 5.4.0-113-generic x86_64), with gcc 9.3.0. Both tests took less than 500 ms.
#include <string.h>
#include <time.h>
#include <stdio.h>

#define TICK(X) clock_t X = clock()
#define TOCK(X) printf("time %s: %g sec.\n", (#X), (double)(clock() - (X)) / CLOCKS_PER_SEC)

int a[100000];

int main() {
    TICK(timer);
    for (int sz = 100000; sz > 100; sz--) {
        memmove(a, a + 1, 4 * (sz - 1));
    }
    TOCK(timer);
}
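As a side note, clock() measures CPU time rather than elapsed wall-clock time; a minimal variant of the same benchmark using the POSIX clock_gettime with CLOCK_MONOTONIC (a sketch, assuming a POSIX system) measures wall time instead:

#include <string.h>
#include <stdio.h>
#include <time.h>

int a[100000];

int main() {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int sz = 100000; sz > 100; sz--) {
        memmove(a, a + 1, 4 * (sz - 1));
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* elapsed wall-clock time in seconds */
    printf("time: %g sec.\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
}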
How can I find the cause?
I have two OSes on my PC with an i7-3770 @ 3.40 GHz. One OS is the latest Kubuntu 18.04, the other is Windows 10 Pro, running on the same HDD.
I tested a simple toy program written in C doing some arithmetic calculations from number theory. On Kubuntu it was compiled with gcc 7.3.0, on Windows with gcc 5.2.0 built by the MinGW-W64 project.
The result is surprising: the program ran 4 times slower on Linux than on Windows.
On Windows the elapsed time is just 6 seconds; on Linux it is 24 seconds, on the same hardware.
On Kubuntu I tried to compile with some CPU-specific options like "gcc -corei7" etc., but nothing helped. The program uses the "math.h" library, so the compilation is done with "-lm" on both systems. The source code is the same.
Is there a reason for this slow speed under Linux?
Furthermore, I compiled the same code on an older 32-bit machine with a Core Duo T2250 @ 1.73 GHz under Linux Mint 19 with gcc 7.3.0. The elapsed time was 28 seconds! Not much different from the 64-bit machine running at double the frequency under Linux.
The source code is below; you can compile it and test it.
/* Program for playing with sigma(n) and tau(n) functions */
/* Compilation of code: "gcc name.c -o name -lm" */
#include <stdio.h>
#include <math.h>
#include <time.h>
int main(void)
{
    /* Czech identifiers: zacatek = start, konec = end, soucet = sum,
       celkem = running total, delitel = divisor, cas = time;
       odx holds sqrt(x) + 1 */
    double i, nq, x, zacatek, konec, p;
    double odx, soucet, delitel, celkem, ZM;
    unsigned long cas1, cas2;

    i=(double)0; soucet=(double)0; celkem=(double)0; nq=(double)0;
    zacatek=(double)1; konec=(double)1000000; x=zacatek;
    ZM=(double)16 / (double)10;

    printf("\n Program for playing with sigma(n) and tau(n) functions \n");
    printf("---------------------------------------------------------\n");
    printf("Calculation is running in range from %.0lf to %.0lf\n\n\n", zacatek, konec);
    printf("Finding numbers which have sigma(n)/n = %.3lf\n\n", ZM);

    cas1=time(NULL);
    while (x <= konec) {
        i=1; celkem=0; nq=0;
        odx=sqrt(x)+1;
        /* trial division up to sqrt(x)+1: for each divisor i, add i and x/i */
        while (i <= odx) {
            if (fmod(x, i)==0) {
                nq++;
                celkem=celkem+x/i+i;
            }
            i++;
        }
        nq=2*nq-1;
        /* adjust the total when sqrt(x)+1 is an integer (x is a perfect square) */
        if ((odx-floor(odx))==0) { celkem=celkem-odx; }
        if (fabs(celkem - (ZM*x)) < 0.001) {
            printf("%.0lf has sum of all divisors = %.3lf times the number itself (%.0lf, %.0lf)\n", x, ZM, celkem, nq+1);
        }
        x++;
    }
    cas2=time(NULL);

    printf("\n\nProgram ended.\n\n");
    printf("Elapsed time %lu seconds.\n\n", cas2-cas1);
    return (0);
}
Using clang, we can generate LLVM IR when compiling a C program:
clang -S -emit-llvm hello.c -o hello.ll
I would like to translate NEON intrinsics to LLVM IR, with code like this:
/* neon_example.c - Neon intrinsics example program */
#include <stdint.h>
#include <stdio.h>
#include <assert.h>
#include <arm_neon.h>

/* fill array with increasing integers beginning with 0 */
void fill_array(int16_t *array, int size)
{
    int i;
    for (i = 0; i < size; i++)
    {
        array[i] = i;
    }
}

/* return the sum of all elements in an array. This works by calculating 4 totals
   (one for each lane) and adding those at the end to get the final total */
int sum_array(int16_t *array, int size)
{
    /* initialize the accumulator vector to zero */
    int16x4_t acc = vdup_n_s16(0);
    int32x2_t acc1;
    int64x1_t acc2;

    /* this implementation assumes the size of the array is a multiple of 4 */
    assert((size % 4) == 0);

    /* counting backwards gives better code */
    for (; size != 0; size -= 4)
    {
        int16x4_t vec;
        /* load 4 values in parallel from the array */
        vec = vld1_s16(array);
        /* increment the array pointer to the next element */
        array += 4;
        /* add the vector to the accumulator vector */
        acc = vadd_s16(acc, vec);
    }

    /* calculate the total */
    acc1 = vpaddl_s16(acc);
    acc2 = vpaddl_s32(acc1);

    /* return the total as an integer */
    return (int)vget_lane_s64(acc2, 0);
}

/* main function */
int main()
{
    int16_t my_array[100];
    fill_array(my_array, 100);
    printf("Sum was %d\n", sum_array(my_array, 100));
    return 0;
}
But it doesn't support the NEON intrinsics and prints error messages like this:
/home/user/llvm-proj/build/bin/../lib/clang/4.0.0/include/arm_neon.h:65:24: error:
'neon_vector_type' attribute is not supported for this target
typedef __attribute__((neon_vector_type(8))) float16_t float16x8_t;
^
I think the reason is that my host is x86 while the target is ARM, and I have no idea how to cross-compile with Clang to translate to LLVM IR (clang version 4.0 on Ubuntu 14.04).
Are there any target options or other tools that would help?
And is there any difference between SSE and NEON LLVM IR?
Using ELLCC, a pre-packaged clang-based toolchain (http://ellcc.org), I was able to compile and run your program by adding -mfpu=neon:
rich@dev:~$ ~/ellcc/bin/ecc -target arm32v7-linux -mfpu=neon neon.c
rich@dev:~$ ./a.
a.exe a.out
rich@dev:~$ file a.out
a.out: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, BuildID[sha1]=613c22f6bbc277a8d577dab7bb27cd64443eb390, not stripped
rich@dev:~$ ./a.out
Sum was 4950
rich@dev:~$
It was compiled on an x86 and I ran it using QEMU.
Using normal clang, you'll also need the appropriate -target option for ARM. ELLCC uses slightly different -target options.
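For reference, plain clang can be pointed at an ARM target for the IR-generation step from the question; the exact triple below is an assumption and has to match your toolchain, and you may also need to supply an ARM sysroot for the standard headers:
clang -S -emit-llvm -target armv7a-linux-gnueabihf -mfpu=neon neon_example.c -o neon_example.ll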
I'm trying to run this code on an NVIDIA GPU and it returns strange values. It consists of two modules, main.cu and exmodul.cu (listed below). For building I'm using:
nvcc -dc -arch sm_35 main.cu
nvcc -dc -arch sm_35 exmodul.cu
nvcc -arch sm_35 -lcudart -o main main.o exmodul.o
If I run it, I get a strange last line. gd should be 1.
result=0
result=0
result=0
result=0
gd=-0.5
When I change 1.0 in exmodul.cu to a number greater than
1.000000953 or below 0.999999999999999945, it returns the proper
result.
When I change 1.1 in exmodul.cu it also fails, except for the value 1.0.
The behavior doesn't depend on the constant 2.0 in the same module.
When I use another function instead of cos, like sin or exp, it works properly.
Using double q = cos(1.1); has no effect.
When I copy the function extFunc() into main.cu it works properly.
If I uncomment *gd=1.0; in main.cu it returns the correct 1.0.
Tested on an NVIDIA GT750M and a GeForce GTX TITAN Black (the GT750M returns a different value, gd=6.1232329394368592e-17, but still wrong). OS: Debian Jessie.
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Thu_Mar_13_11:58:58_PDT_2014
Cuda compilation tools, release 6.0, V6.0.1
Have you got any idea what is wrong?
Thanks, Lukas
main.cu
#include <stdio.h>   // printf
#include "exmodul.h" // extFunc

__global__ void mykernel(double *gd);
void deviceCheck();

int main(int argc, char *argv[])
{
    double gd, *d_gd;
    cudaMalloc(&d_gd, sizeof(double)); deviceCheck();
    mykernel<<<1,1>>>(d_gd); deviceCheck();
    cudaMemcpy(&gd, d_gd, sizeof(double), cudaMemcpyDeviceToHost);
    deviceCheck();
    cudaFree(d_gd); deviceCheck();
    fprintf(stderr, "gd=%.17g\n", gd);
    return 0;
}

void deviceCheck()
{
    cudaError_t result = cudaSuccess;
    cudaDeviceSynchronize();
    result = cudaGetLastError();
    fprintf(stderr, "result=%d\n", result); fflush(stderr);
}

__global__ void mykernel(double *gd)
{
    *gd = extFunc();
    //*gd=1.0;
    __syncthreads();
    return;
}
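As a side note, printing only the numeric error code makes the log hard to read; a variant of deviceCheck (a sketch, not part of the original program) could use cudaGetErrorString to print a readable message as well:

void deviceCheck()
{
    cudaDeviceSynchronize();
    cudaError_t result = cudaGetLastError();
    // cudaGetErrorString converts the error code into a human-readable message
    fprintf(stderr, "result=%d (%s)\n", result, cudaGetErrorString(result));
    fflush(stderr);
}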
exmodul.cu
#include "exmodul.h"
__device__ double extFunc()
{
double q = 1.1;
q = cos(q);
if(q<2.0) { q = 1.0; }
return q;
}
exmodul.h
__device__ double extFunc();
I was able to reproduce the problematic behavior on a supported config (CUDA 6.5, CentOS 6.2, K40).
When I switched from CUDA 6.5 to CUDA 7 RC, the problem went away.
The problem also did not appear to be reproducible on an older config (CUDA 5.5, CentOS 6.2, M2070).
I suggest switching to CUDA 7 RC to address this issue. I suspect an underlying bug in the compilation process that has since been fixed.
I'm looking for a way to atomically increment a short and then return that value. I need to do this both in kernel mode and in user mode, so it's in C, under Linux, on 32-bit Intel architecture. Unfortunately, due to speed requirements, a mutex lock isn't going to be a good option.
Is there any other way to do this? At this point, it seems like the only option available is to inline some assembly. If that's the case, could someone point me towards the appropriate instructions?
GCC __atomic_* built-ins
As of GCC 4.8, __sync built-ins have been deprecated in favor of the __atomic built-ins: https://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/_005f_005fatomic-Builtins.html
They implement the C++ memory model, and std::atomic uses them internally.
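For the specific case in the question (a 16-bit counter that is incremented and whose new value is returned), a minimal sketch using these built-ins, assuming GCC 4.8 or newer, could look like this; the names are made up for illustration:

#include <stdint.h>

static uint16_t counter;

/* Atomically add 1 to the 16-bit counter and return the new value. */
uint16_t increment_counter(void)
{
    return __atomic_add_fetch(&counter, 1, __ATOMIC_SEQ_CST);
}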
The following POSIX threads example fails the assertion consistently when plain global++ is used on x86-64, and always works with __atomic_fetch_add.
main.c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

enum CONSTANTS {
    NUM_THREADS = 1000,
    NUM_ITERS = 1000
};

int global = 0;

void* main_thread(void *arg) {
    int i;
    for (i = 0; i < NUM_ITERS; ++i) {
        __atomic_fetch_add(&global, 1, __ATOMIC_SEQ_CST);
        /* This fails consistently. */
        /*global++*/;
    }
    return NULL;
}

int main(void) {
    int i;
    pthread_t threads[NUM_THREADS];
    for (i = 0; i < NUM_THREADS; ++i)
        pthread_create(&threads[i], NULL, main_thread, NULL);
    for (i = 0; i < NUM_THREADS; ++i)
        pthread_join(threads[i], NULL);
    assert(global == NUM_THREADS * NUM_ITERS);
    return EXIT_SUCCESS;
}
Compile and run:
gcc -std=c99 -Wall -Wextra -pedantic -o main.out ./main.c -pthread
./main.out
Disassembly analysis at: How do I start threads in plain C?
Tested in Ubuntu 18.10, GCC 8.2.0, glibc 2.28.
C11 _Atomic
As of GCC 5.1, the above code works with:
_Atomic int global = 0;
global++;
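For the 16-bit counter from the question, a minimal C11 sketch using <stdatomic.h> (assuming a C11 toolchain; the names are made up for illustration) could be:

#include <stdatomic.h>

static atomic_ushort counter;

/* Atomically add 1 to the 16-bit counter and return the new value.
   atomic_fetch_add returns the previous value, so add 1 to it. */
unsigned short increment_counter(void)
{
    return (unsigned short)(atomic_fetch_add(&counter, 1) + 1);
}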
C11 threads.h was added in glibc 2.28, which allows you to create threads in pure standard C without POSIX; a minimal runnable example is at: How do I start threads in plain C?
GCC supports atomic operations: see the GCC atomics documentation.