OpenCL 2.0 - work_group operations on CPU and GPU - c

I was testing the following code to perform a parallel array element addition in OpenCL 2.0 with the work-group built-in functions (work_group_scan_inclusive_add and work_group_reduce_add in this case).
kernel void scan_and_reduce(global const float *input,
                            global float *sum,
                            local float *scratch)
{
    uint local_id = get_local_id(0);
    scratch[local_id] = work_group_scan_inclusive_add(input[get_global_id(0)]);
    if (local_id == get_local_size(0) - 1)
        sum[get_group_id(0)] = work_group_reduce_add(scratch[local_id]);
}
If I test it with an array of floats from 0 to 15 in steps of 1, with global_size = 16 and local_size = 4, I expect the result "6.0 22.0 38.0 54.0", and this works fine if I choose my CPU as the device.
But as soon as I choose the GPU and run the same code, I get "0.0 4.0 8.0 12.0" (which is just the element in the first position of each work-group).
Am I missing something?
Things I tried that didn't change anything:
Adding "barrier(CLK_LOCAL_MEM_FENCE)" before the "if"
Changing the local size and/or the array size / global size.
Notes:
I am passing the input array with clEnqueueWriteBuffer and then reading the sum with clEnqueueReadBuffer
CPU: i5 6200u
GPU: Intel HD Graphics 520
(yes, both support OpenCL 2.0, and I can build the kernel successfully with ioc64 passing -cl-std=CL2.0, as I also do when building the program at runtime)

You are getting different results because you are using work_group_reduce_add the wrong way.
The OpenCL 2.0 spec says:
This built-in function must be encountered by all work-items in a
work-group executing the kernel.
This isn't the case when you call work_group_reduce_add.
You need to remove that if statement altogether. By adding an if statement that allows only one work-item to reach the call, you are calculating the sum of just one value, and that is what is returned to you.
After work_group_scan_inclusive_add the numbers should be as follows:
w1: 0,1,2,3 -> 0,1,3,6
w2: 4,5,6,7 -> 4,9,15,22
w3: 8,9,10,11 -> 8,17,27,38
w4: 12,13,14,15 -> 12,25,39,54
After work_group_reduce_add:
w1: 10
w2: 50
w3: 90
w4: 130
And a second thing from the spec:
NOTE: The order of floating-point operations is not guaranteed for the
work_group_reduce_, work_group_scan_inclusive_ and
work_group_scan_exclusive_ built-in functions that operate on
half, float and double data types. The order of these floating-point
operations is also non-deterministic for a given workgroup.
So the results after the inclusive scan that I calculated above may not necessarily be the same, and this relates to what you are observing: the GPU returns 0, 4, 8, 12, which happens to be the first element of each work-group.
To summarize: removing the if statement before work_group_reduce_add should fix the issue.
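For illustration, here is a minimal sketch (the kernel name and structure are mine, not from the question) of how the kernel could be restructured so that every work-item in the group encounters the built-in, while only one work-item per group writes the result:

kernel void group_sums(global const float *input,
                       global float *sum)
{
    // Every work-item in the work-group must execute this call.
    float group_sum = work_group_reduce_add(input[get_global_id(0)]);

    // The result is identical in all work-items, so one write per group suffices.
    if (get_local_id(0) == 0)
        sum[get_group_id(0)] = group_sum;
}

With global_size = 16 and local_size = 4 this yields 6.0, 22.0, 38.0 and 54.0, the per-group sums expected in the question; the local scratch buffer is no longer needed.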

It's like OpenCL kernel instance ends abruptly

I'm new to OpenCL and I'm working on converting an existing algorithm to OpenCL.
In this process, I am experiencing a phenomenon that I cannot solve on my own, and I would like to ask for some help.
Here are the details.
My kernel is applied to images of different sizes (to be precise, to each layer of a Laplacian pyramid).
I get normal results for images of larger size such as 3072 x 3072, 1536 x 1536.
But I get abnormal results for smaller images such as 12 x 12, 6 x 6, 3 x 3, 2 x 2.
At first, I suspected that clEnqueueNDRangeKernel had a lower limit on dimensions, causing this problem. So I added a printf to the beginning of the kernel as follows, and it confirmed that all necessary kernel instances are executed.
__kernel void GetValueOfB(/* parameters */)
{
    uint xB = get_global_id(0);
    uint yB = get_global_id(1);
    printf("(%d, %d)\n", xB, yB);
    // calculation code is omitted
}
So after wandering around for a while, I added the same printf to the end of the kernel. When I did this, I found that printf fires only for some pixel positions. For the pixel positions not printed, the calculated values in the resulting image are incorrect, so I concluded that some kernel instances terminate abnormally before completing their calculations.
__kernel void GetValueOfB(/* parameters */)
{
    uint xB = get_global_id(0);
    uint yB = get_global_id(1);
    printf("(%d, %d)\n", xB, yB);
    // calculation code is omitted
    printf("(%d, %d, %f)\n", xB, yB, result_for_this_position);
}
It seems that there is no problem with the kernel's calculations. If I compile the kernel with optimization turned off using the -cl-opt-disable option, I get perfectly correct results for all images regardless of their size. In addition, on an NVIDIA P4000 it works correctly. Of course, in these cases I confirmed that the printf added at the bottom of the kernel fires for all pixels.
Below I put additional information and attach a part of the code I wrote.
Any advice is welcomed and appreciated.
Thank you.
SDK: Intel® SDK For OpenCL™ Applications 2020.3.494
Platform: Intel(R) OpenCL HD Graphics
for all images
{
    ...
    const size_t globalSize[2] = { size_t(vtMatB_GPU_LLP[nLayerIndex].cols),
                                   size_t(vtMatB_GPU_LLP[nLayerIndex].rows) };
    err = clEnqueueNDRangeKernel(_pOpenCLManager->GetCommandQueue(), kernel, 2,
                                 NULL, globalSize, NULL, 0, NULL, NULL);
    if (CL_SUCCESS != err)
        return -1;
    // I tried with this but it didn't make any difference
    //std::this_thread::sleep_for(std::chrono::seconds(1));
    err = clFinish(_pOpenCLManager->GetCommandQueue());
    if (CL_SUCCESS != err)
        return -1;
    err = clEnqueueReadBuffer(_pOpenCLManager->GetCommandQueue(), memMatB, CL_TRUE,
                              0, sizeof(float) * vtMatB_GPU_LLP[nLayerIndex].cols *
                              vtMatB_GPU_LLP[nLayerIndex].rows,
                              vtMatB_GPU_LLP[nLayerIndex].data, 0, nullptr, nullptr);
    if (CL_SUCCESS != err)
        return -1;
    ...
}
I tried with an event, too, but it behaves the same way.
for all images
{
    ...
    const size_t globalSize[2] = { size_t(vtMatB_GPU_LLP[nLayerIndex].cols),
                                   size_t(vtMatB_GPU_LLP[nLayerIndex].rows) };
    cl_event event;
    err = clEnqueueNDRangeKernel(_pOpenCLManager->GetCommandQueue(), kernel, 2,
                                 NULL, globalSize, NULL, 0, NULL, &event);
    if (CL_SUCCESS != err)
        return -1;
    err = clWaitForEvents(1, &event);
    if (CL_SUCCESS != err)
        return -1;
    err = clFinish(_pOpenCLManager->GetCommandQueue());
    if (CL_SUCCESS != err)
        return -1;
    err = clEnqueueReadBuffer(_pOpenCLManager->GetCommandQueue(), memMatB, CL_TRUE,
                              0, sizeof(float) * vtMatB_GPU_LLP[nLayerIndex].cols *
                              vtMatB_GPU_LLP[nLayerIndex].rows,
                              vtMatB_GPU_LLP[nLayerIndex].data, 0, nullptr, nullptr);
    if (CL_SUCCESS != err)
        return -1;
    ...
}
/////// Added contents ////////////////////////////////////////////
Would you please take a look at this issue from the perspective of clFinish or clWaitForEvents? Am I missing something in this regard?
Sometimes fewer values are correct and sometimes more.
To be more specific, let's say I'm applying the kernel to a 12 x 12 image, so there are 144 pixel values.
Sometimes I get correct values for 56 pixels.
Sometimes I get correct values for 89 pixels.
Other times I get correct values for n (less than 144) pixels.
If I turn off the OpenCL optimization when compiling the kernel by specifying the -cl-opt-disable option, I get correct values for all 144 pixels.
The other thing that makes me think the calculation code is correct is that the same OpenCL code, with no modification other than the device selection code, runs perfectly correctly on an NVIDIA P4000.
At first, I was really suspicious of the calculation code, but the more I inspect it, the more confident I am that there's nothing wrong with it.
I know there's still a chance that there is an error in the calculation code that causes exceptions somewhere during the calculations.
I have plain C++ code for the same task and I'm comparing the results from the two.
/////// Another added contents ////////////////////////////////////////////
I made a minimal code sample (apart from the project template) to reproduce the phenomenon.
What's even odder is that if I install "Intel® Distribution for GDB Target" I get correct results.
https://github.com/heysweetethan/GPUOpenCLProjectforWindows
OpenCL kernels run threads in parallel on a specified global range, which in your case is the image size, with one thread per pixel.
The threads are grouped into workgroups. The workgroup size should be a multiple of 32, ideally 64 (or 8x8 pixels in 2D) to make full use of the hardware. These workgroups cannot be split, so the global range must be a multiple of the workgroup size.
What happens if the global range is not evenly divisible by the workgroup size, or is smaller than the workgroup size, like 3x3 pixels? Then the last workgroup is still executed with all 8x8 threads. The first 3x3 threads operate on valid data in memory, but all the other threads read/write unallocated memory. This can cause undefined behavior or even crashes.
If you cannot have global size as a multiple of workgroup size, there is still a solution: a guard clause in the very beginning of the kernel:
if(xB>=xImage||yB>=yImage) return;
This ensures that no threads access unallocated memory.
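In the context of the kernel above it would look something like the sketch below (xImage and yImage stand for the image dimensions and would have to be passed as kernel arguments; the names are only illustrative):

__kernel void GetValueOfB(const uint xImage, const uint yImage /*, other parameters */)
{
    uint xB = get_global_id(0);
    uint yB = get_global_id(1);
    // threads that only pad out the last workgroup do nothing
    if (xB >= xImage || yB >= yImage)
        return;
    // calculation code is omitted
}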
As you don't supply a complete reproducible code sample, here's a loose collection of comments/suggestions/advice:
1. printf in kernel code
Don't rely on large amounts of printf output from kernels. It's necessarily buffered, and some implementations don't guarantee delivery of messages - often there's a fixed size buffer and when that's full, messages are dropped.
Note that your post-calculation printf increases the total amount of output, for example.
The reliable way to check or print kernel output is to write it to a global buffer and print it in host code. For example, if you want to verify each work-item reaches a specific point in the code, consider creating a zero-initialised global buffer where you can set a flag in each work-item.
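A minimal sketch of that idea (the buffer name and layout are assumptions, not from the question): each work-item sets one flag in a zero-initialised buffer, and the host reads the buffer back and counts the entries that are still zero.

__kernel void GetValueOfB(__global int *reached /*, other parameters */)
{
    uint xB = get_global_id(0);
    uint yB = get_global_id(1);
    // ... calculation code ...
    // one flag per work-item; any flag left at 0 marks a work-item
    // that never reached this point
    reached[yB * get_global_size(0) + xB] = 1;
}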
2. Events
As you asked about events, flushing, etc.: your clFinish call certainly should suffice to ensure everything has executed. If anything, it's overkill, but especially while you're debugging other issues it's a good way to rule out queuing issues.
The clWaitForEvents() call preceding it is not a great idea, as you haven't called clFlush() after queueing the kernel whose event you're waiting for. It's fairly minor, but could be a problem on some implementations.
3. Small image sizes
You've not actually posted any of the code that deals with the images themselves, so I can only guess at potential issues there. It looks like you're not using workgroups, so you shouldn't be running into the usual multiple-of-group-size pitfall.
However, are you sure you're loading the source data correctly, and you're correctly indexing into it? There could be all sorts of pitfalls here, from alignment of pixel rows in the source data, enqueueing the kernel before filling the source buffers has completed, creating source buffers with the wrong flags, etc.
So in summary, I'd suggest:
Don't trust in-kernel printf if something strange is going on. Switch to something more reliable for observing the behaviour of your kernel code.
At minimum, post all of your OpenCL-API-calling host code: buffer creation, setting arguments, etc. Any fragments of kernel code accessing the buffers would probably not be a bad idea either.
Thanks to a person from the Intel community, I was able to understand the phenomenon.
Briefly, if a single kernel instance takes too much time, 'Timeout Detection and Recovery (TDR)' stops it. (A sketch of the corresponding registry tweak follows the links below.)
For more information, you can refer to the following:
https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys
https://www.pugetsystems.com/labs/hpc/Working-around-TDR-in-Windows-for-a-better-GPU-computing-experience-777/
https://community.intel.com/t5/GPU-Compute-Software/It-s-like-OpenCL-kernel-instance-ends-abruptly/m-p/1386883#M478
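For reference, a hedged sketch of what such a TDR tweak can look like (the registry values are the ones documented at the Microsoft link above; this is a machine-wide setting, requires a reboot, and is only advisable on development machines):

Windows Registry Editor Version 5.00

; raise the GPU timeout from the default 2 seconds to 60 seconds
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000003c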
I appreciate all the people who gave me advice.

FFTW Output differs from Matlab with the same input dataset

I am developing an application that should analyze data coming from an A/D stage and find the frequency peaks in a defined frequency range (0-10kHz).
We are using the FFTW3 library, version 3.3.6, running on 64-bit Slackware Linux (GCC version 5.3.0). As you can see in the piece of code included, we run the FFTW plan, getting the result in the complex vector result[]. We have verified the operation using MATLAB: we run the FFT in MATLAB (which claims to use the same library) with exactly the same input dataset (the complex signal[] as in the source code). We observe some differences between the FFTW (Linux ANSI C) and MATLAB runs. Each plot is done using MATLAB. In particular, regarding the mag[] array, we would like to understand:
Why is the noise floor so different?
After the main peak (at more or less 3 kHz) we observe a negative peak in the Linux result, while MATLAB correctly shows a secondary peak, as expected from the input signal.
In these examples, we do not perform any output normalization, neither in Linux nor in MATLAB. The two plots show the magnitude of the FFT results (not converted to dB).
The correct result is the MATLAB one. Does someone have any suggestions about these differences? And how can we produce results closer to MATLAB with the FFTW library?
Below are the relevant piece of C source code and the two plots.
//
// Part of source code:
//
// rup[] is filled with unsigned char data coming from an A/D conversion stage (8-bit depth)
// Sampling frequency is 45.454 kHz
// Frequency range: 0 - 10.0 kHz
//
#define CONVCOST 0.00787401574803149606
#define REAL 0   // fftw_complex is double[2]: [0] = real part
#define IMAG 1   //                            [1] = imaginary part

double mag[4096];
unsigned char rup[4096];
int i;
fftw_complex signal[1024];
fftw_complex result[1024];
...
fftw_plan plan = fftw_plan_dft_1d(1024, signal, result, FFTW_FORWARD, FFTW_ESTIMATE);
for (i = 0; i < 1024; i++)
{
    signal[i][REAL] = (double)rup[i] * CONVCOST;
    signal[i][IMAG] = 0.0;
}
fftw_execute(plan);
for (i = 0; i < 512; ++i)
{
    mag[i] = sqrt(result[i][REAL] * result[i][REAL] + result[i][IMAG] * result[i][IMAG]);
}
fftw_destroy_plan(plan);

OpenCL - Local Memory

I understand the difference between global and local memory in general.
But I have problems using local memory.
1) What has to be considered when transforming global-memory variables into local-memory variables?
2) How do I use local barriers?
Maybe someone can help me with a little example.
I tried to do a Jacobi computation using local memory, but I only get 0 as the result. Maybe someone can give me some advice.
Working Solution:
#define IDX(_M,_i,_j) (_M)[(_i) * N + (_j)]
#define U(_i, _j) IDX(uL, _i, _j)

__kernel void jacobi(__global VALUE* u, __global VALUE* f, __global VALUE* tmp, VALUE factor)
{
    int i  = get_global_id(0);
    int j  = get_global_id(1);
    int iL = get_local_id(0);
    int jL = get_local_id(1);

    __local VALUE uL[(N+2)*(N+2)];
    __local VALUE fL[(N+2)*(N+2)];

    IDX(uL, iL, jL) = IDX(u, i, j);
    IDX(fL, iL, jL) = IDX(f, i, j);
    barrier(CLK_LOCAL_MEM_FENCE);

    IDX(tmp, i, j) = (VALUE)0.25 * ( U(iL-1, jL) + U(iL, jL-1) + U(iL, jL+1) + U(iL+1, jL)
                                     - factor * IDX(fL, iL, jL));
}
Thanks.
1) Query the CL_DEVICE_LOCAL_MEM_SIZE value; it is 16 kB minimum and increases on different hardware. If your local variables fit in this and are re-used many times, you should put them in local memory before usage. Even if you don't, the automatic use of the L2 cache when accessing a GPU's global memory can still be effective for keeping the cores utilized.
If the global-to-local copy takes a significant slice of time, you can do an async work-group copy while the cores are calculating other things.
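A minimal sketch of that built-in, assuming one tile of 64 floats per work-group (the kernel and variable names are only illustrative):

__kernel void tile_copy(__global const float *src, __global float *dst)
{
    __local float tile[64];

    // start the cooperative copy; all work-items in the group issue the same call
    event_t e = async_work_group_copy(tile, src + get_group_id(0) * 64, 64, 0);

    // ... independent arithmetic could overlap with the copy here ...

    wait_group_events(1, &e);                 // the copy is complete from here on
    dst[get_global_id(0)] = tile[get_local_id(0)] * 2.0f;
}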
Another important point: more free local memory space means more threads in flight per compute unit. If a GPU has 64 cores per compute unit, only 64 threads can run when all of the local memory is used. When there is more space, 128, 192, ... 2560 threads can run at the same time if there are no other limitations.
A profiler can show the bottlenecks, so you can decide whether it is worth a try or not.
For example, a naive matrix-matrix multiplication using nested loops relies on the L1/L2 caches, but sub-matrices can fit in local memory. Maybe 48x48 sub-matrices of floats can fit in a mid-range graphics card's compute unit and be reused N times for the whole calculation before being replaced by the next sub-matrix.
Querying CL_DEVICE_LOCAL_MEM_TYPE can return LOCAL or GLOBAL; if it returns GLOBAL, using local memory is not recommended.
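A host-side sketch of those two queries (device is assumed to be a valid cl_device_id):

cl_ulong localMemSize = 0;
cl_device_local_mem_type localMemType;

// size of local memory per compute unit (at least 16 kB on a full-profile device)
clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                sizeof(localMemSize), &localMemSize, NULL);

// CL_LOCAL means dedicated on-chip memory; CL_GLOBAL means it is emulated in global memory
clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_TYPE,
                sizeof(localMemType), &localMemType, NULL);

printf("local mem: %llu bytes, type: %s\n",
       (unsigned long long)localMemSize,
       localMemType == CL_LOCAL ? "LOCAL" : "GLOBAL");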
Lastly, the size of any memory space allocation (except __private) must be known at compile time (for the device, not the host), because the compiler must know how many wavefronts can be issued to achieve maximum performance (and/or maybe other compiler optimizations). That is why no recursive functions are allowed in OpenCL 1.2. But you can copy a function and rename it n times to get pseudo-recursion.
2) Barriers are a meeting point for all threads in a workgroup. Similar to cyclic barriers, they all stop there and wait for each other before continuing. If it is a local barrier, all workgroup threads finish any local memory operations before leaving that point. If you want to write some numbers 1, 2, 3, 4, ... to a local array, you can't be sure whether all threads have finished writing them until a local barrier is passed; after that it is certain that the array already holds its final values.
All workgroup threads must hit the same barrier. If one cannot reach it, the kernel gets stuck or you get an error.
__local int localArray[64];  // not per thread; shared by all threads
                             // in the workgroup (per compute unit)

if (localThreadId != 0)
    localArray[localThreadId] = localThreadId;   // 64 values written in O(1)
// not sure if the 2nd thread is done writing, just like the last thread

if (localThreadId == 0)      // 1st core of each compute unit loads from VRAM
    localArray[localThreadId] = globalArray[globalThreadId];

barrier(CLK_LOCAL_MEM_FENCE);   // probably all threads wait for the 1st thread
                                // (maybe even the 1st SIMD or
                                //  could be even the whole 1st wavefront!)

// here all threads have written their own id to the local array; safe to read,
// except the first element, which is a value from global memory.
// let's add that value to all the other values
if (localThreadId != 0)
    localArray[localThreadId] += localArray[0];
Working example (local work-group size = 64):
inputs: 0,1,2,3,4,0,0,0,0,0,0,..
__kernel void vecAdd(__global float* x)
{
    int id  = get_global_id(0);
    int idL = get_local_id(0);

    __local float loc[64];
    loc[idL] = x[id];
    barrier(CLK_LOCAL_MEM_FENCE);

    float distance_square_sum = 0;
    for (int i = 0; i < 64; i++)
    {
        float diff = loc[idL] - loc[i];
        float diff_squared = diff * diff;
        distance_square_sum += diff_squared;
    }
    x[id] = distance_square_sum;
}
output: 30, 74, 246, 546, 974, 30, 30, 30...

Sum reduction with parallel algorithm - Bad performances compared to CPU version

I have written a small piece of code for doing a sum reduction of a 1D array. I am comparing a sequential CPU version and an OpenCL version.
The code is available on this link1
The kernel code is available on this link2
and if you want to compile : link3 for Makefile
My issue is the bad performance of the GPU version:
for vector sizes below 1.024 * 10^9 elements (i.e. with 1024, 10240, 102400, 1024000, 10240000, 102400000 elements) the runtime of the GPU version is higher (slightly, but still higher) than that of the CPU one.
As you can see, I have taken 2^n values in order to have a number of work-items compatible with the size of a workgroup.
Concerning the number of workgroups, I have taken :
// Number of work-groups
int nWorkGroups = size/local_item_size;
But for a high number of work-items, I wonder if the value of nWorkGroups is suitable (for example, nWorkGroups = 1.024 * 10^8 / 1024 = 10^5 workgroups; isn't this too much?).
I tried to modify local_item_size over the range [64, 128, 256, 512, 1024], but the performance remains bad for all of these values.
I see a real benefit only for size = 1.024 * 10^9 elements; here are the runtimes:
Size of the vector
1024000000
Problem size = 1024000000
GPU Parallel Reduction : Wall Clock = 20 second 977511 micro
Final Sum Sequential = 5.2428800006710899200e+17
Sequential Reduction : Wall Clock = 337 second 459777 micro
From your experience, why do I get such bad performance? I thought the advantage over the CPU version would be more significant.
Maybe someone can spot a major mistake in the source code, because at the moment I can't manage to solve this issue.
Thanks
Well I can tell you some reasons:
You don't need to write the reduction buffer. You can directly clear it in GPU memory using clEnqueueFillBuffer() (see the sketch after the quoted call) or a helper kernel.
ret = clEnqueueWriteBuffer(command_queue, reductionBuffer, CL_TRUE, 0,
                           local_item_size * sizeof(double), sumReduction, 0, NULL, NULL);
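A hedged sketch of the replacement (the buffer and queue names are taken from the quoted call; clEnqueueFillBuffer requires OpenCL 1.2):

const double zero = 0.0;
ret = clEnqueueFillBuffer(command_queue, reductionBuffer,
                          &zero, sizeof(zero),                  // pattern and its size
                          0, local_item_size * sizeof(double),  // offset and byte count
                          0, NULL, NULL);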
Don't use blocking calls, except for the last read; otherwise you are wasting some time there.
You are doing the last reduction on the CPU. Iterative processing through the kernel can help.
Because if your kernel is reducing just 128 elements per pass, your 10^9 elements only get down to 8*10^6, and the CPU does the rest. If you add the data copies on top of that, it is completely not worth it.
However, if you run 3 passes at 512 elements per pass, you read back from the GPU just 10^9/512^3 ≈ 8 values. So the only bottlenecks would be the first GPU copy and the kernel launches.
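A minimal host-side sketch of that idea, assuming a kernel (called reduce_partial here, an illustrative name) that reduces each work-group of 512 elements to one partial sum in an output buffer; the two buffers are ping-ponged until few enough values remain to read back:

cl_ulong n = problemSize;                    // number of values still to reduce
while (n > 512) {
    size_t local  = 512;
    size_t groups = (size_t)((n + 511) / 512);
    size_t global = groups * local;

    clSetKernelArg(reduce_partial, 0, sizeof(cl_mem), &srcBuf);
    clSetKernelArg(reduce_partial, 1, sizeof(cl_mem), &dstBuf);
    clSetKernelArg(reduce_partial, 2, sizeof(cl_ulong), &n);     // lets the kernel guard the tail
    clEnqueueNDRangeKernel(queue, reduce_partial, 1, NULL, &global, &local, 0, NULL, NULL);

    n = groups;                              // each pass shrinks the problem by ~512x
    cl_mem tmp = srcBuf; srcBuf = dstBuf; dstBuf = tmp;          // ping-pong the buffers
}
// only this final, small read is blocking
clEnqueueReadBuffer(queue, srcBuf, CL_TRUE, 0, (size_t)n * sizeof(double),
                    partialSums, 0, NULL, NULL);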

converting SSE code to AVX - cost of _mm256_and_ps

I'm converting SSE2 sine and cosine functions (from Julien Pommier's sse_mathfun.h, based on the CEPHES sinf function) to use AVX in order to accept vectors of 8 floats or 4 doubles.
So, Julien's function sin_ps becomes sin_ps8 (for 8 floats) and sin_pd4 for 4 doubles. (The "advanced" editor here fails to accept my code, so please visit http://arstechnica.com/civis/viewtopic.php?f=20&t=1227375 to see it.)
Testing with clang 3.3 under Mac OS X 10.6.8 running on a 2011 Core2 i7 @ 2.7 GHz, the benchmarking results look like this:
sinf    .. -> 27.7 millions of vector evaluations/second over 5.56e+07 iters (standard, scalar sinf() function)
sin_ps  .. -> 41.0 millions of vector evaluations/second over 8.22e+07 iters
sin_pd4 .. -> 40.2 millions of vector evaluations/second over 8.06e+07 iters
sin_ps8 .. ->  2.5 millions of vector evaluations/second over 5.1e+06 iters
The cost of sin_ps8 is downright frightening, and it seems to be due to the use of _mm256_castsi256_ps. In fact, commenting out the line "poly_mask = _mm256_castsi256_ps(emmm2);" results in much more normal performance.
sin_pd4 uses _mm_castsi128_pd, but it appears that it is not (just) the mix of SSE and AVX instructions that is biting me in sin_ps8: when I emulate the _mm256_castsi256_ps calls with 2 calls to _mm_castsi128_ps, performance doesn't improve. emm2 and emm0 are pointers to emmm2 and emmm0, both v8si instances and thus (a priori) correctly aligned to 32-byte boundaries.
See sse_mathfun.h and sse_mathfun_test.c for compilable code.
Is there a(n easy) way to avoid the penalty I'm seeing?
Transferring stuff out of registers into memory isn't usually a good idea. You are doing this every time you store through a pointer.
Instead of this:
{
    ALIGN32_BEG v4sf *yy ALIGN32_END = (v4sf*) &y;
    emm2[0] = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(yy[0]), _v4si_pi32_1), _v4si_pi32_inv1),
    emm2[1] = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(yy[1]), _v4si_pi32_1), _v4si_pi32_inv1);
    yy[0] = _mm_cvtepi32_ps(emm2[0]),
    yy[1] = _mm_cvtepi32_ps(emm2[1]);
}
/* get the swap sign flag */
emm0[0] = _mm_slli_epi32(_mm_and_si128(emm2[0], _v4si_pi32_4), 29),
emm0[1] = _mm_slli_epi32(_mm_and_si128(emm2[1], _v4si_pi32_4), 29);
/* get the polynom selection mask
   there is one polynom for 0 <= x <= Pi/4
   and another one for Pi/4 < x <= Pi/2
   Both branches will be computed.
*/
emm2[0] = _mm_cmpeq_epi32(_mm_and_si128(emm2[0], _v4si_pi32_2), _mm_setzero_si128()),
emm2[1] = _mm_cmpeq_epi32(_mm_and_si128(emm2[1], _v4si_pi32_2), _mm_setzero_si128());
((v4sf*)&poly_mask)[0] = _mm_castsi128_ps(emm2[0]);
((v4sf*)&poly_mask)[1] = _mm_castsi128_ps(emm2[1]);
swap_sign_bit = _mm256_castsi256_ps(emmm0);
Try something like this:
__m128i emm2a = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(_mm256_castps256_ps128(y)),
                                            _v4si_pi32_1), _v4si_pi32_inv1);
__m128i emm2b = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(_mm256_extractf128_ps(y, 1)),
                                            _v4si_pi32_1), _v4si_pi32_inv1);

y = _mm256_insertf128_ps(_mm256_castps128_ps256(_mm_cvtepi32_ps(emm2a)), _mm_cvtepi32_ps(emm2b), 1);

/* get the swap sign flag */
__m128i emm0a = _mm_slli_epi32(_mm_and_si128(emm2a, _v4si_pi32_4), 29);
__m128i emm0b = _mm_slli_epi32(_mm_and_si128(emm2b, _v4si_pi32_4), 29);

swap_sign_bit = _mm256_castsi256_ps(_mm256_insertf128_si256(_mm256_castsi128_si256(emm0a), emm0b, 1));

/* get the polynom selection mask
   there is one polynom for 0 <= x <= Pi/4
   and another one for Pi/4 < x <= Pi/2
   Both branches will be computed.
*/
emm2a = _mm_cmpeq_epi32(_mm_and_si128(emm2a, _v4si_pi32_2), _mm_setzero_si128());
emm2b = _mm_cmpeq_epi32(_mm_and_si128(emm2b, _v4si_pi32_2), _mm_setzero_si128());

poly_mask = _mm256_castsi256_ps(_mm256_insertf128_si256(_mm256_castsi128_si256(emm2a), emm2b, 1));
As mentioned in comments, cast intrinsics are purely compile-time and emit no instructions.
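For illustration, a hedged sketch (not from the original answer) of how the 128-bit halves of a 256-bit register can be split, processed and recombined without going through memory; the cast intrinsics themselves are free:

#include <immintrin.h>

static inline __m256 split_and_merge(__m256i v)
{
    __m128i lo = _mm256_castsi256_si128(v);       /* low half: free, no instruction */
    __m128i hi = _mm256_extractf128_si256(v, 1);  /* high half: one vextractf128    */

    /* ... operate on lo/hi with SSE integer intrinsics ... */

    /* recombine and reinterpret as floats; the final cast is again free */
    return _mm256_castsi256_ps(
               _mm256_insertf128_si256(_mm256_castsi128_si256(lo), hi, 1));
}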
Maybe you could compare your code to the already working AVX extension of Julien Pommier's SSE math functions?
http://software-lisc.fbk.eu/avx_mathfun/
This code works in GCC but not MSVC, and only supports floats (float8), but I think you could easily extend it to use doubles (double4) as well. A quick comparison with your sin function shows that they are quite similar except for the SSE2 integer part.
