OpenCV Fourier magnitude doesn't seem correct (C)

I believe I am having a scaling issue when trying to convert the Fourier magnitude spectrum to an image.
I am working on my own visual odometry project to determine the translation and rotation between consecutive frames from a camera input. I have been successful in determining translation using phase correlation of the Fourier transform; however, part of determining the rotation requires the magnitude spectrum to be convolved. Essentially, the magnitude I have produced does not seem correct, as shown below.
Original Image:
Magnitude, with the 'mag = 255*(mag/max)' scaling
Magnitude, without the scaling
Unfortunately I need help with the function I am using to determine the magnitude. I believe my error is in the scaling of the magnitude, but I am unsure exactly where. This issue has stumped me for some time, and your input would be appreciated, thank you.
void iplimage_dft(IplImage* img)
{
    IplImage* img1, * img2;
    fftw_complex* in, * dft;
    fftw_plan plan_f;
    int i, j, k, w, h, N;

    /* Copy input image */
    img1 = cvCloneImage(img);
    w = img1->width;
    h = img1->height;
    N = w * h;

    /* Allocate input data for FFTW */
    in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
    dft = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);

    /* Create plan; fftw_plan_dft_2d takes the slowest-varying dimension
       first, so row-major image data needs (h, w), not (w, h) */
    plan_f = fftw_plan_dft_2d(h, w, in, dft, FFTW_FORWARD, FFTW_ESTIMATE);

    /* Populate input data in row-major order */
    for (i = 0, k = 0; i < h; i++)
    {
        for (j = 0; j < w; j++, k++)
        {
            in[k][0] = ((uchar*)(img1->imageData + i * img1->widthStep))[j];
            in[k][1] = 0.0;
        }
    }

    /* Forward DFT */
    fftw_execute(plan_f);

    /* Create output image */
    img2 = cvCreateImage(cvSize(w, h), 8, 1);

    /* Find the maximum value among the magnitudes
       (k must start at 0: starting at 1 skips the DC term and
       reads one element past the end of dft) */
    double max = 0;
    double mag = 0;
    for (i = 0, k = 0; i < h; i++) {
        for (j = 0; j < w; j++, k++) {
            mag = sqrt(pow(dft[k][0], 2) + pow(dft[k][1], 2));
            if (max < mag)
                max = mag;
        }
    }

    /* Convert DFT result to output image */
    for (i = 0, k = 0; i < h; i++)
    {
        for (j = 0; j < w; j++, k++)
        {
            double mag = sqrt(pow(dft[k][0], 2) + pow(dft[k][1], 2));
            mag = 255 * (mag / max);
            ((uchar*)(img2->imageData + i * img2->widthStep))[j] = mag;
        }
    }

    cvShowImage("iplimage_dft(): original", img1);
    cvShowImage("iplimage_dft(): result", img2);
    //cvSaveImage("iplimage_dft.png", img2, 0);
    cvWaitKey(0);

    /* Free memory */
    fftw_destroy_plan(plan_f);
    fftw_free(in);
    fftw_free(dft);
    cvReleaseImage(&img1);
    cvReleaseImage(&img2);
}

int main(int argc, char** argv)
{
    /* Use the first command-line argument if given, else a default file */
    const char* path = (argc > 1) ? argv[1] : "image1.jpg";
    IplImage *img3 = cvLoadImage(path, CV_LOAD_IMAGE_GRAYSCALE);
    iplimage_dft(img3);
    return 0;
}

The spectra of many images have characteristics like this: several relatively high peaks with the rest of the field quite small in magnitude. It looks like you're normalizing right; it's just that the details are lost because the magnitude of much of the spectrum is very small. I've often found it more useful to use log(mag(spectrum)) (or even log(log(mag(spectrum))) in some cases) to generate an image if you want to inspect details.
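For example, a sketch of that idea reusing dft, img2, w, h, N and the loop variables from the question's code (and using log(1 + mag) so the log of zero is never taken), the output loops might become:

/* Sketch: log-scale the magnitudes before normalizing so the small
   spectral values are not crushed to black by the few high peaks. */
double maxlog = 0.0;
for (k = 0; k < N; k++) {
    double m = log(1.0 + sqrt(dft[k][0]*dft[k][0] + dft[k][1]*dft[k][1]));
    if (m > maxlog)
        maxlog = m;
}
for (i = 0, k = 0; i < h; i++) {
    for (j = 0; j < w; j++, k++) {
        double m = log(1.0 + sqrt(dft[k][0]*dft[k][0] + dft[k][1]*dft[k][1]));
        ((uchar*)(img2->imageData + i * img2->widthStep))[j] = (uchar)(255.0 * m / maxlog);
    }
}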

Related

How to integrate my calculation C code with OpenMP

I am using a dual-channel DAQ card in data stream mode. I wrote some code for analysis/calculation and added it to the main code for operation. However, the FIFO overflow warning always occurs once the total data reaches around 6000 MSamples (the DAQ's on-board memory is 8 GB). I am well aware that a complicated calculation might slow down the system and cause the overflow, but all of the work I wrote is necessary to my experiment and cannot be removed (unless there is more efficient code that gets me the same result). I have heard that OpenMP might be a solution to boost the speed, but I am just a beginner in C; how could I apply it to my calculation code?
My computer has 64 GB of RAM and an Intel Core i7 processor. I always turn off other unnecessary software when running the data stream code. The code has been optimized as much as I can, e.g. simplifying hilbert() and using memcpy to pick out a specific range of data points.
This is how I process the data:
1. Install the FFTW source code for the Hilbert transform.
2. Loop to de-interleave the pi16Buffer data into ch2Buffer.
3. memcpy the range of data points that I am interested in into another array called ch2newBuffer.
4. Do the hilbert() on ch2newBuffer and calculate its absolute value.
5. Find the max value of ch1 and of abs(hilbert(ch2newBuffer)).
6. Calculate max(abs(hilbert(ch2))) / max(ch1).
Here is the part of my DAQ code in charge of the calculation:
void hilbert(const int16* in, fftw_complex* out, fftw_plan plan_forward, fftw_plan plan_backward)
{
    // copy the data to the complex array
    for (int i = 0; i < N; ++i) {
        out[i][REAL] = in[i];
        out[i][IMAG] = 0;
    }
    // execute the DFT plan (created once by the caller to avoid
    // re-planning on every call)
    //fftw_plan plan = fftw_plan_dft_1d(N, out, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan_forward);
    // destroying the plan here would leak-proof a per-call plan
    //fftw_destroy_plan(plan_forward);
    int hN = N >> 1;  // half of the length (N/2)
    int numRem = hN;  // the number of remaining elements
    // multiply the appropriate values by 2
    // (those that should be multiplied by 1 are left intact because they wouldn't change)
    for (int i = 1; i < hN; ++i) {
        out[i][REAL] *= 2;
        out[i][IMAG] *= 2;
    }
    // if the length is even, the number of remaining elements decreases by 1
    if (N % 2 == 0)
        numRem--;
    else if (N > 1) {
        out[hN][REAL] *= 2;
        out[hN][IMAG] *= 2;
    }
    // set the remaining values to 0
    // (multiplying by 0 gives 0, so we don't care about the multiplicands)
    memset(&out[hN + 1][REAL], 0, numRem * sizeof(fftw_complex));
    // execute the IDFT plan
    //plan = fftw_plan_dft_1d(N, out, out, FFTW_BACKWARD, FFTW_ESTIMATE);
    fftw_execute(plan_backward);
    //fftw_destroy_plan(plan_backward);
    //fftw_cleanup();
    // scaling of the IDFT output by 1/N is deferred to the caller
    //for (int i = 0; i < N; ++i) {
    //    out[i][REAL] /= N;
    //    out[i][IMAG] /= N;
    //}
}
float SumBufferData(void* pBuffer, uInt32 u32Size, uInt32 u32SampleBits)
{
    // In this routine we sum up all the samples in the buffer. This function
    // should be replaced with the user's analysis function.
    if ( 8 == u32SampleBits )
    {
        pu8Buffer = (uInt8 *)pBuffer;
        for (i = 0; i < u32Size; i++)
        {
            i64Sum += pu8Buffer[i];
        }
    }
    else
    {
        pi16Buffer = (int16 *)pBuffer;
        fftw_complex hilbertedch2[N];
        fftw_plan plan_forward = fftw_plan_dft_1d(N, hilbertedch2, hilbertedch2, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_plan plan_backward = fftw_plan_dft_1d(N, hilbertedch2, hilbertedch2, FFTW_BACKWARD, FFTW_ESTIMATE);
        ch2Buffer = (int16*)calloc(u32Size / 2, sizeof *ch2Buffer);
        ch2newBuffer = (int16*)calloc(u32Size / 2, sizeof *ch2newBuffer);
        // De-interleave the data from pi16Buffer
        for (i = 0; i < u32Size / 2; i++)
        {
            ch2Buffer[i] = pi16Buffer[i * 2 + 1];
        }
        // Pick out the range of data points that we are interested in
        memcpy(ch2newBuffer, &ch2Buffer[6944], 1024 * sizeof(ch2Buffer[0]));
        // Do the Hilbert transform on these data points
        hilbert(ch2newBuffer, hilbertedch2, plan_forward, plan_backward);
        fftw_destroy_plan(plan_forward);
        fftw_destroy_plan(plan_backward);
        // Find the max value in each segment of ch1 and ch2
        for (i = 128; i < 200; i++)
        {
            if (pi16Buffer[i * 2] > max1)
                max1 = pi16Buffer[i * 2];
        }
        for (i = 0; i < 1024; i++)
        {
            if (fabs(hilbertedch2[i][IMAG]) > max2)
                max2 = fabs(hilbertedch2[i][IMAG]);
        }
        Corrected = max2 / max1 / N; // Calculate the signal correction
                                     // (the 1/N compensates for FFTW's unscaled IDFT)
    }
    free(ch2Buffer);
    free(ch2newBuffer);
    return Corrected;
}
Loops are typically a good starting point for parallelism, for instance:

#pragma omp parallel for
for (int i = 0; i < N; ++i) {
    out[i][REAL] = in[i];
    out[i][IMAG] = 0;
}

or

#pragma omp parallel for reduction(max:max2)
for (i = 0; i < 1024; i++)
{
    double tmp = fabs(hilbertedch2[i][IMAG]);
    max2 = (max2 > tmp) ? max2 : tmp;
}

That being said, you need to profile your code to find out where the execution takes the most time, and try to parallelize that if possible. However, looking at what you have posted, I do not see a lot of parallelism opportunity there.
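If you are new to OpenMP, a minimal self-contained program (my own sketch, not part of your DAQ code) shows the pattern. Note that the max reduction above requires OpenMP 3.1 or later, and OpenMP must be enabled at compile time (e.g. -fopenmp with GCC/Clang, /openmp with MSVC):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; OpenMP combines
       the partial sums when the loop finishes. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000000; i++)
        sum += 0.5 * i;

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}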

Find k out of n subset with maximal area

I have n points and have to find the maximum united area covered by k of them (k <= n). That is, the sum of those points' areas minus the common area between them.
Suppose we have n = 4, k = 2, and points B and D are selected. The area of a point is the axis-aligned rectangle between it and the origin, and the final area is the sum of the B area and the D area, counting the area of their intersection only once. No point is dominated.
I have implemented a bottom-up dynamic programming algorithm, but it has an error somewhere. Here is the code, which prints out the best result:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct point {
    double x, y;
} point;

struct point *point_ptr;
int n, k;
point points_array[1201];
point result_points[1201];

void qsort(void *base, size_t nitems, size_t size,
           int (*compar)(const void *, const void *));

int cmpfunc(const void *a, const void *b) {
    point *order_a = (point *)a;
    point *order_b = (point *)b;
    if (order_a->x > order_b->x) {
        return 1;
    }
    return -1;
}

double max(double a, double b) {
    if (a > b) {
        return a;
    }
    return b;
}

double getSingleArea(point p) {
    return p.x * p.y;
}

double getCommonAreaX(point biggest_x, point new_point) {
    double new_x;
    new_x = new_point.x - biggest_x.x;
    return new_x * new_point.y;
}

double algo() {
    double T[k][n], value;
    int i, j, d;
    for (i = 0; i < n; i++) {
        T[0][i] = getSingleArea(points_array[i]);
    }
    for (j = 0; j < k; j++) {
        T[j][0] = getSingleArea(points_array[0]);
    }
    for (i = 1; i < k; i++) {
        for (j = 1; j < n; j++) {
            for (d = 0; d < j; d++) {
                value = getCommonAreaX(points_array[j - 1], points_array[j]);
                T[i][j] = max(T[i - 1][j], value + T[i - 1][d]);
            }
        }
    }
    return T[k - 1][n - 1];
}

void read_input() {
    int i;
    fscanf(stdin, "%d %d\n", &n, &k);
    for (i = 0; i < n; i++) {
        fscanf(stdin, "%lf %lf\n", &points_array[i].x, &points_array[i].y);
    }
}

int main() {
    read_input();
    qsort(points_array, n, sizeof(point), cmpfunc);
    printf("%.12lf\n", algo());
    return 0;
}
with the input:
5 3
0.376508963445 0.437693410334
0.948798695015 0.352125307881
0.176318878234 0.493630156084
0.029394902328 0.951299438575
0.235041868262 0.438197791997
where the first number equals n, the second k, and the following lines give the x and y coordinates of each point respectively, the result should be 0.381410589193,
whereas mine is 0.366431740966. So am I missing a point?
This is a neat little problem, thanks for posting! In the remainder, I'm going to assume no point is dominated, that is, there are no points c such that there exists a point d with c.x < d.x and c.y < d.y. If there are, then it is never optimal to use c (why?), so we can safely ignore any dominated points. None of your example points are dominated.
Your problem exhibits optimal substructure: once we have decided which item is to be included in the first iteration, we have the same problem again with k - 1, and n - 1 (we remove the selected item from the set of allowed points). Of course the pay-off depends on the set we choose - we do not want to count areas twice.
I propose we pre-sort all points by their x-value, in increasing order. This ensures the value of a selection of points can be computed as piece-wise areas. I'll illustrate with an example: suppose we have three points, (x1, y1), ..., (x3, y3), with values (2, 3), (3, 1), (4, .5). Then the total area covered by these points is (4 - 3) * .5 + (3 - 2) * 1 + (2 - 0) * 3. I hope this makes sense if you sketch it: each point contributes a vertical strip running from its predecessor's x to its own x, at its own height y.
By our assumption that there are no dominated points, we will always have such a weakly decreasing figure. Thus, pre-sorting solves the entire problem of "counting areas twice"!
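To make the piece-wise computation concrete, here is a small helper of my own (a sketch; it assumes the point_t struct defined in the code further below, points already sorted by increasing x, and no dominated points):

/* Area covered by points sorted by increasing x; with no dominated
   points, y decreases, so each point adds one vertical strip. */
double coveredArea(point_t const *pts, size_t n)
{
    double area = 0.0;
    double prev_x = 0.0;
    for (size_t i = 0; i < n; ++i) {
        area += (pts[i].x - prev_x) * pts[i].y;
        prev_x = pts[i].x;
    }
    return area;
}

For the three example points this returns (2 - 0) * 3 + (3 - 2) * 1 + (4 - 3) * .5 = 7.5, the same sum as above.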
Let us turn this into a dynamic programming algorithm. Consider a set of n points, labelled {p_1, p_2, ..., p_n}. Let d[k][m] be the maximum area of a subset of size k + 1 where the (k + 1)-th point in the subset is point p_m. Clearly, m cannot be chosen as the (k + 1)-th point if m < k + 1, since then we would have a subset of size less than k + 1, which is never optimal. We have the following recursion,
d[k][m] = max {d[k - 1][l] + (p_m.x - p_l.x) * p_m.y, for all k <= l < m}.
The initial cases where k = 1 are the rectangular areas of each point. The initial cases together with the updating equation suffice to solve the problem. I estimate the following code to run in O(n^2 * k) time. The squared factor in n can probably be lowered as well, as we have an ordered collection and might be able to apply a binary search to find the best subset in log n time, reducing n^2 to n log n. I leave this to you.
In the code, I have re-used my notation above where possible. It is a bit terse, but hopefully clear with the explanation given.
#include <stdio.h>

typedef struct point
{
    double x;
    double y;
} point_t;

double maxAreaSubset(point_t const *points, size_t numPoints, size_t subsetSize)
{
    // This should probably be heap allocated in your program.
    double d[subsetSize][numPoints];
    for (size_t m = 0; m != numPoints; ++m)
        d[0][m] = points[m].x * points[m].y;
    for (size_t k = 1; k != subsetSize; ++k)
        for (size_t m = k; m != numPoints; ++m)
        {
            d[k][m] = 0.; // initialise before taking the maximum
            for (size_t l = k - 1; l != m; ++l)
            {
                point_t const curr = points[m];
                point_t const prev = points[l];
                double const area = d[k - 1][l] + (curr.x - prev.x) * curr.y;
                if (area > d[k][m]) // is a better subset
                    d[k][m] = area;
            }
        }
    // The maximum area subset is now one of the subsets on the last row
    // (m = subsetSize - 1 is the first valid column on that row).
    double result = 0.;
    for (size_t m = subsetSize - 1; m != numPoints; ++m)
        if (d[subsetSize - 1][m] > result)
            result = d[subsetSize - 1][m];
    return result;
}

int main()
{
    // I assume these are entered in sorted order, as explained in the answer.
    point_t const points[5] = {
        {0.029394902328, 0.951299438575},
        {0.176318878234, 0.493630156084},
        {0.235041868262, 0.438197791997},
        {0.376508963445, 0.437693410334},
        {0.948798695015, 0.352125307881},
    };

    printf("%f\n", maxAreaSubset(points, 5, 3));
}
Using the example data you've provided, I find an optimal result of 0.381411, as desired.
From what I can tell, you and I both use the same method to calculate the area, as well as the overall concept, but my code seems to be returning a correct result. Perhaps reviewing it can help you find a discrepancy.
JavaScript code:
function f(pts, k){
  // Sort the points by x
  pts.sort(([a1, b1], [a2, b2]) => a1 - a2);
  const n = pts.length;
  let best = 0;
  // m[k][j] represents the optimal
  // value if the jth point is chosen
  // as rightmost for k points
  let m = new Array(k + 1);
  // Initialise m
  for (let i=1; i<=k; i++)
    m[i] = new Array(n);
  for (let i=0; i<n; i++)
    m[1][i] = pts[i][0] * pts[i][1];
  // Build the table
  for (let i=2; i<=k; i++){
    for (let j=i-1; j<n; j++){
      m[i][j] = 0;
      for (let jj=j-1; jj>=i-2; jj--){
        const area = (pts[j][0] - pts[jj][0]) * pts[j][1];
        m[i][j] = Math.max(m[i][j], area + m[i-1][jj]);
      }
      best = Math.max(best, m[i][j]);
    }
  }
  return best;
}

var pts = [
  [0.376508963445, 0.437693410334],
  [0.948798695015, 0.352125307881],
  [0.176318878234, 0.493630156084],
  [0.029394902328, 0.951299438575],
  [0.235041868262, 0.438197791997]
];
var k = 3;

console.log(f(pts, k));

Why is FFT of (A+B) different from FFT(A) + FFT(B)?

I have been fighting with a very weird bug for almost a month. Asking you guys is my last hope. I wrote a program in C that integrates the 2d Cahn–Hilliard equation using the Implicit Euler (IE) scheme in Fourier (or reciprocal) space:

h_q(t_{n+1}) = ( h_q(t_n) + dt * N[h_q(t_n)] ) / ( 1 - dt * L_q )

where the q subscripts mean that we are in Fourier space: h_q(t_{n+1}) and h_q(t_n) are the FTs of h(x,y) at times t_{n+1} and t_n, N[h_q] is the nonlinear operator applied to h_q in Fourier space, and L_q is the linear one, again in Fourier space. I don't want to go too much into the details of the numerical method I am using, since I am sure that the problem is not coming from there (I tried using other schemes).
My code is actually quite simple. Here is the beginning, where basically I declare variables, allocate memory and create the plans for the FFTW routines.
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <math.h>
#include <fftw3.h>

#define pi M_PI

int main(){
    // define lattice size and spacing
    int Nx = 150;    // n of points on x
    int Ny = 150;    // n of points on y
    double dx = 0.5; // bin size on x and y
    // define simulation time and time step
    long int Nt = 1000; // n of time steps
    double dt = 0.5;    // time step size
    // number of frames to plot (at denominator)
    long int nframes = Nt/100;
    // define the noise
    double rn, drift = 0.05; // punctual drift of h(x)
    srand(666);              // seed the RNG
    // other variables
    int i, j, nt; // variables for space and time loops
    // declare FFTW3 routines
    fftw_plan FT_h_hft; // routine to perform fourier transform
    fftw_plan FT_Nonl_Nonlft;
    fftw_plan IFT_hft_h; // routine to perform inverse fourier transform
    // declare and allocate memory for real variables
    double *Linft = fftw_alloc_real(Nx*Ny);
    double *Q2 = fftw_alloc_real(Nx*Ny);
    double *qx = fftw_alloc_real(Nx);
    double *qy = fftw_alloc_real(Ny);
    // declare and allocate memory for complex variables
    fftw_complex *dh = fftw_alloc_complex(Nx*Ny);
    fftw_complex *dhft = fftw_alloc_complex(Nx*Ny);
    fftw_complex *Nonl = fftw_alloc_complex(Nx*Ny);
    fftw_complex *Nonlft = fftw_alloc_complex(Nx*Ny);
    // create the FFTW plans
    FT_h_hft = fftw_plan_dft_2d ( Nx, Ny, dh, dhft, FFTW_FORWARD, FFTW_ESTIMATE );
    FT_Nonl_Nonlft = fftw_plan_dft_2d ( Nx, Ny, Nonl, Nonlft, FFTW_FORWARD, FFTW_ESTIMATE );
    IFT_hft_h = fftw_plan_dft_2d ( Nx, Ny, dhft, dh, FFTW_BACKWARD, FFTW_ESTIMATE );
    // open file to store the data
    char acstr[160];
    FILE *fp;
    sprintf(acstr, "CH2d_IE_dt%.2f_dx%.3f_Nt%ld_Nx%d_Ny%d_#f%.ld.dat",dt,dx,Nt,Nx,Ny,Nt/nframes);
After this preamble, I initialise my function h(x,y) with uniform random noise, and I also take its FT. I set the imaginary part of h(x,y), which is dh[i*Ny+j][1] in the code, to 0, since it is a real function. Then I calculate the wavevectors qx and qy, and with them I compute the linear operator of my equation in Fourier space, which is Linft in the code. I consider only the negative fourth derivative of h as the linear term, so that the FT of the linear term is simply -q^4... but again, I don't want to go into the details of my integration method. The question is not about it.
    // generate h(x,y) at initial time
    for ( i = 0; i < Nx; i++ ) {
        for ( j = 0; j < Ny; j++ ) {
            rn = (double) rand()/RAND_MAX;      // extract a random number between 0 and 1
            dh[i*Ny+j][0] = drift-2.0*drift*rn; // shift of +-drift
            dh[i*Ny+j][1] = 0.0;
        }
    }
    // execute plan for the first time
    fftw_execute (FT_h_hft);
    // calculate wavenumbers
    for (i = 0; i < Nx; i++) { qx[i] = 2.0*i*pi/(Nx*dx); }
    for (i = 0; i < Ny; i++) { qy[i] = 2.0*i*pi/(Ny*dx); }
    for (i = 1; i < Nx/2; i++) { qx[Nx-i] = -qx[i]; }
    for (i = 1; i < Ny/2; i++) { qy[Ny-i] = -qy[i]; }
    // calculate the FT of the linear operator
    for ( i = 0; i < Nx; i++ ) {
        for ( j = 0; j < Ny; j++ ) {
            Q2[i*Ny+j] = qx[i]*qx[i] + qy[j]*qy[j];
            Linft[i*Ny+j] = -Q2[i*Ny+j]*Q2[i*Ny+j];
        }
    }
Then, finally, comes the time loop. Essentially, what I do is the following:
1. Every once in a while, I save the data to a file and print some information on the terminal. In particular, I print the highest value of the FT of the nonlinear term. I also check whether h(x,y) is diverging to infinity (it shouldn't happen!).
2. Calculate h^3 in direct space (that is simply dh[i*Ny+j][0]*dh[i*Ny+j][0]*dh[i*Ny+j][0]). Again, the imaginary part is set to 0.
3. Take the FT of h^3.
4. Obtain the complete nonlinear term in reciprocal space (that is, N[h_q] in the IE algorithm written above) by computing -q^2*(FT[h^3] - FT[h]). In the code, I am referring to the line Nonlft[i*Ny+j][0] = -Q2[i*Ny+j]*(Nonlft[i*Ny+j][0] -dhft[i*Ny+j][0]) and the one below it, for the imaginary part. I do this because the nonlinear term is the Laplacian of (h^3 - h), and a Laplacian in Fourier space is just multiplication by -q^2, so N[h_q] = -q^2*(FT[h^3] - FT[h]).
5. Advance in time using the IE method, transform back to direct space, and then normalise.
Here is the code:
    for(nt = 0; nt < Nt; nt++) {

        if((nt % nframes)== 0) {
            printf("%.0f %%\n",((double)nt/(double)Nt)*100);
            printf("Nonlft %.15f \n",Nonlft[(Nx/2)*(Ny/2)][0]);
            // write data to file
            fp = fopen(acstr,"a");
            for ( i = 0; i < Nx; i++ ) {
                for ( j = 0; j < Ny; j++ ) {
                    fprintf(fp, "%4d %4d %.6f\n", i, j, dh[i*Ny+j][0]);
                }
            }
            fclose(fp);
        }

        // check if h is going to infinity
        if (isnan(dh[1][0])!=0) {
            printf("crashed!\n");
            return 0;
        }

        // calculate nonlinear term h^3 in direct space
        for ( i = 0; i < Nx; i++ ) {
            for ( j = 0; j < Ny; j++ ) {
                Nonl[i*Ny+j][0] = dh[i*Ny+j][0]*dh[i*Ny+j][0]*dh[i*Ny+j][0];
                Nonl[i*Ny+j][1] = 0.0;
            }
        }

        // Fourier transform of nonlinear term
        fftw_execute (FT_Nonl_Nonlft);

        // second derivative in Fourier space is just multiplication by -q^2
        for ( i = 0; i < Nx; i++ ) {
            for ( j = 0; j < Ny; j++ ) {
                Nonlft[i*Ny+j][0] = -Q2[i*Ny+j]*(Nonlft[i*Ny+j][0] -dhft[i*Ny+j][0]);
                Nonlft[i*Ny+j][1] = -Q2[i*Ny+j]*(Nonlft[i*Ny+j][1] -dhft[i*Ny+j][1]);
            }
        }

        // Implicit Euler scheme in Fourier space
        for ( i = 0; i < Nx; i++ ) {
            for ( j = 0; j < Ny; j++ ) {
                dhft[i*Ny+j][0] = (dhft[i*Ny+j][0] + dt*Nonlft[i*Ny+j][0])/(1.0 - dt*Linft[i*Ny+j]);
                dhft[i*Ny+j][1] = (dhft[i*Ny+j][1] + dt*Nonlft[i*Ny+j][1])/(1.0 - dt*Linft[i*Ny+j]);
            }
        }

        // transform h back in direct space
        fftw_execute (IFT_hft_h);

        // normalize
        for ( i = 0; i < Nx; i++ ) {
            for ( j = 0; j < Ny; j++ ) {
                dh[i*Ny+j][0] = dh[i*Ny+j][0] / (double) (Nx*Ny);
                dh[i*Ny+j][1] = dh[i*Ny+j][1] / (double) (Nx*Ny);
            }
        }
    }
The last part of the code: free the memory and destroy the FFTW plans.
    // terminate the FFTW3 plans and free memory
    fftw_destroy_plan (FT_h_hft);
    fftw_destroy_plan (FT_Nonl_Nonlft);
    fftw_destroy_plan (IFT_hft_h);
    fftw_cleanup();
    fftw_free(dh);
    fftw_free(Nonl);
    fftw_free(qx);
    fftw_free(qy);
    fftw_free(Q2);
    fftw_free(Linft);
    fftw_free(dhft);
    fftw_free(Nonlft);
    return 0;
}
If I run this code, I obtain the following output:
0 %
Nonlft 0.0000000000000000000
1 %
Nonlft -0.0000000000001353512
2 %
Nonlft -0.0000000000000115539
3 %
Nonlft 0.0000000001376379599
...
69 %
Nonlft -12.1987455309071730625
70 %
Nonlft -70.1631962517720353389
71 %
Nonlft -252.4941743351609204637
72 %
Nonlft 347.5067875825179726235
73 %
Nonlft 109.3351142318568633982
74 %
Nonlft 39933.1054502610786585137
crashed!
The code crashes before reaching the end and we can see that the Nonlinear term is diverging.
Now, the thing that doesn't make sense to me is that if I change the lines in which I calculate the FT of the Nonlinear term in the following way:
// calculate nonlinear term h^3 - h in direct space
for ( i = 0; i < Nx; i++ ) {
    for ( j = 0; j < Ny; j++ ) {
        Nonl[i*Ny+j][0] = dh[i*Ny+j][0]*dh[i*Ny+j][0]*dh[i*Ny+j][0] -dh[i*Ny+j][0];
        Nonl[i*Ny+j][1] = 0.0;
    }
}
// Fourier transform of nonlinear term
fftw_execute (FT_Nonl_Nonlft);
// second derivative in Fourier space is just multiplication by -q^2
for ( i = 0; i < Nx; i++ ) {
    for ( j = 0; j < Ny; j++ ) {
        Nonlft[i*Ny+j][0] = -Q2[i*Ny+j]* Nonlft[i*Ny+j][0];
        Nonlft[i*Ny+j][1] = -Q2[i*Ny+j]* Nonlft[i*Ny+j][1];
    }
}
Which means that I am using this definition:

N[h_q] = -q^2 * FT[h^3 - h]

instead of this one:

N[h_q] = -q^2 * (FT[h^3] - FT[h])

Then the code is perfectly stable and no divergence happens! Even for billions of time steps! Why does this happen, since the two ways of calculating Nonlft should be equivalent?
Thank you very much to anyone who will take the time to read all of this and give me some help!
EDIT: To make things even more weird, I should point out that this bug does NOT happen for the same system in 1D. In 1D both methods of calculating Nonlft are stable.
EDIT: I add a short animation of what happens to the function h(x,y) just before crashing. Also: I quickly re-wrote the code in MATLAB, which uses Fast Fourier Transform functions based on the FFTW library, and the bug is NOT happening... the mystery deepens.
I solved it!!
The problem was the calculation of the Nonl term:
Nonl[i*Ny+j][0] = dh[i*Ny+j][0]*dh[i*Ny+j][0]*dh[i*Ny+j][0];
Nonl[i*Ny+j][1] = 0.0;
That needs to be changed to:
Nonl[i*Ny+j][0] = dh[i*Ny+j][0]*dh[i*Ny+j][0]*dh[i*Ny+j][0] -3.0*dh[i*Ny+j][0]*dh[i*Ny+j][1]*dh[i*Ny+j][1];
Nonl[i*Ny+j][1] = -dh[i*Ny+j][1]*dh[i*Ny+j][1]*dh[i*Ny+j][1] +3.0*dh[i*Ny+j][0]*dh[i*Ny+j][0]*dh[i*Ny+j][1];
In other words: I need to consider dh as a complex function (even though it should be real).
Basically, because of stupid rounding errors, the IFT of the FT of a real function (in my case dh), is NOT purely real, but will have a very small imaginary part. By setting Nonl[i*Ny+j][1] = 0.0 I was completely ignoring this imaginary part.
The issue, then, was that I was recursively summing FT(dh), i.e. dhft, with an object obtained via IFT(FT(dh)), i.e. Nonlft, while ignoring the residual imaginary parts!
Nonlft[i*Ny+j][0] = -Q2[i*Ny+j]*(Nonlft[i*Ny+j][0] -dhft[i*Ny+j][0]);
Nonlft[i*Ny+j][1] = -Q2[i*Ny+j]*(Nonlft[i*Ny+j][1] -dhft[i*Ny+j][1]);
Obviously, calculating Nonlft as dh^3 -dh and then doing
Nonlft[i*Ny+j][0] = -Q2[i*Ny+j]* Nonlft[i*Ny+j][0];
Nonlft[i*Ny+j][1] = -Q2[i*Ny+j]* Nonlft[i*Ny+j][1];
avoided the problem of doing this "mixed" sum.
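If you want to see the effect on your own data, a quick diagnostic (a sketch of mine; it assumes it runs right after the normalisation loop, where dh should be purely real) is to scan for the largest residual imaginary part:

// dh should be purely real; after IFT(FT(dh)) a tiny imaginary
// residue survives rounding. Print its largest magnitude.
double max_im = 0.0;
for (i = 0; i < Nx*Ny; i++) {
    if (fabs(dh[i][1]) > max_im)
        max_im = fabs(dh[i][1]);
}
printf("largest residual imaginary part: %.3e\n", max_im);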
Phew... such a relief! I wish I could assign the bounty to myself! :P
EDIT: I'd like to add that, before using the fftw_plan_dft_2d functions, I was using fftw_plan_dft_r2c_2d and fftw_plan_dft_c2r_2d (real-to-complex and complex-to-real), and I was seeing the same bug. However, I suppose that I couldn't have solved it if I didn't switch to fftw_plan_dft_2d, since the c2r function automatically "chops off" the residual imaginary part coming from the IFT. If this is the case and I'm not missing something, I think that this should be written somewhere on the FFTW website, to prevent users from running into problems like this. Something like "r2c and c2r transforms are not good to implement pseudospectral methods".
EDIT: I found another SO question that addresses exactly the same problem.

How to implement summation using parallel reduction in OpenCL?

I'm trying to implement a kernel which does parallel reduction. The code below works on occasion; I have not been able to pin down why it goes wrong on the occasions it does.
__kernel void summation(__global float* input, __global float* partialSum, __local float* localSum){
    int local_id = get_local_id(0);
    int workgroup_size = get_local_size(0);
    localSum[local_id] = input[get_global_id(0)];
    for(int step = workgroup_size/2; step > 0; step /= 2){
        barrier(CLK_LOCAL_MEM_FENCE);
        if(local_id < step){
            localSum[local_id] += localSum[local_id + step];
        }
    }
    if(local_id == 0){
        partialSum[get_group_id(0)] = localSum[0];
    }
}
Essentially I'm summing the values per work group and storing each work group's total into partialSum, the final summation is done on the host. Below is the code which sets up the values for the summation.
size_t global[1];
size_t local[1];
const int DATA_SIZE = 15000;
float *input = NULL;
float *partialSum = NULL;
int count = DATA_SIZE;

local[0] = 2;
global[0] = count;
input = (float *)malloc(count * sizeof(float));
partialSum = (float *)malloc(global[0] / local[0] * sizeof(float));

int i;
for (i = 0; i < count; i++){
    input[i] = (float)i + 1;
}
I'm thinking it has something to do with the size of the input not being a power of two? I noticed it begins to go wrong for sizes around 8000 and beyond. Any assistance is welcome. Thanks.
I'm thinking it has something to do with the size of the input not being a power of two?
Yes. Consider what happens when you try to reduce, say, 9 elements. Suppose you launch 1 work-group of 9 work-items:
for (int step = workgroup_size / 2; step > 0; step /= 2){
    // At iteration 0: step = 9 / 2 = 4
    barrier(CLK_LOCAL_MEM_FENCE);
    if (local_id < step) {
        // Branch taken by threads 0 to 3
        // Only 8 numbers added up together!
        localSum[local_id] += localSum[local_id + step];
    }
}
You're never summing the 9th element, hence the reduction is incorrect. An easy solution is to pad the input data with enough zeroes to make the work-group size the immediate next power-of-two.
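As a host-side sketch of that padding (the helper names here are mine, not from the question's code): round the element count up to the next power of two and zero-fill the tail. Zeroes do not change the sum, so the tree reduction stays correct for any original element count.

#include <stdlib.h>
#include <string.h>

/* Round n up to the next power of two. */
static size_t next_pow2(size_t n)
{
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Copy 'count' floats into a zero-padded buffer whose length is a
   power of two; the caller frees the result. */
static float *pad_for_reduction(const float *input, size_t count, size_t *padded_count)
{
    *padded_count = next_pow2(count);
    float *padded = (float *)calloc(*padded_count, sizeof(float)); /* zero-filled */
    if (padded != NULL)
        memcpy(padded, input, count * sizeof(float));
    return padded;
}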

Why isn't this audio file being read with sf_read_double?

I am trying to separate each subdivision of a drum sequence into a separate array inside a 2d array (rows for which subdivision, columns for the data in each subdivision).
I determine how many samples per subdivision earlier in the code from user specifications of tempo and desired subdivision. I feel I have a somewhat reasonable method for figuring out the size of the input file in samples (first section shown).
My question is: as is, the sf_read_double while loop will not run. It is only when I multiply "buflen" by 2 (perhaps the number of channels) that the loop runs. And when it does run,
the loop goes past the total number of samples calculated and results in a seg fault. What am I doing wrong in this code?
double framesArray[sfinfo.frames];
int numframes = (sizeof(framesArray)/sizeof(double));
int totalSamps = numframes * sfinfo.channels;
int totalSubdivisions = totalSamps / sampsPerSubdivision;
int buflen = sampsPerSubdivision;
int i;

double** choppeddata = (double**) malloc(totalSubdivisions * sizeof(double*));
for (i = 0; i < totalSubdivisions; i++)
    choppeddata[i] = (double*) malloc(buflen * sizeof(double));
double* buffereddata = (double*) malloc(buflen * sizeof(double));
double* outdata = (double*) malloc(totalSamps * sizeof(double));

int j = 0, k = 0, sampnum = 0;
while ((readcount = sf_read_double (infile, buffereddata, buflen)))
{
    for (k = 0; k < buflen; k++)
    {
        choppeddata[j][k] = buffereddata[k];
        sampnum++;
    }
    j++;
}
Shouldn't sampsPerSubdivision be cast? I suppose it has been declared as an int, in which case you would need something like:

int totalSubdivisions = (int)(totalSamps / (double)sampsPerSubdivision);

So totalSubdivisions could be wrong... Anyway, this wouldn't explain why your buflen doesn't fit the data to be read. My guess is that sampsPerSubdivision is not correct in the first place.
I couldn't say more. Hope this can help...
