I need to simulate a Gaussian Channel in C.
How do I do that?
Where can I get code snippets for this?
IIRC, approximating a Gaussian distribution is easy, but slow if you want a good approximation. Just add several independent random numbers to get each output. The more "inputs" per output, the better the approximation.
This definitely works if the "inputs" have a uniform distribution. I seem to remember reading that it works for almost any input distribution, but you may need far more inputs per output to get a good approximation.
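For instance, a minimal C sketch of the sum-of-uniforms idea, using rand() as the uniform source (the function name and the choice of 12 inputs are just for illustration; a real channel simulation would use a better generator and scale the output by the desired noise standard deviation):

```c
#include <stdlib.h>

/* Sum NUM_INPUTS uniform variates to approximate a Gaussian sample.
   With 12 inputs, the sum of U(0,1) values has mean 6 and variance 1,
   so subtracting 6 gives an approximately standard-normal output. */
#define NUM_INPUTS 12

double gaussian_approx(void)
{
    double sum = 0.0;
    for (int i = 0; i < NUM_INPUTS; i++)
        sum += (double)rand() / ((double)RAND_MAX + 1.0);
    return sum - NUM_INPUTS / 2.0;
}
```

Simulating the Gaussian channel then amounts to adding sigma * gaussian_approx() to each transmitted sample, with sigma setting the noise power.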
This is Gaussian white noise: the outputs are independent (all frequencies have the same amplitude). There's also a similar pink noise algorithm. It still has a Gaussian distribution, but higher frequencies have lower amplitudes (the outputs aren't independent). Each output is still a sum of a fixed set of independent "input" random numbers, but only the first is replaced for every output. The second is replaced for every other output, the third for every fourth output, the fourth for every eighth output, etc. For most outputs, precisely two input random numbers are replaced; only every 2^n outputs do you replace just the first.
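A rough C sketch of that replacement schedule (the names and the rand()-based white() helper are placeholders; the row bookkeeping follows the description above):

```c
#include <stdlib.h>

#define NUM_ROWS 8  /* independent "input" values summed into each output */

/* One white input in [-1, 1); rand() is just a stand-in source here. */
static double white(void)
{
    return 2.0 * rand() / ((double)RAND_MAX + 1.0) - 1.0;
}

/* Voss-style pink noise: row 0 is replaced on every output, row k (k >= 1)
   on every 2^k-th output (staggered, so most outputs replace exactly two
   rows), and each output is the sum of all rows. Rows start at 0 and warm
   up over the first few samples. */
double pink_sample(void)
{
    static double rows[NUM_ROWS];
    static unsigned long counter = 0;

    rows[0] = white();                 /* replaced on every output */

    counter++;
    unsigned long n = counter;
    int k = 1;
    while ((n & 1UL) == 0) {           /* count trailing zeros of counter */
        n >>= 1;
        k++;
    }
    if (k < NUM_ROWS)
        rows[k] = white();             /* row k changes every 2^k outputs */

    double sum = 0.0;
    for (int i = 0; i < NUM_ROWS; i++)
        sum += rows[i];
    return sum;
}
```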
I'm trying to make a simple music visualization application. I understand I need to take my audio samples and perform a Fast Fourier Transform. I'm trying to find out how to determine the scale of the magnitude, so I can normalize it to be between 0.0 and 1.0 for plotting purposes.
My application is set up to allow reading audio in 16-bit and 24-bit format, so I scale all incoming audio samples to [-1.0, 1.0), then I use a real-to-complex 1-dimensional transform for N samples.
From there, I think I need to take the absolute value of each bin (using the cabs function) between 0 and N/2, but I'm not sure what these numbers really represent or what I'm supposed to do with them.
I've figured out how to calculate the frequency of each bin. I'm not interested in finding the actual magnitude or amplitude in decibels; I really just want to get a value between 0.0 and 1.0.
Most explanations for fftw involve a lot of math that is honestly way above my head.
[Per comments, OP seeks to know the maximum possible magnitude of any output bin given inputs in [−1, 1]. This answer gives a way to determine that.]
DFT routines vary in how they handle scaling. Some normalize their output to keep the scale the same, and some let the arithmetic operations grow the scale for better performance or implementation convenience. So the possible scale of the output is not determined solely by mathematics; it depends on the routine used. The documentation of the routine ought to state what scaling it uses.
In the absence of clear documentation, you can determine the scaling empirically: write a sine wave with amplitude one to the input (at a frequency matching one of the output bins), perform the transform, and examine the output to see which bin has the largest magnitude (it should be the one whose frequency you used, of course). For an unnormalized transform such as FFTW's, that peak comes out near N/2, where N is the number of inputs, and the largest magnitude any bin can reach with inputs bounded by 1 is N; for a transform normalized by 1/N, the corresponding figures are 1/2 and 1.
(When plotting, be sure to allow a little leeway for floating-point rounding effects—the actual numbers could be slightly greater than the maximum, so avoid overflowing or clipping where you do not want that.)
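Since the question mentions fftw, here is a rough sketch of that probe using FFTW's real-to-complex interface (N and the excited bin are arbitrary choices; link with -lfftw3 -lm):

```c
#include <complex.h>   /* include before fftw3.h so fftw_complex is the C99 complex type */
#include <fftw3.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int N = 1024;   /* transform length (arbitrary for this test) */
    const int k = 5;      /* output bin to excite (arbitrary) */
    const double pi = acos(-1.0);

    double *in = fftw_malloc(sizeof(double) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * (N / 2 + 1));
    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

    for (int n = 0; n < N; n++)      /* amplitude-1 sine landing exactly on bin k */
        in[n] = sin(2.0 * pi * k * n / (double)N);

    fftw_execute(plan);

    double max_mag = 0.0;
    int max_bin = 0;
    for (int i = 0; i <= N / 2; i++) {
        double mag = cabs(out[i]);
        if (mag > max_mag) { max_mag = mag; max_bin = i; }
    }

    /* FFTW is unnormalized, so this should print a value near N/2 at bin k. */
    printf("peak magnitude %g in bin %d\n", max_mag, max_bin);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```

Whatever peak this reports (or the bound discussed above) is the divisor to use to bring your plotted magnitudes into [0.0, 1.0].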
I am writing a program in C that requires generating a normal distribution of positive integers with mean less than 1.
You can rescale data that is already normally distributed: for example, take data for the heights of human beings (around 180 centimeters on average) and scale every number by a factor so that the mean becomes less than 1, e.g. multiply every height by 1/180.
I used a Poisson random number generator function in C, which takes the mean as input. It uses a combination of rand() calls and exponentiation to get the distribution, which is the usual way to do this, as in Calculation of Poisson distribution in C.
Since I generate 2^14 random numbers, by the Central Limit Theorem their distribution should tend towards a normal distribution with the same mean and variance.
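For reference, the rand()-plus-exponentiation approach mentioned above presumably refers to something like Knuth's multiplication method; a minimal sketch under that assumption (the function name is illustrative):

```c
#include <math.h>
#include <stdlib.h>

/* Knuth's method: count how many uniform variates must be multiplied
   together before the running product drops below exp(-lambda). */
int poisson_sample(double lambda)
{
    double limit = exp(-lambda);
    double product = 1.0;
    int k = 0;

    do {
        k++;
        product *= (double)rand() / ((double)RAND_MAX + 1.0);
    } while (product > limit);

    return k - 1;
}
```

Note that with a mean below 1 most samples come out as 0 or 1; it is sums or averages of many such samples that the Central Limit Theorem pushes towards a normal shape.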
I want to make a frequency shift on a .wav file.
The problem I have is that the FFT uses complex numbers, while the .wav file has integer values. To make the frequency shift I have to perform a forward transform and then an inverse transform, but the inverse transform doesn't give me integer values (it gives me complex values), and I need integer values for the samples of the .wav file.
How do I interpret the values of the inverse transform?
I want to make a frequency shift on a .wav file.
So you've got an audio signal, which means a real-valued signal.
The spectrum of a real-valued signal is symmetric about f = 0, i.e. its Fourier transform has Hermitian symmetry.
If you now shift that input spectrum, the result loses that symmetry, i.e. the resulting signal is no longer real.
Notice how things are circular (through aliasing), so whatever you "shift out" of the Nyquist range reappears at the opposite end. In practice, this means that you can get unexpected high-frequency components!
The problem I have is that the FFT uses complex numbers, while the .wav file has integer values. To make the frequency shift I have to perform a forward transform and then an inverse transform, but the inverse transform doesn't give me integer values (it gives me complex values), and I need integer values for the samples of the .wav file.
Indeed! And that's because the result of your shift really isn't a real signal anymore.
What you can do, however, is:
shift (either in the time or the frequency domain; honestly, doing it in the time domain is easier: just multiply the n-th sample by exp(j 2π n f_shift / f_sample), as in the sketch after this list)
apply a complex bandpass filter that removes everything outside the frequency range [0; f_sample/2 - f_shift]. This gives you what is called an analytic signal (i.e. one with only positive frequencies), which still isn't real-valued, because its spectrum isn't symmetric.
Throwing away the imaginary part now doesn't alter the information of your signal - it just halves the energy and gives you a symmetric spectrum, and something you can write to a .wav file.
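For the first step, a small C sketch of that time-domain mixing (the function name and block-based interface are just illustrative; the complex band-pass step and the conversion back to integer .wav samples are not shown):

```c
#include <complex.h>
#include <math.h>
#include <stddef.h>

/* Multiply each real input sample by exp(j * 2*pi * n * f_shift / f_sample).
   The result is complex: the complex band-pass / analytic-signal step
   described above still has to run before the real part can be kept. */
void mix_up(const double *x, double complex *y, size_t n,
            double f_shift, double f_sample)
{
    const double two_pi = 8.0 * atan(1.0);
    for (size_t i = 0; i < n; i++)
        y[i] = x[i] * cexp(I * two_pi * f_shift * (double)i / f_sample);
}
```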
Now, this whole "doing it in frequency domain through the FFT" is an approach that people in the Software Defined Radio world are pretty used to – they deal with complex baseband signals all the time.
How do I interpret the values of the inverse transform?
As the complex signal that they are. Ignoring the imaginary part as suggested in the comments will lead to the energy contained in the negative frequencies being mirrored onto your positive frequencies (and the other way around), and is most likely not what you want, unless either:
you've made sure that, prior to this "symmetricalization", the energy on one of the two sides of f = 0 was zero, so that nothing bad happens, or
you've made sure that you shifted the negative and positive frequencies selectively, so that symmetry was retained. Notice that this is not a "simple" shift of the whole frequency domain, but two selective shifts; the selection of these shifting regions has a shape, which boils down to applying a window. If you just "select" or "don't select" each bin for shifting, you're effectively applying a rectangular window, with all the Gibbs phenomenon you can incur with that.
I looked at the other answers I found, but they don't seem suitable to my case.
I am working with ANSI C, on an embedded 32-bit ARM system.
I have a register that generates a random 8-bit value (derived from thermal noise in the chip). From that value I would like to generate evenly distributed integer values within certain ranges, namely:
0,1,2,3,4,5
0,1,2,3,4,5,6,7,8,9
"true" randomness is very important in my application, I need to generate white noise that could make a measurement drift.
Thanks!
Taking RandomValue % SizeOfRange will not produce a uniformly distributed value, because in general the 256 possible register values do not split evenly across the target range (the classic modulo bias: with % 6, for example, the results 0..3 occur slightly more often than 4 and 5).
I would suggest using a bit mask to ignore all bits outside the range of interest, then repeatedly getting a new random number until the masked value falls within the desired range.
For the range 0..5, look at the right-most 3 bits. That will produce a value in the range 0..7. "Reroll" results of 6 or 7.
For the range 0..9, look at the right-most 4 bits. That will produce a value in the range 0..15. "Reroll" results of 10..15.
As a real-world analogy, think of trying to get a random number between 1 and 5 with a 6-sided die. There is no "fair" algorithm to map a roll of 6 into one of the desired numbers 1..5. Simply reroll a 6 until you get something in the desired range.
Masking high bits ensures that the number of "rerolls" is minimal.
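A sketch of that mask-and-reroll scheme in C, where read_rng_register() is a hypothetical stand-in for however the 8-bit noise register is read on your chip:

```c
/* Hypothetical hardware hook: returns one fresh 8-bit value from the
   thermal-noise register. Replace with the real register access. */
extern unsigned char read_rng_register(void);

/* Uniform value in 0..5: keep the low 3 bits (0..7), reroll 6 and 7. */
unsigned char uniform_0_to_5(void)
{
    unsigned char v;
    do {
        v = (unsigned char)(read_rng_register() & 0x07);
    } while (v > 5);
    return v;
}

/* Uniform value in 0..9: keep the low 4 bits (0..15), reroll 10..15. */
unsigned char uniform_0_to_9(void)
{
    unsigned char v;
    do {
        v = (unsigned char)(read_rng_register() & 0x0F);
    } while (v > 9);
    return v;
}
```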
Be sure to pay attention to any physical limitations on how often you can pull the special register and expect to get an entirely random value.
I'm trying to implement a program in C that calculates the factorial of a very large n (up to a million), using the FFT and the binary splitting method.
I've implemented a simple library to represent arbitrary-precision integers.
To calculate the FFT and IFFT, I use the twofft.c and four1.c routines from "Numerical Recipes in C".
Up to a certain n everything goes right, but when the numbers (floating-point arrays) get too big, the IFFT (calculated with four1), after normalization and rounding, has values that are wrong.
For example, if I have two numbers with 2000 digits each that end with 40 zeros, and I multiply them by each other (using the FFT), then when I calculate the IFFT some of the trailing zeros become "one".
This happens because when I round one of these "zeros" (0.50009, for example), it becomes a "one".
Now I don't know whether my implementation is wrong or whether I have to round these numbers in a different way.
I've tried both the binary splitting method and prime factorization, but for n >= 9000 the result is wrong.
Is there a way to resolve this?
Thanks for your attention, and sorry for my bad English.
How do you represent arbitrary precision integers?
I mean what type are you actually using?
Can you please show us your code?
If you feel really lazy, you can clone this project I made a few months ago:
https://github.com/nomadster/ESP
Edit:
On further reading of your post, I suppose from this statement
"This happens because when I round one of these 'zeros' (0.50009, for example), it becomes a 'one'"
that you are still unaware of the fact that FFT multiplication only works when the round-off error is smaller than 0.5.
So it seems to me (if and only if I've correctly interpreted your cryptic message) that you are using a floating-point type that doesn't have the required precision.
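One way to act on that 0.5 criterion is to check, right after the inverse transform and normalisation, how far the coefficients actually are from integers. A rough sketch (the function name is illustrative, and it assumes the coefficients have already been divided by the transform length):

```c
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Returns 1 if every coefficient is close enough to an integer to be rounded
   safely, 0 otherwise. Errors creeping towards 0.5 mean the floating-point
   precision is exhausted and the rounded product cannot be trusted. */
int roundoff_ok(const double *coeff, size_t n)
{
    double worst = 0.0;
    size_t i;

    for (i = 0; i < n; i++) {
        double err = fabs(coeff[i] - floor(coeff[i] + 0.5));
        if (err > worst)
            worst = err;
    }

    printf("worst round-off error: %g\n", worst);
    return worst < 0.25;   /* conservative margin well below the 0.5 limit */
}
```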
For the record:
I also noticed wrong values returned by the IFFT from four1.c in Numerical Recipes. I only tested it with N = 256 complex values as input, assembled in such a way that they should result in a real-only time-domain signal.
The resulting time-domain vector has to be mirrored (end to start and vice versa ...) and shifted by one to correspond with the IFFTs of other implementations. (I tested numpy.fft.ifft, Octave's ifft, and an inverse discrete Fourier transform without any optimisation, simply based on the IDFT formula, which should definitely be correct.)
There has to be a fundamental algorithmic fault in the version provided by Numerical Recipes. Nothing related to this problem is described in their book.