Can I use an Adafruit NeoPixel RGB strip with a Particle Photon?

I am not sure whether the RGB strip below will work with the Particle Photon. The product description for the strip says the microcontroller should have a processor faster than 8 MHz as well as highly repeatable 100 ns timing precision. I searched and found that we would be okay on processing power (STM32F205, 120 MHz ARM Cortex-M3), but I am completely unsure about the timing precision.
What is timing precision?
Is the timing precision of the Particle Photon sufficient?
Here is the link to the specific RGB strip for more details (Adafruit NeoPixel Digital RGB LED Strip - White 60 LED)
Thank you so much

Yes, you should be fine. I have used my Particle Photon with an 8-LED strip without any issues. There is a NeoPixel library available for the Photon as well.

Related

RGB video ADC Conversion Color Palettes

I'm trying to better understand analog-to-digital video conversion and was hoping for some direction. The way I understand it, a dedicated 10-bit ADC chip will read the voltage of the R, G, and B input pins, translate this to 10-bit RGB, and output these values in parallel across 30 pins (ignoring sync/clock pins etc.). My question is this: suppose you know the source only has 5 bits per color, (2^5)^3 = 32,768 colors, and dumps this to analog RGB, and you are using a 10-bit ADC. Will the ADC interpolate colors due to voltage variances and the increase from 5 to 10 bits, thus introducing unoriginal/unintended colors, or is the analog-to-digital sampling truly so precise that the original source color palette will be preserved correctly?
Most ADCs have 1-LSB precision, so the lowest bit will toggle randomly anyway. If you need it to be stable, either use oversampling at an increased sampling frequency, or use a 12-bit ADC; its LSB will toggle as well, but bit 2 will probably be stable.
Why "probably", you ask? Well, if your transmission line is noisy or badly coupled, it can introduce additional toggling in the LSB range, or even higher. In bad cases noise can even corrupt your upper 5 bits of data.
There might also be analog filters / ferrite beads / something else smoothing your signal, so you won't even see actual "steps" in the analog waveform.
So you never know until you test it. Try looking at your signal with a scope; that might resolve some of your doubts.

Computing the discrete fourier transform of audio data with FFTW

I am quite new to signal processing, so forgive me if I rant on a bit. I have downloaded and installed FFTW for Windows. The documentation is OK but I still have queries.
My overall aim is to capture raw audio data sampled at 44100 samples/sec from the sound card on the computer (this task is already implemented using libraries and my code), and then perform the DFT on blocks of this audio data.
I am only interested in finding a range of frequency components in the audio and I will not be performing any inverse DFT. In this case, is a real to real transformation all that is necessary, hence the fftw_plan_r2r_1d() function?
My blocks of data to be transformed are 11025 samples long. My function is called as shown below. This will result in a spectrum array of 11025 bins. How do I know the maximum frequency component in the result?
I believe that the bin spacing is Fs/N = 44100/11025 = 4 Hz. Does that mean I will have a frequency spectrum in the array from 0 Hz all the way up to 44100 Hz in steps of 4 Hz, or only up to the Nyquist frequency, 22050 Hz?
This would be a problem for me as I only wish to search for frequencies from 60Hz up to 3000Hz. Is there some way to limit the transform range?
I don't see any arguments for the function, or maybe there is another way?
Many thanks in advance for any help with this.
p = fftw_plan_r2r_1d(11025, audioData, spectrum, FFTW_REDFT00, FFTW_ESTIMATE);
To answer some of your individual questions from the above:
you need a real-to-complex transform, not real-to-real
you will calculate the magnitude of the complex output bins at the frequencies of interest (magnitude = sqrt(re*re + im*im))
the frequency resolution is indeed Fs / N = 44100 / 11025 = 4 Hz, i.e. the width of each output bin is 4 Hz
for a real-to-complex transform you get N/2 + 1 output bins giving you frequencies from 0 to Fs / 2
you just ignore frequencies in which you are not interested - the FFT is very efficient so you can afford to "waste" unwanted output bins (unless you are only interested in a relatively small number of output frequencies)
Additional notes:
plan creation does not actually perform an FFT - typically you create a plan once and then use it many times (by calling fftw_execute)
for performance you probably want to use the single precision calls (e.g. fftwf_execute rather than fftw_execute, and similarly for plan creation etc)
Some useful related questions/answers on StackOverflow:
How do I obtain the frequencies of each value in an FFT?
How to get frequency from fft result?
How to generate the audio spectrum using fft in C++?
There are many more similar questions and answers which you might also want to read - search for the fft and fftw tags.
Also note that dsp.stackexchange.com is the preferred site for questions on DSP theory, as opposed to actual specific programming problems.

Real time compression of 32 bit RGBA image data

What is the fastest algorithm for compressing RGBA 32 bit image data? I am working in C, but am happy for examples in other programming languages.
Right now I am using LZ4 but I am considering run length / delta encoding.
Lossless encoding, a mix of real life images and computer generated / clipart images. Alpha channel always exists, but is usually constant.
I ended up just using LZ4. Nothing else was even close to as fast, and LZ4 usually achieved at least a 50% size reduction.
Lossy or lossless?
"Real" images or computer graphics?
Do you actually have an alpha channel?
If you need lossless (or semi-lossless) compression, then converting to YUV and compressing that will probably cut the size roughly in half (on top of already going to 2 bytes/pixel); try Huffyuv.
If you have real images then H264 can achieve very high compression, and there are optimised libraries and HW support, so it can be very fast.
If you have computer graphics type images with few colours but need to preserve edges, or you actually have an A channel then run length might be good - try splitting the image into per-colour frames first.
LZ4 is in the LZ77 family, which is only a few lines of code. I never implemented it myself, but I guess you are right that run-length or delta coding is the fastest and also good for images. There is also the Snappy algorithm. Recently I tried the exdupe utility to compress my virtual machines; it is also incredibly fast: http://www.exdupe.com. exdupe seems to use an rzip approach: http://encode.ru/threads/1354-Data-deduplication.

How to extract data from a WAV file for use in an FFT

I'm looking for a way to extract data from a WAV file that will be useful for an FFT algorithm I'm trying to implement. So far what I have are a bunch of hex values for left and right audio channels, but I am a little lost on how to translate this over to time and frequency domains for an FFT.
Here's what I need for example:
3.6 2.6
2.9 6.3
5.6 4.0
4.8 9.1
3.3 0.4
5.9 4.8
5.0 2.6
4.3 4.1
And this is the prototype of the function taking in the data for the FFT:
void fft(int N, double (*x)[2], double (*y)[2])
Where N is the number of points for the FFT, x is a pointer to the time-domain samples, y is a pointer to the frequency-domain samples.
Thanks!
For testing purposes you don't need to extract waveform data from WAV files. You can just generate a few signals in memory (e.g. 0, non-zero constant, sinusoid, 2 superimposed sinusoids, white noise) and then test your FFT function on them and see whether or not you're getting what you should (0 for 0, peak at zero frequency for non-zero constant signal, 2 peaks for every sinusoid, uniform non-zero magnitude across all frequencies for white noise).
If you really want to parse WAV files, see Wikipedia on the format (follow the links). Use either raw PCM encoding or A/µ-law PCM encoding (a.k.a. G.711).
FFT is usually implemented using an in-place algorithm, meaning that the output replaces the input. If you do the same, you don't really need the second pointer.
The most commonly found WAVE/RIFF file format has a 44 byte header followed by 16-bit or 2-byte little-endian signed integer samples, interleaved for stereo. So if you know how to skip bytes, and read short ints into doubles, you should be good to go.
Just feed your desired length of time domain data to your FFT as the real component vector; the result of the FFT will be a complex frequency domain vector.

How to extract semi-precise frequencies from a WAV file using Fourier Transforms

Let us say that I have a WAV file. In this file, is a series of sine tones at precise 1 second intervals. I want to use the FFTW library to extract these tones in sequence. Is this particularly hard to do? How would I go about this?
Also, what is the best way to write tones of this kind into a WAV file? I assume I would only need a simple audio library for the output.
My language of choice is C
To get the power spectrum of a section of your file:
collect N samples, where N is a power of 2 - if your sample rate is 44.1 kHz for example and you want to sample approx every second then go for say N = 32768 samples.
apply a suitable window function to the samples, e.g. Hanning
pass the windowed samples to an FFT routine - ideally you want a real-to-complex FFT, but if all you have is a complex-to-complex FFT then pass 0 for all the imaginary input parts
calculate the squared magnitude of your FFT output bins (re * re + im * im)
(optional) calculate 10 * log10 of each magnitude squared output bin to get a magnitude value in dB
Now that you have your power spectrum you just need to identify the peak(s), which should be pretty straightforward if you have a reasonable S/N ratio. Note that frequency resolution improves with larger N. For the above example of 44.1 kHz sample rate and N = 32768 the frequency resolution of each bin is 44100 / 32768 = 1.35 Hz.
You are basically interested in estimating a spectrum, assuming you've already gone past the stage of reading the WAV file and converting it into a discrete-time signal.
Among the various methods, the most basic is the periodogram, which amounts to taking a windowed Discrete Fourier Transform (via an FFT) and keeping its squared magnitude. This corresponds to Paul's answer. You need a window which spans several periods of the lowest frequency you want to detect. Example: if your sinusoids can be as low as 10 Hz (period = 100 ms), you should take a window of 200 ms or 300 ms or so (or more). The periodogram is simple to compute and more than enough if high precision is not required, but it has some disadvantages:
"The raw periodogram is not a good spectral estimate because of spectral bias and the fact that the variance at a given frequency does not decrease as the number of samples used in the computation increases."
The periodogram can perform better by averaging several windows, with a judicious choice of widths (Bartlett's method). And there are many other methods for estimating the spectrum (e.g. AR modelling).
Actually, you are not exactly interested in estimating a full spectrum, but only the location of a single frequency. This can be done by seeking a peak in an estimated spectrum (computed as explained), but also by more specific and powerful (and more complicated) methods (Pisarenko, the MUSIC algorithm). These would probably be overkill in your case.
WAV files contain linear pulse code modulated (LPCM) data. That just means that it is a sequence of amplitude values at a fixed sample rate. A RIFF header is contained at the beginning of the file to convey information like sampling rate and bits per sample (e.g. 8 kHz signed 16-bit).
The format is very simple and you could easily roll your own. However, there are several libraries available to speed the process, such as libsndfile. Simple DirectMedia Layer (SDL)/SDL_mixer and PortAudio are two nice libraries for playback.
As for feeding the data into FFTW, you would need to buffer 1 second chunks (determine size by the sample rate and bits per sample). Then convert all of the samples to IEEE floating-point (i.e. float or double depending on the FFTW configuration--libsndfile can do this for you). Next create another array to hold the frequency domain output. Finally, create and execute an FFTW plan by passing both buffers to fftw_plan_dft_r2c_1d and calling fftw_execute with the returned fftw_plan handle.
