Closed. This question needs debugging details. It is not currently accepting answers. Closed 6 days ago.
I am trying to make a simple tuner app for my Galaxy Watch 3 (in the C programming language).
I am currently using a timer that triggers a callback function which buffers audio from the microphone. After buffering, I would like to apply an FFT to find the strongest frequency present. The FFT library I am using wants the input as a double _Complex array rather than a short array. I am confused about how to represent the signed 16-bit values as complex numbers. What does the imaginary part of the number correspond to in the PCM data? The way I understand PCM, it is a time-domain representation of an audio signal sampled at a constant rate (sample_rate = 48000), such that buf[0] = amplitude at t = 0 and buf[1] = amplitude at t = 0.000020833 s (single channel).
Any advice would be greatly appreciated.
I tried simply making complex numbers with the imaginary part set to 0, but somehow this resulted in an all-zero array.
complex double temp = ((double) shortData) + 0.0 * I;
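For what it's worth, here is a minimal sketch of that conversion, assuming a capture buffer of short samples and a caller-allocated output array (the names pcm_to_complex, buf, out and n are illustrative, not from the original post):

#include <complex.h>
#include <stddef.h>

/* Each signed 16-bit PCM sample becomes the real part of a complex number;
 * the imaginary part of a purely real time-domain signal is simply 0. */
void pcm_to_complex(const short *buf, double complex *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = (double)buf[i] + 0.0 * I;
}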
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 3 years ago.
Recently I've started implementing a CHIP-8 emulator in C. After implementing most of the opcodes I've faced the problem of implementing a display for my emulator. After some googling and reading I've decided to give OpenGL a shot. And here's the problem: display information is stored as a 1-bit-per-pixel monochrome image in the last 256 bytes of CHIP-8 memory (memory is a uint8_t array of size 4096). Of course, I can create another array for storing display data in a more usable format (1 byte per pixel) and render it via OpenGL as a texture, but what I want to know is whether there are more elegant and efficient solutions in modern OpenGL, or in other libraries/frameworks that can be used from C.
Thank you in advance.
P.S. English is not my mother tongue so error fixes would be appreciated.
With modern OpenGL you can use integer textures with an 8-bit single-channel image format. Then in the shader you divide the fast-running coordinate by 8 to determine the texel and use the remainder to select the bit, something like this in GLSL:
(texelFetch(texture, ivec2(texcoord.x / 8, texcoord.y), 0).x
    & (1 << (texcoord.x % 8))) != 0;
I'm currently on mobile, so please excuse me if this is too concise. If you need more details, just ask!
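To complement that shader, here is a hedged C sketch (not from the original answer; the function name and the GLEW include are assumptions) of how the raw 256-byte CHIP-8 framebuffer could be uploaded as an 8x32 single-channel integer texture that texelFetch can sample:

#include <stdint.h>
#include <GL/glew.h>   /* or whichever OpenGL loader you already use */

/* The CHIP-8 display is 64x32 pixels at 1 bit per pixel = 256 bytes,
 * so each texture row holds 64 / 8 = 8 bytes (texels). */
GLuint upload_chip8_display(const uint8_t *display)  /* last 256 bytes of memory */
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Integer textures must use NEAREST filtering. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, 8, 32, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_BYTE, display);
    return tex;
}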
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
I have a problem in C: I am not allowed to use floats, as the microcontroller the code will be flashed onto does not support that data type. Now all my results are being rounded to integers, as expected. How do I handle this case?
A quick search suggests using bitwise operations such as left shift and right shift. I know what these operations do, but I do not know how to use them to achieve what I want.
Another possibility is the Q number format.
You will get some results if you google "Q number format" or some variation of it.
It is often used for DSP-related topics in C. Here is another blog post that explains that number format, and here is an example code implementation of Q numbers in C.
In general you can say that Q numbers represent a number between -1 and 1 without using floating-point arithmetic.
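As a concrete illustration (a minimal sketch, not from the linked posts; the type and function names are made up), Q15 stores a value in [-1, 1) in an int16_t scaled by 2^15:

#include <stdint.h>

typedef int16_t q15_t;           /* value in [-1, 1), scaled by 2^15 */

#define Q15_HALF     0x4000      /* 0.5  * 2^15 */
#define Q15_QUARTER  0x2000      /* 0.25 * 2^15 */

/* Multiply two Q15 numbers: widen to 32 bits so the product fits,
 * then shift the extra 15 fractional bits back out. */
static q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * b) >> 15);
}

/* Addition and subtraction work directly on the raw integers. */
static q15_t q15_add(q15_t a, q15_t b) { return (q15_t)(a + b); }

/* Example: q15_mul(Q15_HALF, Q15_HALF) == Q15_QUARTER (0.5 * 0.5 = 0.25). */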
Normally a microcontroller doesn't have a floating-point unit; everything works with integers. But it's up to you which unit you use for your integers.
For example:
100 could be 100 cm or 1.00 m
1000 could be 100.0 cm or 1.000 m, and so on.
Please have a look at the description:
electronic.stackexchange
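A small hedged example of that scaled-integer idea (illustrative names, not from the answer above): store lengths in centimetres, so that the integer 100 means 1.00 m and all arithmetic stays integer-only.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t a_cm = 150;               /* 1.50 m */
    int32_t b_cm = 275;               /* 2.75 m */
    int32_t sum_cm = a_cm + b_cm;     /* 4.25 m */

    /* Print as metres without using any floating point. */
    printf("%ld.%02ld m\n", (long)(sum_cm / 100), (long)(sum_cm % 100));
    return 0;
}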
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
http://www.tech.dmu.ac.uk/~eg/tensiometer/fft/fft.c
http://www.tech.dmu.ac.uk/~eg/tensiometer/fft/fft_test.c
I have found good working C code for an FFT algorithm that converts from the time domain to the frequency domain and back in the above links. But I wanted to know the flowchart or step-by-step process of how this code works. I am trying to analyze the code against the butterfly method of decimation in time for the FFT, but I am facing difficulties understanding it. The code works very well and gives me correct results, but it would be very helpful if someone could give a brief or detailed explanation of how it works.
I am confused by the array and the pointers used in the fft.c code. I also don't understand what the variables offset and delta mean in the code, or how the rectangular matrix of real and imaginary terms is used. Please guide me.
Thanks,
Psbk
I strongly recommend reading this: https://stackoverflow.com/a/26355569/2521214
At first glance, offset and delta are used to:
- perform the butterfly shuffle permutation
- you start with step 1 and half of the interval
- by recursion you get to step log2(N) and an interval of 1 item, give or take one recursion level
- I usually do the butterfly in reverse order
The XX array:
- is a buffer to store the sub-results or the input data
- you cannot easily (if at all) perform the FFT in place
- so you compute to/from a temporary buffer instead
- and on each recursion you just swap the data and temp buffers (physically, or just their meaning)
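For reference, here is a minimal sketch (not the code from the linked fft.c; names are illustrative, n must be a power of two) of a recursive radix-2 decimation-in-time FFT that works exactly as described above, shuttling data between the main buffer and a temporary buffer and swapping their roles at each recursion level:

#include <complex.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static void fft_rec(double complex *buf, double complex *tmp, int n, int step)
{
    if (step >= n)
        return;

    /* Recurse on even and odd samples; buf and tmp swap roles each level. */
    fft_rec(tmp,        buf,        n, step * 2);
    fft_rec(tmp + step, buf + step, n, step * 2);

    /* Butterfly: combine the two half-size transforms. */
    for (int k = 0; k < n; k += 2 * step) {
        double complex w = cexp(-I * M_PI * k / n) * tmp[k + step];
        buf[k / 2]       = tmp[k] + w;
        buf[(k + n) / 2] = tmp[k] - w;
    }
}

void fft(double complex *buf, double complex *tmp, int n)
{
    for (int i = 0; i < n; i++)
        tmp[i] = buf[i];            /* tmp starts as a copy of the input */
    fft_rec(buf, tmp, n, 1);
}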
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
I'm developing an encryption algorithm on the GPU. This algorithm requires the addition and multiplication of very, very large integers: they have an estimated bit length of 150,000 bits or more, and the bit lengths differ from number to number. What algorithms can be used to perform addition and multiplication of these numbers? Any information would be appreciated. Thank you.
Large integer addition is relatively simple: JackOLantern already provided the link to the post. Basically it's just doing carry propagation via parallel prefix sum.
For large-integer multiplication on CUDA, two approaches come to mind:
convert the integers to RNS (Residue Number System): then multiplication and addition can be done in parallel (as long as the RNS base is large enough). Whenever you need to compare the numbers you can convert them to a mixed-radix system (see, e.g., How to Convert from a Residual Number System to a Mixed Radix System?). Finally, you can use the CRT (Chinese Remainder Theorem) to convert the numbers back to a positional number system; a toy sketch of this idea follows the list below
implement large-integer multiplication directly using an FFT, since multiplication can be viewed as an acyclic convolution of sequences (a length of 150 Kbits is not that much for an FFT, but it can already give you some speedup; note that GNU MP only switches to its FFT multiplication routines starting from about 1 Mbit or even more). For multiplication via FFT there are again two options:
use a floating-point double-precision FFT and encode the large-integer bits into the mantissa (easier to implement)
use the so-called Number-Theoretic Transform (an FFT over a finite field)
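As a toy illustration of the RNS idea from the first bullet (tiny moduli for readability, nothing like a crypto-sized base; all names are made up):

#include <stdint.h>
#include <stdio.h>

#define NMOD 3
static const uint32_t MOD[NMOD] = { 97, 101, 103 };  /* pairwise coprime, product 1009091 */

typedef struct { uint32_t r[NMOD]; } rns_t;

static rns_t rns_from_u32(uint32_t x)
{
    rns_t v;
    for (int i = 0; i < NMOD; i++) v.r[i] = x % MOD[i];
    return v;
}

/* Multiplication (and addition) act on each residue independently,
 * which is what makes RNS attractive for parallel hardware. */
static rns_t rns_mul(rns_t a, rns_t b)
{
    rns_t v;
    for (int i = 0; i < NMOD; i++)
        v.r[i] = (uint32_t)(((uint64_t)a.r[i] * b.r[i]) % MOD[i]);
    return v;
}

int main(void)
{
    /* 123 * 456 = 56088 < 1009091, so the product is representable. */
    rns_t p = rns_mul(rns_from_u32(123), rns_from_u32(456));
    for (int i = 0; i < NMOD; i++)
        printf("mod %u: %u (expected %u)\n", MOD[i], p.r[i], 56088u % MOD[i]);
    return 0;
}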
Anyway, there is a bunch of theory behind these things. You can also check my paper on FFT multiplication in CUDA. There are also many research papers on this subject, especially in the cryptography field.
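And here is a hedged sketch of the multiplication-as-convolution view from the second bullet (all names are illustrative): the naive schoolbook product below is literally an acyclic convolution of the limb sequences plus carry propagation; an FFT only computes that convolution faster.

#include <stdint.h>
#include <stddef.h>

/* Multiply two little-endian base-2^16 numbers; `out` must have na + nb
 * limbs and be zero-initialised. Every partial product fits in 64 bits. */
void bigmul(const uint16_t *a, size_t na,
            const uint16_t *b, size_t nb,
            uint16_t *out)
{
    for (size_t i = 0; i < na; i++) {
        uint64_t carry = 0;
        for (size_t j = 0; j < nb; j++) {
            uint64_t cur = (uint64_t)out[i + j] + (uint64_t)a[i] * b[j] + carry;
            out[i + j] = (uint16_t)cur;   /* keep the low 16 bits */
            carry      = cur >> 16;       /* propagate the rest   */
        }
        out[i + nb] = (uint16_t)(out[i + nb] + carry);
    }
}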
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 8 years ago.
I have a lengthy calculation (a polynomial of 4th degree with fixed decimals) that I have to carry out on a microcontroller (a TI/LuminaryMicro lm3s9l97 [Cortex-M3], if somebody is interested).
When I use 32-bit integers, some calculations overflow. When I use 64-bit integers, the compiler emits an ungodly amount of code to simulate 64-bit multiplication on the 32-bit processor.
I am looking for a program into which I could input (just for example):
int a, b, c;
c = a * b; // Do the multiplication
c >>= 10; // Correct for fixed decimal point
c *= a*b;
where I could specify that a and b are in the ranges [15000..30000] and [40000..100000] respectively, and it would tell me what sizes the integers need to be so that they do not overflow (and/or underflow; I would possibly get a false positive there for the >> 10) in the specified domain, so that I could use 32-bit integers where possible.
Does something like this exists already or do I have to roll my own?
Thanks!
I think you have to roll your own. Implementing an extended sequence of muls and divs in fixed-point can be tricky. If fixed-point is applied without careful thought, overflow can happen quite easily. When implementing such a formula, I use a spreadsheet to experiment with the following:
Ordering of operations: muls double the number of bits needed to the left of the binary point, i.e. multiplying two 22.10 numbers can yield a result with 44 integer bits. Div operations reduce the number of bits needed on the LHS. Strategically re-ordering the equation's evaluation, or even rewriting it (expanding, factoring, etc.), can provide opportunities to improve precision.
Pre-computed scalars: along the same lines, pre-computing values may help. These scalars need not be constant, since look-up tables may be used to store a collection of pre-computed values.
Loss of precision: are 10 bits of fractional precision really needed at every step in the evaluation of the equation? Perhaps some steps can use lower precision, leaving more bits on the LHS to avoid overflow.
Given these concerns (all of which are application-specific), optimal use of fixed-point math remains very much a manual exercise. There are good resources on the web. I've found this one useful on occasion.
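If you do end up rolling your own, one quick-and-dirty variant (a sketch that assumes every operation in the expression is monotonic over the given positive ranges, which holds for the example in the question) is to evaluate the expression at the interval end points with 64-bit intermediates and check whether the results still fit into 32 bits:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int64_t a_min = 15000, a_max = 30000;
    const int64_t b_min = 40000, b_max = 100000;

    /* c = (a * b) >> 10; c *= a * b;  -- evaluated at both extremes */
    int64_t c_min = ((a_min * b_min) >> 10) * (a_min * b_min);
    int64_t c_max = ((a_max * b_max) >> 10) * (a_max * b_max);

    printf("c in [%lld, %lld]\n", (long long)c_min, (long long)c_max);
    printf("fits in int32_t: %s\n",
           (c_min >= INT32_MIN && c_max <= INT32_MAX) ? "yes" : "no");
    return 0;
}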
Ada might be able to do that using range types.