Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have an audio track whose sample rate is 44100. What does that mean?
What is the duration of one frame of audio? How can I get it in C?
Are "frame" and "sample" different terms in audio?
A sample of audio is similar to a frame of video: it's a capture of the sound level at a particular time point, in the same way as a frame of video is a capture of the light pattern. A sample rate of 44,100 means 44,100 samples per second, or 44.1 kHz, so one sample lasts 1/44,100 seconds ≈ 2.27e-5 seconds (about 22.7 microseconds). For multi-channel audio, a "frame" usually means one sample per channel taken at the same instant, so a frame has the same duration as a sample.
We can hear sounds in the approximate range 20 Hz to 20 kHz, and to reproduce a signal without artifacts the sample rate needs to be at least twice the highest frequency you want to capture (the Nyquist rate), which is why rates just above 40 kHz are used.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I have a simple C program that will be running on my Raspberry Pi. I am planning on taking data from sensors at an interval of 10-15 minutes. Should I sleep()
the C program for this period in a loop and then have it take the readings, and so on? Or should I have no loop at all and instead add a crontab entry to run the C program every 15 minutes or so? What are the advantages/disadvantages of sleep() in this case, or is there a better approach?
Is the sensor on the same machine where the C program is running?
If not, it's better to:
1) have a small C program that collects data from the sensor
2) have a cron task that runs every 15 minutes and invokes that C program
3) that way, if the network connection between your C program and the sensor breaks, only that one reading is affected.
This approach also protects you from any memory leak: since the process exits after every run, a leak never gets a chance to accumulate.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I am new to microprocessor programming and currently have an RGB sensor which reads an RGB value and increments a variable by an arbitrary number. I want the sensor to turn off for 0.3 seconds when I reach a certain value. Is there a way to do this, or will I have to figure out a different way to throw out all the values the RGB sensor receives during that 0.3-second time span? I am writing in C.
Note: The sensor I am currently using is a TCS230.
According to the datasheet, pin #3 is Output Enable (/OE, active low). So if you drive this pin high it should cut off the chip's output.
Or more to your question, it looks like if you drive pins S0 and S1 both low, it will place the chip in a "Power Down" state.
Whichever option you choose depends on what's more important: the quickest reaction time, or conserving power. If you want the quickest reaction time, use /OE; there is a typical 100 ns delay between asserting this signal and the chip responding. The downside is that the chip is still running the whole time. If you choose the power-down state, you will save energy versus the output-enable option, but the photodiodes have a typical 100 microsecond "recovery from power down" delay. That's a factor of 1000, so if you're doing time-critical work it's probably not the best option.
Keep in mind, I have never used this chip, so I'm basing my answer on a quick read of the datasheet.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I am trying to implement bit-banged I2C to communicate between an ATmega128A's GPIO and an SHT21 (the I2C bus was already used for some other devices). The first task is to send a write sequence to the SHT21. Using an oscilloscope, I can see that the sequence coming out of the ATmega has the correct start signal, the correct order of bits, and the correct stop signal, and the signal levels look correct. The scope's serial analyzer reads out the correct I2C message: S80W~A. Yet there is no ACK response from the SHT21. SDA and SCL are both pulled up to 3.3 V via 6.7K resistors.
I really need help finding out what is wrong.
Thank you so much.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Hello, I am running Plesk 10.4.4 with qmail and noticed that one of my customers was spammed from an invalid address to which qmail kept trying to reply. This unfortunately caused a major pile-up (over 100 emails) in the queue, with some retries being held up for over 7 days, and any new outgoing email could take up to 2 hours even when the addresses were correct.
Is there any way to tell qmail not to keep retrying, and to delete anything from the queue that is over 2 hours old?
echo "7200" > /var/qmail/control/queuelifetime
/etc/init.d/qmail restart
7200 is 2 hours × 60 minutes × 60 seconds.
But 2 hours may not be enough; 1 or 2 days (86400 or 172800 seconds) works for me.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have a doubt regarding time-division multiplexing.
GSM uses time-division multiplexing, in which there is a timeslot for each signal. But while using GSM mobiles we get a continuous flow of data, even though we are not actually transmitting continuously, right?
How do we get a continuous signal when transmission is done using TDM?
The simple answer is that you're not receiving a continuous flow of data. What you are receiving is short bursts of data close enough together that they seem to form a continuous stream.
In case you care, the specific numbers for GSM are that it starts with 4.615 ms frames, each of which is divided into 8 timeslots of 0.577 ms. So a particular mobile handset receives data for 0.577 ms, then waits for ~4 ms, then receives data for another 0.577 ms, and so on. There's a delay of 3 timeslots between receiving and transmitting, so it receives data, then ~1.7 ms later it gets to transmit for 0.577 ms.
The timing is close enough together that even if (for example) the signal gets weak or there's interference for a few ms and a particular handset misses one timeslot of data, the loss won't necessarily be audible. Only when the signal is lost for about 20 ms do most people start to perceive it as actual signal loss; shorter losses result in lower sound fidelity, but not in a perceived loss of signal.
It's also worth noting that most of the newer systems work entirely differently: 3G is based on CDMA rather than TDMA, and 4G/LTE uses OFDMA.