I'm working on a music player on Windows, and I'm using the waveOutOpen API (and friends). To specify the format of the audio data I fill out the WAVEFORMATEX structure, specifying the correct wFormatTag. However, I recently learned that there exist audio files with a sample rate of 88.2 kHz. That is not among the available WAVE_FORMAT_* values, which puzzles me.
Assuming the DAC being used supports 88.2 kHz input, how can that be used on Windows? And what about even higher sample rates like 192 kHz, which is also not among the WAVE_FORMAT_* values? I see that there's WAVEFORMATEXTENSIBLE, which, as far as I can tell, allows specifying a higher bits-per-sample value, but what about higher sample rates?
In my "Speaker Properties" I see that my system supports 24-bit 192 kHz audio, but how would one provide data at that sample rate on Windows?
I wrote a full-duplex ALSA program and run it on a Linux-based embedded system.
Its sound configurations are:
Sample rate: 16 kHz
Channels: 1 (mono)
Format: S16_LE
min avail: 160 (frames)
For this real-time application, I need to capture sound every 10 ms, so I set min avail to 160.
My problem: while the program is running, the CPU usage is very high, up to 99.9% (per the top command). Sometimes the CPU load is low, but once it climbs to 99.9% it never drops back down.
I suspect it might be a configuration problem. In the asound.conf file (shown below), I created an asym-type PCM named "asym0" to choose two different slave devices for playback and capture.
Originally I used "primary" as the capture device, but it caused high CPU usage. Then I created a rate-type PCM named "rate0" and set it as the capture device. The CPU usage becomes lower, floating between 20% and 60%, but the captured sound is bad: I hear some "po po po" in my voice when I test the mic (capturing).
So...
If I choose "primary", CPU usage is high, but no "po po po" sound.
If I choose "rate0", CPU usage is lower, but has "po po po" sound.
What is the difference between "type hw" and "type rate"?
Is the effect caused by a difference in interrupt frequency?
asound.conf file:
pcm.primary {
    type hw
    card mycard
}

pcm.rate0 {
    type rate
    slave {
        pcm "primary"
        rate 16000
    }
}

pcm.asym0 {
    type asym
    playback.pcm "primary"
    capture.pcm "primary"   # or "rate0"
}
Could anyone help me solve this problem? Thank you!
Sound capture should be a trivial task for the CPU, because most of the work happens in hardware; the CPU only occasionally needs to wake a thread to handle input audio. Typically, if your periods or buffers are very small, capture requires more CPU attention and is likely to overrun. Overruns may be where your signal dropouts are occurring.
If your sample rate is 16 kHz, and you capture every 10ms, that is indeed 160 frames.
Some things to look at are whether your period is smaller than 10 ms, and whether you are doing very heavy processing in your capture thread.
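As a concrete check, here is a minimal sketch (assuming the "primary" device from your asound.conf) that pins capture to 16 kHz mono S16_LE with a 160-frame period and asks for a few periods of buffering, so one late wakeup does not immediately overrun:

#include <alsa/asoundlib.h>

int open_capture(snd_pcm_t **pcm)
{
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 16000;
    snd_pcm_uframes_t period = 160;          /* 10 ms at 16 kHz */
    snd_pcm_uframes_t buffer = 160 * 4;      /* 4 periods of headroom */
    int err;

    if ((err = snd_pcm_open(pcm, "primary", SND_PCM_STREAM_CAPTURE, 0)) < 0)
        return err;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(*pcm, hw);
    snd_pcm_hw_params_set_access(*pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(*pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(*pcm, hw, 1);
    snd_pcm_hw_params_set_rate(*pcm, hw, rate, 0);
    snd_pcm_hw_params_set_period_size_near(*pcm, hw, &period, NULL);
    snd_pcm_hw_params_set_buffer_size_near(*pcm, hw, &buffer);

    return snd_pcm_hw_params(*pcm, hw);
}

If snd_pcm_hw_params_set_rate() fails on the hw device, the hardware cannot do 16 kHz natively, and a rate converter (your "rate0") or plughw is unavoidable; that conversion, not the capture itself, may explain some of the CPU load.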
To help you, there is some code in gtkIOStream which implements a C++ OO ALSA hierarchy. You can look at the ALSAFullduplex.C test application as a reference and test it to see whether it suffers from the same problems you are seeing.
Information on building gtkIOStream is given in this email:
https://lists.audioinjector.net/pipermail/people/2020-March/000028.html
I work in Code Composer Studio version 6.0.1.00040 with the LCDK C6748 board.
On this board there is a LINE_OUT jack for playing audio out to speakers.
My question arises because I encountered a phenomenon that looks like I reached a limit value when I assigned a value to LINE_OUT:
codec_data.channel[LEFT] = (uint16_t)outputLeft_referenceSignal;
// this union is where I have to "place" the audio sample I create,
// but I suspect outputLeft_referenceSignal exceeds the limit value
When it happens there is a crack ("PACK") in the speakers, and then the expected audio signal is not played.
T.I. has complete code examples showing how to handle each of the built-in peripherals of the C6748 DSP.
I strongly suggest you start searching/reading the T.I. web site for info on the C6748 DSP.
Amongst other things, like initializing the DSP, you need to understand the usage of the McASP and the AIC31 peripherals.
It is not a simple write to an I/O address.
If you have set up the above peripherals, please post the relevant code so we can determine the underlying problem.
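In the meantime, if the suspicion about exceeding a limit value is right, a saturating clamp before the cast is a quick way to confirm it. A minimal sketch, assuming outputLeft_referenceSignal holds a signed floating-point sample:

#include <stdint.h>

/* Saturate to the signed 16-bit range before casting, so an
   out-of-range sample clips instead of wrapping around; wrap-around
   produces exactly the kind of loud crack described above. */
static uint16_t clamp_sample(float s)
{
    if (s > 32767.0f)  s = 32767.0f;
    if (s < -32768.0f) s = -32768.0f;
    /* Cast through int16_t so negative samples keep their two's
       complement bit pattern in the unsigned codec field. */
    return (uint16_t)(int16_t)s;
}

/* usage: codec_data.channel[LEFT] = clamp_sample(outputLeft_referenceSignal); */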
I am trying to communicate with the Nonin pulse oximeter device to read its data (pulse rate and SpO2 level) via Bluetooth. The Nonin device supports the SPP and HDP profiles; I want to communicate through the SPP profile. I am able to scan and pair with the device using the sample code available in BlueZ.
Please tell me the next steps: how do I send commands to and read data from the device? I am stuck at this point.
I realize this is a late response, but I recently set up data acquisition from a Nonin PalmSAT 2500A VET unit. I am using the RTC-1000 cable and an RS232-to-USB converter.
This is straight from the manual:
"Information from the device, in the real-time mode, is sent in an ASCII serial format at 9600 baud with 9 data bits, 1 start bit, and 1 stop bit. The data are output at a rate of once per second.
NOTE: The 9th data bit is used for odd parity in memory playback mode. In real-time mode, it is always set to the mark condition. Therefore the real-time data may be read as 8 data bits, no parity.
Real-time data may be printed or displayed by devices other than the pulse oximeter. On power up a header is sent identifying the format and the time and date. Thereafter, the data are sent once per second in the following format:
SPO2=XXX HR=YYY
where “XXX” represents the SpO2 value, and “YYY” represents the pulse rate. The SpO2 and pulse rate will be displayed as “---” if there are no data available for the data reading."
Link to manual:
http://www.proactmedical.co.uk/proshop_support_docs/2500aman.pdf
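For what it's worth, here is a minimal sketch of reading and parsing that real-time stream on Linux (the /dev/ttyUSB0 device name is an assumption; adjust for your converter):

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    /* 9600 baud, raw mode, 8 data bits, no parity (per the manual's
       note that real-time data may be read as 8 data bits, no parity). */
    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= CLOCAL | CREAD;
    tcsetattr(fd, TCSANOW, &tio);

    char line[128], c;
    size_t n = 0;
    while (read(fd, &c, 1) == 1) {
        if (c == '\r' || c == '\n') {
            line[n] = '\0';
            n = 0;
            int spo2, hr;
            /* Lines look like "SPO2=098 HR=072"; "---" fields mean
               no data available and simply fail the match. */
            if (sscanf(line, "SPO2=%d HR=%d", &spo2, &hr) == 2)
                printf("SpO2 %d%%, pulse %d bpm\n", spo2, hr);
        } else if (n < sizeof line - 1) {
            line[n++] = c;
        }
    }
    return 0;
}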
What model oximeter are you working with?
I am writing a C program in which I would like to enumerate all the capture devices in my system (in practice, I know I have three webcams plus the "integrated" microphone), recognize them, and start capturing from all of them at the same time.
I had some success using snd_device_name_hint() to enumerate all PCM devices and then snd_device_name_get_hint() to read the "IOID" hint to see whether they support capture. But now, how do I open the corresponding device with snd_pcm_open() so that I can capture? I would like to use the "hw" interface, as I do not want to load the system with sample-rate conversions, so I would also like a method to report the sampling frequencies the hardware supports.
Thank you!
snd_device_name_hint() can return multiple device names for the same hardware device (e.g., plughw and hw).
It can also return devices that do not correspond to a single hardware device (such as null, or PulseAudio/Jack/Bluetooth devices).
To enumerate hardware devices, call snd_card_next() and snd_ctl_pcm_next_device() (see the aplay source code for an example).
To check whether a sample rate is supported, call snd_pcm_hw_params_test_rate().
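A minimal sketch of that approach, modelled on what aplay does (the rate list here is arbitrary):

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    static const unsigned int rates[] = { 8000, 16000, 44100, 48000, 96000 };
    int card = -1;

    while (snd_card_next(&card) == 0 && card >= 0) {
        char name[32];
        snd_ctl_t *ctl;
        snprintf(name, sizeof name, "hw:%d", card);
        if (snd_ctl_open(&ctl, name, 0) < 0)
            continue;

        int dev = -1;
        while (snd_ctl_pcm_next_device(ctl, &dev) == 0 && dev >= 0) {
            snd_pcm_t *pcm;
            snprintf(name, sizeof name, "hw:%d,%d", card, dev);
            if (snd_pcm_open(&pcm, name, SND_PCM_STREAM_CAPTURE,
                             SND_PCM_NONBLOCK) < 0)
                continue;                /* no capture on this device */

            snd_pcm_hw_params_t *hw;
            snd_pcm_hw_params_alloca(&hw);
            snd_pcm_hw_params_any(pcm, hw);

            /* Print the hw device name and which test rates it accepts. */
            printf("%s:", name);
            for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++)
                if (snd_pcm_hw_params_test_rate(pcm, hw, rates[i], 0) == 0)
                    printf(" %u", rates[i]);
            printf("\n");
            snd_pcm_close(pcm);
        }
        snd_ctl_close(ctl);
    }
    return 0;
}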
I have developed an embedded solution which communicates over a Multi Drop Bus and now I would like to develop a PC based application which monitors traffic on the bus.
MDB supports true 9 data bits (plus start/stop/parity bits, with *no fudging* such as using the parity bit as a 9th data bit), whereas standard Windows and Linux libraries offer a maximum of 8 data bits.
I have a StarTech PCI2S950 PC serial port card which supports 9-data bits, but am not sure how to code my monitoring app & have googled a lot to no great avail.
I would prefer to code in C (or Delphi, or C++). I have a slight preference for Cygwin, but am willing to use straightforward Windows or Linux.
Just anything to read/write 9 data bits over that PC serial port card.
Can anyone help?
The document at http://www.semiconductorstore.com/pdf/newsite/oxford/ox16c950b.pdf describes the differences between various UARTs.
While your StarTech board includes the 16C950, which is RS-485 (and 9-bit) capable, the board uses it in RS-232-compatible (550) mode, similar to the 16550/8250 from IBM-PC days, and so supports at most 8 data bits.
You need a board with the same chip (16C950) but that exposes the RS-485-compatible 950 mode, which supports 9-bit data per the spec. And any board claiming such support would have to come with custom drivers for Windows, since Microsoft's driver is 8-bit only.
There are several other chips that can do 9-bit RS-485 mentioned here, but again, finding Windows driver support will be tricky. And, of course, many boards use the 16C950 but only in 8-bit and/or RS-232-only mode, and without appropriate drivers.
In answer to your related question on Superuser, sawdust suggested the Sealevel 7205e, which looks like a good choice, with Windows driver support. It is pricey but they specifically mention 9-bit, RS-485 support, and Windows drivers. It may well be your best option.
The card you selected is not suitable for this application. It has just plain RS-232 ports; it is not suitable for a multi-drop bus. You'll need to shop elsewhere for an EIA-485-style bus interface; you can only find those at industrial electronics suppliers. By far the best way is to go through the National Automatic Merchandising Association, the industry group that owns the MDB specification.
The 9-bit data format is just a trick and is used in the MDB protocol to mode-switch between address bytes and data bytes. All ports on the bus listen to address bytes, only the addressed port listens to data bytes.
The 9th bit is simply the parity bit that any UART can generate. The fundamental data size is still 8 bits. A UART auto-generates the parity bit from the way it was initialized; you can choose between mark, space, odd and even parity.
Now this is easy to do on a micro-controller that has a UART, the kind of processor used on a bus like this. You simply re-program the UART on the fly, telling it to generate mark parity when you send the address bytes, and re-program it again for space parity when you send the data bytes. Waiting for the fifo to empty will typically be necessary, although it depends on the actual UART chip.
That is a lot harder to do on a regular Windows or Linux machine; there's a driver between the user-mode program and the UART. The driver generates a "transmit buffer empty" status bit, like WaitCommEvent() with EV_TXEMPTY on Windows, but this doesn't include the fifo empty status; it only indicates that the driver's buffer is empty. A workaround is to wait for the buffer-empty status and then sleep long enough to ensure that the fifo has drained. A fifo is typically 16 bytes deep, so sleep for at least 16 character times (a character is 10-11 bit times at the configured baud rate). You'll need the datasheet for the UART on the card you selected to know these details for sure.
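To make that concrete, here is a rough sketch of the mark/space-parity workaround with the standard Windows API (the 20 ms drain delay is an assumption; check your UART's datasheet for the real fifo depth and timing):

#include <windows.h>

static BOOL set_parity(HANDLE h, BYTE parity)
{
    DCB dcb = {0};
    dcb.DCBlength = sizeof dcb;
    if (!GetCommState(h, &dcb)) return FALSE;
    dcb.BaudRate = 9600;
    dcb.ByteSize = 8;
    dcb.Parity   = parity;       /* MARKPARITY or SPACEPARITY */
    dcb.fParity  = TRUE;
    dcb.StopBits = ONESTOPBIT;
    return SetCommState(h, &dcb);
}

static void send_mdb_frame(HANDLE h, BYTE addr, const BYTE *data, DWORD len)
{
    DWORD written;

    set_parity(h, MARKPARITY);           /* 9th bit = 1: address byte  */
    WriteFile(h, &addr, 1, &written, NULL);
    FlushFileBuffers(h);                 /* wait for buffered data     */
    Sleep(20);                           /* ...and for the fifo itself */

    set_parity(h, SPACEPARITY);          /* 9th bit = 0: data bytes    */
    WriteFile(h, data, len, &written, NULL);
}

Note that this only covers transmitting; on the receive side the standard API gives you no straightforward per-byte view of the 9th bit, short of treating parity errors as mark-bit indications.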
Under Win32, serial ports are just files, so you create a handle with CreateFile and then use a DCB structure to set the configuration options (the members are documented here and include the number of data bits as ByteSize).
There's a good walkthrough here:
http://www.codeproject.com/Articles/3061/Creating-a-Serial-communication-on-Win32
The link provided shows the card supports 9 data bits and Windows 8, so I would presume all the card's features are available to the application through the standard Windows API.
Apart from setting the correct data format in a DCB and opening the port, I would have thought the standard ReadFile would work. I wonder whether the data read in would actually be two 8-bit bytes representing the 9 data bits, rather than a continuous stream of 9-bit units (which you would need to decode later).
Is the 9th bit used for some purpose other than data?