Playing the solfege notes with the ALSA API? - c

I'm playing with the ALSA API and I wonder which parameters I should pass to the function snd_pcm_writei to simply play the solfège syllables/notes (A-G / do re mi fa sol la si do).
Thanks

If you really want to do it with that function, generate a waveform in a buffer. A triangle-shaped wave may not sound too awful and should be simple enough to generate.
The base "la" (A) is 440Hz, that is, 440 cycles of the waveform of your choice per second.
The other notes can be obtained by multiplying/dividing by 2^(1/12) (about 1.05946309) for each half tone above/below this base frequency. You will need to know the sample rate the output device is set up for (that's probably an argument to another ALSA function). If the device rate is, say, 44100 Hz, and you want to play the base "la", each period of your waveform should occupy 44100 / 440, i.e. about 100 samples. Pay attention to the sample width and the number of channels the device is configured for, too.
Explanation: there are 12 half tones in an octave, and an octave is exactly half (lower pitched) or double (higher pitched) the frequency. Multiplying 12 times by 2^(1/12) multiplies by 2, so each half tone sits a factor of 2^(1/12) above the previous one.
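A minimal sketch of that approach, assuming S16_LE mono at 44100 Hz on the "default" device and using the snd_pcm_set_params() convenience call; error handling is mostly omitted. Build with gcc tone.c -lasound.

    #include <alsa/asoundlib.h>
    #include <math.h>

    int main(void)
    {
        const unsigned rate = 44100;
        /* 440 Hz is "la" (A); multiply by pow(2, n / 12.0) for n half tones up. */
        const double freq = 440.0;
        short buf[44100];                 /* one second of 16-bit mono samples */
        snd_pcm_t *pcm;

        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, rate, 1, 500000);   /* mono, resample, 0.5 s latency */

        for (unsigned i = 0; i < rate; i++) {
            double phase = fmod(i * freq / rate, 1.0);   /* position in the cycle, 0..1 */
            double tri = 4.0 * fabs(phase - 0.5) - 1.0;  /* triangle wave, -1..1 */
            buf[i] = (short)(tri * 32000.0);
        }
        snd_pcm_writei(pcm, buf, rate);   /* count is in frames, not bytes */
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }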

Sounds like you want MIDI, not ALSA. ALSA deals with sampled audio (e.g. digital waveforms derived from a CD, WAV, MP3, etc.). It is not a sound synthesis system.

Related

Send Tone from arduino through mono aux cord to speakers via 3.5mm jack

Hello, I am using Arduino C to program my microcontroller to output frequencies via tone() to a piezo buzzer. Currently the frequency is chosen with a potentiometer. I would like to ditch the piezo buzzer and instead output the frequency to a 3.5 mm headphone jack, where either headphones or a mono auxiliary cord with a pair of desktop speakers on the other end can be plugged in. What is the best and most efficient way to do this as far as coding/translating the frequency to be output over the 3.5 mm jack?
Update:
So for my 3.5 mm audio jack, I have the 10k ohm resistor inline with the ground connection, one pin running to the positive lead, and the other pin running to digital pin 4 on my Arduino. I have tried testing multiple frequencies on a pair of desktop speakers with a subwoofer and comparing them to an actual tone generator app I have on my phone. My prototype seems to emit more of a noisy/fuzzy sound compared to the tone generator app, which is a lot more crisp/clean. Also, frequencies under 100 Hz aren't playing as they should, although the app outputs frequencies under 100 Hz just fine. My three questions are:
1. How can I get the output to be as close as possible to the actual frequency?
2. How can I get the output to be crisp and clean, not noisy and fuzzy?
3. Is there something I'm missing / any ideas how to make the frequencies go below 100 Hz?
I know there are 5 pins on the 3.5 mm audio jack but I'm only using 3; could this be an issue?
Please feel free to ask any questions; I can also upload pictures as needed.
It's exactly the same thing: just send the square wave through the audio jack. Assuming your speakers have their own amplifier (which most desktop speakers do), put a 10k resistor on the output line, hook up the pin and ground to the jack, and you should be fine.
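A minimal sketch of that setup, assuming the pot is on A0 and the jack (through the 10k resistor) is on digital pin 4 as described; the frequency range is arbitrary. Keep in mind that tone() produces a square wave, which will always sound buzzier than a phone app's sine, and on 16 MHz boards it cannot go below about 31 Hz, which may account for part of the sub-100 Hz trouble.

    // Assumed wiring: pot wiper on A0, jack tip on pin 4 via the 10k
    // resistor, jack sleeve to GND.
    const int AUDIO_PIN = 4;
    const int POT_PIN = A0;

    void setup() {
      // Nothing needed: tone() configures the output pin itself.
    }

    void loop() {
      // Map the pot reading (0..1023) onto an audible range; tone()'s
      // lower limit is about 31 Hz on a 16 MHz board.
      unsigned int freq = map(analogRead(POT_PIN), 0, 1023, 31, 2000);
      tone(AUDIO_PIN, freq);
      delay(50);  // small pause so the pitch doesn't jitter on every read
    }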

Generating a tone with PWM signal to a speaker on a PIC32 microcontroller

I'm currently working on generating a tone on a PIC32 device. The information I've found has not been enough to give me a complete understanding of how to achieve this. As I understand it, a PWM signal sends 1's and 0's with a specified duty cycle and frequency, such that it's possible to make something rotate at a certain speed, for example. But to generate a tone this is not enough. I'm primarily focusing on the following two links to create the code:
http://umassamherstm5.org/tech-tutorials/pic32-tutorials/pic32mx220-tutorials/pwm
http://www.mikroe.com/chapters/view/54/chapter-6-output-compare-module/#ch6.4
And also the relevant parts in the reference manual.
One of the links states that to play audio it's necessary to use the timer interrupts. How should these be used? Is it necessary to compute the value of the wave with for example a sine function and then combine this with the timer interrupts to define the duty cycle after each interrupt flag?
The end result will be a program that responds to button presses and plays sounds. If a low pass filter is necessary this will be implemented as well.
If you're using PWM to simulate a DAC and output arbitrary audio (for a simple and dirty tone of a given frequency you don't need this complexity), you want to take audio samples (PCM) and convert each of them into the corresponding duty cycle.
Reasonable audio begins at sample rates of 8 kHz (POTS). So for every sample (every 1/8000th of a second) you'll need to change the duty cycle, and you want these changes to be regular, as irregularities will contribute to audible distortion. You can therefore program a timer to generate interrupts at an 8 kHz rate and, in the ISR, change the duty cycle according to the new audio sample value (the ISR has to read the samples from memory, unless they form a simple pattern and can be computed on the fly).
When you change the duty cycle at a rate of 8 kHz you generate a periodic wave at a frequency of 4 kHz. This is very well audible. Filtering it out in analogue circuitry without affecting the sound that you want to hear is not easy (sharp low-pass filters are tricky/expensive, cheap filters are poor). Instead you can raise the sample rate to either above twice what the speaker can produce (or the human ear can hear), or at least well above the maximum frequency that you want to produce; in this latter case a cheap analogue filter can remove the unwanted periodic wave without much effect on what you want to hear, since you don't need as much sharpness.
Be warned, if the sample rate is higher than that of your audio file, you'll need a proper upsampler/sample-rate converter. Also remember that raising the sample rate will raise CPU utilization (ISR invoked more times per second, plus sample rate conversion, unless your audio is pre-converted) and power consumption.
[I've done this before on my PC's speaker, but it's now ruined, thanks to SMM/SMIs used by the BIOS and the chipset.]
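A minimal sketch of that ISR approach, with heavy assumptions: the PWM carrier runs on output compare module OC1 clocked from Timer 3 (period in PR3), a separate Timer 2 is configured to interrupt at the 8 kHz sample rate, and the samples are 8-bit PCM. Register and vector names follow PIC32MX/XC32 conventions; check them against your device's datasheet.

    #include <xc.h>
    #include <sys/attribs.h>

    #define NUM_SAMPLES 8000
    extern const unsigned char samples[NUM_SAMPLES];  /* 8-bit PCM at 8 kHz */
    static volatile unsigned int sample_idx = 0;

    /* Fires at the 8 kHz sample rate (Timer 2): scale the 8-bit sample
       onto the PWM period so the average output voltage tracks the audio. */
    void __ISR(_TIMER_2_VECTOR, IPL4SOFT) SampleTick(void)
    {
        OC1RS = ((unsigned int)samples[sample_idx] * (PR3 + 1)) >> 8;
        sample_idx = (sample_idx + 1) % NUM_SAMPLES;
        IFS0bits.T2IF = 0;   /* clear the Timer 2 interrupt flag */
    }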
For playing simple tones through PWM you first need a driver circuit, since the PIC cannot drive a speaker directly. Typically a push-pull stage is used, as actively driving both high and low results in better speaker response. It also allows for a series capacitor, acting as a simple high-pass filter to protect the speaker from long DC periods.
This, for example, should work: http://3.bp.blogspot.com/-FFBftqQ0o8c/Tb3x2ouLV1I/AAAAAAAABIA/FFmW9Xdwzec/s400/sound.png
(source: http://electro-mcu-stuff.blogspot.be/ )
The PIC32 has hardware PWM that you can program to generate PWM at a specific frequency and duty cycle. The PWM frequency controls the tone, thus by changing the PWM frequency at intervals you can play simple music. The duty cycle affects the volume, but not linearly. High duty cycles come very close to pure DC and will be cut off by the capacitor, low duty cycles may be inaudible. Some experimentation is in order.
The link mentions timer interrupts because they are not talking about playing simple notes but using PWM + a low pass filter as a simple DAC to play real audio. In this case timer interrupts would be used to update the duty cycle with the next PCM sample to be played at regular intervals (the sampling rate).
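As a sketch of the simple-notes case: make the PWM frequency equal the note frequency and use a 50% duty cycle. The 40 MHz peripheral clock, the 1:8 Timer 2 prescaler, and the use of OC1 are all assumptions; adjust them for your configuration.

    #define PBCLK_HZ  40000000u   /* assumed peripheral bus clock */
    #define PRESCALE  8u          /* assumed Timer 2 prescaler, 1:8 */

    /* Retune the PWM so one PWM period equals one period of the note. */
    static void play_note(unsigned int freq_hz)
    {
        unsigned int ticks = PBCLK_HZ / PRESCALE / freq_hz;
        PR2   = ticks - 1;    /* Timer 2 period sets the tone frequency */
        OC1RS = ticks / 2;    /* 50% duty cycle */
    }

    /* play_note(440);  "la"/A; 494 for B, 523 for C5, and so on */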

Changing the playback rate of a buffer in C?

I am using an Altera DE2 FPGA board and playing around with the SD card port and audio Line Out. I'm programming in VHDL and C, but the C portion is where I'm stuck due to lack of experience/knowledge.
Currently, I can play a .wav file from the SD card to the Line Out. I'm doing this by reading and sending the SD card data > FIFO > Audio Codec > Line Out. Ignoring all the other details, the code simply is:
UINT16 Tmp1=0;
...
Tmp1=(Buffer[i+1]<<8)|Buffer[i]; //loads the data from the SD card to Tmp1
//change the buffer rate?
IOWR(AUDIO_BASE, 0, Tmp1); //sends Tmp1 data to Line Out
If I were to print Tmp1, it's basically the points on a sine wave. What I want to do now is fiddle with how the sound plays by changing the playback rate (ideally I want to play the sound up or down an octave, which is just double or half the frequency). Can anyone provide some suggestions on how I can do this in the section:
//change the buffer rate?
Is it possible in C to write a few lines of code in that section to obtain what I'm looking for? ie. change how fast I'm reading from the Tmp1 buffer to the AUDIO_BASE.
Thanks in advance!
~Sarengo
If the IOWR interface provides no such option, then you will have to do it yourself: you have to re-sample the sound. The theory is covered in any reference on sample-rate conversion (resampling).
Raising the frequency by a multiple is easy: just drop samples. E.g. raise the frequency by a factor of 2 by dropping every second sample from the buffer, so that it then has half the size.
Lowering the frequency is harder, because you need information you don't have: the samples in between the existing samples. You could start with simple linear interpolation, and if you think that it does not sound good enough you can change it for something more advanced. E.g. you can halve the frequency by inserting between every two samples a new sample with their average value. If your waveform looks like this: 5 9 7 3, you would get 5 7 9 8 7 5 3.
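A minimal sketch of both directions in your playback loop, untested: Buffer, i, Tmp1 and IOWR(AUDIO_BASE, 0, ...) are from your code; "len" (the number of bytes in Buffer), mono 16-bit little-endian samples, and the INT16 type are assumptions.

    /* Octave up: drop every second sample by stepping 4 bytes at a time. */
    for (i = 0; i + 1 < len; i += 4) {
        Tmp1 = (Buffer[i + 1] << 8) | Buffer[i];
        IOWR(AUDIO_BASE, 0, Tmp1);
    }

    /* Octave down: after each sample, also write the average of it and
       the next one (the linear interpolation described above). */
    for (i = 0; i + 3 < len; i += 2) {
        INT16 a = (INT16)((Buffer[i + 1] << 8) | Buffer[i]);
        INT16 b = (INT16)((Buffer[i + 3] << 8) | Buffer[i + 2]);
        IOWR(AUDIO_BASE, 0, (UINT16)a);
        IOWR(AUDIO_BASE, 0, (UINT16)((a + b) / 2));
    }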

Creating MIDI Files - Explanation of time division in header chunk

Edit: Posted on Audio/Video Production site https://video.stackexchange.com/questions/4148/creating-midi-files-explanation-of-time-division-in-header-chunk
I've been reading about MIDI file structure as I'm interested in writing an application that would read/write files in this format, but I'm a little confused about time divison in the header chunk.
My understanding is that this part is essentially 16 bits, where if the sign bit is 1 the remaining bits specify an SMPTE timecode, and if it's 0 then the bits specify the number of ticks/pulses per quarter note (PPQ).
My questions, specifically, are:
What does a higher/lower PPQ do to a MIDI file? Does this change the quality of the sound? My understanding is that it does not affect tempo.
How does the SMPTE timecode affect the MIDI file in playback?
Essentially, I'm trying to understand what these actually mean to the end result.
I'm not registered over on that forum, so I'll paste it here:
I can answer part 1.
PPQ absolutely affects the tempo of the MIDI file. It doesn't change the quality of the sound; it changes the rate at which events are processed.
Tempo is defined in terms of microseconds per quarter note, and PPQ is the number of ticks (pulses) per quarter note, so each tick lasts tempo / PPQ microseconds. If you change the PPQ, you effectively change the rate at which the file is played back. A standard value for PPQ is 480. If the only change you make to a file is to double the PPQ, each unchanged delta time now represents half as much real time, so the file plays back twice as fast.
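For concreteness: at the default tempo of 500,000 µs per quarter note, PPQ = 480 makes each tick 500000 / 480 ≈ 1042 µs long; doubling PPQ to 960 shrinks a tick to ≈ 521 µs, so the same delta-time values are processed twice as fast.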
I know this is an old question, but it wasn't answered completely, or entirely accurately.
All MIDI files use delta times. There are no absolute timings in a MIDI file, SMPTE or not.
In original MIDI format files, the header timing information specifies the PPQN, or Pulses Per Quarter Note. The SetTempo meta-event specifies the number of microseconds per quarter note (the tempo). The MIDI event delta information specifies the number of pulses between this event and the last event.
In SMPTE-style MIDI files, the header timing information specifies two values - the frames per second, and frame subdivisions. Frames per second is literally FPS (some values need to be adjusted, like 29 being really 29.97). Frame subdivisions can be thought of as the number of pulses per frame. The MIDI event delta information specifies the number of frame subdivisions (or pulses) since the last event.
One important difference is that SMPTE files do not use the SetTempo meta-event. All time scales are fixed by the header timing field.
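For reference, a small C helper that splits the division field along the two cases described above; the 16-bit value is assumed to have been read big-endian from the header chunk.

    #include <stdint.h>
    #include <stdio.h>

    static void describe_division(uint16_t division)
    {
        if (division & 0x8000) {
            /* SMPTE: high byte is the negative frames-per-second value
               (-24, -25, -29 meaning 29.97, -30); low byte is pulses per frame. */
            int fps = -(int8_t)(division >> 8);
            int pulses_per_frame = division & 0xFF;
            printf("SMPTE: %d fps, %d pulses/frame\n", fps, pulses_per_frame);
        } else {
            /* Metrical time: the whole value is PPQN. */
            printf("%u pulses per quarter note\n", division);
        }
    }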
@LeffelMania got it right, but I just wanted to add that SMPTE is simply a different way of keeping the time in your arrangement. If you use SMPTE, then you get a fixed, absolute time scale for the events, but otherwise the events are relative to the previous ones.
In my experience, most MIDI files use the conventional way of relative event timing (ie, not SMPTE), as this is easier to work with.

Can anyone say how sampling rate and frame size are related?

Can anyone say how sampling rate and frame size are related?
I decoded a .spx file to .wav with a sampling rate of 10 kHz at 16 bits. The frame size applied during the decoding process was 640.
The decoded file is playable in VLC, but I want to play that file in Flex.
Flex supports rates of 44.1 kHz, 22.05 kHz and 11.025 kHz only. I want to increase the sampling rate during the decoding process. I know how to do that in the code, but I guess the frame size also should be increased. I don't know the dependency between these two. Can anyone help?
Frame size and sampling rate are generally orthogonal concepts. They don't need to affect each other unless a particular format demands it.
For PCM .wav, the frame size will always be bits per sample × number of channels. In your case, 16 bits for mono, or 32 bits for stereo.
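To put numbers on that rule: at 16-bit mono each frame is 1 × 16 bits = 2 bytes, so (reading your 640 as bytes) a decode buffer of 640 bytes holds 320 frames, and that stays true whatever rate you resample to.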
Also, there is no need to change the decoding frame size just because you later apply resampling.
You are mixing up two independent tasks: Speex decoding and resampling. The mentioned frame size should be considered only as the size of a buffer that contains PCM samples. These PCM samples you should pass to a resampler (for example SSRC: http://shibatch.sourceforge.net/).
Frame size depends on the codec used to compress the original data. A frame will contain an integral number of samples (320 in this case).
If I'm correct in thinking so, raw audio has a frame size equal to the sample size. However, some codecs perform compression over a range of samples. Usually the larger the frame size, the more memory is needed to compress the data, but the better the compression you can potentially achieve.
You can't increase the sampling rate during decoding, but you could resample the decoded audio. Presumably you're actually re-encoding the data to send it to Flex? You'll need to have a look at the codec you're using to re-encode. Which codec are you using?
Irrespective of the number of channels used, the frame rate and sampling rate are the same, because that is the purpose of TDM (time-division multiplexing): new channels are introduced in the gaps left between two consecutive samples.
As the number of channels increases, the time allotted to each channel, and thereby to each bit, decreases. But the time gap between consecutive samples of any one channel remains constant and is equal to the total frame time.
I.e. time gap between samples = frame time, hence the frame rate is equal to the sample rate.
