I have a question. I've been researching sound APIs and libraries, only to realise that I don't seem to understand the underlying audio concepts in the first place.
A lot of APIs require that you supply a callback function. I'm confused about what this does and how to implement one. So, you would initialize the API, then open an audio stream, which normally invokes an audio callback function. Could someone help explain this?
Audio is different from graphics because there is a direct relationship between time and the amount of data: the more audio data you have, the longer the sound lasts.
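To make that concrete, here's a tiny C snippet showing the time/data relationship. The format numbers (48 kHz, stereo, 16-bit) are just an example:

```c
#include <stdio.h>

int main(void) {
    /* Example format: 48 kHz stereo, 16-bit samples. */
    const int sample_rate      = 48000;  /* frames per second */
    const int channels         = 2;
    const int bytes_per_sample = 2;

    /* One second of sound at this format costs a fixed number of bytes. */
    int bytes_per_second = sample_rate * channels * bytes_per_sample;
    printf("%d bytes per second\n", bytes_per_second);  /* 192000 */

    /* Conversely, a buffer of N frames lasts N / sample_rate seconds. */
    double buffer_seconds = 4096.0 / sample_rate;
    printf("a 4096-frame buffer holds %.1f ms of sound\n",
           buffer_seconds * 1000.0);                    /* 85.3 ms */
    return 0;
}
```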
The way it typically works with streaming sounds is this: you have a buffer that you fill with audio data, and the sound card slowly drains that buffer, playing the sound as it needs it. It's your job as the programmer to keep the buffer full, because if it ever empties completely there will be audible (very ugly) breaks in the sound. This is known as a buffer underrun.
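A common way to implement that buffer is a ring buffer: your code writes samples in at one end, the audio callback reads them out at the other. Here's a minimal single-threaded sketch; note that a real implementation would need atomic indices or a lock, since the callback usually runs on a separate audio thread:

```c
#include <string.h>

#define RING_FRAMES 8192

typedef struct {
    float data[RING_FRAMES];
    int   read_pos;   /* advanced by the audio callback   */
    int   write_pos;  /* advanced by your streaming code  */
} Ring;

/* Frames currently stored, i.e. what the sound card can still play. */
static int ring_fill(const Ring *r) {
    return (r->write_pos - r->read_pos + RING_FRAMES) % RING_FRAMES;
}

/* Producer side: queue new audio while there is room.
   Returns how many frames were actually accepted. */
static int ring_write(Ring *r, const float *src, int frames) {
    int free_frames = RING_FRAMES - 1 - ring_fill(r);
    if (frames > free_frames)
        frames = free_frames;
    for (int i = 0; i < frames; i++) {
        r->data[r->write_pos] = src[i];
        r->write_pos = (r->write_pos + 1) % RING_FRAMES;
    }
    return frames;
}

/* Consumer side (called from the audio callback): drain into dst.
   If the ring runs dry we pad with silence -- that is the underrun. */
static int ring_read(Ring *r, float *dst, int frames) {
    int got = 0;
    while (got < frames && r->read_pos != r->write_pos) {
        dst[got++] = r->data[r->read_pos];
        r->read_pos = (r->read_pos + 1) % RING_FRAMES;
    }
    memset(dst + got, 0, (frames - got) * sizeof(float));
    return got;
}
```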
A good analogy is an hourglass. You put sand (audio data) in the top and it slowly drains (plays) out the bottom at a steady rate. If the top runs out of sand, the flow is disrupted, so you have to keep filling the top of the hourglass with sand.
This is what the audio callback does. Typically the API invokes your callback with a buffer that you must fill with audio data; whatever you write into it is eventually sent to the output. As playback continues and the buffer drains, the callback keeps firing and you keep supplying more sound.
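For example, with PortAudio (one library that uses this callback model), a minimal tone-playing program looks roughly like the sketch below. Error checking is omitted for brevity:

```c
#include <math.h>
#include <portaudio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 44100

typedef struct { double phase; } SineState;

/* PortAudio calls this whenever the sound card needs more data.
   We must fill `output` with exactly frameCount frames. */
static int audio_callback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    SineState *s = (SineState *)userData;
    float *out = (float *)output;
    (void)input; (void)timeInfo; (void)statusFlags;

    for (unsigned long i = 0; i < frameCount; i++) {
        *out++ = (float)(0.2 * sin(s->phase));        /* quiet 440 Hz tone */
        s->phase += 2.0 * M_PI * 440.0 / SAMPLE_RATE;
        if (s->phase >= 2.0 * M_PI)
            s->phase -= 2.0 * M_PI;
    }
    return paContinue;  /* tell PortAudio to keep the stream going */
}

int main(void)
{
    SineState state = { 0.0 };
    PaStream *stream;

    Pa_Initialize();
    /* Mono output, 32-bit float samples; let PortAudio pick a buffer size. */
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, SAMPLE_RATE,
                         paFramesPerBufferUnspecified,
                         audio_callback, &state);
    Pa_StartStream(stream);  /* the callback now fires on an audio thread */
    Pa_Sleep(2000);          /* main thread just waits while sound plays */
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```

Notice that `main` does almost nothing once the stream is started: the library drives the callback on its own schedule, which is exactly the "you don't call it, it calls you" part that tends to be confusing at first.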
Alright, I'll post an example after school. There are plenty of things I still need to figure out, but I can't seem to find much documentation on this anywhere...