I couldn't figure out exactly how to word it in a Google search, so I thought I would ask here. How does it play a sound, decode it, everything? Well, not everything, but you get what I mean. I'm just curious, as I know it would probably be way too hard to do anything with at my current knowledge.
It's not the easiest subject to get into. I don't have a lot of experience with it, but I'd guess that OpenAL is a good, albeit not beginner-friendly, way to go.
Audio data comes in the form of "samples". You can think of each sample as a "point" on a sound wave. If you take all these points and put them on a grid where the X axis is time and the Y axis is the value of the sample, it makes an audible sound wave.
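To make the "points on a grid" idea concrete, here's a small sketch that generates the samples of a sine wave. The sample rate and frequency are just example values I've picked (44100 Hz is a common rate, 440 Hz is the pitch A4); nothing here comes from a particular audio library.

```python
import math

# Example parameters -- these are assumptions, not requirements.
SAMPLE_RATE = 44100   # samples per second (how finely the X axis is sliced)
FREQUENCY = 440.0     # pitch of the tone, in Hz
DURATION = 0.01       # seconds of audio to generate

# Each sample is one "point" on the wave:
#   X = n / SAMPLE_RATE (time), Y = the value computed below (amplitude).
samples = [
    math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * DURATION))
]

print(len(samples))   # 441 points of a 440 Hz sine wave
print(samples[0])     # 0.0 -- the wave starts at a zero crossing
```

Feed a long enough run of these values to a sound card at a steady rate and you hear a pure tone.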
Audio libs take this data and stream it over time.
When you use an audio lib, you must keep supplying new audio data so that the sound wave can be played back. Think of it like filling an hourglass -- you must keep filling the top with audio data (sand) while it slowly drains out the bottom at a steady rate. If you run out of sand, you get ugly audible breaks in the sound (known as buffer underrun).
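The hourglass can be modeled with a plain queue of buffers. The function names below (`fill_buffer`, `playback_tick`) are made up for illustration; real libraries like OpenAL expose this as buffer queuing or playback callbacks instead.

```python
from collections import deque

BUFFER_SIZE = 1024   # samples per buffer -- an arbitrary example size
queue = deque()      # the "hourglass": buffers waiting to be played

def fill_buffer():
    """Producer: keep pouring sand (audio data) into the top."""
    queue.append([0.0] * BUFFER_SIZE)   # silence, for simplicity

def playback_tick():
    """Consumer: the sound card drains one buffer at a steady rate."""
    if not queue:
        return "underrun"   # ran out of sand: an audible glitch
    queue.popleft()
    return "ok"

# Queue three buffers, then let playback run for four ticks.
for _ in range(3):
    fill_buffer()
results = [playback_tick() for _ in range(4)]
print(results)   # ['ok', 'ok', 'ok', 'underrun'] -- the 4th tick starved
```

As long as the producer stays ahead of the consumer, playback is smooth; the moment the queue empties, you get the break described above.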
How the sound wave actually represents a sound is another topic entirely. Basically it comes down to summing lots of sine waves. The component sine waves at integer multiples of a base frequency are called "harmonics"; mixing different harmonics at different strengths at the same time produces different-sounding tones, even at the same pitch.
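A minimal sketch of that idea: sum sine waves at integer multiples of a fundamental frequency. The amplitudes chosen below are arbitrary examples, and `sample_at` is a name I've invented, not part of any library.

```python
import math

SAMPLE_RATE = 44100
FUNDAMENTAL = 220.0   # Hz; harmonics sit at integer multiples of this

def sample_at(n, harmonics):
    """Sum of sine waves: harmonics maps multiple -> amplitude."""
    t = n / SAMPLE_RATE
    return sum(
        amp * math.sin(2 * math.pi * FUNDAMENTAL * k * t)
        for k, amp in harmonics.items()
    )

# A pure tone vs. the same pitch with 2nd and 3rd harmonics mixed in.
pure_tone = [sample_at(n, {1: 1.0}) for n in range(512)]
richer    = [sample_at(n, {1: 1.0, 2: 0.5, 3: 0.25}) for n in range(512)]

# Same fundamental frequency, but the extra harmonics reshape the wave,
# which the ear hears as a different "character" (timbre) of sound.
print(pure_tone[100] != richer[100])   # True -- the waveforms differ
```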