But here's where my code fails: it keeps rapidly switching between the two sounds.
This is because the sound() function was not meant to be used this way. It will only allow you to play one tone at a time.
To play more than one tone at once, you'll have to generate the waveform yourself. As in... the raw samples.
The second way (I understand now what you were trying to say) is to output the amplitude itself, i.e. the samples, with respect to time.
Yes.
xismn's code example of how to create a wav file should be enough to get you started. Simply generate your own samples, write those to a wav, then play it back. I think that's a good starting point.
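To sketch the idea (this is not xismn's code, just my own bare-bones example; the 440 Hz tone, the one-second length, and the file name are all arbitrary), generating samples and writing them out as a canonical wav looks roughly like this:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const uint32_t sampleRate    = 44100;
        const uint16_t channels      = 1;
        const uint16_t bitsPerSample = 16;
        const double   kPi  = 3.14159265358979323846;
        const double   freq = 440.0;              // arbitrary test tone
        const uint32_t numSamples = sampleRate;   // one second

        // Generate the raw samples: amplitude with respect to time.
        std::vector<int16_t> samples(numSamples);
        for (uint32_t n = 0; n < numSamples; ++n) {
            double t = double(n) / sampleRate;
            samples[n] = int16_t(32767.0 * 0.8 * std::sin(2.0 * kPi * freq * t));
        }

        const uint16_t blockAlign = channels * bitsPerSample / 8;
        const uint32_t byteRate   = sampleRate * blockAlign;
        const uint32_t dataSize   = numSamples * blockAlign;
        const uint32_t riffSize   = 36 + dataSize;
        const uint32_t fmtSize    = 16;
        const uint16_t pcm        = 1;            // format tag 1 = PCM

        // Note: the fields are written byte-for-byte, so this assumes a
        // little-endian machine, which is what the wav format expects.
        FILE* f = std::fopen("tone.wav", "wb");
        if (!f) return 1;
        std::fwrite("RIFF", 1, 4, f); std::fwrite(&riffSize, 4, 1, f);
        std::fwrite("WAVE", 1, 4, f);
        std::fwrite("fmt ", 1, 4, f); std::fwrite(&fmtSize, 4, 1, f);
        std::fwrite(&pcm, 2, 1, f);        std::fwrite(&channels, 2, 1, f);
        std::fwrite(&sampleRate, 4, 1, f); std::fwrite(&byteRate, 4, 1, f);
        std::fwrite(&blockAlign, 2, 1, f); std::fwrite(&bitsPerSample, 2, 1, f);
        std::fwrite("data", 1, 4, f); std::fwrite(&dataSize, 4, 1, f);
        std::fwrite(samples.data(), sizeof(int16_t), samples.size(), f);
        std::fclose(f);
        return 0;
    }

That's the whole file: a RIFF header, one fmt chunk, one data chunk. Any standard player should open it.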
Or I can show you how to stream the audio so you don't have to open up a wav in another program. I'll make a post about that tomorrow... but for now I need to sleep.
Without getting too far into it, the reason this technique will never work is that computers are discrete. A computer's precision only goes down to a certain level, and below that you find that, unlike a pure mathematical representation of the number line, there are empty spaces between the numbers. The kind of dual-wave interaction you're describing can't tolerate those empty spaces.
So, because of that, you must combine the output frequencies yourself, in software, before anything reaches the discrete output stage.
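In code terms (again just an illustration, with two arbitrary frequencies): instead of alternating sound(f1) and sound(f2), sum the two waves sample by sample and only then quantize the result:

    #include <cmath>
    #include <cstdint>

    // Mix two tones into one stream: add the waves while they are still
    // continuous doubles, then quantize the sum to a discrete 16-bit value.
    int16_t mixedSample(uint32_t n, uint32_t sampleRate) {
        const double kPi = 3.14159265358979323846;
        double t = double(n) / sampleRate;
        double a = std::sin(2.0 * kPi * 440.0 * t);   // first tone
        double b = std::sin(2.0 * kPi * 554.37 * t);  // second tone
        // Average so the sum stays in [-1, 1], then scale to 16 bits.
        return int16_t(32767.0 * 0.5 * (a + b));
    }

Now the two tones interact before anything is reduced to discrete output values, which is exactly the combining described above.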
Once we have written the samples to our .wav file, will a standard .wav player or library be able to play it? Because it is missing some things like headers and other chunks, as you had said.
Yes. In fact, you're likely to have less trouble playing a "canonical" wave file, stripped of miscellaneous chunks. Whenever I work with audio that I plan to load into memory, I always make sure it's stripped of additional chunks and PADs, because I'm too lazy to write code that handles them for me. Writing that code might be a viable idea down the road.
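If I ever do write that code, it would just walk the chunk list and skip anything unrecognized. A rough sketch (findChunk is a hypothetical helper of my own, and like before it assumes a little-endian machine):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Scan a wav file for the chunk with the given four-byte id, skipping
    // everything else. On success the file position is at the chunk data.
    bool findChunk(FILE* f, const char id[4], uint32_t* outSize) {
        std::fseek(f, 12, SEEK_SET);   // skip past "RIFF", the size, "WAVE"
        char tag[4];
        uint32_t size;
        while (std::fread(tag, 1, 4, f) == 4 && std::fread(&size, 4, 1, f) == 1) {
            if (std::memcmp(tag, id, 4) == 0) { *outSize = size; return true; }
            // Chunks are padded to even byte boundaries -- those are the PADs.
            std::fseek(f, size + (size & 1), SEEK_CUR);
        }
        return false;
    }

With that, findChunk(f, "fmt ", &size) followed by findChunk(f, "data", &size) gets you the two chunks you actually care about, no matter what else is in the file.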
htirwin: Once we have written the samples to our .wav file, will a standard .wav player or library be able to play it?
Yes. libsndfile is a widely used library found in many open-source and commercial programs, as well as in other libraries. It will also read and write many other formats.
It's a cross-platform library, so the same code will work on other operating systems. To manage that, it uses platform-specific code to read and write the files depending on which operating system it's running on. So in the background, on Windows, it's using code similar to what xismn wrote.
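A minimal sketch of what using it looks like, assuming you have it installed and link against it (e.g. -lsndfile):

    #include <sndfile.h>
    #include <cmath>
    #include <vector>

    int main() {
        const int sampleRate = 44100;

        // One second of an arbitrary 440 Hz test tone.
        std::vector<float> buffer(sampleRate);
        for (int n = 0; n < sampleRate; ++n)
            buffer[n] = 0.8f * float(std::sin(2.0 * 3.14159265358979323846 * 440.0 * n / sampleRate));

        SF_INFO info = {};
        info.samplerate = sampleRate;
        info.channels   = 1;
        info.format     = SF_FORMAT_WAV | SF_FORMAT_PCM_16;

        // libsndfile writes the header and chunks for you.
        SNDFILE* file = sf_open("tone.wav", SFM_WRITE, &info);
        if (!file) return 1;
        sf_write_float(file, buffer.data(), (sf_count_t)buffer.size());
        sf_close(file);
        return 0;
    }

Same canonical output as the hand-rolled version earlier in the thread, with all the header bookkeeping done for you.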
Source + Windows binary included. The readme has operating instructions.
It streams any number of voices. All mixing is done in software.
It basically does what I've been explaining in this thread, and I tried to comment the relevant parts. If you have any questions about it, feel free to ask.
voice.h/cpp and the first ~100 lines of main are probably what you'll find most interesting.
(EDIT: Yes, I know the square and sawtooth waves have terrible aliasing... I'm going for algorithmic simplicity here, not sound quality.)
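If you don't feel like downloading anything, here's the basic idea boiled down. This is a simplified sketch rather than the actual voice.h/cpp, but it's the same naive approach, hard edges and all:

    #include <cmath>

    // Simplified sketch of a naive voice -- not the actual voice.h/cpp.
    // The phase runs from 0 to 1 and wraps; each waveform maps phase to
    // an amplitude in [-1, 1]. The hard edges are what cause the aliasing.
    struct Voice {
        double phase = 0.0;
        double freq  = 440.0;  // Hz

        static double sine(double p)   { return std::sin(2.0 * 3.14159265358979323846 * p); }
        static double saw(double p)    { return 2.0 * p - 1.0; }
        static double square(double p) { return p < 0.5 ? 1.0 : -1.0; }

        // Produce the next sample and advance the phase.
        double next(double sampleRate) {
            double s = square(phase);  // swap in sine/saw as desired
            phase += freq / sampleRate;
            if (phase >= 1.0) phase -= 1.0;
            return s;
        }
    };

Mixing any number of voices is then just summing next() across all of them each sample and scaling the result down, which is the software mixing mentioned above.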
At the risk of sounding conceited, I think this is an excellent thread. When I was starting out with DSP, this is exactly the kind of thread I would have loved to stumble upon.