Because adding the delay will surely increase the playing time. Is my logic correct?
Yes. More samples = longer audio. If you double the number of samples you're playing, the audio will be twice as long.
So, if I want to add delay to this audio, do I just duplicate @ to @ @? (hope you get what I mean)
This will have the desired effect. A side effect of this is that everything will play an octave lower (think of an old cassette player with the battery dying -- it's slow and low pitched).
If you look at the wave you're generating, you can see why. For example, let's say you have a simple square wave:
(1 digit = 1 sample)
00005555000055550000555500005555
Here we have a wave that repeats every 8 samples and cycles 4 times (i.e. a tone of 4 Hz)
Duplicating the samples gets you this:
00000000555555550000000055555555
Now we have a wave that repeats every 16 samples, and cycles only 2 times in the same window of time (a tone of 2 Hz -- half of what we had before)
This is a simplistic example, of course, but the concept applies to complex waveforms also.
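To make that concrete, here's a minimal Python sketch of the naive approach (the names are made up for illustration, and it assumes the audio is already a plain list of integer PCM samples rather than a WAV file):

def double_naive(samples):
    # Repeat each sample twice: duration doubles, pitch drops an octave.
    out = []
    for s in samples:
        out.append(s)
        out.append(s)
    return out

# The 4 Hz square wave from above (4 cycles of 8 samples each):
square = [0, 0, 0, 0, 5, 5, 5, 5] * 4
print("".join(str(s) for s in double_naive(square)))
# Each cycle now spans 16 samples, so only 2 of them fit in the original 32-sample window.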
If you want to slow the audio down without changing the pitch, there are ways to do it, but they're pretty involved. If you're interested I can get into it, but for now I'll just assume the above is what you're looking for.
EDIT:
For what it's worth, I should mention that the above method works, but it's naive. A better method would employ linear interpolation when adding samples. Basically, this means "blending" the samples together so the transitions are smoother, which makes for a better sound. For example, consider the audio below:
02573461 (original)
0022557733446611 (naive duplicate)
0123567533456311 (interpolated)
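Again, just a sketch under the same assumptions as before (made-up names, samples as a plain list of integers); the floor division is why the in-between values above are rounded down:

def double_interpolated(samples):
    # For every sample, also emit the midpoint between it and the next one,
    # so the inserted samples "blend" the transitions.
    out = []
    for i, s in enumerate(samples):
        out.append(s)
        if i + 1 < len(samples):
            out.append((s + samples[i + 1]) // 2)  # average of the two neighbours
        else:
            out.append(s)  # the last sample has no neighbour, so just repeat it
    return out

audio = [0, 2, 5, 7, 3, 4, 6, 1]
print("".join(str(s) for s in double_interpolated(audio)))
# Prints 0123567533456311, matching the interpolated line above.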