Stable Low Latency with sockets/winsock

Hello,
I have a question that's fairly general; I hope this is still the right place to ask.
I wrote a server/client test application in C++ with sockets/Winsock that runs on macOS and Windows. What I'm testing is the overall latency until I get an answer from the server. To be more precise, I send MIDI data to the server and get audio data in return.

- The client sends a TCP packet with a request about every 16 ms.
- The server only responds to those requests, sending a TCP packet back to the client.
That's about 63 TCP packets per second in each direction.
My minimum latency until I get an answer from the server is around 30 ms.
The audio data I receive is buffered to handle latency issues.
With a 50 ms audio latency buffer it's relatively unstable: a lot of the server's responses don't arrive within the 50 ms window, so I get a lot of dropouts in the audio. With a 100 ms buffer the signal gets very stable. But every now and then it still happens: the app might run for two hours straight without any hiccups, then I get several TCP packets in a row that arrive at the client with much higher latency. Maybe a packet got lost and had to be resent? I don't know.
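For context, this is roughly what the client loop does. It's a stripped-down sketch, not my real code; the names and buffer sizes are placeholders:

// Stripped-down sketch of the client request loop (Winsock).
// Assumes WSAStartup() succeeded and 'sock' is already a connected
// TCP socket; error handling and the real MIDI payload are omitted.
#include <winsock2.h>   // link with ws2_32.lib
#include <chrono>
#include <thread>

void requestLoop(SOCKET sock)
{
    // Disable Nagle's algorithm so each small request is sent
    // immediately instead of being coalesced with the next one.
    BOOL noDelay = TRUE;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
               reinterpret_cast<const char*>(&noDelay), sizeof(noDelay));

    char request[64]  = {};   // MIDI data would go here
    char response[4096];      // audio data arrives here

    for (;;)
    {
        send(sock, request, sizeof(request), 0);            // ~63 requests/s
        int n = recv(sock, response, sizeof(response), 0);  // blocking wait
        if (n <= 0)
            break;                                          // closed or error
        // (A real protocol needs framing -- TCP is a byte stream --
        // and a timer instead of sleep so the period stays at 16 ms.)
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}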
I've tried pretty much everything to get a lower, more stable, reliable latency, but so far I've failed.
I know it's hard to answer without looking at the code, but my question is really just this: is it possible to get a very stable connection with TCP where all packets arrive within a window of 50 ms, or maybe 75 ms? Or is that simply not possible, since nobody knows if and when a packet might get lost? I'm wondering whether I should keep researching this problem or whether this is already as good as it gets with TCP.
Another option I was thinking about is to use UDP and send my data at least twice, so that if one packet gets lost, the other copy might still arrive in time.
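Something like this is what I had in mind (just a sketch; sendRedundant and the 4-byte header are made up for illustration):

// Sketch of the redundant-UDP idea: prefix each payload with a
// sequence number and send the datagram twice, so the copy can
// cover a lost packet at the cost of doubled bandwidth.
#include <winsock2.h>
#include <cstdint>
#include <cstring>

void sendRedundant(SOCKET sock, const sockaddr_in& server,
                   const char* payload, int len, std::uint32_t seq)
{
    char packet[1500];
    if (len + static_cast<int>(sizeof(seq)) > static_cast<int>(sizeof(packet)))
        return;                                  // keep it in one datagram

    std::memcpy(packet, &seq, sizeof(seq));      // 4-byte sequence header
    std::memcpy(packet + sizeof(seq), payload, len);
    int total = static_cast<int>(sizeof(seq)) + len;

    for (int i = 0; i < 2; ++i)                  // send the same datagram twice
        sendto(sock, packet, total, 0,
               reinterpret_cast<const sockaddr*>(&server),
               static_cast<int>(sizeof(server)));
}

On the receive side I'd remember the highest sequence number already played and silently drop anything seen before.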
Or maybe someone knows the right place/forum to ask questions like this?

Thanks in advance
Sometimes UDP is the answer. I highly recommend you try it and see if it gives you what you want. I wouldn't bother sending everything twice, though. You won't notice the missed packets; if you do, you need to get some network engineering going and forget the software side of it for now.

Many such programs 'cheat' and buffer up so that latency can be absorbed. That is, the data is actually sent faster than it is needed, and the receiving machine buffers up a second or so of packets. Then, if a packet is slow to arrive, it keeps playing the extras it has already stored while it waits; once packets start arriving again, it builds the buffer back up. For real time (get and transmit ASAP) you can't send the data faster than it exists, of course, so a small delay from speak to hear is introduced in the system (typically about a second).
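The idea in code, stripped to the bones (made-up names, no threading or real audio handling, just the mechanism):

// Bare-bones jitter buffer sketch. Incoming packets are stored by
// sequence number; playout runs at a fixed cadence and a missing
// packet is papered over by repeating the last good one.
#include <cstdint>
#include <map>
#include <vector>

class JitterBuffer
{
public:
    explicit JitterBuffer(std::uint32_t depth) : depth_(depth) {}

    void push(std::uint32_t seq, std::vector<char> audio)
    {
        if (seq >= next_)                  // anything older is too late
            buf_[seq] = std::move(audio);
    }

    // Called once per playout tick (e.g. every 16 ms).
    std::vector<char> pop()
    {
        // Hold playback until depth_ packets are queued; this is the
        // latency you trade away for stability.
        if (!started_)
        {
            if (buf_.size() < depth_)
                return last_;              // still filling: play silence
            started_ = true;
        }
        auto it = buf_.find(next_);
        if (it != buf_.end())
        {
            last_ = std::move(it->second);
            buf_.erase(it);
        }
        // If the packet was missing, last_ still holds the previous
        // audio, so playback repeats it instead of stalling.
        ++next_;
        return last_;
    }

private:
    std::map<std::uint32_t, std::vector<char>> buf_;
    std::vector<char> last_;               // last good packet
    std::uint32_t next_ = 0;               // next seq due for playout
    std::uint32_t depth_ = 0;
    bool started_ = false;
};

At 63 packets per second (~16 ms apiece), a depth of 4 to 6 packets corresponds to roughly 64 to 96 ms of added latency, which is about the range you're already experimenting with.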

A dedicated network can be real time (it needs configuration), but if you have standard routers, the open Internet, wireless, or other stuff in the path, you will never be able to guarantee a specific latency -- stuff happens. What exactly are you trying to do, if you don't mind my asking? If it's just a standard soft-phone, 'buffer it' is probably the answer you were looking for.

Smaller data is often 'safer'. You can try degrading the quality so it compresses more, and see if that helps.
Thanks for the info.
What I'm trying to do is really just send MIDI information to a remote server. The server runs a virtual instrument that is played by the MIDI data and returns the audio data over the Internet; this way you can play an instrument that is located somewhere else, which is why I need a fast response. The server reacts to every MIDI request I send. So in a way it's like a soft-phone, but it's important that audio packets are not dropped if possible.
In a LAN environment I can get down to stable, near-zero latency -- basically real time. But on the Internet I still don't know where the latency spikes come from.

One possibility is that packets get lost, of course. But how often do packets actually get lost? Right now my tests send 63 TCP packets per second in each direction. Sometimes, when I buffer the returned audio to 70 ms of latency, I can go 30 minutes without problems before the hiccups return. That is 63 * 60 * 30, or roughly 113,000 packets per direction (about 226,000 in both directions), before I see spikes again. Would I really lose packets that often?

Another possibility I read about is that the server's memory might get heavily fragmented, and after a while some kind of garbage collection kicks in on the socket server? I don't know if that's true. I'm running a Windows server, by the way; the clients are Mac or Windows.
You never will, lol :)
The Internet... your ISP's server is busy playing 100,000 utoob videos for bored teenagers and is overwhelmed. The router up the street has gone bad, so traffic is routed 1000 extra miles just to go across the street. Someone in the middle is on wireless and it's dropping packets because Bob's mom is using her blender, which causes EMI that scrambles the signal. Or whatever else.

Some connections lose a lot of packets (hotel wireless, for example). Some are rock solid (my fiber-optic ISP running hard wires all the way).

It's possible your server is not set up correctly. You can monitor its performance -- either the machine itself or the bandwidth in and out -- to see if it's introducing massive latency.

If it's your server, you can improve it. If it's just the Internet, you can't fix it; all you can do is work around it, which again is probably going to involve buffering.
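One cheap way to tell the two apart is to timestamp every request on the client and log the slow round trips, then see whether they line up with the dropouts. A sketch (names made up; wrap your existing send/recv calls with onSend()/onReceive()):

// Sketch: timestamp each request and log slow round trips, so the
// latency spikes can be lined up against the audio dropouts.
#include <chrono>
#include <cstdio>

class RttLogger
{
public:
    void onSend()
    {
        sent_ = std::chrono::steady_clock::now();
    }

    void onReceive()
    {
        using namespace std::chrono;
        ++count_;
        auto rtt = duration_cast<milliseconds>(steady_clock::now() - sent_);
        if (rtt.count() > 50)   // anything over the buffer budget is a spike
            std::printf("spike: request %llu took %lld ms\n",
                        count_, static_cast<long long>(rtt.count()));
    }

private:
    std::chrono::steady_clock::time_point sent_;
    unsigned long long count_ = 0;
};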

On the Internet, data consistency is positively correlated with latency. You can have high consistency, but you have to give up low latency; you can have low latency, but you have to give up consistency.
If this trade-off is unacceptable, then your packets should not pass through the Internet.