So I was perusing the c++ and networking tags on S.O. today, and I kept bumping into separate but similar questions about how big to make the buffer you recv a datagram of unknown size into, and I kept seeing massive numbers like 4000 bytes. I found this a bit ridiculous, though I understand it's probably quite necessary. The thing is, I don't really like the idea of having to allocate these massive buffers, and along the way amidst my perusing I found a post stating that I could recv without actually consuming the data by passing the MSG_PEEK flag (I had misremembered it as NO_BLOCK). recv is supposed to return the number of bytes read, yes? So my question is: could I just call recv with MSG_PEEK, a null pointer for the buffer, and some arbitrarily large length to learn the datagram's actual size, and then allocate a block of exactly that size?
That's just an arbitrarily large number, not an actual size. As you can see, I pass a null pointer as the buffer, so it's not meant for making a buffer of that size. It is simply so that I don't accidentally cut off bytes that are actually in the datagram with my imaginary limit. If the limit is actually 64k, then I guess I would revise it to something like 65536.
> It is simply so that I don't accidentally cut off bytes that are actually in the datagram with my imaginary limit.
So what's wrong with
- calling recv with a large buffer to actually fetch the data,
- finding out how large the datagram actually is from recv's return value,
- then calling malloc to allocate a buffer of exactly that size,
- followed by a memcpy into it?
...waste of space? And not to mention that an extra memcpy isn't exactly a performance booster. I guess I'll just have to settle for the large buffer. Hm. Or maybe use realloc to shave the buffer down to size afterwards... eh. Oh well. Thanks!