I don't think the hash affects how much memory is used, but it can have a big impact on performance.

unordered_set and unordered_map allocate a number of buckets. The number of buckets is usually nothing you need to worry about: the bucket array is resized automatically as you add more elements (a rehash happens whenever the load factor would exceed max_load_factor()), to keep a good balance between memory usage and lookup speed.
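Here is a minimal sketch of that automatic growth (the exact bucket counts you see will vary between standard library implementations):

```cpp
#include <iostream>
#include <unordered_set>

int main() {
    std::unordered_set<int> s;
    std::cout << "buckets before: " << s.bucket_count() << '\n';

    // The container rehashes automatically whenever inserting would push
    // the load factor (elements per bucket) above max_load_factor().
    for (int i = 0; i < 1000; ++i)
        s.insert(i);

    std::cout << "buckets after:  " << s.bucket_count() << '\n';
    std::cout << "load factor:    " << s.load_factor()
              << " (max " << s.max_load_factor() << ")\n";
}
```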
The hash is used to decide which bucket the element should be stored in. I guess it uses something as simple as

```
hashValue % numberOfBuckets
```

to pick the bucket.
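The standard doesn't actually mandate that exact formula (implementations are free to reduce the hash to a bucket index however they like), but you can observe the mapping through the container's bucket interface:

```cpp
#include <iostream>
#include <unordered_set>

int main() {
    std::unordered_set<int> s = {1, 42, 1000000};

    for (int key : s) {
        // bucket() reports which bucket a key maps to. On common
        // implementations this behaves like hash(key) % bucket_count().
        std::cout << key << " -> bucket " << s.bucket(key)
                  << " of " << s.bucket_count() << '\n';
    }
}
```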
Finding a bucket by index is very fast, but finding an element within a bucket is slow if the bucket contains many elements. With a good hash function the number of elements in each bucket is likely to be close to 1, so lookups will usually be very fast. With a bad hash function you might end up with lots of elements in the same bucket, and searching through that bucket will be slow.
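Here's a small sketch of that worst case (BadHash is a made-up functor that sends every key to the same bucket):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <unordered_set>

// Deliberately terrible hash: every key gets the same hash value, so all
// elements pile up in one bucket and lookups degrade to a linear search.
struct BadHash {
    std::size_t operator()(int) const { return 0; }
};

// Returns the size of the fullest bucket in the set.
template <typename Set>
std::size_t largest_bucket(const Set& s) {
    std::size_t max = 0;
    for (std::size_t b = 0; b < s.bucket_count(); ++b)
        max = std::max(max, s.bucket_size(b));
    return max;
}

int main() {
    std::unordered_set<int> good;
    std::unordered_set<int, BadHash> bad;
    for (int i = 0; i < 1000; ++i) {
        good.insert(i * 37);  // arbitrary distinct keys
        bad.insert(i * 37);
    }
    std::cout << "default hash: " << largest_bucket(good) << " in fullest bucket\n";
    std::cout << "bad hash:     " << largest_bucket(bad) << " in fullest bucket\n";
}
```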
So the reasons I think you should not be doing %size in your hash function are:
1. It is unnecessary because the unordered_set/map is already doing something similar.
2. If the unordered_set/map ever grows beyond the 50 that you used in the hash, your hash can still only produce 50 distinct values, so at most 50 buckets will ever be used, leading to worse performance (see the sketch after this list).
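To make point 2 concrete, here's roughly what I mean. I'm guessing at the shape of your code (Person and id are placeholders for your actual type); the % 50 is the part I'd drop:

```cpp
#include <cstddef>
#include <unordered_set>

// Hypothetical element type standing in for yours.
struct Person {
    int id;
    bool operator==(const Person& other) const { return id == other.id; }
};

// The % 50 caps the hash at 50 distinct values, so no matter how many
// buckets the container allocates, at most 50 of them can receive elements.
struct CappedHash {
    std::size_t operator()(const Person& p) const {
        return static_cast<std::size_t>(p.id % 50);
    }
};

// Without the modulo every bucket is reachable; the container applies its
// own reduction to the current bucket count internally.
struct PlainHash {
    std::size_t operator()(const Person& p) const {
        return static_cast<std::size_t>(p.id);
    }
};

int main() {
    std::unordered_set<Person, PlainHash> people;
    people.insert({12345});
}
```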
I'm no expert in designing good hash functions, so I have no idea if using the ID number as-is is good or not (I guess it's not that bad). I mentioned using std::hash because I thought it was a safe bet: it has been implemented to work reasonably well with any distribution of values. To know for sure you should test the performance with different hashes and see which one performs best.
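For reference, delegating to std::hash is a one-liner. Again, Person and id are stand-ins for your actual names:

```cpp
#include <cstddef>
#include <functional>
#include <unordered_set>

struct Person {
    int id;
    bool operator==(const Person& other) const { return id == other.id; }
};

// Forward to the standard library's hash for the key's underlying type.
struct PersonHash {
    std::size_t operator()(const Person& p) const {
        return std::hash<int>{}(p.id);
    }
};

int main() {
    std::unordered_set<Person, PersonHash> people;
    people.insert({12345});
}
```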