How do x86 processors usually divide numbers while dealing with the rounding error inherent in binary division? That's probably a broad question with a lot of answers, but if you know any of the techniques they use, I'd like to hear them.
How do computers actually deal with it? Why don't we just end up with numbers that are really close to the real answer whenever a program does division? Is it rounding? But then, if the true answer really was that un-rounded number, the result would be wrong... Clearly I'm pretty ignorant on the subject. Haha.
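To make it concrete, here's a rough C sketch of the kind of thing I mean (the exact digits printed are just what I'd expect from typical IEEE 754 doubles, so treat them as illustrative):

```c
#include <stdio.h>

int main(void) {
    /* 1/3 has no finite binary representation, so the stored double
       is only the nearest representable value, not the true fraction. */
    double third = 1.0 / 3.0;

    printf("%.20f\n", third);        /* prints roughly 0.33333333333333331483 */
    printf("%.20f\n", third * 3.0);  /* rounding usually brings this back to exactly 1.00000000000000000000 */

    return 0;
}
```

So the division result is really close to 1/3 but not exact, and yet multiplying it back by 3 seems to give exactly 1 again. Is that the kind of rounding trick the hardware relies on?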