I don't know how they hash files -- the naive approach I can think of is to read the entire file into memory and hash it, but for very large files that won't work: you'd run out of memory (and it would take forever). My first thought for large files was to hash portions at a time and then re-hash the concatenated digests, but that would be inconsistent between implementations -- with a 4 GiB file, if one MD5 program used 512 KiB chunks and another used 1024 KiB chunks, you'd end up with different hashes. It turns out that's not how it works, though: MD5 (like other standard hash functions) is defined incrementally. It keeps a small fixed-size internal state that is updated as data streams through it, so you can feed it a file in chunks of any size, using constant memory, and always get the same digest as if you'd hashed the whole file in one go.
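A minimal Python sketch of the streaming approach, using the standard `hashlib` module. It hashes the same file with 512 KiB chunks and with 1024 KiB chunks, and both match the digest of hashing the whole file at once -- the chunk size only affects how the bytes are fed in, not the result:

```python
import hashlib
import os
import tempfile

def md5_of_file(path, chunk_size):
    """Stream a file through MD5 in fixed-size chunks.

    Memory use stays around chunk_size no matter how big the file is,
    because MD5 only keeps a small internal state between updates.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Demo: write a few MiB of random data to a temp file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(3 * 1024 * 1024))  # 3 MiB
    path = tmp.name

a = md5_of_file(path, 512 * 1024)    # 512 KiB chunks
b = md5_of_file(path, 1024 * 1024)   # 1024 KiB chunks

# Whole-file hash for comparison (fine here, the demo file is small).
with open(path, "rb") as f:
    whole = hashlib.md5(f.read()).hexdigest()

print(a == b == whole)  # chunk size doesn't change the digest

os.remove(path)
```

So two MD5 tools using different buffer sizes still agree on the hash, which is why `md5sum` on one machine matches a download's published checksum from another.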
Hashing the filename also doesn't work -- it defeats the whole point of hashing files, since two different files can share a name and identical files can have different names. Hashing the inode number doesn't work either: inode numbers are assigned by the local filesystem, so when you download a file it only has a 1 in n chance of landing on the same inode (where n is the number of inodes on the filesystem, which itself varies between filesystems). The only logical way to hash a file is to hash its contents.