I'm working on a small driver that does a little magic in kernel space. I originally wrote it for XP x86, but I'm now testing it on Windows 7 x86-64.
Anyway, I communicate with the driver by opening "\\.\driver" and passing a pointer to a structure through WriteFile(): basically WriteFile(file, &pointer, sizeof(&pointer), /*...*/). The driver verifies that the buffer passed is at least as big as sizeof(structure *).
Here's the thing: the driver is obviously built for x86-64, but the client application is built for x86. When I call WriteFile() like above, the call fails because sizeof(&pointer) is evaluated at compile time for x86, giving 4 instead of the 8 bytes the x64 driver expects. But if I multiply the pointer size by 2, the call succeeds and the structure is modified correctly! The structure looks something like this (a sketch of the full call follows it):
    struct structure {
        char operation;
        ulong parameter[4];
        char string[16];
    };
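For concreteness, here's a minimal sketch of the client side with both the failing and the succeeding call (error handling trimmed, ulong written out as unsigned long; the device name is the one from my code):

    #include <windows.h>
    #include <stdio.h>

    /* Layout shared with the driver. */
    struct structure {
        char operation;
        unsigned long parameter[4];
        char string[16];
    };

    int main(void)
    {
        /* Open the driver's device by its symbolic link name. */
        HANDLE file = CreateFileA("\\\\.\\driver", GENERIC_WRITE, 0, NULL,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        struct structure s = {0};
        struct structure *pointer = &s;
        DWORD written = 0;

        /* x86 build: sizeof(&pointer) == 4, but the x64 driver checks for
           at least sizeof(structure *) == 8, so this call fails. */
        WriteFile(file, &pointer, sizeof(&pointer), &written, NULL);

        /* Doubling the length satisfies the driver's size check. Note this
           tells WriteFile to read 4 bytes beyond the 4-byte pointer
           variable, which is exactly the puzzling part. */
        WriteFile(file, &pointer, sizeof(&pointer) * 2, &written, NULL);

        printf("string after call: %.16s\n", s.string);
        CloseHandle(file);
        return 0;
    }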
structure::string is what's supposed to be modified by the call. How exactly is this working when the driver and the client use different offsets to access the structure, and why doesn't the driver crash when it reads four bytes past the end of the pointer variable? Does Windows perform some kind of code modification on user-space code?
PS: Don't bother pointing out the problems with reading memory from user pointers. The I/O mode I'm using allows the driver to read user memory.