Do you know the difference between the ANSI char set and the UNICODE char set?
ANSI chars are a single byte.
UNICODE chars (well, UTF-16 UNICODE wchar_ts) are 2 bytes.
For the normal ASCII letters, the UNICODE char is the same as the ANSI char plus an extra 0 byte.
For example, the UNICODE string for "Hello" is
{'H', '\0', 'e', '\0', 'l', '\0', 'l', '\0', 'o', '\0', '\0', '\0'}
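If you want to see that for yourself, here is a minimal sketch (assuming a Windows compiler, where wchar_t is 2 bytes; on other platforms it may be 4) that just dumps the raw bytes of L"Hello":

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    const wchar_t *wide = L"Hello";
    const unsigned char *bytes = (const unsigned char *)wide;
    size_t nBytes = (wcslen(wide) + 1) * sizeof(wchar_t);
    size_t i;

    /* On Windows (2-byte wchar_t) this prints:
       48 00 65 00 6C 00 6C 00 6F 00 00 00 */
    for (i = 0; i < nBytes; i++)
        printf("%02X ", bytes[i]);
    printf("\n");
    return 0;
}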
So if you cast, or (as you have done) copy, this string into a char buffer, it looks like a series of short null-terminated ANSI strings:
1: {'H', '\0'}
2: {'e', '\0'}
3: {'l', '\0'}
4: {'l', '\0'}
5: {'o', '\0'}
6: {'\0','\0'}
(A UNICODE string that uses characters outside the ASCII range has non-zero values in both bytes, so in a char buffer it would more likely look like a single string of random chars, including control chars.)
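This little sketch reproduces the symptom you are seeing; it is just an illustration, not your actual code:

#include <stdio.h>
#include <string.h>
#include <wchar.h>

int main(void)
{
    const wchar_t *wide = L"Hello";
    char buf[32];

    /* Wrong: raw copy of the UTF-16 bytes into a char buffer. */
    memcpy(buf, wide, (wcslen(wide) + 1) * sizeof(wchar_t));

    /* printf treats the '\0' right after the 'H' as the end of the
       string, so this prints just "H". */
    printf("%s\n", buf);
    return 0;
}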
You cannot just copy UNICODE to ANSI or vice versa. You must convert using either wcstombs or WideCharToMultiByte. As you are working with WIN32 calls already, I would use the latter here.
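Something along these lines should do it (CP_ACP is the current ANSI code page; the buffer size here is just for the example, size it to suit your data):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    const wchar_t *wide = L"Hello";
    char ansi[64];

    /* Convert from UTF-16 to the current ANSI code page.
       Passing -1 for the source length converts the terminating '\0' too,
       so the result is already null-terminated. */
    int len = WideCharToMultiByte(CP_ACP, 0, wide, -1,
                                  ansi, sizeof(ansi), NULL, NULL);
    if (len == 0)
    {
        printf("conversion failed, error %lu\n", GetLastError());
        return 1;
    }

    printf("%s\n", ansi);   /* prints "Hello" */
    return 0;
}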
Andy