Be aware that UTF-8 (often loosely called "Unicode") and wide-char are two different things.
By definition, UTF-8 uses a single byte for every character that matches ASCII (code points 0 - 127). For all characters beyond that, UTF-8 uses two to four bytes.
For example, the string "50 °F" in UTF-8 is internally:
0x35 0x30 0x20 0xC2 0xB0 0x46 0x00
Notice the two-byte sequence (0xC2 0xB0 for '°') in between the single-byte ASCII values.
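You can check those bytes yourself; a quick sketch in Python (note that Python byte strings don't carry the trailing 0x00 a C string would have):

```python
# Encode the example string as UTF-8 and inspect the raw bytes.
s = "50 °F"
data = s.encode("utf-8")
print([hex(b) for b in data])
# → ['0x35', '0x30', '0x20', '0xc2', '0xb0', '0x46']
```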
An old-style wide-char string (without Unicode semantics) uses ASCII plus a code page that defines the characters 128 - 255, depending on your system settings:
0x0035 0x0030 0x0020 0x00B0 0x0046 0x0000
For this particular string you get the same result with Unicode wide-char, because '°' is U+00B0, which coincides with its code-page value.
Microsoft originally called this UCS-2; modern Windows uses UTF-16, which extends UCS-2 with surrogate pairs for code points above 0xFFFF.
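The same string encoded as little-endian UTF-16 (the layout Windows wide-char strings use in memory) can be sketched like this:

```python
import struct

s = "50 °F"
data = s.encode("utf-16-le")  # little-endian, no BOM
# Unpack the bytes into 16-bit code units to see the wide-char values.
units = struct.unpack("<%dH" % (len(data) // 2), data)
print([hex(u) for u in units])
# → ['0x35', '0x30', '0x20', '0xb0', '0x46']
```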
If you take the UTF-8 byte sequence and naively widen each byte into its own wide char (a common bug when the two get mixed up), you get:
0x0035 0x0030 0x0020 0x00C2 0x00B0 0x0046 0x0000
which renders as the mojibake "50 Â°F" instead of "50 °F".
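That mojibake is easy to reproduce; a sketch, assuming each UTF-8 byte is reinterpreted as its own character (widening byte-per-character is equivalent to decoding as Latin-1):

```python
utf8_bytes = "50 °F".encode("utf-8")   # b'50 \xc2\xb0F'
# Decoding UTF-8 bytes as Latin-1 maps each byte straight to a code point,
# which is exactly what naive byte-to-wide-char widening does.
wrong = utf8_bytes.decode("latin-1")
print([hex(ord(c)) for c in wrong])
# → ['0x35', '0x30', '0x20', '0xc2', '0xb0', '0x46']
print(wrong)
# → 50 Â°F
```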
Please correct me if I'm wrong; the Unicode / wide-char terminology is quite messy.