Simply making it a wide char won't necessarily help.
It comes down to two things:
1) How is the IDE saving the file?
2) How is the compiler interpreting the file?
Ideally, in this case, the answer to both questions would be "UTF-8" but that's not really an assumption you can make.
It's likely that both the IDE's save encoding and the compiler's source encoding are configurable, but I'm too lazy to check how to do that right now.
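(For what it's worth, I believe GCC takes -finput-charset=UTF-8 and -fexec-charset=UTF-8, and newer versions of MSVC have a /utf-8 switch that sets both the source and execution character sets, but double-check your compiler's docs. IDEs usually have a file-encoding setting buried somewhere in the editor or project options.)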
The easiest way to test it is to do something like this:
const char* c = "£"; // £ sign is U+A3, which in UTF-8 is stored as 0xC2 0xA3
// so an easy way to check to see if it's really UTF-8:
if( c[0] == 0xC2 && c[1] == 0xA3 )
{
// yes, it's UTF-8
}
else
{
// no, it's some other encoding
}
This would probably be better done with an assert or something.
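For instance, something like this (just a rough sketch; the main() wrapper and <cassert> are my own choice, the check itself is the same byte comparison as above):

#include <cassert>

int main()
{
    const char* c = "£"; // should be the two bytes 0xC2 0xA3 if everything is UTF-8

    // trips in debug builds if the IDE/compiler encoding isn't what we expect
    assert( (unsigned char)c[0] == 0xC2 && (unsigned char)c[1] == 0xA3 );
}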
If you want to ensure that it's UTF-8 all around, the safe bet is this:
const char* c = "\xC2\xA3";
but that's hardly intuitive...
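If your compiler supports C++11, a slightly more readable alternative (my suggestion, not part of the code above) is a u8 literal with a universal character name, which the standard requires to come out as UTF-8 no matter how the source file itself is encoded:

const char* c = u8"\u00A3"; // guaranteed to be the two bytes 0xC2 0xA3
                            // (note: in C++20 u8 literals become char8_t, so this exact line needs C++11/14/17)

At least "\u00A3" names the code point instead of raw bytes, though you still have to look it up.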