I've voiced my dismay over this issue many times before, but here it is again! ^^
What 'char' and 'wchar_t' represent is completely ambiguous. You might think they represent a "character", but depending on the encoding, that might not be true. A single Unicode code point might take 4 'char's in UTF-8. And one "character" as the user sees it might be multiple code points. And what in the world is a "wide character" supposed to be, anyway? Why is it even called "wide" when there's no guarantee that it actually is wider than a 'char'? Completely pointless.
</unhelpful rant>
Basically -- just use whatever the lib you're using uses. The STL and other standard libraries typically prefer chars, so use chars with them. Other libs prefer wchar_t, so if you're using such a lib, use wchar_t. Some libs give you a choice, and when they do, they usually document the pros and cons of each (and if they don't, it likely doesn't matter which you choose).
If you really want to concern yourself with internationalization (which is never a bad idea), read up on Unicode and seek out libs that support it. Unfortunately, finding libs that are Unicode friendly isn't always as easy as it should be. std::string is a complete joke if you're looking for Unicode support, for example -- it's just a sequence of bytes with no notion of encoding. I ended up having to write my own string class.
The "char = a single character" connection can only really be made if using ASCII text, which limits you pretty much to the English alphabet and a few common symbols: only 128 characters total (values 0-127), about 33 of which are reserved as control characters for special formatting purposes like \n, \r, \t, etc. The other 128 values of an 8-bit char are [usually] subject to the end-user's installed locale setting, so what they represent can be completely different on two different machines (again, more ambiguity).