How to convert unicode string to char?

Can we have the context? In other words what function are you trying to pass the unicode string to? I only ask because there may be a better way to do this seeing as most Win32 functions do not use straight chars.
There is the WideCharToMultiByte() API function, exported from kernel32.dll.
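Roughly like this (untested sketch; toNarrow is just a name I made up, and I am assuming UTF-8 as the target code page -- swap in CP_ACP if you really want the system ANSI code page):

	#include <windows.h>
	#include <string>

	// Sketch: first call sizes the buffer, second call does the conversion.
	std::string toNarrow(const std::wstring& wide)
	{
	    if (wide.empty()) return std::string();
	    int needed = WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
	                                     nullptr, 0, nullptr, nullptr);
	    std::string narrow(needed, '\0');
	    WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
	                        &narrow[0], needed, nullptr, nullptr);
	    return narrow;
	}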
Plus you should be using Unicode for everything these days. Why the need for an ANSI string?
webJose:
Plus you should be using Unicode for everything these days. Why the need for an ANSI string?

I don't dispute the statement, but my main project does a lot of file i/o, much of it txt files, and I have never been able to write a Unicode text file that isn't either read-only when accessed by other editors, or is screwed up in some other way, especially formatting.

Accordingly, I have my entire project set to Multi-Byte.

I'm told that VS takes a Multi-Byte project and converts it to Unicode and then back to Multi-Byte, thus degrading efficiency, but until they can get the Unicode and text file thing worked out, or until I can figure out how to work it properly, it's a whole lot easier to do file i/o with Multi-Byte than Unicode.
Most text editors look for the BOM. Most likely you are not writing one yourself.

BOM: Byte Order Mark.

In .Net it is rather easy to append the BOM: All Encoding classes provide a method to get it, so it is a matter of just making sure the BOM is written to the file at the very beginning.
Any idea on how I would append the BOM in a VS C++ file write?
VS has non-standard functions to do it for you, but you don't really need them if you understand what the BOM is.
Well, obviously I don't understand what the BOM is. I guess I'll investigate all of that when it's time for me to write my next file i/o project in Unicode.

By the way, has anyone else had problems with this board not posting messages, i.e., it says it's waiting for a connection and your message never gets posted unless you do it over again?

This has been going on for a couple of months with me.
Well, I have never had the need to write text files from C++, so I don't have tested code for this.

I will venture to say that you can use:

	#include <fstream>
	#include <string>

	// Little-endian UTF-16 BOM: 0xFF 0xFE as the first two bytes of the file.
	char leBOM[] = { '\xFF', '\xFE' };
	std::ofstream file("MyFile.txt", std::ios::binary); // binary so '\n' is not translated
	file.write(leBOM, sizeof(leBOM));
	std::wstring data = L"This is Unicode.";
	file.write(reinterpret_cast<const char*>(data.c_str()), data.size() * sizeof(wchar_t));


Note that I use the ANSI version of the file stream object instead of the Unicode one. It gives me full control of the BOM, but it also exposes me to a problem: what if the architecture the code is running on is big-endian? Then the BOM that should be written is the beBOM, which is the same two bytes as the le one, but in reverse order.

Maybe I am being too cautious and maybe wofstream will automatically swap my bytes depending on the architecture. Like I said, I have never done this in C++ before.
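If I had to handle that myself, a rough sketch of a run-time check (again untested; isLittleEndian is just a helper name I invented) would be:

	#include <cstdint>
	#include <cstring>

	// Returns true when the low-order byte is stored first (little-endian).
	bool isLittleEndian()
	{
	    const std::uint16_t probe = 0x00FF;
	    unsigned char firstByte;
	    std::memcpy(&firstByte, &probe, 1);
	    return firstByte == 0xFF;
	}

	// Then pick the matching BOM before writing the raw wchar_t bytes:
	// const char* bom = isLittleEndian() ? "\xFF\xFE" : "\xFE\xFF";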
Thanks for the example. I've explored BOM a little since you alerted me to it. The upshot is that I won't be dealing with it anytime soon. -:)

Not until I really have need for it. For now, Multi-byte file i/o is working perfectly (and easily), so I'm going with the old maxim -- if it ain't broke, don't fix it.

I suspect there will come a day when I'll need to adhere to Unicode all the way, but until then, I ain't gonna worry about it.
Thanks, I will try it later!