Macro in a DLL

This might be a little too much to ask, but I'm going to go through with it anyway. I have a .dll containing a class that can work with both SBCS and Unicode. The problem is that to enable Unicode in an application you would normally #define UNICODE. But how can I make the class inside the .dll aware that I defined UNICODE inside my main application?
There are the #ifdef and #ifndef directives; you could use them in your DLL header file.
Say dll.h looks like this:

#ifndef _DLL_H_
#define _DLL_H_

#ifdef BUILDING_DLL
#define DLLIMPORT __declspec(dllexport)
#else
#define DLLIMPORT __declspec(dllimport)
#endif

class DLLIMPORT myclass {
//class member declarations
};

#endif 


and dll.cpp looks like this:

#include <windows.h>
#include "dll.h"

// member definitions that use TCHAR, _T and _t*** macros for string handling

//DllMain(); 
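
For example, one of those member definitions might look roughly like this (only a sketch; settext and m_text are made-up names, not from the original post; note that tchar.h keys off _UNICODE while windows.h keys off UNICODE):

#include <tchar.h>
#include "dll.h"

// Hypothetical member: copies the given string into the object.
// TCHAR and _tcscpy expand to char/strcpy or wchar_t/wcscpy depending on
// whether _UNICODE was defined when this translation unit was compiled.
void myclass::settext(const TCHAR* text)
{
    _tcscpy(m_text, text);   // no bounds checking -- sketch only
}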


Now if this were a normal header I would just define UNICODE in the main application source (let's say main.cpp) and then include dll.h, which includes windows.h, which detects the UNICODE macro, and everything works smoothly. The problem is that it's a .dll. So where do I put the #ifdefs or #ifndefs?
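
To spell out the ordinary-header case, the intent is roughly this (just a sketch):

// main.cpp
#define UNICODE      // windows.h keys off this
#define _UNICODE     // tchar.h keys off this
#include "dll.h"     // which in turn includes windows.h

int main()
{
    myclass obj;
    // ... use obj with wide strings ...
    return 0;
}
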
Never mind. I just realised that I can't do that, because the .dll compiles with the SBCS functions since UNICODE is not defined in its files. So I need two different versions.
Microsoft's original approach to programmatic support for Unicode was to provide a macro (UNICODE) that would redefine string functions to use the Unicode versions, forcing you to use char/MBCS or Unicode but not both. This was a mistake. With the OS/2 / Windows NT code split, WIN16 compatibility, and the panic to get 32-bit Windows out the door, there simply wasn't the time to do it properly back in 1994.

The fact is that char strings, MBCS strings (such as UTF-8) and Unicode strings are all different types and can coexist happily in the same program. In the STL, char strings are implemented as std::string and UTF-16 strings as std::wstring (on Windows, where wchar_t is 16 bits). The STL doesn't provide a multi-byte character string type.
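
For instance, the two types sit side by side with no trouble (trivial sketch):

#include <string>

int main()
{
    std::string  narrow = "hello";    // 8-bit char string
    std::wstring wide   = L"hello";   // wide string (UTF-16 on Windows)
    // Different types, so any conversion between them is an explicit,
    // deliberate step rather than something the compiler does for you.
    return 0;
}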

There are two ways to go. You can switch off Unicode support (that is, don't define UNICODE). This will give you 8-bit char strings in the Windows API, and you can read and write char strings with ease. You'll need to do this if you interact with a network stream, a file, a database, ... If you don't, you'll have to convert to and from Unicode yourself. It's horrible.
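
The conversion itself goes through MultiByteToWideChar / WideCharToMultiByte; even the easy direction is this verbose (a sketch assuming UTF-8 input, with minimal error handling):

#include <windows.h>
#include <string>

// Convert a UTF-8 string to a UTF-16 std::wstring -- sketch only.
std::wstring utf8_to_wide(const std::string& in)
{
    if (in.empty()) return std::wstring();

    // First call: ask how many wide characters the result needs.
    int len = MultiByteToWideChar(CP_UTF8, 0, in.c_str(), (int)in.size(), NULL, 0);
    if (len <= 0) return std::wstring();

    std::wstring out(len, L'\0');
    // Second call: do the actual conversion into the buffer.
    MultiByteToWideChar(CP_UTF8, 0, in.c_str(), (int)in.size(), &out[0], len);
    return out;
}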

On the other hand, Windows uses Unicode strings in its native interface, but provides char versions where the WIN32 API does the conversion to and from Unicode for you (in user32.dll). An example is SetWindowText(). This is mapped to SetWindowTextW or SetWindowTextA; SetWindowTextW is the native implementation, while SetWindowTextA performs a conversion to Unicode and calls SetWindowTextW. The A and W suffixes stand for ANSI and Wide. If you're concerned about raw speed or are doing Unicode I/O, you may want to use the Unicode WIN32 functions, in which case you'd define UNICODE.
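
One way to apply the same A/W idea to the DLL in question is to export both variants and let the header pick one, so a single binary can serve both kinds of client. A rough sketch (the SetLabel names are made up, not the original poster's code):

// in the DLL's public header
#ifdef BUILDING_DLL
#define DLLIMPORT __declspec(dllexport)
#else
#define DLLIMPORT __declspec(dllimport)
#endif

DLLIMPORT void SetLabelA(const char* text);      // narrow version
DLLIMPORT void SetLabelW(const wchar_t* text);   // wide (native) version

#ifdef UNICODE
#define SetLabel SetLabelW   // client built with UNICODE gets the wide export
#else
#define SetLabel SetLabelA   // otherwise the narrow export
#endif

Inside the DLL, SetLabelA would just convert its argument and call SetLabelW, mirroring what user32.dll does for SetWindowTextA.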