szAppName

Mar 4, 2011 at 9:55pm
Because you need the window class name again when you create the window, and for some other things too (for example, you normally give the default menu the same name as the window class).
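Something like this, roughly (just a sketch to show the idea; "MyWindowClass" and the rest are made-up names, and the message loop is left out):

#include <windows.h>

// Sketch only: the point is that the same class-name string ties
// RegisterClass to CreateWindow (and often to lpszMenuName as well).
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
{
    TCHAR szAppName[] = TEXT("MyWindowClass");

    WNDCLASS wc = { 0 };
    wc.lpfnWndProc   = DefWindowProc;          // default handling, just for the sketch
    wc.hInstance     = hInstance;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = szAppName;              // registered under this name...
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(szAppName,        // ...and created using the same name
                             TEXT("Hello"), WS_OVERLAPPEDWINDOW,
                             CW_USEDEFAULT, CW_USEDEFAULT,
                             CW_USEDEFAULT, CW_USEDEFAULT,
                             NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);
    // (message loop omitted, so the window disappears immediately)
    return 0;
}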
Mar 4, 2011 at 11:51pm
Oh, OK, so they've made it require a variable for convenience reasons :)
Last edited on Mar 5, 2011 at 12:50am
Mar 5, 2011 at 12:51am
Am I correct in saying that? :D
Mar 5, 2011 at 3:31am
Yes I'm pretty sure that's why they did it.
Mar 5, 2011 at 11:19am
Because I have made the variable which stores the class name a TCHAR, which supports Unicode, is it a MUST that I use TCHAR for all my future character variables, or can I use char if I want to? Do you have any tips on when it's best to use TCHAR instead of char?

:D thanks in advance.
Mar 5, 2011 at 12:04pm
...and I know that to use TCHAR I have to #include <tchar.h> for TCHARs to work, so why don't I have to do that for Windows applications? Does windows.h bring it in?
Mar 5, 2011 at 3:11pm
? :D
Mar 5, 2011 at 6:52pm
I'll give you a hint: Go to the default include folder of your compiler, open your windows.h, and look it up!
Mar 6, 2011 at 1:56am
I had a look in windows.h, and I can't see #include <tchar.h>???
Mar 6, 2011 at 2:13am
And none of the other headers it includes contain it either?
Mar 6, 2011 at 2:18am
I guess windows.h includes tchar.h for us then :)

1. Because I have made the variable which stores the class name a TCHAR, which supports Unicode, is it a MUST that I use TCHAR for all my future character variables, or can I use char if I want to? Do you have any tips on when it's best to use TCHAR instead of char?

2. If I want to use Unicode (TCHAR) to support the Unicode encoding scheme, do I also have to define _UNICODE? Because I read that you need to do that, but I haven't and I can still use TCHAR?
Last edited on Mar 6, 2011 at 4:28am
Mar 6, 2011 at 4:54am
3. Is a macro just a replacement of something? Like, for example, if I had:

#define blah int
blah number = 7;

Would that be valid code, with blah being equal to the type int?
Mar 6, 2011 at 5:42am
1. You can use what you like, so long as you take care to pass functions what they want. Many of the Windows API functions have both normal and wide versions that are called based on your definition of UNICODE; using TCHAR just means that you can then define or undefine UNICODE and all your code will still work.
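For example (just a sketch; the strings are made up): MessageBox is really a macro that picks MessageBoxA or MessageBoxW based on UNICODE, and TCHAR/TEXT() switch right along with it, so the same source builds either way:

#include <windows.h>

// Sketch: with UNICODE defined this builds against the wide (W) versions,
// without it the ANSI (A) versions - the source code stays the same.
int main()
{
    TCHAR msg[] = TEXT("Hello");                     // char[] or wchar_t[] depending on UNICODE
    MessageBox(NULL, msg, TEXT("Greeting"), MB_OK);  // expands to MessageBoxA or MessageBoxW
    return 0;
}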

2. Yes, you'll need to define UNICODE, though I believe Visual Studio actually has an option somewhere that does that automatically.

3. Yes, but you should be careful of using macros like that as they replace that set of text anywhere except in string literals. If you just want to alias a type, use a typedef instead.
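A quick made-up example of the difference:

#define PTR_MACRO char*          // plain text replacement
typedef char* ptr_alias;         // an actual type alias

PTR_MACRO a, b;   // expands to "char* a, b;" - so b is only a char!
ptr_alias c, d;   // both c and d really are char*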
Mar 6, 2011 at 12:05pm
My reply to 1:

One other thing: as I use the wide version of char to support Unicode (which is TCHAR), are there other WIDE types, like a wide version of int, or was Unicode only made to support international chars?

My reply to 2:

OK, I guess my Visual Studio did automatically define UNICODE; I'll define it anyway just for good practice.

My reply to 3:

OK, so that's all macros are, they are just replacements. What is their purpose? Is it for convenience reasons?
Mar 6, 2011 at 7:05pm
? :D
Mar 6, 2011 at 7:35pm

One other thing: as I use the wide version of char to support Unicode (which is TCHAR), are there other WIDE types, like a wide version of int, or was Unicode only made to support international chars?

wchar_t is NOT a char type for Unicode characters. It's just a wide character type, and whether "wide" means 8, 16, 32 or some other number of bits depends on your compiler.
ANSI/ISO C leaves the semantics of the wide character set to the specific implementation but requires that the characters from the portable C execution set correspond to their wide character equivalents by zero extension.

C compilers for Windows usually define wchar_t as 16 bits, though, which allows you to use it for UTF-16. PS: You don't even need wchar_t for Unicode. With UTF-8 you can just use normal chars, though then you won't be able to use the default string functions on them anymore.
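To see it on your own machine (quick sketch; the sizes it prints depend on the compiler):

#include <stdio.h>

int main()
{
    // Typically 2 bytes with MSVC/MinGW on Windows, 4 bytes with gcc on Linux.
    printf("sizeof(wchar_t) = %u bytes\n", (unsigned)sizeof(wchar_t));

    // UTF-8 lives happily in plain char storage; one character can take 1-4 bytes,
    // so byte counts and character counts are not the same thing.
    const char utf8[] = "\xC3\xA9";   // 'e' with an acute accent, as two UTF-8 bytes
    printf("bytes in utf8[] (excluding the '\\0'): %u\n", (unsigned)(sizeof(utf8) - 1));
    return 0;
}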

TCHAR is not a char type for Unicode characters either; it's just a typedef that (together with a number of other macros and typedefs) lets you switch your entire code from using char strings to using wchar_t strings just by defining the UNICODE macro.
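Roughly the idea behind it (a simplified sketch; the real definitions live in the Windows headers, so don't actually write these yourself):

// Simplified sketch of what the Windows headers effectively do:
#ifdef UNICODE
    typedef wchar_t TCHAR;
    #define TEXT(s) L##s     // TEXT("hi") becomes L"hi"
#else
    typedef char TCHAR;
    #define TEXT(s) s        // TEXT("hi") stays "hi"
#endif
// The _tcs* string functions (e.g. _tcslen) switch between the char and
// wchar_t versions in the same way, keyed on _UNICODE in tchar.h.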

As to other "wide" types, the built-in integer types already come in several sizes:
char a;          // at least 8 bits - the size of this is one byte in C++, and all other types must consist of bytes of this length
short int b;     // at least as large as a char, not larger than an int
int c;           // at least as large as a short, not larger than a long
long int d;      // at least as large as an int, not larger than a long long
long long int e; // at least as large as a long
Mar 6, 2011 at 7:50pm
Tell me if I'm right. I think I might be thinking of Unicode in the wrong way.

Unicode is just an encoding scheme for all computers to use so that characters do not get mixed up when a program is used on another computer in a different country.

So telling my program to use the Unicode scheme is just a way of telling it how to handle characters, and it doesn't actually have anything to do with data types.

However, I have to use TCHAR because TCHAR can support 16-bit characters whereas char can't.

???
Mar 6, 2011 at 8:11pm
Well, sort of...

Think of Unicode as a giant code table, where each number represents a character. The numbers all fit in 32 bits.

I think you still didn't get what TCHAR is. TCHAR is exactly the same thing as char, except that if you define the UNICODE macro before including the headers it becomes wchar_t instead. wchar_t is a wide character type. It is NOT guaranteed to be 16 bits; it could also be 32 bits (technically, other sizes are possible, but you won't have to worry about those in any environment you will normally encounter, and if you do encounter one of those, by then you should know how to deal with them).
Mar 6, 2011 at 8:56pm
Thanks very much. Can you also tell me if I'm right in saying this:

I'm also a bit confused about what exactly 8-bit/16-bit/32-bit means. Does it mean:

When we refer to 8-bit characters, we are referring to an encoding scheme which has 256 code points for characters?

When we refer to 16-bit characters, we are referring to an encoding scheme which has 65536 code points for characters.

And I'm not sure how many code points 32-bit characters support.

But basically, do we have more "bits" because some characters have code points with larger values, which take up more bytes?
Mar 6, 2011 at 9:15pm
That's basically the meaning of it. The number of bits just refers to how many bits a single character has. UTF-8, though, as has been mentioned, is somewhat special.
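If you want to see the counts for yourself, something like this would show them (just a quick sketch):

#include <stdio.h>

int main()
{
    // n bits can represent 2^n different values:
    printf("8  bits: %llu values\n", 1ULL << 8);    // 256
    printf("16 bits: %llu values\n", 1ULL << 16);   // 65536
    printf("32 bits: %llu values\n", 1ULL << 32);   // 4294967296
    // Unicode itself only defines code points up to 0x10FFFF,
    // so 32 bits is more than enough to hold any single code point.
    printf("Unicode code points: %lu\n", 0x10FFFFUL + 1);   // 1114112
    return 0;
}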