Code::Blocks

I just started using Code::Blocks. It's great! Does anyone else use it?
Hi, I use Code::Blocks too, but I also have Visual Studio 2008.

I actually have a question if someone can answer it. I feel it may be relevant because it might reveal a flaw in either VS or CodeBlocks. It pertains to this piece of code:

    hSerial = CreateFile("COM1",
                         GENERIC_READ | GENERIC_WRITE,
                         0,
                         NULL,
                         OPEN_EXISTING,
                         0,
                         NULL);


This compiles perfectly in Code::Blocks, but it doesn't compile in Visual Studio. In order to make it work in VS, I have to do this:

    hSerial = CreateFile(L"COM1",  // Added L in front of the string literal
                         GENERIC_READ | GENERIC_WRITE,
                         0,
                         NULL,
                         OPEN_EXISTING,
                         0,
                         NULL);


Does anyone know why?
By default, Code::Blocks builds with Unicode disabled, which is wrong in my opinion, as nobody wants to program for Windows 95 these days.


Of course it could be enabled, but beginners usually don't know how to do it.
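For anyone who wants to flip that switch: in recent Code::Blocks versions (an assumption on my part; menu names may differ between versions) it lives under Project → Build options → Compiler settings → #defines, where you add both macros:

```
UNICODE
_UNICODE
```

UNICODE switches the Windows headers to the W entry points; _UNICODE does the same for the <tchar.h> generic-text mappings.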
Hm, I read through the article and I thought I understood it. So I did this in my MSVS IDE:

	const TCHAR* port = T("COM1");

    hSerial = CreateFile(port,
                         GENERIC_READ | GENERIC_WRITE,
                         0,
                         NULL,
                         OPEN_EXISTING,
                         0,
                         NULL);


Output: Fails to compile
error C3861: 'T': identifier not found

Haha, I'm sorry if I'm derailing this topic. I'll throw in one more comment to keep this on track:
IMO, C::B is better (once I found out how to include all these macros), because MSVS doesn't highlight the words or bring up the parameters required when writing out a function, or automatically put in braces and semicolons. Code::Blocks is just 10x easier to read and understand.
The correct name of the 'T' macro is _T() or TEXT(). You need to #include <tchar.h>.


anonymousxyz wrote:
Because MSVS doesn't highlight the words or bring up the parameters required when writing out a function, or automatically put in braces and semicolons. Code::Blocks is just 10x easier to read and understand.


Default IntelliSense is crap, agreed. But for MSVS there is the Visual Assist X plugin, which does all these things and much more. Unfortunately, it is not free.
@ anonymousxyz:

Don't bother with TCHARs. They're pointless and make your code overly complicated.

Decide whether or not you want Unicode strings, and use the A or W version of these functions accordingly.

// don't care about Unicode strings
hSerial = CreateFileA( "COM1" ,  ... );

// Want support for Unicode strings
hSerial = CreateFileW( L"COM1", ... );


Note that if you're just using the fixed value of "COM1"... you don't really need Unicode for this function call... as the non-Unicode version will have the exact same effect.
@Disch,
That is a peculiar stance. I was told (or rather I read from tons of tutorials and other forum board topics) to always use TCHARS because that allows one to compile in multiple different IDEs and compilers.

Although if I had to pick one, I suppose it'd always be to support Unicode. I'm not sure how relevant ASCII is anymore but if Microsoft decided to enable UNICODE as a default, it must be the way of the future.

I don't want to have to go into settings on every single microsoft PC I try to work on to disable UNICODE.

So yeah, is ASCII relevant? The reason I hesitate to follow your advice is that I'm 90% sure ASCII is used for intercommunication between devices because it allows for faster processing. So might as well have the best of both worlds.
I was told (or rather I read from tons of tutorials and other forum board topics) to always use TCHARS because that allows one to compile in multiple different IDEs and compilers.


As @Disch posted, his example will compile with different Windows compilers without any problem, regardless of compiler settings.

Although if I had to pick one, I suppose it'd always be to support Unicode. I'm not sure how relevant ASCII is anymore but if Microsoft decided to enable UNICODE as a default, it must be the way of the future.

I don't want to have to go into settings on every single microsoft PC I try to work on to disable UNICODE.


Windows has used Unicode internally since Windows 2000, so using Unicode in your code makes your application a little faster. In an ANSI build, Windows converts your strings to Unicode internally behind the scenes.


So yeah, is ASCII relevant? The reason I hesitate to follow your advice is that I'm 90% sure ASCII is used for intercommunication between devices because it allows for faster processing. So might as well have the best of both worlds.


For sending/receiving data to various devices there is no difference: you communicate in bytes of data, and it does not matter what they represent. Windows has ReadFile() and WriteFile(), which work with arrays of bytes and neither know about nor interfere in any way with the Unicode settings.
I was told (or rather I read from tons of tutorials and other forum board topics) to always use TCHARS because that allows one to compile in multiple different IDEs and compilers.


The compiler/IDE has absolutely nothing to do with it. It's a question of the WinAPI, and WinAPI string functions come in 3 forms (TCHAR, char, wchar_t). Always. You can look at any MSDN page for an applicable function and it will confirm:

http://msdn.microsoft.com/en-us/library/windows/desktop/ms645505%28v=vs.85%29.aspx

that msdn page wrote:

Unicode and ANSI names
MessageBoxW (Unicode) and MessageBoxA (ANSI)



TCHARs are dumb. They're basically a macro that becomes either a normal char or a wchar_t depending on whether or not the UNICODE macro is defined.

Really... all Windows.h does is [something like] this:

#ifdef UNICODE

#define TCHAR wchar_t
#define MessageBox MessageBoxW

#else

#define TCHAR char
#define MessageBox MessageBoxA

#endif 


There's no real magic. 'MessageBox', or 'CreateFile' are not actually functions... they just get #define'd to either the W or A version depending on whether or not TCHARs are wide.

All you're doing by not using TCHARs is picking which one you want and using it directly, rather than going through an obfuscated macro layer.


The thing that makes TCHARs particularly stupid is that they're variable size. Trying to use TCHARs with data that is a fixed size will result in you having to write 2 different blocks of code: one for use when TCHAR is char and one for use when TCHAR is wchar_t.

The only advantage TCHARs give you is they allow you to flip a switch to enable/disable Unicode support in your program (provided you write your code to be properly TCHAR aware -- most people don't). But that's dumb because if you're going through all the trouble to make your program Unicode friendly there's no real reason to ever disable it.

Although if I had to pick one, I suppose it'd always be to support Unicode. I'm not sure how relevant ASCII is anymore but if Microsoft decided to enable UNICODE as a default, it must be the way of the future.


That's a good stance to take.

Windows has operated with Unicode (UTF-16) "under the hood" since at least Win2k. So not only is it the future, but it's pretty much the standard in the present... and has been for the past 13+ years as well.

There are some reasons why you might not want to do it sometimes.... but if you want to do it, that's great.

So yeah, is ASCII relevant?


If you have a narrow ASCII string (like read from a file or something) that you want to pass to WinAPI... you don't have to be Unicode friendly for that specific function call.

What I mean is... there's no point in manually widening an input string just to call the W version of a WinAPI function. It's easier to just call the A version with the ASCII string and let Windows do the widening for you.

The reason I hesitate to follow your advice is that I'm 90% sure ASCII is used for intercommunication between devices because it allows for faster processing.


Anything you pass to WinAPI gets widened if it isn't widened already. In fact it's probably slower to use the A functions over the W functions because Windows has to look up the user's locale settings in order to widen the string. (this is speculation as I haven't tested, but I would be very surprised if I was wrong).

Windows uses UTF-16 for everything. Even in programs that don't. So anything you pass to WinAPI is going to get widened.

That said... if you're doing a lot of text processing in your program without passing to/from WinAPI... then yeah wide strings might be slower. But you would have to be doing a whole lot of text processing for it to make any significant speed difference.


EDIT: ninja'd by modoran!
All this has completely strayed away from my topic!
^
lol :))

i use C:B :)

btw nice discussion
Topic archived. No new replies allowed.