I want to get into the habit of using a popular naming convention. I already write macros in uppercase and everything else in a mixture of lowercase and capitalized words. This is what I mean:
Class member objects: myNameStuff
Struct/Union member objects: nameStuff
Global scope objects: theNameStuff
Function objects (scope/parameters): nameStuff
struct point
{
    double x, y;

    point( double x, double y ): x( x ), y( y ) { }

    void increment( double x = 1, double y = 1 )
    {
        this->x += x;
        this->y += y;
    }
};
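And to show the prefixes from the list above in action, here is a hypothetical sketch (all the names are made up):

double theTimeScale = 1.0;               // global scope: "the" prefix

class Particle
{
    double myMass;                       // class member: "my" prefix
public:
    explicit Particle( double mass ): myMass( mass ) { }

    void scale( double factorStuff )     // parameter: plain lowerCamelCase
    {
        myMass *= factorStuff * theTimeScale;
    }
};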
I am accustomed to CamelCase, which works OK for me, but I prefer_to_use_underscores.
I can't stand the abuse that people have put to so-called "Hungarian Notation". (The Wikipedia article is a good read about the differences between the original, useful "Apps" and the widely-used nonsense called "Systems".)
See Joel for how to do it right: http://www.joelonsoftware.com/articles/Wrong.html
I read that article and have to say... Apps sounds way better. If you use *that*, then I wouldn't have any problems (as long as you document what the prefixes mean).
Don't try too hard to fit any popular convention; just keep true to two things:
1) Keep your style consistent.
2) When working in collaboration with others, use whatever you agreed on with your partner(s) (this is basically just point 1 applied at team scale).
What is the difference between the type of a string that is safe and one that is not safe? Should I create two distinct string types to store them?
The proper (original) purpose of Hungarian Notation is to name your variables so that you know something about the meaning of the content, not its data type.
Hence, continuing Joel's example, if you have a string named "usName" you know better than to simply print it out to your web page, since the data in the string is unsafe.
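For instance, a minimal sketch of the idea (writeToPage and both string values are made up purely for illustration):

#include <iostream>
#include <string>

// Hypothetical page-writing helper, for illustration only.
void writeToPage( const std::string& s ) { std::cout << s << '\n'; }

int main()
{
    std::string usName = "<b>Bobby</b>";             // "us": unsafe, straight from the user
    std::string sName = "&lt;b&gt;Bobby&lt;/b&gt;";  // "s": safe, already HTML-encoded

    writeToPage( sName );   // reads as "write safe" -- looks correct
    writeToPage( usName );  // reads as "write unsafe" -- looks wrong at a glance
}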
Sure, it is possible to write classes and all kinds of data-typing systems to do this for you, but the simple human-readable prefix suffices, without time and money spent on bloat that can be blithely circumvented anyway.
Joel's premise is that by using Hungarian Notation properly, you can write code that is obviously correct or obviously incorrect just by looking at it. The compiler can't help you there.
The difference between them is the content, not the data type: the safe version holds encoded output while the unsafe one holds raw input, so they have different usages.
Despite the naming convention, it is still possible for me to miss an incorrect usage. His examples are very simple; consider the same examples in a much larger application that contains many such conventions. Perhaps you could still argue that it is easy for a programmer to spot, but it is easier still -- and guaranteed -- for the compiler to spot it.
There isn't any code bloat whatsoever associated with creating strong types if inline functions are used. Yes, the source code is longer, and yes, it takes time for the developer to write it. But you end up with code that, if it compiles, is essentially guaranteed to be correct, rather than code that depends on diligent programmers to spot mistakes.
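A minimal sketch of what I mean, with assumed names (SafeString, UnsafeString, and the helpers are all hypothetical, not a full string class):

#include <iostream>
#include <string>

struct UnsafeString { std::string value; };   // raw user input
struct SafeString   { std::string value; };   // HTML-encoded text

// The only way to obtain a SafeString is to encode an UnsafeString.
inline SafeString encode( const UnsafeString& us )
{
    std::string out;
    for ( char c : us.value )                 // toy encoder: escape the usual suspects
        if      ( c == '<' ) out += "&lt;";
        else if ( c == '>' ) out += "&gt;";
        else if ( c == '&' ) out += "&amp;";
        else                 out += c;
    return SafeString{ out };
}

// Only a safe string can be written to the page.
inline void writeToPage( const SafeString& s ) { std::cout << s.value; }

int main()
{
    UnsafeString usName{ "<b>Bobby</b>" };
    writeToPage( encode( usName ) );   // fine
    // writeToPage( usName );          // won't compile: the compiler catches the bug
}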
You are arguing code complexity over code review. It is a waste of money to spend time writing classes that try to guarantee a given value is correct when you could simply fire whoever does the wrong thing. The problem with your argument is that both approaches require someone not to make mistakes, but adding code complexity increases the probability of errors. Also, code bloat isn't limited to compiled output; it also applies to source code.
I am not arguing my point across subsystem boundaries.
Well "simply firing someone" is a lot easier said than done, and I'd hate to work for a company that fired someone at the slightest mistake.
My approach does require someone to not make a mistake when writing the class. But that's a one-time thing, whereas what you're saying requires programmers to continually get it right each time they use such a variable.
To say that the introduction of strong types adds to code complexity requires first a precise definition of code complexity. The user of such a variable would see virtually no difference in interface as a result of making it a strong type.