What if, in some language very similar to C++, the only variable types were int and float? And what if, for int, you had to specify signed or unsigned? (Maybe you would not have to write 'int' then.)
signed int x;
unsigned int y;
int z; // syntax error
And what if the size of int was the platform's native size, e.g. 32 bits on a 32-bit platform?
Well, let's say you could specify the size to the compiler, and there were a list of sizes guaranteed to be supported by all compliant compilers:
unsigned int v;      // platform native size
unsigned int@8 w;    // 8 bit int
unsigned int@16 x;   // 16 bit int
unsigned int@32 y;   // 32 bit int
unsigned int@64 z;   // 64 bit int
unsigned int@65 zz;  // error: unsupported size
unsigned int@24 xy;  // 24 bit int for 24 bit platforms, or if the compiler supported it
What if the same applied to floats?
float a;      // platform native size
float@32 b;   // 32 bit float
float@64 c;   // 64 bit float aka 'double'
If there were such a language as this, would you like or dislike these features? I'd like to know why.
Personally I think the @size feature would be cool, but I have not considered all the downsides. As for the need to specify sign, I think it may be a hassle, but it would prevent the needless limitations that come from accidentally using signed rather than unsigned.
So on a 24-bit platform (hey, I've programmed for those), a program that uses @32 would not compile? It's a serious detriment if the language is not portable between platforms.
I would rather see just one integer type, one real type, and one complex type, with automatic arbitrary precision when not fitting in machine words (there are languages like that). Make the numbers match the math they come from.
PS: fun fact: the standard allows a C++ implementation where char, short, int, long, and long long are all the same size, 64 bits.
I personally don't like the @size argument thing. It gives the false impression one can write anything.
For example: what if I wrote int@15 i;?
Would I get an error, or would it compile by converting to 16 bits and with a warning?
I'd be fine with a size parameter so long as it was instead written in terms of the number of bytes of the integer, like so:
unsigned int v;     // platform native size, which would cause loads of portability issues
unsigned int@1 w;   // 8 bit int
unsigned int@2 x;   // 16 bit int
unsigned int@4 y;   // 32 bit int
unsigned int@8 z;   // 64 bit int
unsigned int@3 xy;  // 24 bit int
EDIT: I would NOT be okay with a two-type language, though.
I was talking about the types from which other types could be derived. I guess I just think with different words and in different ways than how it is technically...
> ... If there were such a language as this, would you like or dislike these features?
Would depend quite a lot on the precise syntax, I suppose.
I found COBOL to be too verbose for my taste:
05 BALANCE-DUE PICTURE IS S999999V99 USAGE IS COMPUTATIONAL.
* or (I thought that this was even worse)
* 05 BALANCE-DUE PIC S9(6).99 USAGE COMP.
* ...
88 RECORD-COUNT PICTURE IS 9999 USAGE IS DISPLAY.
I was quite ok with FORTRAN and PL/I:
INTEGER*2 s = 78
INTEGER*4 i = 1234
REAL*4 f = 67.8
IMPLICIT REAL*8 Q
And comfortable with <cstdint> (it is only for integer types, though):
uint v;       // platform native size
uint8_t w;    // 8 bit int
uint16_t x;   // 16 bit int
uint32_t y;   // 32 bit int
uint64_t z;   // 64 bit int
uint65_t zz;  // error: unsupported size
uint24_t xy;  // 24 bit int for 24 bit platforms, or if the compiler supported it
A platform whose C++ compiler doesn't support the "guaranteed" types probably wouldn't have a compiler for that hypothetical language either, as it would be impractical to emulate all the standard types in software.
Hahaha, too bad those are typedefs and not actual built-in types.
And why would that matter?
What I'm saying is that this makes no difference for practical purposes, so you're describing something that already exists in C++. You'd just have to allow @ in type names and you're all set.
Speaking of which, int8/int32/float64 etc. would be easier on the eyes.