"It's for backwards compatibility"

Why does the language have to be backwards compatible?
Using an older version of the compiler for older code seems too obvious to me.

You might be maintaining a legacy codebase to which you want to add new features... but can't you do that by writing the newer code in a different translation unit which is compiled with a newer compiler, while the legacy code is compiled with a legacy compiler?
I suppose an argument could be made that if newer versions of the language are no longer required to be strict supersets of previous versions (or very close), the language would be free to diverge indefinitely far from its roots until it was only C++ in name. It also raises the problem of needing to know a file's version in order to compile it.
What of features which have a better modern equivalent but are not removed for the sake of backwards compatibility?
What if you have some old code but want to change some of it to use the latest language features? The 'older' code still needs to compile with the newest compiler for the latest features to be usable; the old compiler won't compile the newly added features.

Yes, you could specify a different language version for different compilation units. But this assumes that the ABI hasn't changed, so that the linker links the various .obj files correctly.

but can't you do that by writing the newer code in a different translation unit which is compiled with a newer compiler, while the legacy code is compiled with a legacy compiler?
Two compilation units from different compiler versions won't necessarily link.

What of features which have a better modern equivalent but are not removed for the sake of backwards compatibility?
A related problem sometimes happens: the C++ committee wants to introduce a new keyword, but several projects are already using that name as an identifier. For example, "module" as a keyword.

C++17 and C++20 actually did remove several features that had been kept around for a long time. For example,
- auto_ptr was still around in C++11, but was marked as deprecated there and finally removed in C++17 (a quick sketch of its std::unique_ptr replacement follows after this list).
- Trigraphs have been in C++ since forever, but were removed in C++17.
- throw() has been deprecated since I think C++11, but was finally removed in C++20.
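To make the auto_ptr removal concrete, here's a minimal sketch (my own illustration, not from the thread) of the migration to std::unique_ptr, its C++11 replacement:

#include <memory>
#include <utility>

int main()
{
    // Pre-C++17 code (auto_ptr is now removed from the standard library):
    // std::auto_ptr<int> p(new int(42));
    // std::auto_ptr<int> q = p;               // "copy" silently transferred ownership

    // C++11 replacement: ownership transfer must be spelled out with std::move.
    std::unique_ptr<int> p = std::make_unique<int>(42);
    std::unique_ptr<int> q = std::move(p);     // p is now empty, q owns the int
}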

Some other features:
- rand() could be replaced by <random> functionality, but it's ubiquitous and simple enough that it will probably never be removed.
- std::source_location is meant as a replacement to avoid using macros like __LINE__ and __FILE__ (and the __func__ identifier): https://en.cppreference.com/w/cpp/utility/source_location, but I doubt those will be removed any time soon.
- std::bind is not really necessary any more because of lambdas and other helper features introduced in C++17. Tools like clang-tidy can spot this: https://clang.llvm.org/extra/clang-tidy/checks/modernize-avoid-bind.html (a short sketch of the first and last points follows after this list).
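As a rough illustration of those two points (my own sketch, using a hypothetical add() helper), here is rand()-style dice rolling redone with <random>, and a std::bind call next to the lambda that clang-tidy would suggest:

#include <functional>
#include <iostream>
#include <random>

int add(int a, int b) { return a + b; }   // hypothetical helper for the bind example

int main()
{
    // Instead of std::rand() % 6 + 1, use an engine and a distribution:
    std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<int> d6{1, 6};
    std::cout << d6(gen) << '\n';

    // std::bind versus the equivalent lambda:
    auto bound  = std::bind(add, std::placeholders::_1, 10);
    auto lambda = [](int x) { return add(x, 10); };
    std::cout << bound(5) << ' ' << lambda(5) << '\n';   // prints 15 15
}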
Poor std::bind(). The middle child of language features.
Another C++ algorithm that was removed in C++17: std::random_shuffle, superseded by C++11's std::shuffle.

The former generally relies on C's std::rand; the latter uses a random engine from <random>.

https://en.cppreference.com/w/cpp/algorithm/random_shuffle
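A minimal sketch of that migration (my own example, not from the linked page):

#include <algorithm>
#include <random>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3, 4, 5};

    // Removed in C++17; typically shuffled via std::rand under the hood:
    // std::random_shuffle(v.begin(), v.end());

    // The C++11 replacement takes an explicit random engine:
    std::mt19937 gen{std::random_device{}()};
    std::shuffle(v.begin(), v.end(), gen);
}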

If you don't like backward compatibility then you are going to love C++20's comparison "spaceship operator <=>".

https://en.cppreference.com/w/cpp/language/operator_comparison
Also, it's not just the language/libraries that evolve, but also the hardware.

We wouldn't need std::atomic without NUMA, and so on.
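For what it's worth, a minimal sketch (my own, and applicable to any multi-core machine, NUMA or not) of the kind of problem std::atomic exists to solve: two threads incrementing a shared counter.

#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    std::atomic<int> counter{0};

    auto work = [&counter] {
        for (int i = 0; i < 100000; ++i)
            ++counter;                       // atomic increment, no data race
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << counter << '\n';            // reliably 200000; a plain int would race
}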
Lots of reasons.
Say your code is from the mid-90s and uses the union hack. That won't compile any more with strict settings, but the compilers all support it on relaxed ones. This would be hard to find and fix across hundreds of files in a large project, and once it was 'fixed'... you would have the exact same thing you started with, except it now uses a custom class that replaced the union. On paper you now have 'better' code, sure, but it does the same thing, so you just paid some doofus $40 USD or whatever per hour for 3 months to accomplish nothing in terms of real work done. Not every small company can afford to throw $10k here and $10k there just to keep the code up to the latest standard.
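For readers wondering what "the union hack" refers to, here is a hedged sketch of one common form (reading a float's bits through an integer member) next to the well-defined replacement, assuming that's the kind of code meant above:

#include <cstdint>
#include <cstring>
#include <iostream>

// The classic "union hack": read a float's bits through an integer member.
// Standard C++ calls this undefined behaviour (C allows it).
std::uint32_t bits_via_union(float f)
{
    union { float f; std::uint32_t u; } pun;
    pun.f = f;
    return pun.u;
}

// The well-defined replacement: copy the bytes (or std::bit_cast in C++20).
std::uint32_t bits_via_memcpy(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

int main()
{
    std::cout << std::hex << bits_via_union(1.0f) << ' '
              << bits_via_memcpy(1.0f) << '\n';
}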

If they removed everything that was obsolete, all existing code would need a do-over if it used... arrays... raw pointers... C-style casts... C-style strings... rand()... math.h... unions... redundant integer types... and dozens of other things that are slipping my mind now. Even the 'new' stuff has already come and gone in some places; bits of the STL were removed/redone.

Your suggestion creates a headache: using multiple compilers on the same project. Try setting something like that up in Visual Studio... let me see you set up a Visual C++ 6 MFC project with a 2019 managed one and get them to play nice together. It's probably doable, but you will have less hair than an onion when you are done, from pulling it out.

It also prevents adding a new feature using modern style into an older file from the old code base.
thanks
@helios, @seeplus, @Ganado, @Furry Guy, @kbw, @jonnin
for the interesting insights.

What's the point of <=>? I read what it does... but why? Why is it a useful addition?


https://en.cppreference.com/w/cpp/language/default_comparisons

Hi,

One advantage is that the compiler can generate all six two-way comparison operators; see the example in the link and the sketch below. The other things it does are on the same page.
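A minimal sketch (my own, using a made-up Point struct) of what defaulting <=> buys you:

#include <compare>
#include <iostream>

struct Point
{
    int x;
    int y;
    // Defaulting <=> also gives a defaulted == in C++20, so all six
    // comparison operators (==, !=, <, <=, >, >=) work out of the box.
    auto operator<=>(const Point&) const = default;
};

int main()
{
    Point a{1, 2}, b{1, 3};
    std::cout << std::boolalpha
              << (a < b)  << ' '     // true
              << (a == b) << ' '     // false
              << (a >= b) << '\n';   // false
}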
One advantage is that the compiler can generate all six two-way comparison operators
std::move is good, but a lot of what's changed is syntactic sugar to support a different style of programming.

C++ has no absolute definition; yes, there's a standard document, but it doesn't have a BNF, for example. It's "evolution by committee" rather than "evolution by community". So maybe C++ is a bad example for this discussion of backwards compatibility. How about using C as the context language?
compiler.. generate operators..?

C++ is surprisingly vast. They lied to me when they said that C++ is just C with a little bit of extra syntax to support object oriented programming.
Where did you hear that? That hasn't been true since the very, very early days of the language. The "C with classes" days. That's like saying that the Saturn V is essentially a bottle rocket with some minor modifications for larger payload capacity.
chirumer wrote:
compiler.. generate operators..?


https://en.cppreference.com/w/cpp/language/classes

C++ can do all kinds of things behind the scenes. For example, it has ways to explicitly default the special member functions (see the link above) or delete them. For instance, std::unique_ptr has its copy constructor deleted, which makes it un-copyable. And the compiler can implicitly create special member functions if none are supplied.
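A small sketch of those mechanisms (my own illustration, using a made-up Widget struct): explicitly defaulted and deleted special members, plus the unique_ptr behaviour mentioned above.

#include <memory>
#include <utility>

struct Widget
{
    Widget() = default;                         // explicitly defaulted
    Widget(const Widget&) = delete;             // explicitly deleted: no copies allowed
    Widget& operator=(const Widget&) = delete;
};

int main()
{
    Widget a;
    // Widget b = a;                  // error: the copy constructor is deleted

    std::unique_ptr<int> p = std::make_unique<int>(1);
    // std::unique_ptr<int> q = p;    // error: unique_ptr's copy constructor is deleted
    std::unique_ptr<int> q = std::move(p);      // moving ownership is fine
    (void)a; (void)q;
}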

C++ is surprisingly vast. They lied to me when they said that C++ is just C with a little bit of extra syntax to support object oriented programming.


C++ seems to be the most badly taught language: that massive understatement is the first of the lies... I guess it may have been kind of true in 1979, but the introduction of templates, the STL, and the Boost libraries just blows that statement out of the water.

Another thing to compare is the amount of documentation on cppreference for C and C++. C is a small language compared to C++.

https://en.cppreference.com/w/cpp/language/history
So, for the original question, imagine the amount of version bloat on a user's computer. A large program will often rely on 20 or more libraries, and many have code that spans decades. If libraries were not backwards compatible, it would be a logistical nightmare to track all the correct library versions required for the hundred or so programs that you installed, not to mention the thousands of system programs that go into actually running the computer. Each of those libraries probably relies on a half dozen drivers to do its job. The fact that the language is backwards compatible means that these drivers, libraries, and old code don't all have to be recompiled every time your computer gets a system update, or any time you install a new program.


With the current system of backwards compatibility, the user's computer only needs to keep the newest version of the libraries and drivers. With that they can run programs that were compiled 8 years ago alongside a program they install today. All of this concert of compatibility would not be possible if the language itself was not also backwards compatible.
The fact that the language is backwards compatible means that these drivers, libraries, and old code don't all have to be recompiled every time your computer gets a system update, or any time you install a new program.
No, it really does not mean that. It's perfectly possible to link together compiled code written in different languages that are not source-compatible (i.e. that neither compiler can compile code written in the other language). What makes you think it'd be impossible to link together compiled code written in different versions of the same language, if newer versions were not backwards compatible?
As long as there's a common ABI that everyone can agree on, interoperability is not impossible. Nowadays that ABI is almost invariably C's.
The ABI is a function of backwards compatibility. You might even say that it helps to enforce aspects of backwards compatibility and linkability. The reason so many languages followed the C ABI is that it rarely changes. Now, that's not to say that new ABIs haven't been created, but old ones do not drop old functionality lightly (if they did, then again it would break backwards compatibility: old code would require old libraries and new code would need new libraries, which was my point).

But an ABI is more of a compiler/OS-level decision than a language-level one. I'm pretty sure gcc on 64-bit Linux defaults to the System V ABI, you can ask gcc to change the ABI for a function at compile time using function attributes, and gcc uses a different ABI for 32-bit compiles.

As an example: https://sourceforge.net/projects/mingw/files/MinGW/Base/gcc/Version4/gcc-4.7.2-1/

"Binary incompatibility notice!
------------------------------

The C and C++ ABI changed in GCC 4.7.0, which means in general you can't
link together binaries compiled with this version of the compiler and
with versions before GCC 4.7.0."
They say "in general" because you are right: you could do some deep work to hack your way around the changes in linkage, but for the average user on default settings, things just broke between new compiles and old ones.

As far as backwards compatibility in a language:
C++ makes backwards compatibility easier to maintain because it has name mangling; C, without name mangling, can't overload a function at all. If a language didn't care about backwards compatibility, it could remove or change functions whenever it wanted. If a function took two arguments yesterday and one argument today, then yesterday's code is broken. A CD program that you bought last year would not be able to relink against newer libraries without being updated (to be fair, many if not most professional programs ship a copy of every library they need precisely to future-proof their code, rather than depend on system-installed libraries). I was expanding this problem into a theoretical world where no backwards compatibility existed; sorry for the ad absurdum argument in my previous post, but I don't think what I said was wrong.
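To illustrate the name-mangling point, here is a small sketch of my own (print and c_print are made-up functions; the mangled names shown follow the Itanium C++ ABI used by GCC and Clang, other compilers differ):

#include <cstdio>

// C++ overloading works because the compiler encodes parameter types into
// each symbol name (e.g. _Z5printi and _Z5printd under the Itanium ABI),
// so both functions can live in the same object file.
void print(int x)    { std::printf("int: %d\n", x); }
void print(double x) { std::printf("double: %f\n", x); }

// extern "C" turns mangling off: the symbol is just "c_print", which is why
// C (and C-linkage functions) cannot be overloaded.
extern "C" void c_print(int x) { std::printf("C linkage: %d\n", x); }

int main()
{
    print(1);
    print(2.5);
    c_print(3);
}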
I learned it pre-STL, in the early 90s; I think it was my third language. At that point I would say it was still not far off to say C++ was C with some OOP added. C-strings were all we had; it was that or pick your horrid third-party option, and there were at least 10 of those in play. If you wanted a list or dynamic array you used pointers. Yes, you could roll a class that deleted and constructed the pointers for you, but you still had to be careful with it, and it wasn't far off C pointer soup.
The STL kicked off in '98 or so, but it was poorly implemented: an old-style array program could be so much faster than a vector back then, for the double-whammy reason that 1) the implementations were poor and 2) the tricks to use them well were not widely known yet. I did not feel safe using it for several more years. Since Y2K it's grown so far from C it's hardly recognizable.
The ABI is a function of backwards compatibility. You might even say that it helps to enforce backwards compatibility.
If a function took two arguments yesterday, and one argument today, then yesterday's code is broken. A CD program that you bought last year would not be able to relink to newer functions without being updated.
You're treating different things that are related as if they were one and the same. There are multiple ways in which a programming system can be backwards-compatible. It can be backwards-compatible
* at the source level,
* at the executable level,
* and at the library level.
It's true that if you don't touch any of them you'll have full backwards compatibility. What's not true is that breaking backwards compatibility at one level necessitates breaking it at the other levels. Each of these can be changed independently, and which ones are changed determines which programs, at which point in their development cycle, will break. It's perfectly possible to break compatibility at the source level (some existing sources will no longer compile correctly) without breaking it for code that's already compiled.

A simple example: the committee goes crazy and decides that the keyword 'new' must be renamed to 'allocate'. The semantics of 'allocate' are exactly those of 'new', and nothing else about the language changes. This will break a lot of code, right? For starters, any code that uses 'new' and any code that uses 'allocate' as an identifier. But it will certainly not break all code, and it will definitely not have any effect on code that's already compiled into object files, let alone code in an executable.
So, there are in principle ways to break backwards compatibility at some levels and not at others.