Although a factor of two makes no difference in the big-O complexity, I still prefer an algorithm that is twice as fast.
I doubt anyone would argue against that. Here is what could hypothetically happen, though. First, the vendors could publish detailed statistics. Second, the community or some organization could publish detailed statistics. Third, the programmers could perform detailed tests when they need them.
The first option is up to the vendors themselves. Describing a non-binding characteristic in the specification goes a bit against contemporary methodology, because it makes your implementation less opaque. There is a chance that this will backfire, because software would be designed with very specific performance expectations in mind. If tomorrow the vendor decides to change the nuts and bolts, that would be felt much more acutely by its clients.
The community option is just difficult to maintain. I mean, someone would have to do the work for all the vendors. If you include embedded computing, then you have a humongous range of possibilities. Who would just donate the resources for the cause? It is possible, but I doubt it.
The third option is the most feasible. Knowing how projects are financed today, however, I would expect detailed testing to be a last resort, tried only after everything else fails.
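For what it is worth, the kind of ad-hoc test I have in mind is nothing more than timing the candidates on representative data. A minimal sketch (the containers, the sizes and the C++11 <chrono> clock are just placeholders for whatever is actually under comparison):

    #include <chrono>
    #include <cstdio>
    #include <list>
    #include <vector>

    // Time how long it takes to traverse and sum a container.
    // The same harness can be pointed at any two candidate structures.
    template <typename Container>
    double traversal_seconds(const Container& c)
    {
        const auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (int x : c)
            sum += x;
        const auto stop = std::chrono::steady_clock::now();
        // Print the sum so the compiler cannot discard the loop entirely.
        std::printf("checksum: %lld\n", sum);
        return std::chrono::duration<double>(stop - start).count();
    }

    int main()
    {
        std::vector<int> v(1000000, 1);
        std::list<int>   l(1000000, 1);
        std::printf("vector: %f s\n", traversal_seconds(v));
        std::printf("list:   %f s\n", traversal_seconds(l));
    }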
---
The compiler optimization strategy is difficult to convey. First, there is little about those things that the "usual user" can comprehend, me included. Second, even if I had the appropriate background, I still wouldn't read a 1000-page manual describing how a particular compiler operates. I just know some things that I consider sufficiently common and important, and that gives me an advantage in rare cases. But unless something applies rather universally, would you really want to go through hundreds of pages of compiler analysis techniques only to reach knowledge that may or may not be useful?
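To give one example of the kind of "common and important" knowledge I mean (a sketch, not tied to any particular compiler): an optimizer generally will not hoist a load out of a loop when it cannot prove that two pointers do not alias, so caching the value in a local can matter.

    #include <cstddef>

    // With data[i] *= *factor; inside the loop, the compiler usually has
    // to assume that writing data[i] might change *factor (the pointers
    // could alias), so it would reload *factor on every iteration.
    void scale(int* data, std::size_t n, const int* factor)
    {
        const int f = *factor;        // hoisted by hand into a local
        for (std::size_t i = 0; i < n; ++i)
            data[i] *= f;
    }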
I mean, the whole idea of programming in C++, and not C, and not assembly, is to abstract above those things. If you wanted to know exactly what contributes to your execution time, you should have used a macro assembler instead. The code may not be as portable, but the entire purpose is to get some edge from micro-optimizations, right? If you want complete control over the performance of some routine, you should provide your own implementation. Do you really want me to implement something for you, only so that you can spend all the time in the world on gray-box analysis of my implementation choices? That is not interface-based programming, to say the least.
For me, the whole point of not doing something yourself is not having to check how it is done. A relative of mine once said: "If I wanted to explain to them exactly how to do it, I would have done it myself." The entire product is the information. If I acquire all the information, then what do I need the product for? This is also a major problem with some software enterprises. By reuse they mean taking an undocumented piece of code that someone in the company wrote long ago and reverse engineering the heck out of it to figure out how to use it. All you need are answers to the strategic questions. Once you get into the details, you are not reusing anything anymore. For example, I don't ask whether my STL uses a skip list or an RB-tree. If I really prefer one over the other, I should implement it myself.
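To stay with that example, here is a trivial sketch of what I mean by asking only the strategic questions: the code relies on the documented contract of std::map (sorted iteration, logarithmic insert and find) and nothing else. Which balanced tree, if any, sits underneath is none of my business.

    #include <cstdio>
    #include <map>
    #include <string>

    int main()
    {
        // The contract relied upon: keys come back in sorted order and
        // insert/find are O(log n). Nothing here depends on the tree
        // (or skip list) the implementation happens to use underneath.
        std::map<std::string, int> counts;
        ++counts["apple"];
        ++counts["banana"];
        ++counts["apple"];

        for (std::map<std::string, int>::const_iterator it = counts.begin();
             it != counts.end(); ++it)
            std::printf("%s: %d\n", it->first.c_str(), it->second);
    }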
---
You want more intelligent IDEs. But let me start with something general and then I'll get to the warnings problem.
The C++ toolchain is, I think, flawed and retrograde by today's standards. Every intelligent IDE contains a C++ parser, a static analysis tool must contain a parser too, introspection utilities have semi-intelligent parsers, the compiler has a parser, and the documentation extraction tool (if you use one) contains yet another parser. Besides the obvious duplication of effort, there are other problems with this, like consistency. If one of these tools evolves independently (to accommodate the new standard, to implement an optional feature, etc.), there is no guarantee that the tools up the chain will remain compatible, not to mention interoperable. This explains why people still use configurable text editors for development. It is just less hassle.
Another problem is that, since the format is plain text, you need unnecessarily complex syntax analysis to extract additional semantics from it. I could argue that plain text is not suitable for the storage of user documents either. All the scope resolution and collision-avoidance rules in the language exist because the document is virtually unstructured: there is no metadata, and the meaning depends on the context. IDEs map each point of reference to its definition/declaration for browsing, refactoring, etc. This information is not inherently supported in the source file, and it is either re-acquired every time or saved in auxiliary databases (which leads to potential versioning problems). The compiler then performs the same duty all over again. The IDE cannot load and host the compiler's parser in its own process (and perform "spell checks" with it), and the parsing done by the IDE cannot be reused by the compiler. Also, if I want some special pre-processing of the language, the IDE will not recognize the new syntactic constructs.
My point is that instead of having reusable modules loaded into the tools (IDE, compiler, meta-compiler, etc.), plug-ins that extend those modules with custom syntax, and a structured format that supports unambiguous queries, we pipe the tools together with text as the communication and storage medium. There are some projects that try to fix this, like LLVM, but I think they plan to work within the confines of the compilation model, which (if true) is limiting. Also, some commercial solutions have tried to use databases as the permanent storage medium for source files, but there is no promise of financial return for the investors, and all of this passes more or less silently under the radar.
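To illustrate what I mean by a reusable parsing module, here is a sketch that assumes the libclang C interface exposed by the LLVM/Clang project (which may or may not fit the compilation model they have in mind). Any tool, whether an IDE, a documentation extractor or a refactoring aid, could load the same library and walk the same syntax tree instead of re-parsing the text on its own:

    #include <clang-c/Index.h>
    #include <cstdio>

    // Print every declaration found in a translation unit. An IDE, a
    // documentation tool and a refactoring tool could all share this
    // parser instead of each re-implementing C++ parsing from text.
    static CXChildVisitResult print_decl(CXCursor cursor, CXCursor /*parent*/,
                                         CXClientData /*data*/)
    {
        if (clang_isDeclaration(clang_getCursorKind(cursor))) {
            CXString name = clang_getCursorSpelling(cursor);
            std::printf("declaration: %s\n", clang_getCString(name));
            clang_disposeString(name);
        }
        return CXChildVisit_Recurse;
    }

    int main(int argc, const char* argv[])
    {
        if (argc < 2)
            return 1;
        CXIndex index = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            index, argv[1], argv + 2, argc - 2, nullptr, 0,
            CXTranslationUnit_None);
        if (tu) {
            clang_visitChildren(clang_getTranslationUnitCursor(tu),
                                print_decl, nullptr);
            clang_disposeTranslationUnit(tu);
        }
        clang_disposeIndex(index);
        return 0;
    }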
Regarding the warnings: indeed, it appears that disabling individual warnings is not supported in GCC at the moment. I have used a static analysis tool that employs your strategy, but I cannot tell you what MS uses. The argument of the GCC team is that warnings should be fixed, because that forces the programmer into a more responsible attitude. Of course, that assumes the warnings can be fixed, which is not always true. There are a few warnings that are simply attention grabbers, with no workaround. You can see the relevant discussion here:
http://gcc.gnu.org/ml/gcc/2000-06/msg00638.html
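To make the "warnings should be fixed" approach concrete (a toy example of mine, not taken from that thread): some warnings do have a clean fix, for instance -Wunused-parameter disappears if you simply omit the name of a parameter the interface forces on you; others offer no such escape hatch and only grab attention.

    // Compiled with -Wall -Wextra, this would warn about the unused
    // parameter 'context':
    //   int handler(int value, void* context) { return value * 2; }
    //
    // The "fix the warning" approach: drop the name of the parameter
    // that the interface forces you to accept but you do not use.
    int handler(int value, void* /*context*/)
    {
        return value * 2;
    }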
You can use comments for saving information. Doxygen uses them to store documentation, version control systems use them, and some IDEs (like Emacs) use them to store configuration options. But first, if you decide to migrate from one tool to the next, these special comments become ordinary text. Second, folding works only if the IDE knows what to fold. Interleaving different varieties of information in a single source file is, IMO, messy and highly non-interoperable (a small illustration follows below). Compare this to using a standardized, extensible format that allows annotations: for example, a database(-like) format (even an XML database), so that the relations between the objects in the code can be captured.
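A contrived snippet of what that interleaving looks like in practice: a Doxygen block and an Emacs file-variable line sit next to the code they describe, each meaningful to exactly one tool and ordinary text to every other (the checksum function itself is just filler):

    // -*- mode: c++; indent-tabs-mode: nil -*-    (configuration for Emacs)

    /** \brief Computes the 8-bit checksum of a buffer.      (Doxygen)
     *  \param data Pointer to the first byte.
     *  \param size Number of bytes to read.
     */
    unsigned char checksum(const unsigned char* data, unsigned long size)
    {
        unsigned char sum = 0;
        for (unsigned long i = 0; i < size; ++i)
            sum = static_cast<unsigned char>(sum + data[i]);
        return sum;
    }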
---
I am sorry I rambled so much. I for one understand that there are many ideas out there, but only a few of them will ever see the light of day. (I can hardly do anything before I read some more and acquire solid skills.)
Regards