Well, which is it? Did you mean "inefficient" as in "hard" or as in "not scalable"? |
"Inefficient" and "scalable" can mean different things in different contexts. Inefficient to code = it takes a lot of developer time to implement. Inefficient to run = it takes a lot of CPU time to accomplish a task.
Scalability in the context of coding = the proportion between the amount of resources (people, time, money) and the size of the programming task (functionality). Scalability in the context of execution = the proportion between resources such as RAM/CPU and the size of the problem being solved.
C++ is efficient only in the context of execution speed / memory (and actually that is not because of C++ itself, but because of programmers - C++ just gives them lots of control, it doesn't magically make programs efficient; C++ compilers are in fact severely restricted in the optimizations they can perform, e.g. by pointer aliasing). It is not efficient to code in, doesn't scale well with the size of the system, and in many cases also doesn't scale well in the sense of multithreading.
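To make the aliasing point concrete, here is a minimal sketch (my own example, nothing from the parent post). Because the two pointers may refer to the same int, the compiler has to reload *b after the first store and cannot fold the two additions into one:

    // The compiler must assume 'a' and 'b' may alias, so it reloads *b
    // after the first store instead of rewriting this as *a += 2 * *b.
    void add_twice(int* a, int* b) {
        *a += *b;
        *a += *b;
    }
    // C99 has 'restrict' to promise no aliasing; standard C++ does not,
    // only vendor extensions such as __restrict.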
The problem is not that the resource is mutable, but that it's both mutable and shared |
Agreed. But if you can't have immutable state in C++ (or at least it is difficult and unnatural), how would your threads communicate? Only a small portion of problems can be parallelized the way ray tracers work, with essentially no communication between tasks. Many algorithms require communication between concurrent entities, and the default answer in C++ is shared mutable state.
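For illustration, this is the kind of thing I mean by "default answer" (a minimal made-up sketch, not code from any real system): threads communicate through a shared structure that all of them mutate under a lock:

    #include <mutex>
    #include <queue>
    #include <thread>

    std::queue<int> work;   // shared and mutable
    std::mutex work_mutex;  // every thread must remember to take this

    void producer() {
        for (int i = 0; i < 10; ++i) {
            std::lock_guard<std::mutex> lock(work_mutex);
            work.push(i);
        }
    }

    int main() {
        std::thread t(producer);
        t.join();
        // a real consumer would also need a condition variable to wait
        // for work - yet more shared mutable state to coordinate
    }

Nothing stops any other function from touching the queue without taking the lock; the discipline lives entirely in the programmer's head, and that is exactly what doesn't scale.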
I really hope you're joking with that figure, or that you forgot two or three orders of magnitude |
I usually program in small steps, at the single function/class level. Most of my functions are no longer than 15 LOC. I write a class and a set of tests. So, no, I never write 2000 LOC and **then** debug it. That is not a good way to code, and there is no need to work that way if you follow the DRY, YAGNI, KISS and BDD rules. Anyway, 100 LOC in an HLL is often equivalent to 1000 LOC of C++ or more, so there is your one order of magnitude. :P
Just as an example, one of the web mining systems I'm currently working on is only about 500 LOC (including comments) and covers:
- retrieving documents from the web by following the links
- cleaning the content of unneeded rubbish
- splitting pages into smaller fragments
- eliminating stop words
- lemmatizing
- calculating tf-idf coefficients (the weighting formula is sketched after this list)
- full text indexing of the documents
- searching
- clustering the search results and extracting common content
All of this using only the standard library plus one external library (Lucene) for indexing.
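Since tf-idf appears in that list: the weighting itself is tiny. A hypothetical sketch with names of my own choosing (in C++ purely because that's the language under discussion - the actual system is written in an HLL):

    #include <cmath>

    // Classic tf-idf: a term's frequency in one document, scaled by how
    // rare the term is across the whole collection.
    double tfidf(double term_count, double doc_length,
                 double num_docs, double docs_with_term) {
        double tf  = term_count / doc_length;              // term frequency
        double idf = std::log(num_docs / docs_with_term);  // inverse document frequency
        return tf * idf;
    }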