Is C really faster than C++?

Jan 31, 2011 at 10:34am
In my program, most of the time is spent reading from and writing to a file. The program took 18 minutes when I used fstream objects, but only 3 minutes when I switched to FILE*. Can anybody please explain the reason for such a change?
Jan 31, 2011 at 10:38am

Were you using a lot of

stream << blah << endl;

AFAIK endl causes a flush, which can slow things down a bit.
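To make the point concrete, here is a minimal sketch of the flush-free style (write_lines is a made-up name for illustration): std::endl inserts a newline and forces a flush on every call, while '\n' lets the stream's buffer fill up and flush on its own schedule.

```cpp
#include <cassert>
#include <ostream>
#include <sstream>

// Sketch: write many lines without flushing after each one.
// std::endl would insert '\n' AND flush; a bare '\n' does not flush.
void write_lines(std::ostream& out, int n) {
    for (int i = 0; i < n; ++i)
        out << "line " << i << '\n';  // newline only, no per-line flush
    out.flush();                      // one explicit flush at the end
}
```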
Jan 31, 2011 at 5:36pm
Maybe it can slow things down, but how could it produce such an effect that the time becomes 1/6th of what it was?
Jan 31, 2011 at 6:12pm
Post the code.
Jan 31, 2011 at 6:33pm
Which code, the initial or the final?
Jan 31, 2011 at 6:53pm
Post both so we can compare them.
Jan 31, 2011 at 8:23pm
@OP
If your C++ code is not slower than C, then you are not really using C++. C++ is intended to be a higher-level language, and the lunch is (unfortunately) never free.

However, with a proper compiler implementation, the language design permits conservative use of the features. You should not be penalized (in principle) for using C code in a C++ project, and there should be no overhead performance-wise. There will always be overhead memory-wise, though. This is sadly a must.

Look at this site for a comparison of the machine efficiency of different languages: http://shootout.alioth.debian.org/

What is important to note, besides the statistics, is that the relative performance changes with different implementations. In fact, because the site is updated with a new generation of tests every now and then, the ranking also varies. This matters, because it shows that if an implementer uses hacks to improve the performance of the code, he can rank his language higher. But then it is no longer the same language. C++ is intended for systems that need a higher level of abstraction, and there is always a price to pay for that. Try introducing several design patterns in C code and it will also become slower.

Regards
Last edited on Jan 31, 2011 at 8:24pm
Jan 31, 2011 at 9:18pm
If your C++ code is not slower than C then you are not using C++.
I strongly disagree.

There will be always overhead memory-wise.
Again, I strongly disagree.

C++ is intended to be used for systems that need higher level of abstraction and there is always a price to pay for that.
Virtual functions imply a cost, but implementing a similar mechanism using a switch is more costly. You'd have to resort to pointers to functions to do better.

C++ was designed to be as fast as C. The STL is as fast as hand-rolled containers/algorithms. Object-oriented programming implies a runtime cost, but in C++ it's very cheap and type-safe, at the expense of flexibility compared to Smalltalk-based languages. C doesn't offer language support for OO.
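The point about virtual functions versus hand-rolled dispatch can be sketched like this (Shape, Triangle, and triangle_sides are made-up illustration names, not anything from the thread). Both paths end up as one indirect call; the C++ version just generates the table for you:

```cpp
// C++ dispatch: one indirect call through the compiler-generated vtable.
struct Shape {
    virtual ~Shape() = default;
    virtual int sides() const = 0;
};
struct Triangle : Shape {
    int sides() const override { return 3; }
};

// The C-style equivalent: a struct carrying a pointer to a function,
// which is what you'd hand-roll to beat a big switch statement.
struct CShape {
    int (*sides)(const CShape* self);
};
inline int triangle_sides(const CShape*) { return 3; }
```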
Jan 31, 2011 at 10:09pm
+2 kbw

Both C and C++ are compiled to machine code, so neither is inherently "faster" than the other. Having said that, C++, due to its higher-level abstractions, gives the programmer far more opportunity to write inefficient code than C does.

Jan 31, 2011 at 11:18pm
Let me clarify my own comments.

simeonz wrote:
If your C++ code is not slower than C then you are not using C++...C++ is intended to be used for systems that need higher level of abstraction and there is always a price to pay for that.

If somebody says that inheritance, encapsulation and polymorphism are cheap, then I completely agree. If they say that having a decoupled, cohesive, modular, reusable architecture is resource-efficient, then generally I don't agree. I am absolutely not referring to run-time binding or some technicality like that, but to the limitations imposed by the design principles.

Is it cheap, performance-wise, to isolate functionality? No. Coupling can improve performance, because it means direct access to information. C++ offers encapsulation and stimulates decoupling. I say stimulates, not enables. C can be used with implementation hiding too, but the mindset of a C programmer, given the language's features, is a bit different.

Is it cheap to design your functions with one specific responsibility, instead of making them multi-functional? No, because many algorithms essentially have more than one output. But C++ stimulates the use of many atomic operations packaged behind an interface instead of big multi-purpose functions. Again, only stimulates, not enables.

Is it cheap to partition the system into units that are independent from each other? No, because of the additional layers of indirection that facilitate communication between the sub-systems, and the more broadly tailored cooperation.

Is it cheap to specify a set of operations using a predefined contract? No, because this idealizes the responsibility they will have. You could instead specialize them to the problem the system actually solves.

C++ motivates modularity and reusability. For me, the language is the foundation of your programming style, not just programming infrastructure. You don't program in C++ unless you take the above development principles seriously. C is very versatile for small systems, but I don't think it should be used for big modular architectures; when it is, I would suspect a legacy situation.

simeonz wrote:
There will be always overhead memory-wise.

The run-time support will in general increase the size of the executable and therefore the memory consumption. Some compilers allow you to disable parts of it, true, and this overhead is a constant amount, but it and the language complexity remain the top reasons why C++ continues to be rejected for some embedded products. Then again, the same products also reject the use of libraries, floating-point arithmetic, dynamic memory management, and recursion, so this is not saying much. I confess that, without clarification, the statement could be interpreted as a memory overhead for each structure, which is not the case.

I want to comment a bit on your response.

kbw wrote:
Virtual functions imply a cost, but implementing a similar mechanism using a switch is more costly. You'd have to resort to pointers to functions to do better.

There is very little point in discussing whether and how a C++ feature can be implemented/emulated in C. The languages are approximately interchangeable functionally. It is the programming style that differs. Also, the technical differences, if any, are not that significant.

kbw wrote:
C++ was designed to be as fast as C.
The language impacts for me are, from most to least important:

A. algorithms and technology (asymptotic complexity, hardware features, platform choice)
B. style (decoupling, cohesion, modularity, etc. - design patterns, best practices and stuff)
C. compiler technology (effectiveness of the translation to machine operations)

For (B), I think that the coding style that lends itself naturally to the facilities offered by C++ results in slower programs and faster development.
EDIT: Or slower development; it depends, since C++ has many caveats. Or faster C++ programs, when the algorithms are more carefully designed because the other aspects are already taken care of. The indirect consequences are too many to list. I anticipate a point of harsh dispute.

What if you don't employ such a style? Then why employ this language? For having classes, templates, and virtual functions instead of structures, preprocessor tricks, and function pointers? That is not such a big motivation for me. It is not something to ignore completely, but it would not be my top priority.

kbw wrote:
STL is as fast as hand rolled containers/algorithms.
Of course, in a sense it was hand-rolled too. Can it be adapted to every situation without performance losses? Ultimately, no.

It is an excellent library, though. In fact, I think it is renowned as such. C++ is a great language for library developers, more so than for application programmers. If you are referring to the performance cost of, say, a function call that can be inlined, that was not my point at all. On that basis of comparison, C++ would come out better.

kbw wrote:
C doesn't offer language support for OO.
I know. Then again, C programmers will say the OO model of C++ is restrictive and that they can do better with object-based programming. Not that I have learned what "true OO" means to this day.

Regards
Last edited on Jan 31, 2011 at 11:27pm
Feb 1, 2011 at 12:01am
Anyway, back to my earlier point.


May be it can slow things but how it could produce such an effect that the time becomes 1/6th of the earlier time??


Quite a bit of difference: a shuffle of War and Peace, one version using

cout << blah << endl;

the other:

cout << blah << "\n";

As you can see, an appreciable difference.

$ time ./fisheryates < war_and_peace.txt > 1
0.25s real 0.13s user 0.11s system
$ gmake
g++ -Wall -O3 -Wall -I/home/billy/include -I/usr/local/include -I/home/billy/include -I/usr/local/include -c -o fisheryates.o fisheryates.cpp
g++ -L/home/billy/lib fisheryates.o -lbill -o fisheryates
$ time ./fisheryates < war_and_peace.txt > 1
0.15s real 0.13s user 0.00s system
Feb 1, 2011 at 8:43am
simeonz, I don't think you're comparing like with like. I think you're mixing paradigms rather than languages. It's the procedural model that doesn't scale to the apps we commonly develop now. If you build larger apps you need more robust infrastructure, which itself has a cost, but that doesn't mean C++ is intrinsically slower.

+1 bigearsbilly
Once you establish that the app is I/O bound, you can increase the buffer sizes, use streams that aren't tied, use binary file mode, and if necessary use OS specific features to improve performance.
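A minimal sketch of the first two of those suggestions (tune_iostreams is a made-up name; whether a larger buffer via pubsetbuf also helps is implementation-dependent, so it is omitted here):

```cpp
#include <iostream>

// Sketch: typical iostream speed-ups for an I/O-bound program.
// Must be called before any input/output on the affected streams.
void tune_iostreams() {
    std::ios_base::sync_with_stdio(false); // stop syncing with C stdio
    std::cin.tie(nullptr);                 // don't flush cout before each read
}
```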
Last edited on Feb 1, 2011 at 8:45am
Feb 1, 2011 at 3:03pm
Maybe the book "More Effective C++" (Item 23) can tell you why, and what you should do.
Feb 1, 2011 at 3:41pm
Maybe you could quote it and enrich the thread.
Feb 1, 2011 at 3:59pm
From Item 23 of "More Effective C++".
I hope this does not violate any laws...

Library design is an exercise in compromise. The ideal library is small, fast, powerful, flexible, extensible,
intuitive, universally available, well supported, free of use restrictions, and bug-free. It is also nonexistent.
Libraries optimized for size and speed are typically not portable. Libraries with rich functionality are rarely
intuitive. Bug-free libraries are limited in scope. In the real world, you can't have everything; something always
has to give.

Different designers assign different priorities to these criteria. They thus sacrifice different things in their
designs. As a result, it is not uncommon for two libraries offering similar functionality to have quite different
performance profiles.

As an example, consider the iostream and stdio libraries, both of which should be available to every C++
programmer. The iostream library has several advantages over its C counterpart (see Item E2). It's type-safe, for
example, and it's extensible. In terms of efficiency, however, the iostream library generally suffers in
comparison with stdio, because stdio usually results in executables that are both smaller and faster than those
arising from iostreams.

Consider first the speed issue. One way to get a feel for the difference in performance between iostreams and
stdio is to run benchmark applications using both libraries. Now, it's important to bear in mind that benchmarks
lie. Not only is it difficult to come up with a set of inputs that correspond to "typical" usage of a program or
library, it's also useless unless you have a reliable way of determining how "typical" you or your clients are.
Nevertheless, benchmarks can provide some insight into the comparative performance of different approaches to
a problem, so though it would be foolish to rely on them completely, it would also be foolish to ignore them.

Let's examine a simple-minded benchmark program that exercises only the most rudimentary I/O functionality.
This program reads 30,000 floating point numbers from standard input and writes them to standard output in a
fixed format. The choice between the iostream and stdio libraries is made during compilation and is determined
by the preprocessor symbol STDIO. If this symbol is defined, the stdio library is used, otherwise the iostream
library is employed.


#ifdef STDIO
#include <stdio.h>
#else
#include <iostream>
#include <iomanip>
using namespace std;
#endif
const int VALUES = 30000;                 // # of values to read/write
int main()
{
  double d;
  for (int n = 1; n <= VALUES; ++n) {
#ifdef STDIO
    scanf("%lf", &d);
    printf("%10.5f", d);
#else
    cin >> d;
    cout  << setw(10)                     // set field width
          << setprecision(5)              // set decimal places
          << setiosflags(ios::showpoint)  // keep trailing 0s
          << setiosflags(ios::fixed)      // use these settings
          << d;
#endif
    if (n % 5 == 0) {
#ifdef STDIO
      printf("\n");
#else
      cout << '\n';
#endif
    }
  }
  return 0;
}



I have run this program on several combinations of machines, operating systems, and compilers, and in every
case the stdio version has been faster. Sometimes it's been only a little faster (about 20%), sometimes it's been
substantially faster (nearly 200%), but I've never come across an iostream implementation that was as fast as the
corresponding stdio implementation. In addition, the size of this trivial program's executable using stdio tends to
be smaller (sometimes much smaller) than the corresponding program using iostreams. (For programs of a
realistic size, this difference is rarely significant.)


only part of it
Feb 1, 2011 at 5:13pm
Thanks a lot. It's not illegal to quote a book if you give proper credit.
Feb 1, 2011 at 6:59pm
OK, I understand that there may be some speed differences between C and C++, but can they really make such a large difference? My code takes just 3 minutes to execute when I change every ifstream and ofstream to FILE*, where earlier it was taking 18 minutes. That's the real question. Is there someone who can explain this to me?
Feb 1, 2011 at 7:07pm
It is impossible to say without seeing both versions of the code. Anything else is idle speculation.
Feb 1, 2011 at 11:46pm
Mostly, I feel the difference comes down to the difference between:

fprintf(out, "%#08x %+.2ld", idnum, late_book_fee);

and

out << showbase << setfill('0') << setw(8) << hex
	<< idnum << setw(0) << setprecision(2) << showpos << late_book_fee;

You see, the first is one call, and the second is 13 (count 'em) calls, 9 of which are overloaded operators, and I didn't even leave the stream back in a sane state. This is automatically slower. On top of that, it also compiles more slowly. *stream sucks in many other ways too, like thread safety, storing formats in files, and the fact that after you hex a stream you can't really go back unless you stored the flags. In my opinion it's the worst feature of the STL (the best ones being string, vector, and map).
EDIT: And who the hell decided to overload "bit shift left" to mean "print to file"? That's like overloading "exclusive or" to mean "exponent", or "greater than" to mean "put in file", like in the shell.
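For what it's worth, the "can't go back unless you stored the flags" part looks like this in practice; a sketch (print_hex is a made-up name), saving and restoring the formatting state around the hex output:

```cpp
#include <iomanip>
#include <ostream>

// Sketch: save the format flags and fill character, restore them after
// the hex output, so the stream is left back in a sane (decimal) state.
void print_hex(std::ostream& out, unsigned value) {
    std::ios_base::fmtflags saved_flags = out.flags();
    char saved_fill = out.fill();
    out << std::showbase << std::setfill('0') << std::setw(8)
        << std::hex << value;
    out.flags(saved_flags);  // back to decimal, no showbase
    out.fill(saved_fill);
}
```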
Last edited on Feb 2, 2011 at 12:06am
Feb 2, 2011 at 12:19am
EDIT: And who the hell decided to overload "bit shift left" to mean "print to file"? That's like overloading "exclusive or" to mean "exponent", or "greater than" to mean "put in file", like in the shell.
Nice ideas right there, lol. In fact, using such overloads has actually been considered in forums around the web. The exponent one is particularly tempting.