I got some free time on my hands and decided to have some fun implementing factorial computation. I was quite surprised at how quickly one can compute factorials without any high-powered mathematics - just common sense, and without even including math.h!
cheers!
P.S. Sorry the code is a bit too long and "over-engineered" - it's cause it is composed of ready-written components I have in my main project.
@ the post: http://chaosinmotion.com/blog/?p=622 come on, this is a joke.
I know it doesn't matter for anything, but I'd like to see that guy compute 10000! with that code... [Edit:] Alright, this guy's implementation is actually faster than what I wrote, due to my horrible large-integer class.
(My program does it in 5.6 seconds *together* with printing it out on the screen in decimal, on my 1.4 GHz single-core netbook).
20000! though was a lot slower: almost 24 seconds. I tried Wolfram's servers: Mathematica computes 10000! considerably faster (although I presume it runs on 3+ GHz hardware and is likely parallelized), but gives no precise answer for 20000!
@hamsterman: actually, I do handle negative input: the AssignFactorial function takes unsigned int as input. C++ is a strongly typed language, which means the compiler can warn you (with the right flags) if you attempt to pass a plain int to the function.
Furthermore, as the number of primes smaller than a negative integer is 0 (I assume the definition that primes are always positive), even if you changed AssignFactorial to accept int (instead of unsigned int), you would still get x! = 1 for x < 0.
I know that the latter is incorrect (according to the "morally correct" definition of factorial via the Gamma function), as the Gamma function has poles at the negative integers, and therefore x! = ∞ for negative integer x. However, I did think of the possibility of the user entering negative input.
The code is so long because I did not want to use a ready-made large integer library (that kinda defeats the purpose of the exercise: such a library should have a super-optimized implementation of the factorial function anyways). Fortunately or unfortunately, 90% of the code is implementation of an (unsigned) large-integer class.
By the way, for using gmp: I got advised to do this about 2 years earlier by a friend. I tried to compile gmp, but since it is in c, and I was a total beginner in c++, I simply couldn't get gmp to run.
I probably could have done it, but it felt horrible to lose so much time on something I could implement* myself "in a few hours". It took longer in the end (especially making it run fast enough); however, I did have a buggy version up and running within a few hours, on the same day I failed to install gmp.
*By implementing I mean just the basics: by no means the full power of gmp. For example, these guys use "Karatsuba three-way multiplication", which is probably quite a pain in the butt to implement.
@ the post: http://chaosinmotion.com/blog/?p=622
come on, this is a joke.
I know it doesn't matter for anything, but I'd like to see that guy compute 10000! with that code ...
(My program does it in 5.6 seconds *together* with printing it out on the screen in decimal, on my 1.4 GHz single-core netbook).
I don't know. Yours is pretty slow. If Java's BigInteger uses a proper arbitrary precision arithmetic library, even code with a lot of overhead could leave yours in the dust. The simplest Haskell implementation running on GHCi computes 10000! in under a second. A compiled version computed 10^5! in ~18 seconds. And it's not like Haskell is known for being a speed demon.
EDIT: Then again, I'm not running this on a netbook. But still, how much difference could there be?
def factorial(n: Int, r: BigInt = 1): BigInt =
  if (n <= 1) r else factorial(n - 1, r * n)
This code does 10000! in less than 0.3 s, with printing the result, including the JIT overhead, when run in REPL, on Core i5 2.6 GHz. So, even if we can roughly assume a single core of Core i5 is 3x or maybe 4x faster than a core of a 1.4 GHz netbook, your code, tition, is still almost an order of magnitude slower. Which is funny, because it only proves my point that you are one of those C++ programmers who think they write fast code only because they wrote it in C++.
This is what I call a "hello world" effect - someone writes a hello world program in C/C++ and sees it runs blazingly fast. Then he writes the same hello world program in Java, and concludes "wtf? 1 second to run a simple hello? Java must be sooo slow".
BTW: This Python code:
def factorial(n):
    r = 1
    for i in range(2, n + 1):
        r = r * i
    return r
works for me with similar performance: less than 0.5 s for 10000!.
What a shame, your C++ code is slower even than Python! Python is faster than C++. LOL. :D
Hang it up man, I was just trolli... ehm joking.
There is something funny about it when someone brags about how fast the C++ code they wrote is, and then that code turns out to be slower even than interpreted Python (which has one of the slowest interpreters ever, maybe faster only than Ruby). A few more such posts, and Pythoners will start to widely believe C++ is slow. BTW: C and assembly programmers already believe that. Or at least, if not slow, then terribly bloated.
Leaving jokes aside, there is such a phenomenon: when someone uses a tool, they often identify with the group of people using that tool / doing similar things. For example, many beginners in Scala consider themselves "hackers" because they use a language very often used by real hackers. But using Scala does not automatically make anyone a hacker, just as putting complicated math formulas in your presentation doesn't prove you are smart, or using C or C++ does not automatically make your programs fast.
On the plus side, I am fairly confident in saying that rapidcoder will approve of my language. After all, it seems that he approves of high-level languages (Java, Python, Scala) and goes out of his way to shun the lower-level ones (C, C++, Assembly).
@Albatross,
On glue, you should make a thread about it. It's mean not to give any information about it while occasionally mentioning it.
On rapidcoder, I don't remember him criticizing anything other than C++. And he has a point.
In this particular thread, he does have a point. Yes. A bogosort program in GAS assembly, for example, will be slower than a well-written mergesort in Ruby. Just writing something in a given language will not necessarily make it inherently faster; however, you can't generalize that one language is faster than another from that bit of information either.
And... I would feel a bit awkward making a dedicated thread about another programming language in a C++ forum.
And... I would feel a bit awkward making a dedicated thread about another programming language in a C++ forum.
There have been a bunch of Java threads and some Scala threads here already. Probably something else too. I don't think that's a problem. Also, you're writing the compiler in C++, aren't you?
On the plus side, I am fairly confident in saying that rapidcoder will approve of my language. After all, it seems that he approves of high-level languages (Java, Python, Scala) and goes out of his way to shun the lower-level ones (C, C++, Assembly).
This might be the impression caused by the fact this is a C++ forum. But in reality, I could make quite a long list of things that I don't like about Java (and even about Scala or Python, too). And C and assembly? Almost nothing to complain about. The syntax of C could be better sometimes, but there are no big issues.