(Project Euler) Number of nCr greater than one million

I'm trying to solve Problem 53 in Project Euler.
Here's the problem:
http://projecteuler.net/problem=53

#include <iostream>
using namespace std;

int main()
{
	const int nLimit = 101;
	const int rLimit = nLimit;
	long C[nLimit][rLimit] = {};
	int count = 0;
	
	for (int n = 0; n < nLimit; ++n)
		for (int r = 0; r <= n; ++r)
		{
			if ((r == 0) || (r == n))
				C[n][r] = 1;
			else
				C[n][r] = C[n-1][r-1] + C[n-1][r];

			if (C[n][r] > 1000000)
				++count;
		}
		
	cout << "Number of nCr greater than 1000000 is "
		<< count << endl;
}

I wrote the program and found that my answer is 2378.
However, the website said my answer is incorrect.

Is there anything wrong with my program? Please help.
Here's a site listing the solutions of almost all Project Euler problems.
http://code.google.com/p/projecteuler-solutions/wiki/ProjectEulerSolutions

Of course, the site above isn't meant for copy-pasting solutions; it's meant to help you figure out what you may be doing wrong.

I just now solved this problem in the D language, the dumb straightforward way, using std.bigint, and indeed your answer is incorrect.

In C++, you may want to use an arbitrary precision arithmetic library such as GNU GMP.
https://gmplib.org/
Using unsigned long or unsigned long long gives 4075, so there must be wrapping into negative numbers.

Indeed, the first negative value appears at:
n=34 r=16 C[n][r]=-2091005866

That is with a 32-bit long; with a 64-bit long the table still overflows, just later on. So you will have to either use something larger than 64 bits or detect the wrapping by sticking with signed numbers.

Add the test
if (C[n][r] < 0)
in the loop to detect it.
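
For illustration, here is a minimal sketch of that check folded into the original loop; the array type is switched to long long here as an assumption, and note that signed overflow is technically undefined behaviour in C++, even though on typical two's-complement hardware it shows up as a wrap to a negative value:

#include <iostream>
using namespace std;

int main()
{
    const int nLimit = 101;
    long long C[nLimit][nLimit] = {};   // 64-bit signed in this sketch

    for (int n = 0; n < nLimit; ++n)
        for (int r = 0; r <= n; ++r)
        {
            if ((r == 0) || (r == n))
                C[n][r] = 1;
            else
                C[n][r] = C[n-1][r-1] + C[n-1][r];

            if (C[n][r] < 0)   // wrapped past the maximum -- this entry is unusable
                cout << "overflow at n=" << n << " r=" << r << '\n';
        }
}

Once one entry has wrapped, every later entry built from it is unreliable too (and a wrapped value is not guaranteed to come out negative), so this check only tells you where things start to go wrong; it doesn't rescue the count.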

As you only need precision close to 1,000,000, switching to a floating-point type is the answer; double does it. With floating-point numbers, the thing to watch for is loss of precision rather than overflow, e.g. an addition like 999,999 + 1 quietly producing 999,999 if the type cannot represent the result exactly (double has more than enough precision in this range).
Why does using just long make some elements negative?
Why does using just long make some elements negative?

Because at some point the number you want to store is too big to fit inside long int (this is an overflow).

I'll try to explain by using char instead of long.

char is mostly used to encode characters; however, it is actually just a very small integer (on most computers it is an octet, i.e. 8 bits long, whereas long has 32 or 64 bits).

In memory, a char looks like:
bbbbbbbb (8 bits, each is either 1 or 0)
10011101 (for example)


If the char is signed, the first bit is the sign.
If the char is unsigned, the first bit is part of the value.

What this means is that a signed char can have values from -128 to 127, while an unsigned char can have values from 0 to 255.

When you try to exceed the maximum value, an overflow occurs and the result wraps around. (For signed types this is technically undefined behaviour in C++, but in practice it usually wraps.)
This same thing happens for long, but it has a much greater range.
http://en.wikipedia.org/wiki/Integer_%28computer_science%29#C_and_C.2B.2B

#include <iostream>

int main()
{
    signed char sc = 127;    // largest value a signed char can hold
    unsigned char uc = 255;  // largest value an unsigned char can hold

    sc += 10;                // exceeds the maximum: wraps around to a negative value
    uc += 10;                // exceeds the maximum: wraps around past zero
    std::cout << "sc: " << static_cast<int>(sc) << '\n';
    std::cout << "uc: " << static_cast<int>(uc) << std::endl;
}
sc: -119
uc: 9


The casts are needed to prevent the stream from printing the chars as characters rather than as numbers.
Thank you for your detailed explanation.

How can I know when overflow occurs?
Another way to check if nCr > 1000000 is:
nCr = n! / [ (n-r)! * r! ]
nCr = [n*(n-1)*(n-2)*...*(n-r+1)] / [1*2*3*...*r]
It can be calculated iteratively as:
k1 = n / 1
k2 = k1 * (n-1) / 2
k3 = k2 * (n-2) / 3
k4 = k3 * (n-3) / 4
...
kr = k(r-1) * (n-r+1) / r

with k(i) >= k(i-1), since each k(i) is just nCi. So you only have to carry the multiplication up to the first k(i) that exceeds one million, not all the way to kr = nCr, which is what causes the overflow. The non-decreasing property holds when r <= n/2, and since nCr = nC(n-r) you can always arrange for that. A sketch of this approach is shown below.

The intermediate integer divisions are exact: n*(n-1) is divisible by 2, because among 2 consecutive natural numbers one is divisible by 2; n*(n-1)*(n-2) is divisible by 3 as well, and so on. So don't worry about integer division in k1, k2, k3, k4, ..., kr.
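
Here is a minimal sketch of this idea (my own illustration, assuming r is first reduced to min(r, n-r) so the k(i) sequence is non-decreasing); it should agree with the 4075 reported above:

#include <algorithm>
#include <iostream>

int main()
{
    const long long limit = 1000000;
    int count = 0;

    for (int n = 1; n <= 100; ++n)
        for (int r = 1; r < n; ++r)          // nC0 and nCn are 1, never above the limit
        {
            int s = std::min(r, n - r);      // nCr == nC(n-r), so keep the k(i) non-decreasing
            long long k = 1;
            for (int i = 1; i <= s; ++i)
            {
                k = k * (n - i + 1) / i;     // exact: k*(n-i+1) is always divisible by i
                if (k > limit)               // the k(i) only grow from here, so stop early
                {
                    ++count;
                    break;
                }
            }
        }

    std::cout << "Number of nCr greater than 1000000 is " << count << '\n';
}

Because the loop stops as soon as k exceeds one million, k never grows past about 10^8, so a plain long long is plenty and no big-number library is needed.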
How can I know when overflow occurs?

You can use the <limits> header of C++ (or <climits>, inherited from C) to see what the limits are for each type.

http://www.cplusplus.com/reference/limits/numeric_limits/
http://www.cplusplus.com/reference/climits/
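For example, a quick way to print the relevant limits (the exact values depend on your platform):

#include <iostream>
#include <limits>

int main()
{
    std::cout << "long      max: " << std::numeric_limits<long>::max() << '\n';
    std::cout << "long long max: " << std::numeric_limits<long long>::max() << '\n';
    std::cout << "double    max: " << std::numeric_limits<double>::max() << '\n';
}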
There is no need to use integers, as you only need to determine when an entry is greater than 1,000,000. Using 64-bit doubles (binary64) works: even 100! (9.332621544×10¹⁵⁷) is less than numeric_limits<double>::max() (1.79769e+308), so there is no overflow. The critical issue is precision around the 1,000,000 threshold: a double represents integers exactly up to 2^53 (about 9×10^15), so none of the additions that matter for the comparison lose anything, and entries that do eventually lose precision are already far larger than 1,000,000. 64-bit doubles have approximately 15-16 significant decimal digits.
Ref:http://en.wikipedia.org/wiki/IEEE_floating_point
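
For example, a minimal sketch of the original program with the table switched to double (the structure is otherwise unchanged):

#include <iostream>
using namespace std;

int main()
{
    const int nLimit = 101;
    double C[nLimit][nLimit] = {};   // a double represents integers exactly up to 2^53
    int count = 0;

    for (int n = 0; n < nLimit; ++n)
        for (int r = 0; r <= n; ++r)
        {
            if ((r == 0) || (r == n))
                C[n][r] = 1;
            else
                C[n][r] = C[n-1][r-1] + C[n-1][r];

            if (C[n][r] > 1000000)
                ++count;
        }

    cout << "Number of nCr greater than 1000000 is " << count << endl;
}

Entries above 2^53 are no longer exact, but they are far above 1,000,000 anyway, so the comparison and therefore the count stay correct.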