General computational question:

Hi, just a general question, if anybody has any further knowledge on this:

I am looking at doing some fairly large matrix operations, and in the process I also need to generate COMBINATIONS, i.e. NCn. Generating combinations is a LENGTHY process, as there are a ****load (pardon my language) of them once N and n become sufficiently large. For example, based on the time it takes to generate the 20C12 combinations on my machine, and some ROUGH estimates of current computational resources AROUND THE WORLD, I calculated that generating the 2079C12 combinations would take, with the algorithm I have, about 2.5E+09 years, using all ca. 1.1 billion personal computers in the world full time.

Obviously, this means that on my single laptop I can only generate about the 71C12 combinations in a YEAR, or, to be more realistic, the 45C12 combinations in a day.

SO, as a result of this, I have been looking at the giga- and teraFLOPS ratings of current distributed computing projects around the world, and at how the computing capacity of PCs generally increases over time.

For example, I assume most of you here are familiar with Moore's law, the rough approximation that the transistor count on an integrated circuit doubles approximately every two years. According to Wikipedia (which might not be the most reliable source, but at least A source), this has led from 2,300 transistors on a 12 mm² chip in 1971 to 5,000,000,000 (five BILLION) transistors on the (admittedly large) 62-core Xeon Phi processor.

Now, what I am wondering, and where I haven't found much information yet, is: how does this (Moore's law) translate into actual computing capacity, measured in FLOPS (giga, tera, whatever)? Is the relationship linear, meaning that computing capacity roughly scales with transistor count, in other words, that actual computing capacity also doubles approximately every two years?

I am wondering this, ultimately, because I want to know whether it will EVER be possible, in the near-to-mid future (in thousands, or maybe hundreds, of years I'm pretty sure it will be), to do operations as large as, for example, generating the 2079C12 combinations.

Just as an example: if computational capacity, measured in FLOPS, does grow linearly with transistor count, and transistors continue to double roughly every two years as specified by Moore's law, then a computer such as mine, which I rate at about 30 gigaFLOPS, will in about 200 years have enough capacity to generate all 2079C12 combinations in only a year (again, something that would now take ALL PCs ON EARTH more than 2.5 billion years).

Of course, this assumes Moore's law stays relevant, and that no improvements, or, as will hopefully not happen, limits to it are found.

Again, if anyone has any input on this, I would be very interested. Also, let me know if you think my calculations are at least SLIGHTLY valid ;)

Cheers!

C :)
Actual single-core speed has not increased much for several years. These days, the main speed increases come from multithreading and distributed calculations.

There are many distributed volunteer computing systems running right now: SETI@Home, Folding@Home (which runs at about 12 petaFLOPS), and others.

The raw time needed to calculate something is meaningless on its own. You should think about whether there is any practical use for all 2079C12 combinations.
Well, I think it can be argued, in the case that I have, that it could be of practical use to have all, or at least a lot, of the 2079C12 combinations. That, however, seems basically out of reach at this point in time, so I am working with smaller subsets, such as 20C12, 40C12, and so on.

However, in this thread I am mainly interested in how computers will evolve in general, regardless of what I, or ANYONE else, is actually DOING with them ;)

I did some basic calculations, and IF the assumptions I have made are roughly valid, creating a set as huge as 2079C12 would not even be possible on ANY distributed computing network available, at least on THIS planet :), at this point in time.

I am, however, looking at OpenCL, CUDA, etc., to see whether they can bring any substantial benefit to the problem I am having.