AI making

Oct 1, 2010 at 3:47am
Since everyone hated the idea of the last person who talked about this subject, I propose this: do better.

You all thoroughly destroyed his arguments, and that's nice, but this website is supposed to help people in need. He was in need of help creating an AI, and you all just told him he sucked or that it was impossible. Real constructive...

Let's try this: why doesn't everyone on this website (who cares) get together here and discuss thoroughly how to create one? We are all programmers of some skill; maybe we can do it together...
Oct 1, 2010 at 4:34am
If you read the thread from beginning to end, you know that the replies started out helpful (in the sense of telling him that he had no idea what he was doing, which he didn't), then turned hostile when the OP replied with arrogance. Let me tell you something: we don't like arrogance, doubly so when unjustified. Isn't it extremely arrogant for someone with little to no formal computer science training to think that they can outdo the efforts of an entire field that's been working at the problem for decades?

I mentioned in that thread how it's a lot like the people who think they've discovered an infinite compression algorithm, and I mean it. Go to comp.compression and search "infinite compression". Every once in a while, some moron thinks they've discovered a method to do the mathematically impossible and they feel compelled to tell the world. One instance I witnessed somewhere else involved converting the binary representation of the message into a mathematical expression. I thought it was funny that the guy couldn't get into his head the fact that to represent random strings you'd need at least as much space as the original string. Obviously, when the algorithm utterly failed, he blamed his buggy implementation, not the principle behind it.

But I'm getting way off-track. The fact is, subjects such as data compression and AI are very complex and people dedicate their lives to study and understand them. I can't think of many things as arrogant as some random dude walking in claiming to have found the holy grail after doing a summer course on the subject.
Oct 1, 2010 at 4:57am
Granted, the guy was a bit of a... well, I won't say it. Still, it remains of interest to me that you didn't try to offer a suggestion. What I would've done is give the guy something massively complicated and wait 5 years for him to puzzle it out. There was no need to blast his hopes. But I see why you did it, and to be honest I don't care. What I do care about is making AI. What do you think? If we got a lot of people on this website to join in, could we do it? I have a few ideas on how myself... By the way, I have 3 years of C++ experience and specialize in embedded systems (and yes, I know most of it is in C).
Oct 1, 2010 at 5:02am
Well, I doubt anyone here has any experience in AI except maybe state machines of the kind you'd find in a game.
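For what it's worth, a game-style state machine of that sort boils down to something like this (just a sketch; the guard states and sensor inputs are made up for illustration):

```cpp
#include <cassert>

// A guard "AI" of the kind you find in games: it patrols, chases the
// player when spotted, and attacks once in range.
enum State { PATROL, CHASE, ATTACK };

// Pure transition function: next state from current state + sensor inputs.
State next_state(State s, bool sees_player, bool in_range) {
    switch (s) {
        case PATROL: return sees_player ? CHASE : PATROL;
        case CHASE:  return !sees_player ? PATROL
                                         : (in_range ? ATTACK : CHASE);
        case ATTACK: return in_range ? ATTACK : CHASE;
    }
    return s; // unreachable; keeps compilers quiet
}
```

Everything this machine "knows" is hand-coded in the transition table, which is why it counts as game AI but doesn't learn anything.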
Oct 1, 2010 at 5:09am
Nah - no need to give him anything massively complicated - his original objective is already massively complicated to implement.

Part of the difficulty in figuring out any AI implementation is striking a balance between flexibility (so it has the chance to learn) and structure (so what it has learned can actually be deemed "intelligent"). To give any AI project a small chance of success, usually, one tries to narrow the field and/or the problem as much as possible, not throw out something totally broad and generic.
Last edited on Oct 1, 2010 at 5:12am
Oct 1, 2010 at 5:09am
What about logic? Do we have any good logicians here? Because to be honest... that's all we need. I doubt anyone in the world has much experience with AI, since we've never made any; that's what I'm saying. If we got enough programmers together, we actually might be able to get somewhere...
Oct 1, 2010 at 5:14am
Good AI takes a lot more than good logicians - Prolog was invented in 1972, 38 years ago - and the AI problem would have been solved already if logic were all we needed.

Realize that, with a high degree of probability, what you are thinking now, someone else may have tried, and in fact, spent a good part of their life researching. The world is big and there are a lot of intelligent people out there.
Last edited on Oct 1, 2010 at 5:19am
Oct 1, 2010 at 5:19am
OK, then what about time? A human baby takes years of constant input to become coherent. Why should Skynet be any different? That's what I think the problem is... we aren't giving the learning process enough input and time. I think flexibility and time are key in this case. The human mind is nothing more than an advanced AI that has the remarkable ability to pass on its knowledge. What if we copied the design?
Oct 1, 2010 at 5:30am
That's been thought through and researched for decades already, too. The idea behind neural networks is to copy the structure of the mind with nodes and interconnections, in hopes of gaining some intelligence. There has been some success using neural nets in specialized areas like pattern recognition, but their power is nowhere near that of a generic baby brain. Some AI theorists think that developing AI may be like engineering a plane - we don't necessarily need to follow the structure of a bird in order to fly. In fact, the principles are somewhat different: planes depend more on thrust and gliding. The same may, eventually, be true of AI.
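To make "nodes and interconnections" concrete, here's roughly what a single node in a classic artificial neural net does - a hedged sketch only; the weighted-sum-plus-threshold model is the textbook one, and the particular weights and threshold below are arbitrary illustrative values:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One neural-net "node": take a weighted sum of the inputs and fire
// (output 1) if the sum crosses a threshold. A network is layers of
// these wired together; "learning" means adjusting the weights.
int neuron(const std::vector<double>& inputs,
           const std::vector<double>& weights, double threshold) {
    double sum = 0.0;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return sum >= threshold ? 1 : 0;
}
```

With weights {1, 1} and threshold 1.5, for example, that node computes a logical AND of two binary inputs.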

The bottom line is, we can't copy the human mind because we can't figure out how the human mind works. We may understand the mechanics and the chemistry at neuron level, but the brain is much, much more than the sum of its parts.

I will requote helios' quote in the other thread, because it's the most relevant message in the entire thread:

A beginner is unlikely to make a significant scientific discovery, because the ideas he may come up with are likely repetitions of ideas that have already been tested, or they might be in collision with facts or theories he or she doesn't know about.
-Carl Hempel, Philosophy of Natural Science, ch. 2.

That's not to say it's impossible, just less likely. Since our lifespans are limited, it's best to spend a little of that time understanding what has been attempted, so we can better spend the rest investigating ideas with a better chance of success.
Last edited on Oct 1, 2010 at 5:34am
Oct 1, 2010 at 5:33am
I doubt anyone in the world has much experience w/ AI since we've never made any
What do you mean? There's voice recognition, face recognition, fingerprint scanners, data mining, ALICE.
http://en.wikipedia.org/wiki/Applications_of_artificial_intelligence

Rather than this AI business, I have a medium-sized OSS project in mind I've been wanting to do for a while but couldn't partly because of time constraints and partly because I wanted to do it in a team.
Oct 1, 2010 at 5:39am
First, to answer kfm: I see what you mean... and I think you're right. And were those last 3 lines a quote? Because those were GOOD!

Next, helios: if you need help with the OSS, I do know WinAPI programming as well. I'll be glad to help if you want. Oh, and I didn't know about ALICE.
Last edited on Oct 1, 2010 at 5:40am
Oct 1, 2010 at 4:59pm
Basically, what I want to do is take librsvg, rip Cairo out of it, and replace it with Anti-Grain Geometry and some callbacks.
Oct 1, 2010 at 5:13pm
Okay... helios, you are my hero. Honestly, like a shining demi-god. Your project sounds like a very fun one - vector graphics are awesome, and I know you need some intelligent people working with you. I can't say much for his experience, but "sargon94" will work hard on your team. I could lend a hand as well, depending on what you needed.

to sargon94:
bro.. just wow.
Last edited on Oct 1, 2010 at 5:40pm
Oct 1, 2010 at 8:49pm
The following is just an idea, likely already thought of (especially because I have no experience in AI), so blast it as you see fit.

In the (minimal) amount of "research" (read: Googling) I've done on neural networks, I get the impression that traditional NNs aren't emulating the brain as much as they should. In traditional networks, you have a static, premade network of nodes. In the human brain, though, don't neurons grow and connect themselves to existing networks? Also, I think learning in the brain follows the rule "if they fire together, they wire together," while people normally use algorithms like back-prop.
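For the curious, the "fire together, wire together" idea (Hebbian learning) can be sketched in a few lines. This is an illustrative toy, not how real brains or production networks work, and the learning rate is an arbitrary choice:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hebbian rule: when input x[i] and output y are both active,
// strengthen the weight between them. Contrast with back-propagation,
// which pushes an explicit error signal backwards through the network.
void hebbian_update(std::vector<double>& w,
                    const std::vector<double>& x, double y,
                    double eta = 0.1) {
    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] += eta * x[i] * y;  // co-activity strengthens the connection
}
```

Note the rule is purely local: it needs no target output and no error to propagate, unlike back-prop.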

While the plane analogy makes sense, the brain is MUCH more complex than the idea of flying, so any other methods of creating intelligence would be ridiculously complex (as with the brain). In my mind it makes more sense to program AI to follow the principles of biological intelligence more closely.
Oct 2, 2010 at 3:19am
Entropy will probably wreak havoc on your idea.

Yes, the brain is more complex than a NN. No, most NNs don't grow and connect themselves to existing networks.

However, the bottom line is: we really don't understand where, in all that complexity, the brain's intelligence comes from. All we're doing is making best guesses (+1 Entropy).

Is it in the number of nodes? Maybe. Is it in the number of interconnections? More likely. However, if we just throw more nodes and interconnections together, will the system just spontaneously self-organize into a coherent system? Entropy usually says no.

There has to be something that really adds intelligence to the system. Maybe it's the training? Maybe it's in the number and weight of the feedbacks? Maybe it's in the bunches of neurons that fire together, as you suggested? Perhaps we could use genetic algorithms to have the system self-organize? It could even be due to the analog nature of the brain (as suggested in another thread), which allows for fuzziness. I'm pretty sure all these areas have been researched; however,

we just don't know where the intelligence is... ...yet.

As for the plane analogy, it applies to varying degrees. Note that we are building digital computers out of silicon, not growing analog neurons out of carbon, because we are assuming that those particular electrochemical features in biological neurons are not important. AI researchers are only modeling the features that they think are relevant.

Are those omitted features truly important or not? No one knows yet, but this problem of extracting only the relevant features is intrinsic to the science of modeling.

BTW, a bird's wing flap is actually much, much more complex than the Bernoulli's Principle behind machine flight. In the early days of flight, we saw planes with flapping wings that look silly to our eyes today, but definitely seemed like a valid design back then. It's just an analogy - the real answer to AI could be really simple (based on just one area of AI not yet invented) or complex (needing a combination of many aspects of AI + other fields).

We don't know.

I don't want to say there are no advances in AI in the last few decades, because there are definitely advances in biometrics and some other pockets of AI as mentioned earlier; just no big breakthroughs (of the Steven Spielberg-type) yet.
Last edited on Oct 2, 2010 at 3:37am
Oct 4, 2010 at 6:09pm
@sargon94
I think the biggest problem in developing an actual thinking AI would be trying to program common sense into a computer. As far as I see it, in order to make a computer think on its own it will have to learn.

For a computer, learning would mean reprogramming itself so that it won't repeat mistakes made in the past. Technically speaking, I think making a program reprogram itself is possible, which is a good sign. But when you think about it more carefully, you have to ask yourself: how will it know when it's supposed to reprogram itself?

I think that's where the real problem lies with AI. Computers don't understand right from wrong or good from bad, and they don't have any motivation of any kind. Nor can we program the idea of good or bad into a computer (unless we limit it to a single task like chess, or catching a ball), because good and bad are relative to the situation.

For example, let's take a simple task like formatting your hard drive. If you ask when it is good and when it is bad to format your hard drive, there is no definite answer, because it depends on your intentions. If you want to reinstall a new OS, it's probably the "correct" thing to do, but if you want to keep your files, it's definitely not the correct choice.

So until computers have that common sense to discern the correct thing to do from the incorrect, it will be impossible for a computer to learn and actually think for itself. That's just my opinion about that.
Last edited on Oct 4, 2010 at 6:10pm
Oct 5, 2010 at 7:57am
You know what? That was a fantastic insight. If you were to have a certain goal in mind (perform good) and a hierarchy of good and bad things, with perhaps a way of classifying unknown things, you could feasibly create a learning AI. However, where would it learn new actions? Do you need a list of possible actions as well?

Also on the topic of artificial intelligence, what exactly do you intend to do with said intelligence after it has "learned" what you intend? Spout off information? (just musings)

#include <string>

struct knowledge{
   int level_of_goodness;
   std::string what;
};
// "holy" and "unholy" are just knowledge with a preset goodness score
struct holy: public knowledge{ holy(){ level_of_goodness = 777; } };
struct unholy: public knowledge{ unholy(){ level_of_goodness = -666; } };
class AI{
   protected:
   knowledge all;              // everything the AI "knows" (one blob, for the sketch)
   public:
   holy learn(knowledge what); // absorb new knowledge and rate it
   unholy spew();              // produce... whatever it produces
};
Oct 5, 2010 at 5:38pm
kfmfe04 wrote:
A beginner is unlikely to make a significant scientific discovery, because the ideas he may come up with are likely repetitions of ideas that have already been tested, or they might be in collision with facts or theories he or she doesn't know about.
-Carl Hempel, Philosophy of Natural Science, ch. 2.


Then why is it that the majority of new comets are found by amateur astronomers?
Last edited on Oct 5, 2010 at 5:39pm
Oct 5, 2010 at 5:59pm
A new comet is hardly as significant to astronomy as a strong AI is to computer science, but if you want an answer, it's because there are more amateur telescopes than observatories.
Oct 5, 2010 at 6:02pm
Then why is it that the majority of new comets are found by amateur astronomers?

because "discovering a comet" isn't in the same league as a significant scientific discovery

with the availability of the Internet and the openness of astronomers, I wouldn't be surprised if it's even easier these days to find a new comet and verify that it hasn't been found already...

...besides, I am sure most professional astronomers are interested in harder problems than finding a comet that hasn't been observed before.

It's almost like saying finding a yet undiscovered flock of migrating birds is as significant as formulating Maxwell's equations. An understanding of how to engineer strong AI would be worth a lot more (and is much more difficult) than discovering a comet.
Last edited on Oct 5, 2010 at 6:08pm