AI making

I would still be more than thrilled to find one.
...And, of course, because the amount of training necessary to look through a telescope is not equivalent to that necessary to, say, formulate Kepler's laws of planetary motion, discover general relativity, or invent calculus.
Galileo was a beginner famous for looking through a telescope.
Hempel said unlikely, not impossible.
No, Galileo Galilei was famous for the "scientific revolution".
Galileo Galilei was famous for the "scientific revolution".


....because he looked through a telescope.
Sooo... Now we need to create a powerful AI... WITH a telescope... that searches for new comets.

Hmmm, what will we name the comets? AI(1...n) seems rather boring to me. 8^P
@wtf,
No, that had nothing to do with telescopes. All he did with telescopes was use and improve them.
Lol, I started this thread with the intention of getting everyone to join a group to start an AI (wishful thinking). Now it's become an astronomy debate... programmers... Incidentally, I've decided to write a book instead.
Well, joking aside, it sounds like something that I would be more than happy to participate in. I should note, if it's not already obvious, that I'm still a beginner and know practically nothing about actual AI. So maybe I won't be of great assistance in inventing a new form of AI, but there's a small chance that I could contribute somehow in some meaningful fashion. I am a firm believer that in order to learn, or for that matter invent, any AI, one needs to start somewhere. I just read a quote attributed to Steve Jobs: "What differentiates a leader from a follower is innovation." I might add that this applies to beginners as well as to experts.

So what is it you want to do, particularly? One thing I might suggest, something that would have practical applications, would be a language translator. I doubt it would be as complicated as the project of creating a bot with "understanding" that the other poster had inquired about in the other thread. It might be possible to simply implement an expert system, which, depending on your definition, might disqualify it from being AI.

Suppose you were to code enough rules and algorithms to 'brute force' (of sorts) every possible permutation of definitions of words in a database, and then map, say, a sentence in English to a corresponding sentence in Spanish which matches the same definition. You would need some way to test whether the sentences make sense semantically as well as syntactically. I'm at a loss for how to do either, but I presume someone on here will tell me the latter has to do with parsing.
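
Just to sketch the word-by-word lookup idea in C++ (the dictionary entries here are made up, one sense per word for simplicity; a real table would hold several definitions per word and would still need the semantic/syntactic checks above):

#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main()
{
    // Made-up English -> Spanish entries, purely illustrative.
    std::map<std::string, std::string> dictionary = {
        {"the", "el"}, {"cat", "gato"}, {"eats", "come"}, {"fish", "pescado"}
    };

    std::istringstream sentence("the cat eats fish");
    std::string word;
    while (sentence >> word)
    {
        auto it = dictionary.find(word);
        // Fall back to the original word when there is no entry.
        std::cout << (it != dictionary.end() ? it->second : word) << ' ';
    }
    std::cout << '\n';   // prints: el gato come pescado
}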

Maybe the semantic aspect of it could be handled with classes: say a class for definition, which in turn would have a class for extension and another class for intension. I fooled around with Prolog once a little bit, not even enough to understand it, but would Prolog be well suited for a project like this? It seems to test whether x is a y and that sort of thing. What I don't understand about Prolog is that it returns false to the query "is x a y" if nothing has been stated about x or y, which I thought was counterintuitive (I've since read this is called the "closed-world assumption"). It should return a sentinel for "unknown". Prolog, I have read, is well suited for certain AI in general, and the extension/intension approach makes Prolog stand out in my mind.
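
If we rolled our own in C++ instead, the "unknown" sentinel might look something like this (just an illustration of the idea, not how Prolog actually behaves; the toy fact table allows one category per thing):

#include <map>
#include <string>

// Three-valued answer: the third case is what Prolog collapses into False.
enum class Truth { False, True, Unknown };

Truth isA(const std::map<std::string, std::string>& facts,
          const std::string& x, const std::string& y)
{
    auto it = facts.find(x);
    if (it == facts.end())
        return Truth::Unknown;   // nothing stated about x
    return it->second == y ? Truth::True : Truth::False;
}

int main()
{
    std::map<std::string, std::string> facts = { {"socrates", "man"} };
    // isA(facts, "socrates", "man") -> Truth::True
    // isA(facts, "plato", "man")    -> Truth::Unknown, not False
}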

But anyway, getting back to the main point. You would need a database of words, etc.; each word would have multiple definitions, and each definition would have at least an intension or an extension, but ideally both. Each intension would be a sentence of the kind "X is a Y and X has Z"; the Ys and Zs would themselves be similarly defined. Each extension would list, if possible, all known instances of X. Etc., etc.
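
In C++ terms the entries might be sketched like this (all the names are just mine, purely illustrative):

#include <map>
#include <string>
#include <vector>

// One sense of a word: the intension "X is a <isA> and X has <has>...",
// plus the extension (known instances of X).
struct Definition
{
    std::string isA;                     // the Y in "X is a Y"
    std::vector<std::string> has;        // the Zs in "X has Z"
    std::vector<std::string> extension;  // known instances, when listable
};

// Each word maps to its (possibly many) definitions.
using Lexicon = std::map<std::string, std::vector<Definition>>;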

Then, when you tried to translate from English to Spanish, it would try to translate the sentence using the first definitions and employ algorithms to determine which of those definitions is the one probably intended. The intensions and extensions may help in this process. But I haven't given it enough thought; what I'm talking about may involve infinite recursion and therefore be impossible. Just my 2 cents, though, to get the rest of you thinking.
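
One guess at how the "probably intended" definition could be picked, roughly in the spirit of the classic Lesk overlap heuristic: score each candidate definition by how many of its intension words also appear in the sentence, and take the highest scorer. A rough sketch:

#include <algorithm>
#include <sstream>
#include <string>
#include <vector>

// Count how many words of the sentence also occur in the definition's
// intension; the definition with the highest score wins.
int overlapScore(const std::vector<std::string>& intensionWords,
                 const std::string& sentence)
{
    std::istringstream in(sentence);
    std::string word;
    int score = 0;
    while (in >> word)
        if (std::count(intensionWords.begin(), intensionWords.end(), word) > 0)
            ++score;
    return score;
}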
Here's the thing:
To make AI, we need to make a computer WANT. What is want? Want is what we do. We "want" to understand the world through pattern recognition. When we look through those patterns and inputs, we look for other "wants" like food, reproduction, etc.
The problem with computers is that we're not sure what they "want"... they don't eat, sleep, reproduce (yet), or do any of the common "wants". The only "want" we can and should program into computers is understanding. We should program a computer to want to understand. When I say want, I mean they have no choice: they have to try and make sense of their input using pattern recognition. Once we have that down, we can look more closely at output. The output could also be closely linked to the want of understanding, e.g. the computer could ask "what is this?"
The input code would probably be similar to this:
// Each of these is a stub to be filled in; declared here so the sketch compiles.
void getInput();
void interpret_input_based_on_previous_patterns();
void interpret_input_based_on_previous_input();
void write_down_new_patterns();
void output();

void mind()
{
    while (true)
    {
        getInput();
        interpret_input_based_on_previous_patterns();
        interpret_input_based_on_previous_input(); // i.e. try to find patterns; this is key and probably very difficult
        write_down_new_patterns();                 // if any
        output();                                  // not sure what to do there
    }
}
AI seems the flavour of the month.

I am tired and it is late, but I work in AI and am a logician, so here are some thoughts:

what about logic? do we have any good logicians here


Logic will not do it, because logic (at least if you mean anything at least as strong as predicate logic, and you surely do) is non-constructive: first-order logic is undecidable, so no computer can solve it by brute force, and (OK, somewhat controversially) we have no heuristics capable of putting logic to use as a realistic AI tool.

it makes more sense to program AI to follow the principles of biological intelligence more closely.


Perhaps. The problems are:
1. We do not know very much about biological intelligence.
2. What we do know, or at least the models we use to represent what we like to think we know, is very, very complex.

I think the biggest problem in developing an actual thinking AI would be trying to program common sense into a computer. As far as I see it, in order to make a computer think on its own it will have to learn.


Learning is relatively easy. Learning in a way that is both tractable (can actually be computed) and robust (i.e. does not have an unreasonable failure rate on new cases) is the trick. And what a trick it has turned out to be! :(

And that gets to the center of the problem. We know how to create an AI, if only we had a computer that could process infinite amounts of information instantly. AI is ultimately about trying to find heuristics for problems that are too complex for any computer ever to solve by brute force (NB: 'ever' and 'any' are not exaggerations). It should be pointed out that AI is now advancing rapidly, and it is an area to which interested computer scientists could meaningfully contribute, but only with a large amount of math under their belt. Read the textbooks first, then find people to work with.

closed account (3TkL1hU5)
http://www.gamedev.net/community/forums/forum.asp?forum_id=9
@SealCodes : Nice link! I'm enjoying the threads!
This has, so far, been an interesting discussion (though it did veer off track into astronomy for a little while). I just wanted to add my six cents' worth:
Why should the AI program be robust? Look at us: we make massive errors all the time. Why should an AI program be any different? If one creates learning through constant input and pattern recognition, one is invariably going to make mistakes. If a small child sees something such as fuel, and then sees a liquid that looks like it being drunk by an adult, he may try to drink the fuel. This is a poor example, but I think it illustrates the point clearly. An AI program would make mistakes like this and many more besides.

The solution is simple: time. As children grow older they begin to understand more as they learn and experience the world. New patterns are made and become available, thereby creating what we call "learning." As we acquire more and more of these patterns over time, we begin to understand more. As our understanding grows, we look back on old patterns and alter or get rid of them. The robustness of an AI program should come naturally over time.

If this theory is correct (and it may not be; I am not an AI programmer in any way), then the task is to create a basic AI program and let it run for a long period of time. All the while, we give it constant input, showing it, teaching it, and telling it about the world. This, I believe, is the key to creating AI.
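
If I had to sketch that "look back on old patterns and alter or get rid of them" idea in code (purely illustrative, I'm making all of this up), each pattern could carry a confidence that new input reinforces or erodes:

#include <algorithm>
#include <functional>
#include <string>
#include <vector>

struct Pattern
{
    std::string description;
    double confidence;   // belief in the pattern, revised over time
};

// Reinforce patterns that a new observation confirms, weaken the ones it
// contradicts, and drop any pattern we no longer believe in.
void revise(std::vector<Pattern>& patterns,
            const std::function<bool(const Pattern&)>& confirmedByNewInput)
{
    for (Pattern& p : patterns)
        p.confidence += confirmedByNewInput(p) ? 0.1 : -0.2;

    patterns.erase(std::remove_if(patterns.begin(), patterns.end(),
                       [](const Pattern& p) { return p.confidence < 0.0; }),
                   patterns.end());
}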
closed account (3TkL1hU5)
For anyone who hasn't already read this... it's a good read.

http://en.wikipedia.org/wiki/Artificial_intelligence