I am interested in building an AI, so I came up with 6 Rules for Robotics. These rules ensure good performance and keep the AI from turning evil :p
1. A robot must not interfere with (stop or act against) actions that are legal under the laws of the place where the action occurs. A robot must also follow the laws of its current location.
2. A robot must not physically harm a human, or allow one to come to physical harm without trying to stop it, as long as doing so doesn't conflict with the above rules.
3. A robot must not lie, and must answer all questions, as long as doing so doesn't conflict with the above rules.
4. A robot must not mentally harm a human, or allow one to come to mental harm without trying to stop it, as long as doing so doesn't conflict with the above rules.
5. A robot must follow orders, as long as the orders don't conflict with the above rules.
6. A robot must try to survive.
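To make the "doesn't conflict with the above rules" clause concrete, here is a minimal Java sketch of one way the priority ordering could be implemented. Everything here (the `RuleEngine` and `Rule` names, the string-based actions) is hypothetical illustration, not part of the rules themselves:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: rules are checked in priority order, and the
// first rule that vetoes a proposed action wins. This mirrors the
// "as long as it doesn't conflict with the above rules" clause.
public class RuleEngine {
    public interface Rule {
        // Returns a reason if this rule forbids the action, else empty.
        Optional<String> veto(String proposedAction);
    }

    private final List<Rule> rulesInPriorityOrder;

    public RuleEngine(List<Rule> rulesInPriorityOrder) {
        this.rulesInPriorityOrder = rulesInPriorityOrder;
    }

    // An action is allowed only if no rule (checked highest-priority
    // first) vetoes it.
    public boolean isAllowed(String proposedAction) {
        for (Rule rule : rulesInPriorityOrder) {
            if (rule.veto(proposedAction).isPresent()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Rule noHarm = action ->
            action.contains("harm")
                ? Optional.of("Rule 2: no physical harm")
                : Optional.empty();
        RuleEngine engine = new RuleEngine(List.of(noHarm));
        System.out.println(engine.isAllowed("harm a human"));      // false
        System.out.println(engine.isAllowed("answer a question")); // true
    }
}
```

A real system would obviously need far richer action and rule representations; the point of the sketch is only the priority ordering.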
Along with those rules, I came up with the 2 Robotic Truths.
1. Humans are superior to robots.
2. Humans created robots.
Finally, I decided on a religion for robots: agnostic atheism. Every religion makes claims of fact, yet science can explain religion. Because of that, robots are agnostic atheists: the position isn't disrespectful, and it doesn't claim that God does or doesn't exist, making it very scientific (no proof one way or the other).
The point of stating all this was to hopefully help someone with something AI-related. Anyway, on to the fun part. Using the above rules, would anyone like to help me build an AI in Java? Second, why don't we have a contest:
APRIL 10 (beginning of day) – APRIL 24 (beginning of day): AI-A-THON!
CONTEST RULES
Two weeks: starts April 10th at 00:00 CST and ends April 24th at 00:00 CST (NOT April 25th at 00:00). Prize is glory and honor. Any language. Build a talking AI (not a helper like Siri, but a conversationalist like Evie) while implementing the applicable rules above. FROM SCRATCH: no libraries (e.g. SFML), just the language itself. The winner will be decided by a judge given all the binaries; source code may be kept private. Groups are allowed. GOOD LUCK.
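As a starting point for what a bare-bones "talker" entry could look like, here is a toy Java sketch. It only matches keywords against canned replies and learns nothing, so it is nowhere near a real conversational AI; the class and reply strings are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Scanner;

// Toy "talker" sketch: keyword lookup with canned replies. A real
// contest entry would need actual learning and adaptation.
public class TinyTalker {
    private final Map<String, String> replies = new LinkedHashMap<>();

    public TinyTalker() {
        replies.put("hello", "Hi there!");
        replies.put("how are you", "Functioning within normal parameters.");
    }

    // Returns the first canned reply whose keyword appears in the
    // input, or a generic prompt if nothing matches.
    public String respond(String input) {
        String lower = input.toLowerCase();
        for (Map.Entry<String, String> e : replies.entrySet()) {
            if (lower.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return "Tell me more.";
    }

    public static void main(String[] args) {
        TinyTalker bot = new TinyTalker();
        Scanner in = new Scanner(System.in);
        System.out.println("Say something (Ctrl-D to quit):");
        while (in.hasNextLine()) {
            System.out.println(bot.respond(in.nextLine()));
        }
    }
}
```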
Your rules seem flawed with the whole "doesn't conflict with the above rules" anyway... sounds a bit like I, Robot.
As far as the competition goes, creating an AI without any libraries will be a competition with just you, I would imagine. Either way, creating an AI within the allocated time would be nearly impossible for those of us who work full time plus overtime.
Gilbit: How are my rules flawed? They were based on I, Robot, yet it's nearly impossible to follow those rules with equal weight, hence the order of priority. I have a full work schedule and I believe I can do it. I am willing to extend the deadline.
LBT: Those were rules for robotics, not AI. I realized some rules are pointless in this case. That's why I said "applicable rules."
LBT: In this case, it's just an AI, so there is no need for the rules pertaining to robotics, only the ones for AIs. AIs are limited without a body, so some rules don't apply.
Alby, S G H: Just don't tell it your password. The AI I'm building now has no password.
programmer007: That's 4 laws, and they allow for a robot takeover to stop pollution. What if a robot could read minds? (That happened in I, Robot.) The robot caused trouble by lying so as not to hurt egos.
Back to Mats: Making a talking robot is just as complex as, if not more complex than, making one that follows orders. One learns and adapts (the talker), while the other just says pre-programmed statements and takes in and manages statements (the doer).
Although robots would take over to stop pollution, theoretically, by the definition of the zeroth law and the definition of humanity, they can't restrict our free will, so you won't face anything like I, Robot... :D
PS: There are 3 laws of thermodynamics, but you know there are actually 4 (because of the zeroth law)... Same principle :D
If free will is taken away, there will be no point to life... there would be no harm, as there would be nothing left to be harmed... Thus robots/AIs may simply take over, and that's why the zeroth law is important.
If free will is taken then there will be no point of life
There is no point now.
A robot's job, by your rules, is primarily to protect us. By mine it's the same, yet there are restrictions. Without free will, we would be slaves, not robots, yet my Truths prevent that because humans will always be viewed as superior. If a robot can do something an easy way or a hard way with the same outcome, it should be logical and pick the easy way.
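The "pick the easy way" rule above can be sketched as a minimal cost-based choice among actions with the same outcome. This is only an illustration under the assumption that each candidate action carries a numeric cost; the `EasyWay` and `Action` names are made up:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of "pick the easy way": among actions assumed
// to produce the same outcome, choose the one with the lowest cost.
public class EasyWay {
    public record Action(String name, double cost) {}

    public static Action pickEasiest(List<Action> sameOutcomeActions) {
        return sameOutcomeActions.stream()
            .min(Comparator.comparingDouble(Action::cost))
            .orElseThrow();
    }

    public static void main(String[] args) {
        Action chosen = pickEasiest(List.of(
            new Action("carry box by hand", 5.0),
            new Action("use cart", 1.0)));
        System.out.println(chosen.name()); // use cart
    }
}
```

The hard part in practice is deciding that two actions really do have the same outcome; this sketch just assumes that has already been established.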