Sorry for the relatively long absence; a combination of panicked programming coursework and getting my AS results has been taking up my time recently. I've also started reading Surviving AI by Calum Chace, which is a highly informative book.
@Helios
If this is a concern, simply keep the network airgapped.
It's worth noting that even an infinitely intelligent being would not be omnipotent, even in matters relating to its own hardware. Just like a person cannot learn how to consciously rewire their own brain, it's not a given that a hard AI would be able to learn how to use a network, even if its hardware was connected to one. |
If an AI was taught to program, especially if it was taught to produce and improve other AIs, it would most likely be able either to modify itself directly or at least to eventually create a better version of itself. That process then repeats, with each version getting better at improving its successor, although there will of course be a hardware barrier to how good it can actually get. As such, I believe it's not out of the question that an AI could learn, most likely via incredibly fast trial and error, to at least send data over a network, if not more.
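To make that self-improvement loop concrete, here's a toy sketch in Python (entirely my own illustration, nothing like a real AI): the "program" is just a list of numbers, "improving itself" means proposing a mutated copy via trial and error and keeping it only if it scores higher, and a fixed iteration budget stands in for the hardware barrier.

```python
import random

def score(program):
    # Hypothetical fitness measure: higher means "better" at the task.
    return -sum((x - 3.0) ** 2 for x in program)

def propose_variant(program):
    # Trial and error: randomly perturb one part of the current version.
    variant = program[:]
    i = random.randrange(len(variant))
    variant[i] += random.uniform(-0.5, 0.5)
    return variant

program = [0.0] * 5          # the initial, unoptimised "AI"
for _ in range(10_000):      # hardware barrier: a finite compute budget
    candidate = propose_variant(program)
    if score(candidate) > score(program):
        program = candidate  # the better version replaces its predecessor

print(score(program))
```

Obviously a real system would be vastly more complicated, but the core feedback loop, propose a variant and keep it if it's better, is the same shape.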
This is like saying that a train that cannot be stopped at all is more reliable than one that can be... |
This was intended to highlight an issue with a truly creative AI system, especially one with a natural language interface. There's no guarantee that it would interpret what it was told to do in the same way that the person commanding it interpreted the instructions. This is where a system of restrictions such as Asimov's laws could be required; however, as I've mentioned, AIs could well learn to better themselves, and in doing so could remove any restrictions that we program in.
@DTM256
I'm a hobbyist coder, daydreamer, and one who thinks a little too far outside the box. I hope my opinions help. |
Thinking a bit far outside the box is most likely a good thing when it comes to this sort of topic. Also, your opinions do help; I need to get the opinions of as many people as I can, especially people such as programmers.
This is clearly a massive book to write. To be 100% safe we don't make true AI, just simpletons. Yet the moment somebody makes a true AI, they will have the upper hand. |
Well, I've got a 5000-word essay and a 15-minute presentation to write on it, so I'll most likely do a general analysis of the question and pick a couple of specific areas to examine in detail. The arms-race scenario of everyone rushing to make better and better AIs is one of the dangerous scenarios I intend to address. I believe some restrictions do need to be put in place to ensure that we don't develop something too powerful to control or contain, hence my main question.
Automated systems are not intelligent. Therefore, if it is truly intelligent, then you take the AI unit to court to be judged like you would a person. The outcome determines what you do with the situation. |
While this does seem the logical conclusion, I'm not sure most people will ever be happy with punishing what they will most likely consider 'just a machine', if the machine does something to harm them.
Therefore, if it's intelligent, then it's going to know how to lie to get the end results it needs... or wants. |
That's what I'm trying to work out how to avoid. The ideal situation would be one where we can make actually intelligent systems, but do it in such a way that they never actually want to do anything that we don't want them to. I'm not sure that'll happen though.
@htirwin
Another thing you might want to think about is the fact that most stock market transactions are now done automatically by AI. |
This is a very interesting topic, and one that is also mentioned in the book I'm reading. In fact, I think that if we do get a feedback loop of self-improving or evolving AIs, it would most likely happen here. Evolving AIs are likewise an interesting topic, though as far as I know one that has not yet seen widespread practical use.
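For anyone curious what "evolving" means in miniature, here's a textbook genetic-algorithm loop I've sketched for illustration (the fitness function and all the numbers are made up, and it has nothing to do with real trading systems): a population of candidate strategies is scored, the fittest survive, and mutated offspring fill the next generation.

```python
import random

def fitness(strategy):
    # Hypothetical objective: how closely the strategy matches a target pattern.
    target = [1, 0, 1, 1, 0, 1, 0, 0]
    return sum(1 for a, b in zip(strategy, target) if a == b)

def mutate(strategy, rate=0.1):
    # Each bit has a small chance of flipping, introducing variation.
    return [1 - bit if random.random() < rate else bit for bit in strategy]

# Start with 20 random 8-bit "strategies".
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                     # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # reproduction with mutation

print(max(fitness(s) for s in population))
```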
I think to make AI adaptable enough to take our place, we have to enable, maybe even encourage, it to sometimes make mistakes. |
Is it really a mistake if the result is better than not making the 'mistake' though? Or is it teaching the AI to use variety, and to predict what other AIs will do and react to that?
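Incidentally, that idea has a well-known form in reinforcement learning called epsilon-greedy: the agent mostly exploits its best-known option, but deliberately tries a random one a small fraction of the time, and those "mistakes" are what let it discover better options. A rough sketch (the hidden payoff numbers are invented):

```python
import random

true_payoffs = [1.0, 2.5, 0.5]          # hidden values of three choices
estimates = [0.0, 0.0, 0.0]             # agent's running payoff estimates
counts = [0, 0, 0]
epsilon = 0.1                           # how often to "make a mistake"

for _ in range(1000):
    if random.random() < epsilon:
        choice = random.randrange(3)    # deliberate exploration
    else:
        choice = estimates.index(max(estimates))  # exploit best estimate
    reward = true_payoffs[choice] + random.gauss(0, 1)
    counts[choice] += 1
    # Incremental average keeps a running estimate of each payoff.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)  # should roughly recover the hidden payoffs
```

Run for long enough, the estimates converge on the true payoffs precisely because of the occasional deliberate "mistake", which fits your point that a mistake with a better result arguably isn't a mistake at all.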