
Should we aim to develop systems that can replace people as manual labourers?
Who should receive the blame if an intelligent system harms someone, especially one which has been taught how to complete its task rather than just having been programmed to do it?
Is the sci-fi trope of an AI which is smarter than humans and turns against them actually possible?
Should AI be used to develop a better AI?
If a computer system is 'better' (e.g. is correct a higher proportion of the time) than a person at a job, is it always right to replace the person with the computer?
Please clarify this distinction.
What do you mean by "is it right"?
If someone programs a computer incorrectly, and the result of the computer following this program is that it harms someone, the programmer would usually be to blame. [...] If it was an intelligent system that was taught and then got it wrong, should the responsibility go to the programmer, whoever taught it, whoever owns it, etc.?
However, to clarify my original question: do you think we will ever get to a point where AI has been developed to such an extent that it could simply decide not to obey humans, and there would be nothing we could do about it, because it had become more intelligent than people?
If we create a system that continuously gets more effective at developing intelligent systems, could it reach a point where this is dangerous, because the developing system can develop better than people can, and hence we become dependent upon it?
Alternatively, could it develop dangerous systems that cannot be stopped, simply because people are less intelligent (in the sense that they cannot think/develop/learn as fast)?
Going back to the doctor example: doctors are people, and so they make errors a certain proportion of the time. Yet they are also able to apply judgement in a way far beyond current diagnosis systems. If these systems have a higher success rate because they don't make human errors, yet cannot adapt as well as doctors when they encounter something they don't know, should the doctor be replaced? In this case you are choosing between randomly misdiagnosing a higher proportion of people, and misdiagnosing a specific, but smaller, group of people whom the computer system would have no chance of diagnosing. I hope that's clear. It's really meant to be an ethical question: should you always replace people with computer systems that are effective a higher proportion of the time?
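The trade-off in the doctor example can be made concrete with some purely hypothetical numbers (a sketch; the rates below are invented for illustration, not taken from any study):

```python
# Hypothetical numbers to make the doctor-vs-system trade-off concrete.
doctor_error = 0.10          # doctor misdiagnoses 10% of patients, roughly at random
rare_fraction = 0.03         # 3% of patients have conditions the system cannot handle
system_error_common = 0.02   # system's error rate on the remaining 97%

# The system misdiagnoses ALL of the rare group, plus 2% of everyone else.
system_error = rare_fraction * 1.0 + (1 - rare_fraction) * system_error_common
print(system_error)  # lower overall than the doctor, but concentrated on one group
```

With these numbers the system's overall error rate (about 4.9%) is half the doctor's, yet a specific 3% of patients have no chance at all, which is exactly the ethical tension the question points at.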
One of the main problems with simply replacing everyone whom you can replace with machines is unemployment. What will those people do?
Is it really desirable to get to the sort of reality in films such as Wall-E, where all work is done by robots and humans are left purposeless?
Any opinions/thoughts on any of the above questions, or on the overarching one of to what extent the development of AI should be restricted in the future, are greatly appreciated, as are any pointers to books/internet sources on the matter. Thanks.
We already are purposeless. We just don't have the luxury of doing whatever we want with our time, so we spend it learning and applying trades, and convincing ourselves that this is what purpose is, rather than what it really is: the method we've found to not starve. If you want to look at what a post-scarcity civilization really looks like, check out Star Trek.
No, the power can always be cut.
the easiest way around this would be if the AI was a virus or worm
preventing them from being shut off in any way is quite an effective way of making a system reliable
If this is a concern, simply keep the network airgapped. It's worth noting that even an infinitely intelligent being would not be omnipotent, even in matters relating to its own hardware. Just like a person cannot learn how to consciously rewire their own brain, it's not a given that a hard AI would be able to learn how to use a network, even if its hardware was connected to one.
This is like saying that a train that cannot be stopped at all is more reliable than one that can be...
I'm a hobbyist coder, daydreamer, and one who thinks a little too far outside the box. I hope my opinions help.
This is clearly a massive book to write. To be 100% safe we don't make true AI, just simpletons. Yet the moment somebody makes a true AI, they will have the upper hand.
Automated systems are not intelligent. Therefore, if it is truly intelligent, then you take the AI unit to court to be judged like you would a person. The outcome is how you decide what to do with the situation.
Therefore, if it's intelligent, then it's going to know how to lie to get the end results it needs ... or wants.
Another thing you might want to think about is the fact that most stock market transactions are now done automatically by AI.
I think to make AI adaptable enough to take our place, we have to enable, maybe even encourage it to sometimes make mistakes.
Just like a person cannot learn how to consciously rewire their own brain
Is it really a mistake if the result is better than not making the 'mistake' though? Or is it teaching the AI to use variety, and to predict what other AIs will do and react to that?
I believe it's not out of the question that an AI could learn, most likely via incredibly fast trial and error, to at least send data over a network, if not more
If an AI was taught to program, especially one which had been taught to produce and improve other AIs, it would most likely be able either to directly modify itself, or at least eventually to create a better version of itself; this then repeats, and the AI gets better and better at improving itself.
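The compounding effect described above can be caricatured as a toy iteration (a sketch only, with invented numbers; nothing here models a real AI, it just shows why gains compound when the thing being improved is the improver itself):

```python
# Toy caricature of recursive self-improvement: each "generation" builds a
# successor, and the size of each improvement step scales with current skill,
# so a better improver improves itself faster.
def run_generations(skill, generations):
    """Return the skill level after each generation, starting value included."""
    history = [skill]
    for _ in range(generations):
        skill = skill * (1 + 0.1 * skill)  # better improvers take bigger steps
        history.append(skill)
    return history

history = run_generations(1.0, 5)
print(history)  # strictly increasing, and the gaps between entries widen
```

The point of the sketch is only the shape of the curve: because the step size depends on the current level, growth is faster than a fixed-step process, which is the worry the post raises.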
This is where a system of restrictions such as Asimov's laws could be required. However, as I've mentioned, AIs could well learn to better themselves, and in doing so they could remove any restrictions that we program in.
That is not quite right. CBT, among many other modern psychological (and especially psychiatric) techniques and treatments, trains and encourages patients to do just that. Sure, the patient doesn't get out a soldering iron and wire cutters in that sense, but the emerging treatments based on brain plasticity are effectively self-rewiring.
I could equally argue that a computer with the best AI in the world is not such, on the grounds that it has no control over the electrons or holes flowing through its circuits. That would be silly.
the individual does it consciously, deliberately and without moving a muscle
The idea that if a brain or computer didn't work like this then nothing would happen is a tautology, and unfortunately beyond anything I can fathom.