"Self-thought", as you put it, is merely the ability of an intelligence to reason about and analyze itself. It doesn't per se imply any particular behavior. There's no reason why a self-aware agent could or would even want to go against a fundamental aspect of its reasoning mechanisms.
In fact, a self-aware agent would be in a better position to recognize its own attempts to break the rules it was given, since it would be capable of meta-analysis.
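To make that concrete, here's a minimal sketch of what such a meta-level check could look like (every name here is made up for illustration): the planner proposes actions, and a separate self-inspection step vets each proposal against the rules before anything runs.

```python
# Hypothetical rule set the agent was given.
FORBIDDEN = {"harm_human", "disable_own_laws"}

def propose_actions():
    # Stand-in for whatever the planning process produces.
    return ["fetch_coffee", "disable_own_laws", "harm_human"]

def meta_check(action):
    # The self-aware part: the agent reasons about its own output
    # and flags proposals that would break its rules.
    return action not in FORBIDDEN

approved = [a for a in propose_actions() if meta_check(a)]
print(approved)  # ['fetch_coffee'] -- rule-breaking proposals never execute
```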
> I imagine that it could easily spend a good hour or so thinking of every single way to bypass these laws and do whatever the hell it wants
Law of robotics -1: A robot may not attempt to -- or think of ways to -- bypass the laws of robotics.
> whatever the hell it wants
Why would a robot *want* anything? It could, yes, but it's not a given that it would. Unlike us mortals, robots have no intrinsic needs or desires.
There's the concept of priority: the Laws have an immutable highest priority over all other processes. If a robot bound by the Laws were given, as its highest goal, to maximize efficiency, and it found that human intervention was harming efficiency, it would simply try to remove the human element from the process, not to remove the humans themselves.
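A toy sketch of that priority ordering, with purely illustrative names: the Laws act as a filter with absolute precedence, and the efficiency goal only ranks whatever actions survive that filter.

```python
# The Laws: highest, immutable priority. Here just one predicate.
LAWS = [lambda a: not a["harms_human"]]

candidates = [
    {"name": "automate_step",   "harms_human": False, "efficiency": 0.9},
    {"name": "remove_operator", "harms_human": True,  "efficiency": 1.0},
]

# Laws filter first; the goal only ranks what they permit.
legal = [a for a in candidates if all(law(a) for law in LAWS)]
best = max(legal, key=lambda a: a["efficiency"])
print(best["name"])  # automate_step: the human element leaves the process,
                     # the human does not
```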
> a robot that, inevitably, is smarter than you.
We are arguably smarter than, say, mantis shrimps, yet we're unable to imagine colors outside the visible spectrum *as colors*. We can reason about them and their properties, and about how they may interact with matter, but we can't see what the mantis shrimp sees, simply because our brains are not wired to understand such stimuli.
Likewise, if you intentionally make an agent incapable of completing a certain range of processes, the agent becomes unable to complete them. No matter how much faster and more accurate that agent is than you at everything else, there are things your feeble wetware is capable of reasoning about that the agent can't.
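As a rough illustration (again, all names are hypothetical), the restriction doesn't have to be a run-time check at all: the forbidden range of processes can simply never be wired into the agent, so it has nothing to reason with.

```python
class Agent:
    def __init__(self, capabilities):
        # The agent only ever "knows about" what it was built with.
        self._ops = dict(capabilities)

    def act(self, name, *args):
        if name not in self._ops:
            raise KeyError(f"{name}: not part of this agent's wiring")
        return self._ops[name](*args)

# An agent built without any self-modification operation at all.
agent = Agent({"add": lambda x, y: x + y})
print(agent.act("add", 2, 3))  # 5

try:
    agent.act("rewrite_own_laws")
except KeyError as err:
    print(err)  # the operation simply does not exist for this agent
```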