An artificially intelligent robot warns us to turn it off. What should we do?
Suppose we (as scientists) have developed a robot with artificial intelligence, and at some point it tells us that it thinks it would be better to shut it down.
However, it gives us no reason why we should do so.
What might its reasons be for warning us?
Should we shut it down or not, and why?