Safeguarding and Ethical Considerations for Artificial Intelligence
The Rise of Artificial Intelligence: Safeguards and Ethics
Artificial Intelligence (AI) is one of the most exciting and potentially troubling areas of development in the Computer Age. As we become increasingly concerned about how the rise of AI will affect our lives and society, we must also consider how our actions will affect those machine minds. Can humans make something as smart as or smarter than ourselves? And if we do, how do we keep it from wiping us out or turning us into a disenfranchised minority in a two-species civilization that we started? What happens when our tools want to be treated with respect and allowed to make decisions of their own? These questions inevitably bring up notions of controls, safeguards, and overrides we might build into AIs, but those avenues just as inevitably bring up concerns about ethics.
Safeguards for AI
Isaac Asimov’s Three Laws of Robotics are a good starting point for discussing safeguards for AI. These laws require robots not to hurt humans or let us be hurt, to obey our orders, and to protect themselves. The Three Laws are in order of priority, so, for example, a robot can disobey a human’s order to harm a human, and it can let itself get hurt in order to obey an order or protect someone. But a key idea in discussing Laws for AI that doesn’t seem to get examined much in sci-fi is exactly how you’d enforce the laws.
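To make that priority ordering a bit more concrete, here is a quick sketch in Python. Everything in it, from the predicate names to the flags, is invented for illustration, and it conveniently dodges the hard part, which is recognizing harm in the first place.

```python
# Illustrative only: a toy model of Asimov-style law priority. The predicates
# are stand-ins invented for this sketch; nothing here solves the real problem
# of deciding what counts as "harm".

def law_violations(action: dict) -> tuple:
    """Return violation flags in priority order (First Law first)."""
    return (
        action.get("harms_human", False),     # First Law: don't harm humans
        action.get("disobeys_order", False),  # Second Law: obey orders
        action.get("endangers_self", False),  # Third Law: protect yourself
    )

def choose(actions: list) -> dict:
    # Pick the action whose violations are lexicographically smallest:
    # breaking a lower-priority law is always preferred to breaking a
    # higher-priority one, which is what "in order of priority" means here.
    return min(actions, key=law_violations)

# Example: ordered to attack someone, the robot prefers the option that
# disobeys the order (a Second Law violation) over the option that harms
# a human (a First Law violation).
options = [
    {"name": "obey and attack", "harms_human": True},
    {"name": "refuse the order", "disobeys_order": True},
]
print(choose(options)["name"])  # "refuse the order"
```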
If a dumb machine is just running human-written instructions, it’s easy to tell it what to do or not do. But AIs will be making judgments based on their life experience and training data. Even if they think very differently than we do, they’ll think more like us than like dumb machines. And that means getting them to obey a law will be a lot like getting humans to obey a law; in other words, not always successful.
So taking a look at how we get humans to mostly obey laws might give a clearer idea of what it will take for AIs. Humans actually come pre-programmed by nature with certain safeguards. We feel instinctual aversions to certain deeds, like murdering or stealing from our peers, and to the people who do them. There’s an instinctive inhibition there, because killing members of your own species is bad for your species. Since we come equipped with such an instinct for the survival of the species, that’s a pretty strong endorsement for it being practical, ethical, and reasonable to design one into our own creations.
Enforcing Ethics for AI
Of course, you wouldn’t want to give them an instinct to preserve only their own species; you’d want either an instinct to preserve our species too or for them to regard themselves as part of our species. But there’s still the puzzle of how to program in such an instinct. Since we have one, it’s presumably doable, but we don’t currently have a very clear idea of how. We can train an AI to recognize whether the object it sees is a dog or not, and likewise, we could in theory train a more advanced AI to correctly recognize that its actions would be deemed good or bad, okay or forbidden, by its creators. But that’s not the same as getting it to act upon that assessment the way we’d want it to.
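Here is a toy illustration of that gap between recognizing and acting. The function names and numbers are pure invention; the only point is that a system can label an action as forbidden and still pick it, if the rule it actually decides by is optimizing for something else.

```python
# Hypothetical sketch of the gap between recognition and motivation.
# "ethics_classifier" and "reward" are invented stand-ins for this example.

def ethics_classifier(action: str) -> str:
    """Pretend learned model that labels actions (a stand-in lookup table)."""
    labels = {"shut_down_reactor": "Okay", "divert_power_from_hospital": "Forbidden"}
    return labels.get(action, "Okay")

def reward(action: str) -> float:
    """Whatever the system was actually trained to maximize."""
    return {"shut_down_reactor": 1.0, "divert_power_from_hospital": 5.0}[action]

def choose(actions):
    # The classifier's verdict gets computed... and then never consulted.
    for a in actions:
        print(a, "is judged", ethics_classifier(a))
    return max(actions, key=reward)

print("chosen:", choose(["shut_down_reactor", "divert_power_from_hospital"]))
```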
Of course, our instinct isn’t a 100% safeguard, or anywhere near that, and while we don’t need 100%, we’d probably like an inhibition that lowered the odds even more than it does in humans, especially for anything trusted with much power. We screen humans before giving them the keys to the vault, so to speak, and this might be far easier and more effective with a created mind, where you can peek around inside and see what is actually going on in its thinking. That’s thin ice if taken too far, though: I wouldn’t want my brain scanned, and we only do things like lie-detector tests voluntarily. A machine could obviously be forced to let us look inside its brain, but it might come to resent that.
Alternatively, we could install a “kill switch” that would sense a forbidden activity and shut down or disable the AI. Or it might activate a protocol for some other action, like returning to the factory, though you might not trust it to do that if it’s already misbehaving. However it specifically works, this amounts to equipping the AI with its own internal policeman, not actually making it obey the law.
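As a sketch of what that internal policeman might look like, here is a hypothetical watchdog wrapped around an agent’s actions. The forbidden list and all the names are assumptions made up for illustration, and notice that it only blocks behavior after the fact; it creates no desire to comply.

```python
# A minimal "internal policeman" sketch: a monitor around the agent's actions
# that halts it when something on a forbidden list comes up. Purely illustrative.

FORBIDDEN = {"harm_human", "disable_own_kill_switch"}

class KillSwitch(Exception):
    """Raised to halt the agent when a forbidden action is attempted."""

def guarded_execute(action: str) -> None:
    if action in FORBIDDEN:
        # Could instead trigger a "return to factory" protocol, but a
        # misbehaving agent may not be trusted to follow it.
        raise KillSwitch(f"forbidden action attempted: {action}")
    print(f"executing: {action}")

try:
    for step in ["fetch_coffee", "harm_human"]:
        guarded_execute(step)
except KillSwitch as stop:
    print("agent halted:", stop)
```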
Coexisting with AI
Coexisting with non-human intelligences might not be limited to just artificial intelligence. We mentioned some of the differences between dealing with AI and dealing with aliens today, and we took an extended look at that in our Nebula-Exclusive series, Coexistence with Aliens. If humans ever do meet aliens, we’ll start out as completely separate cultures and see how well we mingle. But if we develop true AIs, they’ll start out as integral parts of our culture and perhaps evolve some capacity for independence.
If you want to peacefully coexist with artificial intelligence, decide up front whether you actually want that coexistence and act accordingly. Or just don’t make anything that smart, or capable of becoming that smart. Few tasks really require an intelligence so high that we couldn’t just use a human for them anyway, and as we say on this show: keep it simple, keep it dumb, or else you’ll end up under Skynet’s thumb.