I have spent most of my waking hours over more than 20 years on understanding problems, transforming them into IT structures and writing working systems - but not a minute on how this could make me rich. With this fundamentally different background, it is no surprise that I have a quite different view on this topic. Do you have a few minutes to read and think about my opinion?
Why fear?
We fear many things, starting with the bogeyman, then switching to teachers, monsters, ghosts, socialism, capitalism, banks, Islam, whatever - with more or less reason. It is a longer story, but in short, I think this is because our "modern culture" has no real answer to the simplest and most common fears: our personal vulnerability and our inevitable death.
This ultimate fear is very strong and almost impossible for our brains to truly handle. Recall: your brain is not a rational computer, but just another organ of your body, with the primary objective of keeping that body alive as long as possible - and it knows that it will surely fail! Obviously, we have a very strong motivation to find other fears that we can share and talk about.
So we are kind of addicted to having common fears, regardless of the actual target or its validity, and it is very popular to be an expert on a common bogeyman, especially one with a "modern" taste. Consequently, being afraid of an evil AI is a perfect topic: great business, lots of fun.
Is there any true reason to be afraid of an AI? Who cares??? It's showtime!
Films, TV series, great articles, worried scientists, excited chats with colleagues and neighbors, viral topics on Facebook.
So, is it rational to fear an evil AI?
A child is afraid of the bogeyman because our brain seeks environments that it can control. The time while we sleep and the dark corners we cannot see into put our brain into an alarm state, and it fills the "unknown" areas with monsters to motivate us to examine the unknown and make it known to us. To settle this fear-or-not question, we should examine the terms first.
As a system analyst, I can say that "evil", and morality as a whole, is a human and social term, and has absolutely nothing to do with computer systems. "Intelligence" is a very popular word, but behind this "fear" talk I see no practical definition that could be applied to both human and machine intelligence - only phrases like "behaves like a human", "plays chess better" or "is more ethical" than us. (I do have a quite usable definition, but that should go into another article.)
For now it is enough to say that with our current programming tools and concepts, it does not matter how fast the machines become: our systems will never become "dangerously intelligent". For example, "Skynet" would need transparent control over all systems - but today we have a gigantic amount of incompatible designs, platforms and solutions, often to the very same problems. Merely synchronizing user management within a single company is a hard task that requires a serious amount of manpower, resulting in yet another heap of custom code and therefore even less transparency. From this process, not even simply reliable operation can emerge, let alone an extremely complex "conscious entity" - although mysterious errors caused by faulty designs may look like "the ghost in the machine" to less trained minds.
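To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical system names and fields) of the kind of hand-written glue code that even simple user synchronization demands: two in-house systems model the "same" user incompatibly, and every mapping rule is a fragile, company-specific assumption that someone must maintain.

```python
# A minimal sketch: synchronizing user accounts between two hypothetical
# in-house systems that model the "same" concept incompatibly.

from dataclasses import dataclass

@dataclass
class HrUser:               # legacy HR system: one free-text name field
    emp_no: str             # e.g. "E-00042"
    full_name: str
    dept_code: int

@dataclass
class CrmUser:              # newer CRM: split names, string department
    login: str              # e.g. "jsmith"
    first_name: str
    last_name: str
    department: str

DEPT_NAMES = {10: "Sales", 20: "Engineering"}   # maintained by hand

def hr_to_crm(u: HrUser) -> CrmUser:
    # Every rule below (name order, login format, known dept codes) is a
    # fragile, company-specific assumption that has to be kept up to date.
    first, _, last = u.full_name.partition(" ")
    return CrmUser(
        login=(first[:1] + last).lower(),
        first_name=first,
        last_name=last or "(unknown)",
        department=DEPT_NAMES.get(u.dept_code, "Other"),
    )

if __name__ == "__main__":
    print(hr_to_crm(HrUser("E-00042", "Jane Smith", 20)))
```

Multiply this little mapping by thousands of systems, vendors and undocumented conventions, and the idea of one program transparently controlling them all looks rather different.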
The popular fear-makers say that we will have faster and faster machines all around us, and at a critical moment they will reach consciousness. Ridiculous. As if a ton of earthworms, upon reaching some critical mass, would transform into a single dangerous giant entity. Or as if putting a stronger and stronger engine into a car would eventually make it fly. Well, it will almost surely crash, because you can no longer control it. Flying requires completely different propulsion (not spinning wheels), wings and aeronautical knowledge: a paradigm shift.
A better analogy
Until a few centuries ago, physical power was the final argument: every person had his or her own experience of being forced to do something, and knew stories of people humiliated or killed by stronger ones. Then engineers created machines that were stronger than human beings. It was quite natural that the same fear appeared: giant machines will come, they will be like us but stronger and invincible, and they will soon rule the world. I guess there were plenty of popular scientists who told smart stories about them to the masses seeking their daily dose of fear. Luckily, others spent years experimenting with new materials, shapes and combinations, making all the 99% of errors that are absolutely required to find the 1% of successful innovation.
Creating these machines required understanding the concept of "power", separating it from human strength, reducing it to very simple atomic elements like pressure, movement, lift and torque, and then building custom structures from those components: crane, drill, sawmill; then cars, ships, airplanes. Today we are not afraid of "monster machines": we know that their strength does not mean an inherent danger to us - although it is very important to have trained people operating them.
The other side of the story is that yes, we can attach blades, guns, rockets, mines, chemical, biological and nuclear weapons to these machines, and we can and actually do kill other human beings with them, every day and night. In fact, we would not kill most of those people with bare hands or swords. Nor would we pose a true danger to the whole planet without these machines - considering not only weapons, but also car and plastic bottle factories, nuclear power plants and industrial agriculture, with their pollution and destruction of natural cycles.
Machines have indeed made the human race very powerful, but we should not blame them for how we use that power.
Is there a real danger with using more advanced IT systems?
Yes, surely. But it is not the internal "immorality" of a computer system, however intelligent it may become. It is the immorality of the people creating and using these systems, and that is a true and validated danger.
Via information systems you can watch or manipulate the lives of other human beings, while separating yourself from the natural empathy that could stop you from doing unfair things. You can very easily be evil under quite simple conditions, if you can alienate the target people - as was amply demonstrated by Auschwitz and by the Milgram experiment.
... and naturally, you can create IT systems that automatically decide the fate of human beings and do "evil" things to them. But that does not need an evil AI at all. It is already here: in the mines of Africa, in the sweatshops of India and China, in the insurance and health care companies of the Western world.
The danger is in the mind and the hand, not in the knife.