OpEd

Robot Brains Need Human Rules

Artificial intelligence is too useful and too advanced to ignore. But it also comes with huge risks, and should be limited accordingly.

By Kathrin Werner

If you want a good scare, take a look at the artificial intelligence on display at this year's South By Southwest festival in Texas. Scientists there report that people can control prostheses with their brains. Artificial intelligence will soon control artificial body parts, while robotic brains will hire workers, predict crime, control drones and manage health data.

Futurist Ray Kurzweil prophesies that by 2029, AI will be as intelligent as we are — and that no one will be able to tell whether we are talking to machines or actual people. Billionaire, technology fan and Tesla CEO Elon Musk, for his part, thinks artificial intelligence is more dangerous than nuclear weapons.

To dismiss these concerns as simply fear of the future, the way people once feared the advent of railways, is to ignore the real shifts that are taking place, and to place entirely too much faith in technology. Artificial intelligence has a dimension that no other invention in history has had: Nobody understands it. After its initial programming, it continues to develop on its own and makes decisions that its inventors cannot explain. Rules for robot brains are thus urgently needed.

Technology is developing faster than legislation

But there's a problem: Thanks to all the rhetoric about a robocalypse, the danger of overregulation is now even greater than the danger posed by AI itself. AI, after all, presents huge opportunities for progress, for example in curing diseases, because it can work through large amounts of data far better than the human brain can. Such applications are already a reality or close to it. An AI bent on world domination, by contrast, has so far remained science fiction.

Any regulation of artificial intelligence faces a fundamental problem: Technology is developing faster than legislation. AI is also a collective term covering several technologies; there isn't just one artificial intelligence. Misunderstanding this is what drives demands to establish a dedicated authority to oversee AI. That's a bad idea. After all, there is no computer authority that sets rules for computers. AI is a tool, so regulation must start where this tool can cause damage.

Autonomous cars, for example, are not allowed to decide for themselves to exceed speed limits just because drivers around them are driving too fast. Similarly, there must be limits in the area of financial markets and medicine. AI must not be allowed to break any laws that apply to humans. For example, it shouldn't be able to record and analyze conversations in the living room without permission. Responsibility must remain with us humans.

Likewise, there should be no place for the excuse "That wasn't me, that was my artificial intelligence." What's more, artificial intelligence should always make itself clearly identifiable as non-human. It is also worth considering AI systems that supervise other AI systems: a robot, for instance, that brakes when an autonomous car drives too fast.

Artificial intelligence will come, whether we like it or not. If we try to slow its progress down with regulations, China will continue to push it forward. So far, the level of knowledge of almost all politicians in such matters is abysmal. They generally only know that keywords like blockchain and AI are important.

They have to face up to that responsibility and take fears of job losses and killer weapons just as seriously as the opportunities AI will bring. Before they write laws, they must understand the technology, even though part of it will always remain unexplainable.
