
Legal Scholars, Engineers Fight Against War Robots

Don't worry that robots might kill your job prospects; worry that they might kill you.

This is not a test. It's a real-life situation, not a scary-movie scenario. Well, it was a movie, but that's not important right now.

What's important is that some military experts want AI to have an automatic trigger: weapons that can fire without a human pulling it. And of course, their sights are set on the lawyers.

AI Needs Restraint

That's because legal scholars are fighting back. Mary Ellen O'Connell, a law professor at Notre Dame, says AI needs some restraints.

Speaking at a Brookings Institution forum, she joined other scholars in saying human beings must be the final decision-makers when it comes to taking life. It's an issue also raised by the International Committee of the Red Cross.

"The development of autonomous weapon systems -- that is, weapons that are capable of independently selecting and attacking targets without human intervention -- raises the prospect of the loss of human control over weapons and the use of force," the committee said in a report.

The advocates want legal limits built into policy and treaties on robots, but U.S. military leaders say human beings are "always in the loop" when it comes to weapons. Besides, they say, machines already kill people autonomously.

"Terminator Conundrum"

The military calls the issue the "Terminator Conundrum," which should at least spook anyone who has seen the movie. A lot of Google engineers have, and they want no part of it.

Some 3,100 of them have signed a petition protesting the company's involvement in Project Maven, a Pentagon program that uses AI to collect and analyze drone footage in combat. They have demanded that the project be canceled.

Of course, they run the risk that some robots will take over their jobs.
