
Of Ethics and Artificial Intelligence


War has changed. It's no longer about nations, ideologies, or ethnicities. It's an endless series of proxy battles, fought by mercenaries and machines.[1]

The above is the opening line of Metal Gear Solid 4, one of the greatest pieces of virtual entertainment. It paints a grim picture of the future of warfare, replete with references to autonomous artificial intelligence (AI) overrunning defence systems. Given recent advancements, however, one has to wonder whether these portrayals were right.

Science fiction involving AI generally depicts a utopian or dystopian future, a plot point that writers exploit and exaggerate to no end. However, AI development has been under way for several decades, and the impact of early systems raises many questions about its full-scale integration into defence systems.

What could possibly go wrong?

In simple terms, if we fail to align the objectives of an AI system with our own, it could spell trouble for us. For machines, exercising sound judgment is still a significant challenge.

Recent advancements in robotic automation and autonomous weapon systems have taken military conflict to a whole new level. Unmanned helicopters and land vehicles are constantly being tested and upgraded, and the surgical precision with which these systems can perform military operations is unparalleled.

Emerging weapons built on deep learning systems can ‘correct’ mistakes and even learn from them, thereby maximising tactical efficiency. The security built into their design makes them near-impossible to hack and, in some cases, even to abort mid-operation. This could result in mass casualties in a situation that would otherwise have been controllable.

An obvious issue is that, in the wrong hands, an AI could have catastrophic consequences. Although present systems do not have much ‘independence’, growing levels of intelligence and autonomy make a malfunctioning AI with disastrous consequences a plausible scenario.

Who is accountable in case of a mistake?

Autonomous vehicles and weapon systems bring to the fore the issue of moral responsibility. The primary questions concern the delegation of lethal force to AI systems.

If an AI system carries out operations autonomously, what consequences will it face under criminal justice or the laws of war? A machine cannot be charged with a crime. How will accountability play out if a fully AI-integrated military operation goes awry?

Problems with commercialisation

Today’s wars are not fought entirely by a nation’s army. Private military and mercenary companies (PMCs) play an active role, supplementing armies, providing tactical support and much more. It will not be long before autonomous technologies are commercialised rather than restricted to government contracts.

There is no dearth of PMCs that would jump at the opportunity to grab a share of this technology. The very notion of private armies with commercial objectives wielding autonomous weapons is a dangerous one. Armed with an exceedingly efficient force, they could tip the balance of war in favour of the highest bidder.

The way forward

In September 1983, Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, was the duty officer at the command centre for the Oko nuclear early-warning system. The system reported a missile launch from the United States, followed by as many as five more. Petrov judged the reports to be a false alarm and declined to escalate them as a genuine attack. His decision is credited with having prevented a full-scale nuclear war.

Subsequent investigations revealed a fault in the satellite warning system. Petrov’s judgment in the face of unprecedented danger shows extraordinary presence of mind. Can we trust a robot or an autonomous weapon system to exercise such judgment and make such a split-second decision?

Stephen Hawking, Elon Musk and Bill Gates – some of the biggest names in science and technology – have expressed concern about the risks of superintelligent AI systems. A standing argument is that it is difficult to predict the future of AI by comparing it with technologies of the past, since we have never before created anything that can outsmart us.

Current systems raise comparatively narrow ethical issues, such as the decisions a self-driving car must make to avoid an accident, but complications will multiply as AI systems take on more and more human roles.

There is a heightened need for strict regulation of AI integration into weapon systems. Steps should also be taken to introduce a legal framework that keeps people accountable for AI operations and any faults that arise.

AI, as an industry, cannot be stopped. Some of these challenges may seem distant, some even far-fetched, but it is foreseeable that we will eventually face them; it would be wise to steer present-day research in an ethical direction so as to avoid potential disasters. A more likely scenario is one in which AI systems operate as team players rather than as independent agents.

Nick Bostrom and Eliezer Yudkowsky, in their paper The Ethics of Artificial Intelligence, sum up the AI conundrum well:

If we are serious about developing advanced AI, this is a challenge that we must meet. If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness.[2]

Image credit: AP Photo/Massoud Hossaini

[1] http://www.goodreads.com/quotes/478060-war-has-changed-it-s-no-longer-about-nations-ideologies-or

[2] https://intelligence.org/files/EthicsofAI.pdf

Further Readings:

https://intelligence.org/files/EthicsofAI.pdf

Ganesh Chakravarthi is the Web Editor of The Takshashila Institution and tweets at @crg_takshashila.
