Archive | Musings

Of Ethics and Artificial Intelligence


War has changed. It's no longer about nations, ideologies, or ethnicity. It's an endless series of proxy battles, fought by mercenaries and machines.[1]

The above is the opening line of Metal Gear Solid 4, one of the greatest pieces of virtual entertainment. It paints a grim picture of the future of warfare, replete with references to autonomous artificial intelligence (AI) overrunning defence systems. Given recent advancements, however, one has to wonder whether these portrayals were right.

Science fiction involving AI generally depicts a utopian or dystopian future, a plot point that writers exploit and exaggerate to no end. However, AI application development has been ongoing for several decades, and the impact of early systems raises many questions about its full-scale integration into defence systems.

What could possibly go wrong?

In simple terms, if we fail to align the objectives of an AI system with our own, it could spell trouble for us. For machines, exercising sound judgment is still a significant challenge.

Recent advancements in robotic automation and autonomous weapon systems have brought military conflict to a whole new level. Unmanned helicopters and land vehicles are constantly being tested and upgraded. The surgical precision with which these automations can perform military operations is unparalleled.

Emerging weapons technology with deep learning systems can ‘correct’ mistakes and even learn from them, thereby maximising tactical efficiency. The high degree of security in their design makes them near-impossible to hack and, in some cases, even to ‘abort’ an operation. This could result in mass casualties despite an otherwise controllable situation.

An obvious issue is that in the wrong hands, an AI could have catastrophic consequences. Although present systems do not have much ‘independence’, growing levels of intelligence and autonomy indicate that a malfunctioning AI with disastrous consequences is a plausible scenario.

Who is accountable in case of a mistake?

Autonomous vehicles and weapon systems bring forth the issue of moral responsibility. Primary questions concern delegating the use of lethal force to AI systems.

If an AI system carries out operations autonomously, what consequences will it face in terms of criminal justice or war crimes? As machines, such systems cannot be charged with a crime. How will accountability play out if a fully AI-integrated military operation goes awry?

Problems with commercialisation

Today’s wars are not fought entirely by a nation’s army. Private military/mercenary companies (PMCs) play an active role in wars, supplementing armies, providing tactical support and much more. It will not be long before autonomous technologies are commercialised rather than restricted to government contracts.

There is no dearth of PMCs that would jump at the opportunity to grab a share of this technology. The very notion of private armies with commercial objectives wielding such automation is a dangerous one. Armed with an exceedingly efficient force, they could tip the balance of war in favour of the highest bidder.

The way forward

In September 1983, Stanislav Petrov, a Lieutenant Colonel in the Soviet Air Defence Forces, was the duty officer at the command centre for the Oko nuclear early-warning system. The system reported a missile launch from the United States, followed by as many as five more. Petrov judged them to be a false alarm and did not order retaliation. This decision is credited with having prevented a full-scale nuclear war.

Subsequent investigations revealed a fault in the satellite warning systems. Petrov’s judgment in the face of unprecedented danger shows extreme presence of mind. Can we trust a robot or an autonomous weapon system to exercise such judgment and take such a split-second decision?

Stephen Hawking, Elon Musk and Bill Gates – some of the biggest names in science and technology – have expressed concern about the risks of superintelligent AI systems. A standing argument is that it is difficult to predict the future of AI by comparing it with technologies of the past, since we have never created anything that can outsmart us.

Although current systems pose narrower ethical questions, such as the decisions self-driving cars must make in accident prevention, complications could arise as AI systems increasingly supplement human roles.

There is a heightened need for strict regulations on AI integration with weapon systems. Steps should also be taken to introduce a legal framework that holds people accountable for AI operations and any potential faults.

AI, as an industry, cannot be stopped. Some challenges may seem visionary, some even far-fetched; however, it is foreseeable that we will eventually encounter them. It would be wise to direct present-day research in an ethical direction so as to avoid potential disasters. A probable scenario is one where AI systems operate as team players rather than as independent systems.

Nick Bostrom and Eliezer Yudkowsky, in their paper The Ethics of Artificial Intelligence, sum up the AI conundrum really well:

If we are serious about developing advanced AI, this is a challenge that we must meet. If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness.[2]

Image credit: AP Photo/Massoud Hossaini

[1] http://www.goodreads.com/quotes/478060-war-has-changed-it-s-no-longer-about-nations-ideologies-or

[2] https://intelligence.org/files/EthicsofAI.pdf


Ganesh Chakravarthi is the Web Editor of The Takshashila Institution and tweets at @crg_takshashila.


The disappearance of the middle ground

By Anupam Manur (@anupammanur)

The end result of an acrid political climate, as witnessed in the US and India, could be one of highly populated extremes and a disappearing middle-ground.


Dear America,

Allow me the liberty of predicting what will happen over the next few years. This is not another fear-mongering, doomsday-scenario-painting exercise about the potential consequences of a Trump presidency. I’ll leave that to the experts, the same experts who have gotten all their predictions wrong until now. You are in a lot of trouble, not because of what Trump will or will not do, but because of the way you will react to his every move.

If you thought the election campaign trail saw the height of polarisation, bigotry and racism in your society, then you have another thing coming. Things are only going to get more divisive from now on. There will be an exponential increase in nationalistic fervour. Public discourse will worsen over the next few years to the point that sensible people will retire from it out of sheer frustration and saturation. This is the adverse selection problem in public discourse: if there is a higher proportion of lemons in the market, and the average consumer cannot differentiate between a lemon and a peach, the peaches get crowded out.

Every move by your next President will receive disproportionate attention and reactions. Yes, in a democracy, the citizens have to provide the vigil, but this will take an extreme turn, and perhaps a turn for the worse. The vigil will turn into an obsession, which will saturate public attention. The supporters and detractors will fight out every move, not based on its merits or demerits, but based on the position they took on the day of the election. Supporters will cheer every move and defend it with all their might, irrespective of whether there is any merit to it. Even terrible moves that might actually harm these very stakeholders will find staunch supporters. The supporters might even be willing to endure the negative effects in order to defend their position.

Detractors, on the other hand, will assume that it is their moral obligation to oppose everything. Let us assume that Trump does something reasonable in his tenure that could be welfare-enhancing for Americans, like perhaps fixing the fragile Obamacare. Regardless, the detractors will vilify him, make highly polemical arguments, and go to great lengths to find faults, instead of holding nuanced debates on how it can be improved. Reasonability and sensibility will disappear from public discourse, and so will balanced objectivity. The residue will be a highly charged, hyper-partisan platform for dogmatic exchanges. To make things worse, your political representatives will also be highly divided, and it would be reasonable to expect the Congress and the Senate to be in continuous gridlock for the next few years. Sure, some legislation may get passed, but most of it will have to endure an extremely rough path.

This black hole of negativity will suck in everything in its sight. Previously sane commentators will start taking positions and will stick to them, even in the face of contradictory evidence. Very few will be exempt from this. The middle ground will rapidly vanish and the extremes will start getting populated. There is perhaps some merit in apathy and indecisiveness among citizens, but the time for that has gone. Everyone has a strong opinion and of course, it is the right opinion. Nor will the media houses be spared from the hyper-partisan discourse. An independent and impartial media will be left wanting.

I speak from experience. This is what has happened to public discourse in India since the elections in 2014. I am not trying to draw parallels between our two elected representatives, nor between our political parties or governments. There is just an overwhelming similarity in the acrid political climate of our countries, and the end result could be one of highly populated extremes and a disappearing middle ground.

Anupam Manur is a Policy Analyst at the Takshashila Institution

 
