
Smart Watches and Privacy Concerns

By Ganesh Chakravarthi (@crg_takshashila)

A buzz on my wrist wakes me. I open my smartphone and watch my smart band synchronise my sleep data. Over my first cup of coffee and breakfast, the smart band takes readings. When I go for a run, it goes into overdrive. My ride to work takes me through peak-hour traffic, and my bike manoeuvres spike my adrenaline.

A few weeks of the same routine and I observe the subtle changes my body has undergone. All the data points that allow me to alter my lifestyle are recorded on my watch – on the cloud, to be precise. My smart band’s readings give me a fairly good idea of where things stand. I can access this data whenever I want, monitor my eating and update it on my smartphone. The question arises: am I the only one seeing this data? Do I have a say if someone wants to pry into it or sell it?

Wearable technology has altered the way we interface with the world. With increasing demand, it is important for consumers to be aware of potential security and privacy breaches. With vague regulations and lack of enforcement, data gathered by smart wearables can be used without the consumer’s knowledge, and it wouldn’t even be illegal.

The issue is considerably more serious because consumer-grade wearables currently receive little to no patching. The devices interface with smartphones, but they come with their own operating systems and applications. Although some smartphone antivirus programs pair with a smartwatch, the lack of timely updates or built-in security features increases their vulnerability.

Poorly managed data can be exploited by third parties and sold to unscrupulous corporations for gross misuse. The lack of strong encryption on wearables, and on data in transit before synchronisation, leaves the data vulnerable to hacks. Companies would be willing to pay a fortune to get their hands on such personalised inputs.

There is also the issue of continuity, even with a company that chooses to comply with privacy regulations. Say you choose to share your data with a manufacturer; there is no guarantee that the company will still exist a few years from now. What happens to the data if the company goes bankrupt? What happens to it if the company is bought by a bigger corporation? The rules of the parent company could allow it to use this data at its own discretion. A new law could likewise open up access to data that you originally chose to share willingly.

As wearables slowly enter corporate networks, they bring with them a slew of cybersecurity challenges. At a time when companies are auctioning collected data, how can anyone prevent them from redistributing it? Will consumers retain any right to restrict access to their own private information?

Part of the problem can be attributed to stiff competition in the wearables market. Everyone wants to roll their products out first, which pushes manufacturers to cut back on data security in favour of a faster rollout. Rising demand prompts new editions almost every quarter, a cycle in which older devices stop receiving upgrades.

Companies have a potential cop-out in data breach insurance; however, insurers have begun to resist this in recent times. Consider the case of Columbia Casualty, the first insurance company to challenge liability after its client, Cottage Health System, suffered a data breach that released confidential patient information onto the internet. The insurer paid about $4 million to settle the claim but has since filed to recoup the funds, citing misrepresentation of security controls.

Cases like these show that insurers are waking up to the problems of bad data management and are shielding themselves from the resulting liabilities.

A significant part of the data security debate with wearables is whether manufacturers should regulate the flow of data themselves or whether there should be government intervention.

Consumers should be able to understand the risks they are exposed to for the mere benefit of wearing a trendy electronic accessory. So far there have been no major public data breaches, which has resulted in very little public discussion. Yet some corporations will find that personal fitness and health data is far more valuable than credit card and payment information.

Security solutions for wearables are still in their infancy. For now, most wearables are left to self-regulatory practices, which may ensure only the bare minimum of compliance with privacy norms. There is a heightened need to put regulations in place, whether through industry self-regulation, government intervention, or a combination of the two. Until then, privacy and data security will remain an inherent risk.

Ganesh Chakravarthi is the Web Editor at Takshashila and tweets at (@crg_takshashila)


Of Ethics and Artificial Intelligence


War has changed. It is no longer about nations, ideologies and ethnicities. It is an endless series of proxy battles fought by man and machine.[1]

The above is the opening line of Metal Gear Solid 4, one of the greatest pieces of virtual entertainment. It paints a grim picture of the future of warfare, replete with references to autonomous artificial intelligence (AI) overrunning defence systems. Given recent advancements, however, one has to wonder if these portrayals were right.

Science fiction involving AI generally depicts a utopian or dystopian future, a plot point that writers exploit and exaggerate to no end. However, AI development has been ongoing for several decades, and the impact of early systems raises many questions about its full-scale integration into defence systems.

What could possibly go wrong?

In simple terms, if we fail to align the objectives of an AI system with our own, it could spell trouble for us. For machines, exercising firm judgment is still a significant challenge.

Recent advancements in robotic automation and autonomous weapon systems have brought military conflict to a whole new level. Unmanned helicopters and land vehicles are constantly being tested and upgraded. The surgical precision with which these automated systems can perform military operations is unparalleled.

Emerging weapons technology with deep learning systems can ‘correct’ mistakes and even learn from them, thereby maximising tactical efficiency. The high degree of security in their design makes them near-impossible to hack, and in some cases even to ‘abort’ mid-operation. This could result in mass casualties in an otherwise controllable situation.

An obvious issue is that in the wrong hands, an AI could have catastrophic consequences. Although present systems do not have much ‘independence’, growing levels of intelligence and autonomy make a malfunctioning AI with disastrous consequences a plausible scenario.

Who is accountable in case of a mistake?

Autonomous vehicles and weapon systems bring forth the issue of moral responsibility. Primary questions concern delegating the use of lethal force to AI systems.

If an AI system carries out operations autonomously, what consequences will it face in terms of criminal justice or war crimes? As machines, such systems cannot be charged with a crime. How will accountability play out if a fully AI-integrated military operation goes awry?

Problems with commercialisation

Today’s wars are not fought entirely by a nation’s army. Private military/mercenary companies (PMCs) play an active role in wars, supplementing armies, providing tactical support and much more. It won’t be long before autonomous technologies are commercialised and no longer restricted to government contracts.

There is no dearth of PMCs that would jump at the opportunity to grab a share of this technology. The very notion of private armies with commercial objectives wielding autonomous weapons is a dangerous one. Armed with an exceedingly efficient force, they could tip the balance of war in favour of the highest bidder.

The way forward

In September 1983, Stanislav Petrov, a Lieutenant Colonel in the Soviet Air Defence Forces, was the duty officer at the command centre for the Oko nuclear early-warning system. The system reported a missile launch from the United States, followed by as many as five more. Petrov judged them to be a false alarm and did not retaliate, a decision credited with having prevented a full-scale nuclear war.

Subsequent investigations revealed a fault in the satellite warning system. Petrov’s judgment in the face of unprecedented danger shows extreme presence of mind. Can we trust a robot or an autonomous weapon system to exercise such judgment and take such a split-second decision?

Stephen Hawking, Elon Musk and Bill Gates – some of the biggest names in science and technology – have expressed concern about the risks of superintelligent AI systems. A standing argument is that it is difficult to predict the future of AI by comparison with technologies of the past, since we have never created anything that can outsmart us.

Although current systems pose relatively contained ethical questions, such as the decisions self-driving cars must take to prevent accidents, complications could multiply as AI systems come to supplement human roles.

There is a heightened need to introduce strict regulations on AI integration with weapon systems. Steps should also be taken to introduce a legal framework which keeps people accountable for AI operations and any potential faults.

AI, as an industry, cannot be stopped. Some challenges may seem visionary, some even far-fetched; however, it is foreseeable that we will eventually encounter them. It would be wise to direct present-day research in an ethical direction so as to avoid potential disasters. A probable scenario is one in which AI systems operate as team-players rather than as independent systems.

Nick Bostrom, in the paper titled The Ethics of Artificial Intelligence, sums up the AI conundrum well:

If we are serious about developing advanced AI, this is a challenge that we must meet. If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness.[2]

Image credit: AP Photo/Massoud Hossaini

[1] http://www.goodreads.com/quotes/478060-war-has-changed-it-s-no-longer-about-nations-ideologies-or

[2] https://intelligence.org/files/EthicsofAI.pdf


Ganesh Chakravarthi is the Web Editor of The Takshashila Institution and tweets at @crg_takshashila.


Nuclear War: Is Our Complacency Misplaced?

By Ganesh Chakravarthi (@crg_takshashila)

The Cold War taught us many things. It compelled nations to judge every action against its potential worldwide consequences. Most importantly, it taught us that nuclear arms should never be taken lightly.

With the fall of the Iron Curtain, the whole world breathed a sigh of relief. However, neither the end of the Cold War nor the Nuclear Non-Proliferation Treaty has stopped nations from developing new nuclear weapon systems. With countries increasing their nuclear arsenals and non-proliferation talks faltering, one has to wonder whether a sense of complacency now permeates the global nuclear scenario.

The Stockholm International Peace Research Institute (SIPRI), an independent international institute dedicated to research on conflict, armaments, arms control and disarmament, has found that there are more than 15,000 nuclear weapons in the world and that about 1,800 of them are kept in a state of ‘high operational alert’ at all times. SIPRI further states that all nations with nuclear capabilities are developing new technologies or upgrading their current nuclear weapon systems. This raises the question of what relevance a traditional treaty like the Nuclear Non-Proliferation Treaty holds in the current global order.

No nation seems to be heading towards disarmament. The rise of Asian powers, the tensions between India and Pakistan, and China’s advancing nuclear arsenal are all pressing concerns. There is also growing discontent in the Middle East, where Israel is already a nuclear power and Iran is suspected of being on the road to becoming one. The situation is compounded by Saudi Arabia, Egypt and Turkey vying for political supremacy in the region, which has resulted in a dangerous balance of power in the Middle East.

The Cold War created a bipolar situation between two superpowers, the U.S. and the U.S.S.R., pitting their arsenals against one another. This concept of duality has since been transplanted onto other players in the game: India-Pakistan, Iran-Israel and so on. The question is: is this bipolar approach still relevant in a post-Cold War era?

The time has come to move beyond this bipolar framework and consider analytical models with more stakeholders. This may be essential given the threat of a nuclear Armageddon in a world that is becoming ever more interdependent. Although the concept of a ‘world state’ seems far away, there is a pressing need to develop more effective measures for cooperative security to ensure nuclear safety. Disarmament is central to the entire process, while security cooperation and arms control are categorical imperatives.

Given the failing non-proliferation talks, the world needs to look at potential new treaties that take emerging nuclear powers into account, give non-nuclear nations a say in the process, and let them take part in codifying nuclear disarmament norms.

A number of countries across the world have divested from producers of landmines and cluster munitions. A potential road to disarmament could be similar divestment from the production of nuclear weapon components. The Norwegian and New Zealand government pension funds have already implemented such schemes, and the Swiss War Materials Act was recently revised to prohibit the financing of nuclear weapon producers.

The stigmatisation of nuclear weapons and the withdrawal of the large financial streams tied to their production could push several countries towards disarmament. All this underlines the need for a democratisation of the disarmament process, which has not yet happened.

The Cold War saw the world almost resign itself to the inevitability of a nuclear Armageddon. It is up to us now to ensure that the world is not as helpless as it once was.

Ganesh Chakravarthi is the Web Editor at The Takshashila Institution and tweets at (@crg_takshashila)
