Will Artificial Intelligence Ever Be a Threat to Humankind?

A couple of weeks ago, I wrote an essay for a competition on whether AI is a threat. Here is the essay, which earned me a 20% scholarship.

Hrushikesh Emkay
3 min read · Mar 29, 2022

Films such as The Terminator and ideas such as Roko’s Basilisk have struck our collective consciousness and made us wonder whether there will come a point in time when AI is a threat to humankind.

But first, we must define AI and threat. AI “refers to systems that display intelligent behaviour by analysing their environment and taking actions — with some degree of autonomy — to achieve specific goals.” A threat can be defined as any danger posed by AI to humankind. By this definition, we have two types of threats:

  1. A labour threat, where the labour force is replaced by robots possessing AI
  2. An existential threat, where humanity faces extinction or slavery under AI

We shall explore both threats and the possibility of them materialising.

First, we have the labour threat. This threat is already underway, with low-skilled and repetitive jobs being redistributed to AI. The World Economic Forum estimates that 85 million jobs will be displaced by automation by 2025, but that 97 million new jobs will be created in the process. As hopeful as that sounds, we must not forget that these new jobs will be highly skilled and will require reskilling and upskilling for a large chunk of the labour force. Not everyone can be reskilled, and reskilling is an expensive and time-consuming process. This puts the portion of the working population that is too old to be reskilled at risk of unemployment. For the younger generations, however, who can be reskilled more easily, AI poses no threat and instead leads to long-term job growth.

An existential threat, on the other hand, is likely to emerge once the technological singularity takes place: the point at which machine intelligence exceeds human cognition.

AI has been developing rapidly, with systems like GPT-3 and AlphaGo. A rule of thumb for measuring this progress is to track how quickly CAPTCHAs have had to become harder: the steeper that curve, the more sophisticated AI has become. At the current rate of development, experts predict that we could reach the singularity by the turn of the century.

Yet the root of this question lies in whether the AI will consider humankind a threat to its survival. If it does not, humans will be to AI what ants are to us: we do not care about them and feel no remorse at their deaths unless a situation affects both parties. However, if AI views us as a threat, it will have every incentive to wipe us out simply out of self-preservation. In that case, AI will indeed be a threat to humankind.

Apart from the singularity, there is always the threat of AI being used against us. Public profiling, polarising social media algorithms and autonomous drone strikes are all examples. Though indirect, such weaponisation can have dangerous consequences and poses a threat of its own.

Such progress has alerted the European Union, which has proposed regulating AI development to make it more ethical. What we need to understand is that AI is a technology, and how it is used determines whether it is a threat.
