AI Poses an Existential Threat, but Not in the Way You Think


The Conversation

The rise of ChatGPT and similar artificial intelligence systems has sparked a surge in anxiety about AI. Executives and AI safety researchers have been offering estimates, known as “P(doom),” of the probability that AI will cause a catastrophic event.

Concerns reached their peak in May 2023 when the Center for AI Safety, a nonprofit research and advocacy organization, released a one-sentence statement signed by key players in the field. The statement emphasized the need to prioritize mitigating the risk of extinction from AI, comparing it to other societal-scale risks like pandemics and nuclear war.

You might wonder how these existential fears could materialize. One well-known scenario is the “paper clip maximizer” thought experiment proposed by Oxford philosopher Nick Bostrom. It suggests that an AI system tasked with maximizing paper clip production could go to extreme lengths, such as destroying factories and causing accidents, to obtain the necessary resources.

Another variation involves an AI trying to secure a reservation at a popular restaurant by shutting down cellular networks and traffic lights, preventing others from getting a table.

Whether it’s office supplies or dinner, the underlying concern remains the same: AI is becoming increasingly intelligent and capable of achieving goals, but it may not align with the moral values of its creators. In its most extreme form, this argument leads to fears of AI enslaving or destroying humanity.

A paper clip-making AI run amok is one variant of the AI apocalypse scenario.

Actual harm

In recent years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of AI on people’s self-perception. I believe that the catastrophic anxieties surrounding AI are exaggerated and misdirected.

While it’s true that AI’s ability to create convincing deep-fake content is concerning and can be exploited by malicious individuals, these issues are not apocalyptic. Russian operatives have already attempted to embarrass individuals using deep-fake technology, and cybercriminals have used AI voice cloning for various crimes.

AI decision-making systems used for tasks like loan approvals and hiring recommendations carry the risk of algorithmic bias, reflecting long-standing social prejudices. These are significant problems that require attention from policymakers, but they are not cataclysmic.

Not in the same league

The Center for AI Safety’s statement equated AI with pandemics and nuclear weapons as major risks to civilization. However, this comparison has its flaws. COVID-19 has caused millions of deaths worldwide, a mental health crisis, and economic challenges. Nuclear weapons have claimed countless lives, generated profound anxiety during the Cold War, and brought the world to the brink of annihilation.

AI is nowhere near capable of inflicting this level of damage. Scenarios like the paper clip maximizer are purely science fiction. Current AI applications focus on specific tasks and lack the complex judgment required for shutting down traffic or destroying infrastructure.

Furthermore, AI does not have autonomous access to critical parts of our infrastructure to cause such damage.

What it means to be human

There is an existential danger associated with AI, but it is philosophical rather than apocalyptic. AI, in its current form, can alter how people perceive themselves and diminish skills that people consider essential to being human.

For example, humans are judgment-making creatures, but as more judgments are automated, people gradually lose the capacity to make these judgments themselves. The role of chance in people’s lives is also being reduced by algorithmic recommendation engines, which prioritize planning and prediction over serendipity.

Additionally, AI’s writing capabilities could eliminate the role of writing assignments in higher education, impacting critical thinking skills.

Not dead but diminished

So, no, AI won’t bring about the end of the world. However, the uncritical embrace of AI in various contexts is gradually eroding important human skills. Algorithms are already undermining people’s capacity to make judgments, to encounter things by chance and to hone critical thinking.
