AI is the new nuclear bomb, so proceed with caution

The current AI revolution is often compared to the industrial revolution, but a more fitting comparison is to the creation of the nuclear bomb, as the Center for AI Safety recently highlighted: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

While nuclear fission’s immense power was feared by its inventors (and rightly so), it is now humanity’s strongest hope for reducing global carbon dioxide emissions. Similarly, AI holds the potential to prevent 10 million annual deaths from cancer, rectify justice system failures affecting billions of people, and save numerous animal species from extinction. However, to prevent history from repeating itself and AI from spiralling out of control, resulting in disaster, business leaders need to collaborate with governments to establish much-needed regulations.

On 16th July 1945, when Robert Oppenheimer stood in New Mexico watching the first detonation of a nuclear bomb, he recalled the words, ‘Now I am become Death, the destroyer of worlds’. The father of the atomic bomb went on to express revulsion at President Truman’s decision to use the bomb on the Japanese cities of Hiroshima and Nagasaki. He reportedly felt personally responsible, claiming he had ‘blood on [his] hands’. In the present day, we have leading computer scientists and researchers, such as Eliezer Yudkowsky, calling on the powers-that-be to “shut it all down”. Geoffrey Hinton, the AI ‘godfather’, resigned from Google so he could speak freely about the technology’s risks. Mo Gawdat’s book ‘Scary Smart’ expresses equal concern about the enormous power of AI and our inability to control it. In March, Elon Musk, Steve Wozniak and other tech leaders signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, but it failed to make an impact. This has culminated in the statement by the Center for AI Safety cited above. In summary, these in-the-know tech tycoons, computer scientists and academics are concerned that once we create a thinking system smarter than all humans, we will lose control of human civilisation forever.

So, what should we make of these tortured inventors, especially as investment in AI is predicted to skyrocket from $100bn to $2tn by 2030? Will the investment world, founders and CEOs continue to ignore warnings in the same way Truman ignored Oppenheimer? Or will they recognise the great power they have created, look beyond the bottom line, and collaborate with governments to shape regulation?

Business leaders and policymakers need to engage in the regulatory conversation, and work towards controlling the explosion of AI so it brings more benefit than destruction

Global governance today is slow and fractured, but if it was possible for governments from the 1950s onwards to collaborate on regulating nuclear power, then surely we can do it again for AI. The recent warnings from academics against rapid AI expansion remind me of the Russell–Einstein Manifesto, published in 1955, warning against the dangers of nuclear war, which was followed by decades of disarmament campaigns and numerous political treaties creating nuclear-free zones across the world. The issue we have with AI is that we do not have decades to campaign and agree on political treaties. As such, I return to my plea for business leaders and policymakers to engage in the regulatory conversation, and work towards controlling the explosion of AI so it brings more benefit than destruction.

Shutting AI down entirely would also be a mistake – it offers humanity hope.

A once-in-human-history opportunity to revolutionise healthcare, justice systems and environmentalism, to name a few areas. In January this year, researchers at the University of Toronto combined AlphaFold, the AI system that predicts protein structures across the whole human genome, with an end-to-end AI-powered drug discovery engine, and identified a potential treatment for a type of liver cancer in just 30 days. Considering this is a field where it takes an average of 17 years to test and approve a drug, this type of time frame is ground-breaking. In addition to drug discovery and development, AI has the potential to improve patient research and diagnosis, refine personalised treatments, enhance monitoring and telehealth, and streamline administrative healthcare services. Just think about how much data there is in the UK’s NHS, and what insights could be found if AI linked it all together to uncover correlations otherwise lost.

Let’s turn from the NHS to the UK’s other great institution, its justice system. The current backlog of cases for the Crown Court is just over 60,000 and continues to grow steadily, while the Magistrates’ Court backlog has risen to over 300,000. Introducing AI would give valuable time back to barristers stuck manually reviewing and processing hundreds of documents per case, by offering brief yet insightful synopses. In addition, AI-powered tools can provide predictions on verdicts and their associated risks, and in doing so help reduce human bias in court judgements. As Jonathan Levav and colleagues found in 2011, judges are more likely to grant parole just after a break or lunch, and the probability of a favourable ruling falls from around 65% towards zero the longer the judge has been in session. Introducing AI to assist in courtrooms and with judicial rulings could therefore make judgements more consistent as well as faster.

From drug discovery and development to improvements to the justice system, AI use cases are clear

Lastly, many of us are familiar with UK ‘royalty’ David Attenborough, who has dedicated his life to highlighting the complexity of the natural world, as well as humanity’s continued destruction of it. Granted, the task often feels so overwhelming that we don’t know where to start. Yet AI provides a great opportunity to simplify highly complex systems and quickly understand the problems we’re creating. For instance, in the case of protecting endangered species, drones equipped with AI-powered computer vision algorithms can survey vast areas and identify specific species based on their visual characteristics and sound. This enables conservationists to detect illegal activity as well as monitor habitat conditions over time. AI-powered ecological modelling can simulate the dynamics of ecosystems and species populations to predict potential problems, and as such provide us with the chance to start implementing preventative measures well in advance.

So overall, despite the dangers of AI, we need to carefully embrace the opportunities that it offers. In the absence of legal regulation, it falls to business leaders to drive the regulatory discussion and work with governments to establish reasonable measures and restrictions, at speed.

Done correctly, the future could look very bright indeed.

By Elixirr consultants

“We help senior business leaders turn ideas into actions. Of course, it’s execution that determines success; that’s why we also make change happen, treating our clients’ business like our own.

Our people make our firm. And while our team expands across the globe, we continue to attract the best talent in the industry, building a team of high performing, like-minded individuals who share our vision of building the best consulting firm in the world.

With the launch of our ESPP scheme in 2021, we gave our entire team the opportunity to be part owners of Elixirr — and with a 74% enrolment rate for 2022, entrepreneurialism has never been more embedded into our business.”

 

