The rapid advancement of artificial intelligence (AI) has seen the technology proliferate across sectors, from consulting and banking to manufacturing and healthcare. Generative AI in particular has become increasingly widespread and is set to be a game changer at every level, with far-reaching repercussions for democracy, elections and armed conflict. It is therefore critical to assess its impact on democratic principles and institutions, as well as on military defence.
AI companies worldwide are racing to develop and deploy the technology as quickly as possible and attain AI supremacy. However, alarm bells have been sounded over the need to ensure that adequate safety measures are put in place to protect consumers, existing democratic institutions and society at large, and to prevent the technology from one day posing an existential threat to humanity.
What will be the impact of generative AI on our political system and in shaping public opinion and discourse? How can we guarantee that AI is utilised responsibly in conflict situations? And what is the proper role of governments in regulating the technology?
These were some of the questions posed at a recent Tech Talk X organised by tech@INSEAD and conducted under the Chatham House Rule. Moderator Peter Zemsky, Deputy Dean and Dean of Innovation at INSEAD, was joined by panellists Antoine Bordes, vice president at Helsing, an AI and software company focused on defence; Judith Dada, general partner at venture capital fund La Famiglia; and Juri Schnöller, co-founder and managing director of digital political communications and campaigning firm Cosmonauts & Kings.
During the discussion, the speakers explored the intersection of AI and democracy, including the implications of AI for defence. They also proposed strategies to ensure that AI technologies foster, rather than undermine, democratic values amid the challenges posed by an increasingly volatile international security landscape.
Overview of the AI landscape
AI and big data play a key role in framing public discourse during elections, and the technology will undoubtedly affect the 2024 United States presidential race. The discussion kicked off with the panellists dissecting the evolution of how AI is used in the political realm.
Today, political parties and super PACs (political action committees, which raise and distribute campaign funds to their chosen candidates), especially in the US, are investing millions in developing and deploying AI models. These models allow them to dig deep into data points on individual voters to help facilitate more targeted campaign initiatives.
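To make the micro-targeting idea concrete, below is a minimal, purely illustrative sketch of how a campaign data team might cluster voters into segments for tailored outreach. All features and data here are synthetic and hypothetical; real campaigns draw on far richer (and more heavily regulated) voter files than this toy example suggests.

```python
# Illustrative sketch: clustering synthetic voter data into segments
# that a campaign could target with tailored messaging.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)

# Hypothetical per-voter features: age, income decile, historical
# turnout rate, and a 0-1 engagement score for a given policy issue.
n_voters = 1000
features = np.column_stack([
    rng.integers(18, 90, n_voters),   # age
    rng.integers(1, 11, n_voters),    # income decile
    rng.random(n_voters),             # historical turnout rate
    rng.random(n_voters),             # issue-engagement score
])

# Standardise the mixed-scale features, then group voters into
# five segments; each segment would receive different messaging.
scaled = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
segments = kmeans.fit_predict(scaled)

for seg in range(5):
    members = features[segments == seg]
    print(f"Segment {seg}: {len(members)} voters, "
          f"mean turnout {members[:, 2].mean():.2f}")
```

The point of the sketch is simply that once individual-level data points exist, grouping voters and differentiating messages is a few lines of standard machine learning, which is why the practice has scaled so quickly.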
In addition, bots and deepfakes are widely used to drive misinformation campaigns. As the technology becomes more sophisticated, it will become increasingly difficult for the average person to distinguish fabricated content from genuine, factual content.
Given the stresses that generative AI is putting on the political system, policymakers must play a key role in managing the technology appropriately. However, as this is a relatively new domain, the question is whether existing policymakers have the knowledge and frameworks needed to understand the technology and enact appropriate legislation around it.
The discussion then moved on to how AI is being used in military defence. In the war in Ukraine, for instance, many AI tools that also have commercial, civilian applications are being harnessed to strengthen defensive capabilities such as battlefield data analytics and drone operations. Indeed, the defence sector – and European companies in particular – saw record investment from venture capital firms in 2022, despite the wider slowdown in technology funding.
A tale of two regions
The panellists also touched on differences in the growth of AI between the US and Europe, and how European AI companies can catch up to their American counterparts. As one of the speakers pointed out, US companies have generally been a lot more strategic about investing in AI, leading to significant differences in value capture.
However, there seems to be a newfound sense of pride among European entrepreneurs who are eager to develop AI technology and shape the economic, political and regulatory landscape from a European viewpoint – one that prioritises and upholds democratic values. Generative AI, in particular, presents a big opportunity for European companies to ensure that models are trained on European data sets, thereby reflecting the region's cultural references and values in the output.
Establishing the right frameworks and regulations can nurture these seeds of progress. However, the challenge lies in designing AI regulations that promote the creation of economic value without putting consumers at risk. European Union lawmakers recently approved the AI Act, billed as the first comprehensive AI law from a major regulator. Although it has yet to enter into force, it will have major implications for the development of AI in the region.
While the panellists all agreed on the necessity of regulation, one speaker emphasised that it should not curtail AI development by start-ups or smaller companies in Europe. The concern was that such restrictions would indirectly benefit US-based Big Tech firms and similar start-ups in China. These hurdles could take the form of heavy reporting burdens, restrictions, paperwork and time lost as companies adapt to new legislation and ensure they are not running afoul of the law.
Ideally, these regulations will help mitigate consumer risks while also creating the conditions to build a flourishing European AI ecosystem. One of the panellists suggested that a multi-stakeholder approach to this complex issue could be more effective than leaving it in the hands of politicians.
Upholding democratic values
Much has been said about AI’s role in stoking populism and threatening the democratic process. One of the speakers framed democracy as a conversation that breaks down if it gets overwhelmed by bots and deepfakes. How, then, can we better protect the democratic process from the risks of AI, especially with crucial elections taking place in the US and Europe in the next few years?
As one of the panellists stressed, it will be crucial to have systems that verify AI-created content and clearly label it as AI-generated. As political parties build customised large language models to serve their interests, it may become necessary to mandate disclosure of the specific AI tools they use, for what purpose, and on what data their models are trained. This approach would be similar to the disclosures required for political funding.
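As a purely illustrative sketch of what such labelling could look like under the hood, the toy example below attaches a signed provenance record to AI-generated text so a platform can later verify that the label has not been stripped or altered. The scheme, key and function names are all hypothetical assumptions for illustration; real-world efforts rely on standards such as C2PA content credentials and model-level watermarking rather than a shared secret like this.

```python
# Toy sketch of labelling AI-generated content with a verifiable
# provenance record. Hypothetical scheme, not a production design.
import hmac
import hashlib
import json

SECRET_KEY = b"platform-signing-key"  # hypothetical platform key

def label_ai_content(text: str, model_name: str) -> dict:
    """Attach a provenance record and HMAC signature to AI-generated text."""
    record = {"text": text, "generator": model_name, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(record: dict) -> bool:
    """Check that neither the text nor its AI-generated label was altered."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

labelled = label_ai_content("Vote for candidate X!", "hypothetical-llm-v1")
print(verify_label(labelled))          # True: label intact
labelled["text"] = "Tampered message"
print(verify_label(labelled))          # False: content no longer matches label
```

The key property is that verification fails the moment either the text or its label is modified, which is what would make mandated labelling auditable in practice.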
Of course, there are many cases of the technology being used for good. As one panellist commented, some NGOs are leveraging AI to help stateless individuals by getting real-time information to them in their own language. The World Food Programme has also used AI to improve its ability to respond to emergencies caused by natural disasters.
Another panellist emphasised that this could potentially be the biggest technological shift humankind has ever seen. While politicians play an outsized part in shaping AI regulations, it is equally important for citizens to voice their opinions on how AI is developed, deployed and governed. This engagement is vital to ensure the preservation of democratic values in society.