Artificial Intelligence (AI) marks a technology revolution with profound long-term implications for humanity that will become increasingly apparent in 2024. Visions of the future in which AI permeates all aspects of social, economic, scientific, political and military activity are far closer to realisation than could have been imagined even in 2022. The democratisation of AI in particular, with the emergence of ChatGPT and other large language models (LLMs) in 2023, represents a step change in the role such technology will play in our collective future.
Discussions and predictions about the many positive and negative impacts of AI abound. Yet the pace of change in AI is so rapid that any assessment of its potential, and of the risks that may emerge, is liable to be quickly overtaken. Attempts to regulate the technology, build resilience and mitigate malign exploitation or unintended consequences are very likely to lag behind the technology itself. We foresee the next year as likely to be defined not by sudden, profound transformation, but by the disruption that invariably accompanies such transformative processes.
AI is enabling innovations to occur across different sectors. In healthcare, for example, AI is being used to enhance disease detection and drug development, address administrative inefficiencies and improve patient communication and training. AI-driven tools and applications promise to deliver similar efficiencies in other areas of life. This includes transforming the threat detection and early warning system capabilities of governments and global organisations, increasing preparedness against natural disasters, cyber attacks, disease outbreaks and other emergencies.
But these same technologies are also placing strains on political and societal resilience within countries, compounding global security and geopolitical risks. The propensity for political repression has increased globally, as AI expands the capacity of states to surveil, manipulate and exert control over their citizens and businesses. We have observed that a number of states – including Egypt, India, Russia and Turkiye – have followed a trajectory of being more repressive in this way in recent years.
Income inequality risks deepening as workers in sectors especially vulnerable to automation – such as finance and insurance, education, and health and social work – see their jobs displaced or lost. Job losses due to AI have already begun: some 4,000 US-based jobs were cut because of AI in May 2023 alone, and an estimated two-thirds of jobs in the US and Europe are exposed to some degree of automation. This raises the real near-term prospect that job losses, and the movement of more workers into lower-paying jobs, will act as a catalyst for political and social unrest.
Extremist, hate and fringe groups have already incorporated AI-generated deepfakes and misinformation campaigns into their propaganda and attack planning. This is likely to herald a major change in non-state actors’ capabilities to further their interests and make an impact. Content such as the AI-generated deepfake photograph of an explosion near the US Pentagon building in May 2023 – which triggered a 0.26% fall in the US stock market – is likely to become more common.
Malign actors who are already using automated disinformation to try to influence political and business outcomes around the world are also very likely to exploit AI to facilitate a growing variety of malicious activities. These include undermining the integrity of elections, influencing voter opinions and inciting violence. The resilience of democratic processes to malign interference will increasingly be put to the test over the coming year – particularly in Taiwan and the US, which have presidential elections due in January and November 2024 respectively.
All of these risks now seem reasonably foreseeable in the next few years. What is less clear is whether states and organisations will have adequate safeguards in place in that time frame. While governments in several countries are likely to continue positioning themselves as global leaders on AI – in part to reap the economic benefits – there appears to be less material action to minimise the occurrence and impact of such activity.
Urgent and effective efforts are therefore required to create national and international regulatory frameworks, at a speed that looks unlikely to be achievable in the prevailing political climate. If countermeasures and regulations fail to keep pace with AI-related advances, the gap between risks and resilience will certainly grow.
A Revolution in Human Affairs