
The extent to which new rules seek to protect jobs will no doubt depend on the effectiveness of unions, other advocacy and influence groups, and left-leaning parties in pushing for such protections.

AI technologies and their applications do not operate in a regulatory vacuum. Existing labour laws, controls on the transfer of personal data and rules on commercial competition all apply. New rules and controls are likely to come into force through 2024. Canada, for example, is in the process of introducing an Artificial Intelligence and Data Act (AIDA), which the government says will help prevent harms such as fraud. The Dutch Data Protection Authority (DPA), a powerful regulator, began coordinating with other agencies on AI-related issues in early 2023.

The actions that governments and international bodies take (or fail to take) to regulate generative artificial intelligence (Gen AI) over the next five to ten years will be key in determining whether businesses and other organisations can harness the power of these technologies or succumb to the risks they create. In other words, regulation will determine resilience, but it also has the potential to intensify geopolitical fault lines. The nature of such regulation will also determine more prosaic questions, ranging from 'how safe are our people and our assets?' to 'which video surveillance provider can we use?'.

Regulatory Resilience

Following the AI Innovation Curve

[Chart: Most AI providers do not currently comply with the EU's draft AI Act. Stanford graded each model's compliance on a four-point scale across requirements of the draft Act, including data sources, data governance, copyrighted data, capabilities and limitations, risks and mitigations, machine-generated content, member states and downstream documentation. Source: Stanford Center for Research on Foundation Models]