The extent to which new rules seek to protect jobs will depend in no small part on the effectiveness of unions, other advocacy and lobbying groups, and left-leaning parties in pressing for such protections.
AI technologies and their applications do not operate in a regulatory vacuum. Existing labour laws, controls on the transfer of personal data and rules on commercial competition all apply. New rules and controls are likely to come into force through 2024. Canada, for example, is in the process of introducing an Artificial Intelligence and Data Act (AIDA), which the government says will prevent fraud. The powerful Dutch Data Protection Authority (DPA) began coordinating with other agencies on AI-related issues in early 2023.
The actions that governments and international bodies take (or fail to take) to regulate generative artificial intelligence (Gen AI) over the next five to ten years will largely determine whether businesses and other organisations can harness the power of these technologies or succumb to the risks they create. In other words, regulation will determine resilience; it also has the potential to intensify geopolitical fault lines. The shape of that regulation will likewise settle more prosaic questions, ranging from 'how safe are our people and our assets?' to 'which video-surveillance provider can we use?'.