With the AI whitepaper recently published, the UK has set out its approach to regulating the use of this technology. This follows the establishment of the AI Council, which published the AI roadmap in 2021.
The timing is good, as the government has had the advantage of seeing the regulatory approaches others are taking, particularly the EU’s AI Act, which leans more heavily into an ethical framework. The general public is awash with stories about ChatGPT, and everyone seems to be talking about how AI will transform the future. Also noteworthy is an open letter on AI signed by experts and backed by Elon Musk calling for a six-month pause to AI experiments due to “risks to society and humanity”.
In positioning itself within this global environment, the UK, as in other regulatory matters, especially over tech, is carving out a slightly different path to spur innovation and boost its competitiveness.
While an overview of what’s in the whitepaper is readily available, it’s interesting to see the tech industry’s reaction, which points out a few key takeaways beyond the headlines and reveals the likely direction of travel and future discussion topics. There are still many unanswered questions and more to be uncovered as experts digest these proposals further. From what we know now, my top three areas to watch are:
- Level of regulation: “Light touch” and “flexible” are the words the sector is using to describe the UK’s approach and the lack of a “single rule book”. While a few voices warn this could be confusing and contribute to a lack of clarity that holds back innovation, many in the industry have so far welcomed it as proportionate for a developing technology. It is worth noting that the method of regulation proposed is sector-specific, with the whitepaper suggesting an overseer to join up the dots. One area likely to see future debate is whether this approach is workable or whether it will leave too many potential gaps, allowing societal harm to slip through.
- Regulatory sandboxes: As previewed in the Budget, the whitepaper moves further towards establishing safe test beds, and my industry contacts are excited about this approach. It could be useful for identifying opportunities to democratise the development and use of AI. Yet as experts have been warning for years, a key risk of AI is bias, so any solutions developed will need to tackle this to be truly effective.
- International standards: Multinational companies naturally want aligned rules, and there is competition over who will innovate fastest and best, and who will set the standard for the rest of the world or region. Analyst Matt Howett has put together a comparative chart of how the UK’s approach stacks up against other countries, which is worth examining. The approach of guiding international standards isn’t new, and the UK has already invested in AI standards setting. How this develops will be one to watch, as it also has the potential to provide a model for other emerging technologies.
Revealing the limits of popular AI apps, when I asked ChatGPT if it could write this blog for me, it turned out not to have access to real-time news, so the best it could offer was a generic prediction: that the industry will support the government’s initiative as a growth opportunity while staying wary of regulation that could create barriers to entry. Roughly speaking, that is what we’re seeing play out, though the devil will be in the details for companies looking to harness AI.
What this whitepaper does is kick off the government’s engagement with industry. A timeline of next steps has been published and a formal consultation is open; any company for which this is relevant would do well to engage now.