The UK's AI Opportunity?
A look at the UK's recently unveiled AI Opportunity Plan along with other quick hits
Today I explore the UK’s newly announced AI Opportunities Action Plan and some of the reactions to it. It is a document with a lot to commend, as well as one containing some major red flags. I also highlight a few quick hits that I think are worth your time. I hope you enjoy it.
Written by Matt Clifford, Chair of the UK's Advanced Research and Invention Agency, the AI Opportunities Action Plan is an ambitious document that sets out 50 uncosted recommendations for the UK, all of which the government adopted in full. It is broken down across three pillars: "Invest in the foundations of AI", "Push hard on cross-economy AI adoption", and "Position the UK to be an AI maker, not an AI taker". While wide-ranging in its goals, it is nonetheless grounded in a degree of realism, highlighting upfront the UK's tough fiscal situation and the need to build on areas of UK strength, such as AI for science and robotics, rather than attempt to be a leader in every type and application of AI.
In addition, the Plan takes a holistic view of AI needs. It calls for a 10-year investment commitment to AI infrastructure (Canada's, in contrast, is a 5-year plan) and recommends mission-focused AI Research Resource programme directors who will help allocate compute to "high-potential projects of national importance, operating in a way that is strategic and mission driven". That is a marked contrast with the Canadian Sovereign AI Compute Strategy, which so far offers no details on how the $300 million AI Compute Access Fund will be allocated.
The emphasis on unlocking public and private data assets is also interesting. The plan recommends creating a National Data Library as a strategic UK asset and a copyright-cleared British media asset training set that could be licensed internationally at scale.
Talent also receives considerable focus, with nine recommendations. These span the full range of talent, from nurturing pathways for high-level academic talent to increasing the supply of AI talent for the workforce, establishing a headhunting capability to bring elite AI leaders to the UK, increasing the diversity of the talent pool, and addressing the impact of AI on lifelong skills programming.
AI safety and adoption within government are also emphasized. Again, this is in contrast with how AI is being dealt with federally in Canada, where AI in government is a separate strategy led by the Treasury Board, and where AI safety and regulation seem quite disconnected from AI compute and adoption, despite both being led by ISED.
The Plan heavily emphasizes the benefits of AI and the need to scale rapidly when it comes to government adoption. Clifford recommends a "Scan > Pilot > Scale" approach, with the Department for Science, Innovation and Technology supporting public sector partners in "moving fast and learning things".
There is a lot of interest here and a lot of food for thought. Whether AI ever lives up to the full potential its boosters claim is an open question. But it is undoubtedly here and having a major impact, for good and ill. On the positive side, the fact that the 2024 Nobel Prize in Chemistry was awarded in part to Demis Hassabis and John Jumper for using AI to predict the complex structures of proteins highlights the immense potential of AI in supporting leading scientific research. On the negative, AI systems remain highly imperfect tools that often replicate the worst of us, not the best. One example is how the UK Home Office had to stop using an algorithm to sort visa applications after it was claimed to contain "entrenched racism and bias". At any rate, it is clear that AI and its use are major topics that need to be grappled with seriously.
Yet, while the AI Opportunities Action Plan takes a wide-ranging view, various commentators have pointed out a number of key gaps and red flags.
This thoughtful piece by Anuradha Sajjanhar takes the Plan to task for its emphasis on innovation over accountability. For example, when it comes to speeding up the construction of AI infrastructure, the Plan recommends creating AI Growth Zones that would include expedited planning permissions as a way to attract foreign AI investment. As Sajjanhar argues, these Zones “echo the deregulated Special Economic Zones seen in the Global South - spaces that often prioritize corporate profit over worker protections, environmental sustainability, and local welfare.” She continues, “By courting tech giants with regulatory leniency and infrastructure incentives, the UK risks ceding public accountability to private actors, effectively turning governance into a subsidiary of the tech industry.”
As for the use of AI in government, Sajjanhar points out some major issues with the approach recommended:
In the UK, where public services are already under strain, the adoption of AI risks compounding existing inequalities while shielding decision-makers from scrutiny. Civil servants, touted as the “human-in-the-loop” ensuring accountability, often lack the technical expertise or authority to challenge algorithmic decisions effectively.
[…]
Once empowered to exercise discretion and adapt policies to local contexts, civil servants are increasingly constrained by the rigid logic of algorithmic systems. This shift has profound implications for democratic accountability.
When it comes to regulation and data use, Gaia Marcus, the Director of the Ada Lovelace Institute, highlighted that there "will be no bigger roadblock to AI's transformative potential than a failure in public confidence". Requiring regulators to formally implement growth goals, potentially diluting their mandates to protect the public interest, could undermine that confidence, as could using public data in opaque ways.
Meanwhile, Rachel Coldicutt, one of the UK's sharpest tech commentators and practitioners, emphasized that "this isn't so much a technology plan or a delivery strategy but a shop window for investment". Given that the Labour government has made increasing growth its number one goal, and has zeroed in on a lack of business investment as the main brake on it, this plan can't but be considered in that light. The question ultimately is how to balance the imperative of growth, which, much like in Canada, desperately needs accelerating, with wider issues of public interest. Going back to Sajjanhar:
This isn't just a question of misplaced priorities; it's a telling snapshot of a government that seems to favor innovation at any cost over a balanced approach that includes accountability, equity, and protection for all. The future may indeed be built by the AI industry, but who ensures it’s one worth living in for everyone?
That is a trade-off we need to grapple with.
Quick Hits
The impact of artificial intelligence on macroeconomic productivity - Continuing the AI theme, this piece from Masayuki Morikawa looks at AI's impact on productivity. Drawing on a range of studies, Morikawa estimates a 0.5-0.6% boost to labour productivity at the macro level. That is far lower than some of the most ambitious estimates, though it would still mark a noticeable improvement, especially when Canada's labour productivity has actually been falling. This increase, though, is likely to come with a widening of overall labour market inequality in the near term.
Right Brain, Left Brain, AI Brain - Also on AI, this brand new report by the Dais's Vivian Li and Graham Dobbs looks at AI's impact on jobs and skill demands in Canada's workforce. They find that 27% of workers have high exposure and high complementarity to AI, while 29% have high exposure and low complementarity, placing them at higher risk of having their tasks automated without human involvement. They highlight the need to think about AI adoption at the task level and to deeply consider (and mitigate) the skills and talent impacts as adoption across the economy continues.
Labour and the challenge of coherence - Given my recent post on the importance of articulating a vision and engaging in positive storytelling, I thought this piece by my favourite UK political commentator, Sam Freedman, was worth highlighting. Freedman explores the frustration within the UK government that the "lack of direction from Starmer" is making it even harder to accomplish things. Starmer has explicitly tried to position himself as "unburdened by doctrine" but, as Freedman argues, a managerial approach is insufficient.
You can’t run a country without some kind of guiding philosophy. The prime minister doesn’t have the time to be involved in every decision, which makes having clearly defined principles extremely important. They allow those around the PM to make decisions on his behalf confident that he will support them. Without that there simply isn’t the capacity for the centre of government to function.
This doesn’t mean that Labour isn’t doing anything. The issue is that all the considerable activity going on across government lacks coherence because it’s overly dependent on the approach of each secretary of state. This leads to contradictions or unresolved disagreements that create further confusion, and also to the misallocation of resources due to a lack of central strategy.
This is worth thinking deeply about here as well as in the UK.