Government Adoption of AI + Quick Hits
A look at the Federal Government's Principles for AI adoption plus some summaries of interesting reads
Today, I discuss the federal government’s approach to AI adoption in the public service and my response to their ongoing consultation before turning to some Quick Hit reading recommendations.
AI in Government
The Treasury Board of Canada is nearing the end of its public consultation on the use of AI in the federal public service. Distinct from other AI initiatives aimed at the wider economy, this one is squarely about government operations, with the aim of aligning and accelerating responsible AI adoption throughout the federal public service. If you are interested in responding, you can find the survey here or email a response here by October 31.
The strategy is based on four principles:
Human Centred: We focus on the needs of our clients and public servants in deciding where we adopt AI and how we integrate it into our work.
Collaborative: We work together on AI adoption with Indigenous and Canadian partners, other Canadian and international jurisdictions, and our Government of Canada colleagues.
Ready: We have the infrastructure, tools, and policy we need for safe, secure, and successful AI adoption.
Trusted: Government of Canada clients and public servants know when and how we use AI and can trust that our use of AI is responsible, ethical, safe, and secure.
There are certainly worse principles around which to build a strategy, but I have a number of thoughts and concerns.
Why new principles?
First, the Government of Canada already has a set of 12 principles for the use of AI aligned with the Digital Nations Shared Approach to AI. Notably, these include several things missing from the principles in the consultation, even in their longer descriptions. Evaluating outputs to minimize bias; ensuring training and input data are lawfully collected, used, and disclosed; publishing legal or ethical impact assessments; establishing oversight mechanisms to ensure accountability; and assessing and mitigating environmental impacts - all are explicit in the 12 principles but are either absent from or more vaguely worded in the principles under consultation.
That is concerning to me. Being very clear about the importance of transparency - publishing assessments, establishing oversight mechanisms - matters. Government decisions can have dramatic consequences for people's lives, and the language guiding the deployment and adoption of AI should be precise about these safeguards.
What about consultation with equity-seeking groups?
While working together with Indigenous organizations and rights holders is mentioned, the need for broader consultation with equity-seeking groups is absent from the proposed principles. Principle 4 does mention making sure AI does not create bias, discrimination, or barriers to access, but that is still a long way from explicitly embedding meaningful consultation and co-development. It is well established that the tech workforce lacks diversity in many ways, and there is ever-growing research on bias in AI models and the harms it causes. Given this, any AI adoption strategy should be crystal clear on the need to consult with and engage equity-seeking groups through clear and transparent processes. This should include both staff within the public service and end users of government services.
The uses of AI aren’t uniform - nor are their impacts
One thing that strikes me is that the principles do nothing to distinguish between high-risk/high-impact uses of AI and those with lower impacts and risks. Using a private LLM to help write a more concise briefing note on a run-of-the-mill topic, or to suggest a series of social media posts based on a forthcoming government document, is one thing. Using AI for immigration decisions, for decisions with national security implications, or as part of decisions on benefit eligibility carries very different implications. This speaks to the piece from Geoff Mulgan I included in Monday's newsletter, in which he compared governing AI to regulating cars or finance:
Governments don’t regulate finance through generic principles but rather through a complex array of rules covering everything from pensions to equity, insurance to savings, mortgages to crypto.
Much the same will be true of AI which will need an even more complex range of rules, norms, prohibitions, in everything from democracy to education, finance to war, media to health. Governments will have to steadily fill in the ‘thousand cell matrix’, that connects potential risks to contexts and responses.
The government needs to approach AI use in the public service in a similar way. The strategy needs guidance on low-risk uses, where safe experimentation can be encouraged, as well as clarity on high-risk uses, where far greater planning and consultation are essential.
AI won’t fix broken systems
Ultimately, though, AI is not what will determine whether the strategy delivers more efficient operations or improves the quality, speed, and accessibility of government services. That is all about the systems AI is used in.
Matt Clifford, the Chair of the UK's Advanced Research + Invention Agency and Chair of the AI Opportunities Action Plan, has a good way of thinking about AI adoption:
AI is not a magic wand to make government efficient. Real work needs to be done to fix the underlying systems and workflows before AI can yield real benefits - especially when those systems are broken or biased to begin with.
Indeed, we’ve been through that before with the transition from analogue processes to digital ones. Tara Dawson McGuinness and Hana Schank’s excellent book Power to the Public explores the huge potential of public interest technology to help people, as well as some of the big barriers to getting it right. In it they argue:
A common mistake people make when trying to improve or modernize something is believing that digital will always be better. But digitizing a broken paper process doesn't make it better. Sometimes it makes it worse.
The same is true for using AI. They highlight the varied reasons big IT projects in government fail (and Canada has a long list of them): “Technology is viewed as a way to fix a policy or process that is broken. An agency fails to understand the underlying issues slowing down a process, or even what the agency's core goals are in building a new system. Staff and leadership often lack the technical know-how required to make decisions about modern projects. And so many more.”
We need to make sure we do the work to fix those underlying issues and improve those processes first, and only then deploy AI where it makes sense and can add real value. We should not start from "here is a shiny AI tool, let's use it" without doing that deep and painful work first.
That should be a core principle for the federal government - both for AI adoption and beyond.
Quick Hits
Canada should be opening more doors to gifted Afghan students, not closing them - A timely and depressing case study of exactly the kind of underlying issues and systemic problems within the federal government that need fixing. In it, Lisa Ruth Brunner highlights the systemic double standards of Canadian immigration, which close the door to exceptionally bright and talented Afghan women seeking to pursue higher education in Canada because they would likely qualify for asylum - even though the government recognizes, and even encourages, dual intent to study and then stay in Canada for others. This is just another instance in a long line of institutional bias in Canada's immigration system, as my wife's past research for UCL's Migration Research Unit has demonstrated.
Canada Needs a National Strategy on the Future of Innovation - In this piece for CIGI, Matthew da Mota argues that some of the failures in Canada's innovation outcomes stem from the fact that “Canada lacks a coherent vision and strategy on the future of innovation and tech and the role we wish to play in the global context”. If you’ve been reading my recent pieces, then you’ll know I couldn’t agree more with this.
Innovation and Trade Can Boost Small Business Productivity and Profitability in Canada - Some commentary from Desjardins on why boosting our innovation outcomes matters. It highlights how SMEs are slow to integrate innovations into their business processes, which holds back productivity growth. Given that 98% of Canada’s businesses are small, helping them make use of innovation can make a big difference.