Orbit Policy's Deep Dives
Friday Reading Roundup #10

Canada's federal AI hype & more.

Tom Goldsmith
Jun 20, 2025
Happy Friday! Today, I cover how the federal government seems to have fully bought into AI hype and how that isn’t what we need from them right now.

Then, for paying subscribers, I highlight some more reads, including on digital trade and the value of marketing and sales executives for Canadian tech companies. I also include the most cathartic, expletive-filled takedown of ICE and the fascist clowns who are in it and direct it. I’m covering all the bases today!


AI Hype

Photo by Verena Yunita Yapi on Unsplash

Ottawa’s AI guy - Mickey Djuric, Mike Blanchfield, and Nick Taylor-Vaisey, Politico

The Simple Macroeconomics of AI - Daron Acemoglu, Shaping the Future of Work

Your brain on AI - Joe Castaldo, The Globe and Mail

Teachers Are Not OK - Jason Koebler, 404 Media

In his first major speech, our new federal Minister for AI indicated that the government has fully bought into AI hype. Speaking at a Canada 2020 event, Evan Solomon described AI as a “key to our economic destiny.” Furthermore, Solomon said it would be “an existential threat to our future” if we fell behind in the AI race.

I rather balk at this. This is a classic example of what Lee Vinsel and Andrew L. Russell call “innovation-speak.”

Innovation-speak is fundamentally dishonest. While it is often cast in terms of optimism, talking of opportunity and creativity and a boundless future, it is in fact the rhetoric of fear. It plays on our worry that we will be left behind: Our nation will not be able to compete in the global economy; our businesses will be disrupted; our children will fail to find good jobs because they don't know how to code. Andy Grove, the founder of Intel, made this feeling explicit in the title of his 1996 book Only the Paranoid Survive. Innovation-speak is a dialect of perpetual worry.

While Solomon breathlessly discusses the existential threat we face, the reality of AI is more complicated (as everything always is). Move past the hype machine stoked by those selling AI, and the estimates of economic impact are less glowing, and the wider harms far more stark.

On the first front, Daron Acemoglu estimated that AI would lead to an increase of less than 0.53% in total factor productivity over 10 years. This is a long way from the trillions of dollars and huge productivity gains that are often touted.

Furthermore, Acemoglu also includes the potential negative impacts of AI in his calculations and estimates that the net effect will be negative on welfare and social value.

We can see how some of this plays out in generative AI’s impact on critical thinking skills, which Castaldo covers in his Globe and Mail piece. While there are cases where people use it constructively, there are plenty where they outsource their thinking to an algorithm. As Carleton University professor Frances Woolley says in that article, “when students delegate their work to AI, their skills atrophy.”

Jason Koebler’s article for 404 Media is even starker on this. It explores the experiences of school and university teachers in this new age of AI in education.

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

Robert W. Gehl, the Ontario Research Chair of Digital Governance for Social Justice at York University, argues in that piece that “generative AI is incredibly destructive to our teaching of university students.” For him, “We need to rethink higher ed, grading, the whole thing.”

We need our government to tackle questions like that.

AI, in all its various forms, is here for good and for bad.

Now, we need governments (federal and provincial) to engage in the deep, complex work of understanding AI’s positive and negative impacts and ensuring that the actual outcomes for Canadians align with our values.

What does higher education look like in that context? What does our media ecosystem look like in an age of AI slop and content farms? What happens to our youth when entry-level jobs are being replaced by AI?

What we don’t need is for government ministers to act as just another AI booster.


Other Reads
