Orbit Policy's Deep Dives


How I'm Thinking About and Using AI in My Work

Some reflections

Tom Goldsmith
Sep 11, 2025

Happy Thursday! My daughter was home from daycare yesterday with a mild fever, yet almost full of energy to play, so I didn’t have the time for a proper post. So instead of a post yesterday and one tomorrow, I thought I would write just one this week about how I think about and use AI in my work.

The short version is that I find AI’s negative implications for critical thinking skills, immense environmental impact, ethical concerns about how LLMs are trained, and very meh results to be big reasons to avoid it in most cases. However, there are two distinct uses where I find AI tools valuable, as I’ll get to below the paywall.


Thinking About AI

Cognitive Impacts

My approach to AI has been influenced by my wider reading on its various impacts. These include its cognitive impacts, such as this piece from Cornelia C. Walther on agency decay, “a subtle yet substantive erosion of human autonomy resulting from overreliance on AI;” this academic paper that concludes that the available evidence suggests “that frequent engagement with automation induces skill decay;” and this paper from Michael Gerlich that finds that:

a significant negative correlation between the frequent use of AI tools and critical thinking abilities, mediated by the phenomenon of cognitive offloading. This suggests that while AI tools offer undeniable benefits in terms of efficiency and accessibility, they may inadvertently diminish users’ engagement in deep, reflective thinking processes.

I could go on, citing other papers I’ve come across, but my own experience when experimenting with AI confirms this for me more than the research. I’ve used it in the past, not because I expect the results to be better than what I could do myself, but because I want to offload some hard cognitive work. But there are reasons to engage in that hard work of thinking.

I’ve been influenced by this piece from Bianca Wylie: “Automating Summation - On AI and Holding Responsibility in Relationships.” Wylie writes about the time-consuming and sometimes dull work of transcribing and analyzing worksheets from public meetings. It is not an efficient process, and it seems like a perfect use case for AI.

Yet, as she argues:

When I see AI being suggested as a summarizing agent, I’m not only concerned about the accuracy of what is created through the use of automation, but moreso the absence or loss of what does not get done — what is inefficient and what is dull. I’m concerned because in the time-pressured world we live in, where efficiency is a constant measure of our professional capacity, there is every incentive to rid ourselves of this type of work if and where we can.

For Wylie, these are “necessary inefficiencies and skills in a world where we’re incentivized to get out of the weeds of details, away from anything dull.”

Not only does getting into the weeds help us think in a purely practical, applied sense, but it also helps us think at a more philosophical level. Thinking is a big part of what makes us human. As Lyndsey Stonebridge has written about Hannah Arendt’s approach to thinking: “Among other things, thinking is the exercise of representing the views of other people in one’s mind, of testing perspectives, experimenting with possibly new or alien ideas.”

Outsourcing our thinking is a dangerous path to take.

In practical terms, day to day, this means choosing the hard path, not the easy one. For example, I conducted 10 qualitative interviews early in the summer for a report I’m working on. The easy route would have been to use an LLM on my notes to summarise the themes and takeaways to speed up the process. But instead, to quote Wylie again, my notes from those interviews “became and were important to the relationship that I wanted to hold in a way where my duty to be careful and thoughtful was upheld.”

There is a duty to be thoughtful in my work, to immerse myself in the words and thoughts of others and to use my own mind to weave them into a richer tapestry of other ideas and thinking.

Sure, manually coding and categorizing those interviews and then writing them up into a coherent analysis was far more time-consuming than using an LLM. But I do not doubt that the analysis is stronger, that my skills at doing that and doing hard thinking have been strengthened, and that “my duty to be careful and thoughtful” with the words of others was upheld.

That is both a short-term and long-term good thing.

Other Concerns

Beyond the impact on thinking, there are a raft of other concerns that give me pause about using LLMs. These include their significant environmental impact, including their massive energy consumption and water usage. At a time of accelerating climate change, regulators must act rapidly to reduce these impacts. I also believe that individuals need to be judicious in their use of LLMs because of these environmental implications.

Numerous ethical considerations also exist. Many LLMs are clearly trained on vast troves of stolen data and copyrighted works. Even if the recent Anthropic court case found that its use of copyrighted work constituted fair use, I find it wrong that a company can take these works to train its algorithms, often reproduce them verbatim at scale, and profit from doing so, and still have that count as fair use. Many AI companies are mounting a direct attack on the ability of real people to make a living from their creative endeavours.

Then we get to the exploitation of low-wage workers, which AI companies have depended on to provide the human oversight necessary to tag these massive datasets and steer LLMs towards usefulness. This exploitation is often intensely colonial in nature, as Madhumita Murgia wrote about at length in her excellent book Code Dependent (see my post about it here). A piece in The Guardian today looks at some of this with US-based workers, with one researcher quoted in it saying:

“AI isn’t magic; it’s a pyramid scheme of human labor,” said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. “These raters are the middle rung: invisible, essential and expendable.”

Finally, there is the fact that US firms dominate the AI and LLM space. The immense power of these companies has long had severe consequences for individuals and countries. As Marietje Schaake wrote in The Tech Coup:

The digital technologies that once promised to liberate people worldwide have instead contributed to declining standards and freedoms, as well as weakened institutions. Private firms have leveraged their technologies for power consolidation. Tech CEOs have become the generals in geopolitical battles all over the world. From building platforms for conducting elections, to curating public access to information in app stores, to interfering in the front lines of war to decide who does and doesn't get internet access, these companies and their leaders share or have even overtaken the responsibilities of the democratic state. Yet there are no elections for consumers to share thoughts on corporate policy; CEOs cannot be voted in (or out) by the public; C-SPAN doesn't cover these companies' internal deliberative processes. The decisions that they make in the public interest are locked behind the fortress of private-sector protections. And unless democracies begin to claw back their power from such companies, they will continue to experience the erosion of their sovereign power.

And all of that is before we get to the increasing integration of these companies into an authoritarian, fascist US under Trump, and how he is willing to weaponize these tech firms in pursuit of his own ideological priors and geopolitical, imperialistic ambitions. AI and LLMs cannot be separated from that context.

Using AI

The result of all of these concerns is that I actively avoid using LLMs for the vast majority of use cases. Certainly, the quality of what LLMs produce is nowhere near good enough to outweigh the many issues and red flags. Some uses, such as Google’s attempts to force AI email and search summaries on users, are actively counterproductive and inaccurate. I’ve disabled email summaries and other forced AI where possible for my business Google account, and actively moved away from Chrome and Google Search in favour of Vivaldi and Ecosia.

However, this isn’t to say there aren’t some valuable use cases for LLMs. I have found one distinct use case of an LLM integrated with another product that is incredibly useful, plus another use case with major caveats.
