Will AI Really Make Us More Productive?

Why the faster we work, the more work we get

BUSINESS

Richard Hanson

In 1865, economist William Jevons discovered something that should worry every professional betting on AI to lighten their workload. He noticed that as steam engines became more fuel-efficient, Britain did not burn less coal but burned vastly more. Cheaper energy meant more factories, longer routes, bigger ambitions. Efficiency created abundance, not leisure.

Today, I watch the same paradox unfold in my inbox. The first time I used AI to draft a legal note, what once took an hour was done in minutes. But as Oliver Burkeman argued in "Four Thousand Weeks," productivity improvements often backfire: "if you get much quicker at answering email, you reply to more people more quickly, and then they reply to you, and then you have to reply to those replies. And before you know it, you're doing more email with your life." My newfound efficiency had simply reset everyone's baseline expectations.

This is AI productivity in practice: faster outputs matched by inflated expectations. The old rule that work expands to fill the time available - Parkinson's Law - just received a silicon upgrade.

The early evidence paints a complex picture. Jon Whittle's team at Australia's CSIRO studied customer service agents using AI assistants and found a 14% overall productivity increase - impressive until you discover that the 35% gains went exclusively to less experienced staff handling routine queries. Meanwhile, a broader survey revealed that 70% of workers felt either that AI slowed them down or that they could not unlock its potential.

When CSIRO examined its own workforce, the pattern became clearer. People gravitated toward trivial applications: drafting emails, summarising notes, generating first drafts that still required substantial human refinement. Meaningful productivity gains demanded time, experimentation, and a knack for identifying high-value problems rather than convenient ones. Most crucially, unlocking anything beyond the superficial required genuine skill development.

But even when organisations invest properly in training and implementation, they face a more fundamental challenge. Research suggests that up to 80% of attempts to apply AI in organisations fail outright. The technology works; the human systems do not.

Here lies the first lesson from history. In the Victorian era, factory owners who installed steam-powered machinery rarely passed efficiency gains to their workers as shorter hours. Instead, they redesigned workflows to extract more output from the same time. A study from the 1980s found that this dynamic persists: new technology typically improves productivity in one area while creating unexpected work elsewhere. Managers who design IT systems to ease their own tasks often unwittingly burden administrators with additional complexity.

The parallel to AI is precise. When associates can draft contracts twice as fast, the likely result is not shorter days but two contracts, with higher expectations for customisation and speed. When AI speeds up one department, other departments find themselves managing the overflow, handling escalations, or processing the additional volume that efficiency has unleashed.

This creates an uncomfortable reality that most implementation strategies ignore. Workers understand, often more clearly than their managers, that AI threatens their interests. Even when the technology works perfectly, it typically means higher expectations, more work, and less job security. The rational response is passive resistance: not dramatic rebellion, but quiet sidelining. A tool that never quite gets used properly, training that never quite sticks, processes that somehow never quite get optimised.

Management knows this dynamic exists but rarely acknowledges it openly. The technology gets treated as inevitable progress; the humans get treated as the inconvenient variable to optimise. There is an unspoken assumption that the tools are right and the psychology is wrong - that resistance, confusion, or suboptimal adoption reflects human failings rather than flawed and unempathetic implementation.

The result is a kind of collective fiction: everyone pretends the resistance stems from technological limitations or training gaps, when the real issue is that people correctly recognise the technology is not designed to serve them.

This does not make me an AI pessimist. The technology offers genuine opportunities beyond mere productivity gains. Whittle identifies four compelling reasons for AI adoption: productivity improvements, cost reduction, enhanced customer value, and competitive advantage through business model disruption. The most interesting applications focus on delivering services that were previously impossible rather than simply accelerating existing processes.

But realising these benefits requires acknowledging psychological realities that most organisations prefer to ignore. People respond to incentives, not just capabilities. They resist changes that threaten their position, regardless of how elegantly those changes are presented. And they can distinguish, often with remarkable accuracy, between initiatives designed to serve them and initiatives designed to extract more value from them.

A few principles for the decade ahead:

Expect Jevons everywhere. Efficiency gains will tempt everyone to demand more, not less. Build expectation management into any AI strategy from the start, or watch your time savings evaporate into scope inflation.

Prepare for delayed gratification. Historical evidence suggests that productivity gains from transformative technologies take years to appear in national statistics. Early adoption brings turbulence before it brings measurable improvement.

Design for human interests, not just business interests. The most successful implementations align worker incentives with organisational goals rather than treating employee psychology as a problem to be overcome.

Focus on value creation over cost cutting. Use AI to improve decision-readiness, customer experience, and service quality - not merely to do familiar tasks faster or with fewer people.

The AI productivity revolution will arrive not as a single dramatic shift, but as a quiet accumulation of marginal gains - each too modest for headlines, but together capable of reshaping how professional work gets done. The question is whether those gains will serve the people doing the work, or whether they will follow the pattern of cheaper coal and factory automation: making the system more profitable while leaving the humans to run faster just to stay in place.

History suggests that efficiency without empathy creates abundance for some and exhaustion for others. The choice of which path AI takes remains ours to make.