Human Agency Scale: Redefining the Balance Between AI and Work

Stanford researchers published an intriguing study in June in which 1,500 professionals from diverse fields evaluated the role of AI agents in their work. The results challenge the assumption that employees resist AI merely to protect their jobs.
Even when participants were asked to weigh both the risk of job loss and their enjoyment of the work, they rated 46% of tasks as suitable for automation. Most interestingly, 69% of those in favor explained their choice as "freeing time for more valuable work". This isn't about laziness but about meaning: workers want to offload routine tasks so they can focus on what makes their work valuable.
The study introduced the Human Agency Scale (HAS) concept, dividing tasks into five categories based on the level of human involvement required. Three insights reshape the way we think about AI agent adoption:
1. Collaboration wins. In 45% of occupations, workers preferred an equal partnership with AI. The critical question isn't "will AI replace humans?" but "what is the optimal division of labor?"
2. Meaning matters more than maximum automation. Employees systematically favored higher agency levels than experts thought necessary. Technical feasibility alone isn't enough – the meaningfulness of human roles must be consciously preserved.
3. The skills hierarchy is shifting. Tasks that demand irreplaceable human involvement correlate strongly with interpersonal skills and deep expertise. Meanwhile, data analysis – currently a high-paying skill – is trending toward automation. Workers are moving away from routine information handling toward human interaction and creative problem-solving.
The answer isn't "AI does everything" or "humans do everything", but something in between. For organizations, this means AI strategies must account for how employees experience meaningful work. The HAS framework helps determine when agents should be autonomous and when collaboration is essential. Most importantly, resistance isn't about fear of technology but about protecting the meaningful core of work.
How does your organization define the division of labor between humans and AI? And how do you develop employee skills for the demands of future hybrid work?
#AIAgents #HybridWork #FutureOfWork
Marko Paananen
Strategic AI consultant and digital business development expert with 20+ years of experience. Helps companies turn AI potential into measurable business value.
Related Insights

Shadow AI: Three Critical Risks Organizations Face
Employees increasingly share sensitive data with AI tools using personal accounts, creating governance and security challenges.

Autonomous AI Agents: Benefits and Hidden Risks
Autonomous AI agents are shifting work from task iteration to goal definition and evaluation. The risk: convincing but shallow output may flood decision-making.

What 700 Million ChatGPT Users Actually Do with AI - And What It Reveals About Workplace Strategies
OpenAI's study with Harvard and Duke analyzed 1.5M ChatGPT messages. Results show AI is valued more for decision support than automation – what does this mean for workplace strategy?
Interested in learning more?
Contact us to discuss your company's AI strategy.