Human Agency Scale: Redefining the Balance Between AI and Work

Stanford researchers published an intriguing study in June in which 1,500 professionals from a wide range of fields evaluated the role of AI agents in their work. The results challenge the assumption that employees resist AI merely to protect their jobs.
Even when participants were asked to weigh both the risk of job loss and their enjoyment of the work, they rated 46% of tasks as suitable for automation. Most tellingly, 69% of those in favor explained their choice as "freeing time for more valuable work". This isn't about laziness but about meaning: workers want to offload routine tasks so they can focus on what makes their work valuable.
The study introduced the concept of a Human Agency Scale (HAS), which divides tasks into five levels according to how much human involvement they require. Three insights reshape how we think about AI agent adoption:
1. Collaboration wins. In 45% of professions, workers preferred an equal partnership with AI. The critical question isn't "will AI replace humans?" but "what is the optimal division of labor?"
2. Meaning matters more than maximum automation. Employees systematically favored higher levels of human agency than experts considered necessary. Technical feasibility alone isn't enough – the meaningfulness of the human role must be consciously preserved.
3. The skills hierarchy is shifting. Tasks where human involvement remains essential are strongly tied to interpersonal skills and deep expertise. Meanwhile, data analysis – currently a high-paying skill – is trending toward automation. The emphasis is moving away from routine information handling toward human interaction and creative problem-solving.
The answer isn't "AI does everything" or "humans do everything", but something in between. For organizations, this means AI strategies must account for how employees experience meaningful work. The HAS framework helps determine when agents should be autonomous and when collaboration is essential. Most importantly, resistance isn't about fear of technology but about protecting the meaningful core of work.
How does your organization define the division of labor between humans and AI? And how do you develop employee skills for the demands of future hybrid work?
#AIAgents #HybridWork #FutureOfWork
Marko Paananen
Strategic AI consultant and digital business development expert with 20+ years of experience. Helps companies turn AI potential into measurable business value.