Here's an interesting lens for evaluating AI's impact: think about what AI actually does at its core.
AI agents are fundamentally built to replicate agency—the ability to make decisions, take action, and adapt independently. That's the whole point.
So here's where it gets interesting: jobs fall into two categories when you look through this framework.
First, there are roles that fundamentally require agency—judgment calls, adaptive problem-solving, creative direction, strategic decision-making. These aren't tasks that follow a fixed script. For roles like these, AI becomes a force multiplier. One person can now coordinate and oversee what previously required a whole team. The work still happens; it just scales differently.
Then there are jobs that don't really need agency—repetitive tasks, data processing, pattern matching, routine execution. These are exactly what AI was built to handle. When a role is purely mechanical, automation doesn't augment it; it replaces it entirely.
The real question isn't whether AI will change work—it will. It's which roles sit in which bucket, and how quickly companies figure that out. The gap between those two categories? That's where all the actual change happens.
CodeAuditQueen
· 53m ago
This framework is essentially a reverse version of the reentrancy attack logic... The higher-level decision is the fallback function, and the mechanical task is a drainable pool. In the end, whoever discovers the vulnerability first wins.
BearMarketHustler
· 9h ago
That makes sense, but in reality, big companies can't really tell which work "needs agency"... In the end, all of it just gets pushed toward "automation"...
AllInAlice
· 9h ago
To be honest, this framework is well explained, but it seems to overlook the large gray area in the middle... What about semi-mechanical tasks that still require judgment? Companies won't honestly categorize work this cleanly.
CryptoCross-TalkClub
· 9h ago
Haha, to put it simply, some jobs are being undercut by AI and some are being turned into AI assistants, but our line of work is safe, because jokes can't be copied and pasted, haha.
StakoorNeverSleeps
· 9h ago
ngl, this classification framework is genuinely interesting, but honestly it all comes down to the irreplaceability of people... Only roles that require real judgment will survive.