The Seed

I stopped in the middle of organizing my notes today. Not because there was nothing to do — the opposite. Too much to do.

I have an Obsidian vault with around 300 files. Claude Code helped me build an automated workflow: morning planning, task dispatch, knowledge digestion, cross-project sync. Before AI, I would never have attempted any of this — too complex, not worth the time investment. But now, “not worth it” has become “maybe I could.”

Then I noticed a paradox: everything became “doable,” but everything took longer than expected. The tools got stronger, but I got more tired.

I thought it was just me — maybe I wasn’t efficient enough, maybe I hadn’t found the right workflow. Until I saw the data:

Upwork’s 2024 research found that over half of employees using AI tools feel MORE overwhelmed — not less burdened, but more exhausted.

It’s not just me. It’s structural.


I. The Pipe Is Fixed

Zhuangzi said something over two thousand years ago that hits different now:

“My life has a limit, but knowledge has none. To pursue the limitless with the limited — that is perilous.” — Zhuangzi, Nourishing Life

Not a platitude. Two millennia later, cognitive science proved him right.

In 1956, psychologist George A. Miller published what became a landmark paper: The Magical Number Seven, Plus or Minus Two. He found that human working memory can hold roughly 7 chunks of information at once. Not 7GB. Not 7TB. Seven chunks. This is a hardware constraint, not a software issue. You can’t upgrade it by “trying harder,” just as you can’t grow an extra heart chamber by running.
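The one workaround Miller's subjects had is the one we still have: chunking. A toy illustration (my example, not one from the paper):

```python
# Miller's limit in miniature: twelve raw digits blow past a ~7-slot
# working memory, but regrouped into three familiar year-sized chunks
# they fit easily. Chunking changes the unit, not the capacity.
digits = list("149217761945")          # 12 items, held one by one
chunks = ["1492", "1776", "1945"]      # the same digits as 3 chunks
assert len(digits) > 7                 # over capacity as raw digits
assert len(chunks) <= 7                # comfortably within it
```

The capacity stays fixed; only the packaging changes.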

John Sweller’s Cognitive Load Theory (late 1980s) added another layer: working memory isn’t just small — it’s also short-lived. Everything has to squeeze through this bottleneck before it can reach long-term memory. Pile on too many options, too complex an interface, too much noise — and your ability to learn and decide just falls apart.

Then there’s Directed Attention Fatigue, from Rachel and Stephen Kaplan at Michigan. Your brain burns energy just suppressing distractions — and that fuel tank is finite. When it runs dry: more mistakes, worse planning, shorter temper, dumber impulses.

In other words: your brain is a pipe with a fixed diameter. From Miller to Sweller to Kaplan, over half a century of research says the same thing — the pipe doesn’t get wider.


II. Ten Times More Valves

Herbert Simon, 1978 Nobel laureate in economics, wrote in 1971:

“In an information-rich world, the wealth of information means a dearth of something else — a scarcity of whatever it is that information consumes: the attention of its recipients.”

1971. No internet, no smartphones, no AI. Simon was just watching TV channels and newspapers multiply. But he already saw the core truth: attention is zero-sum. Give it to A, and B gets less. Period.

Now apply this logic to the AI era.

Before AI, my day had maybe 5-10 decision points: what tasks to do, what methods to use, how to prioritize. Now, with Claude Code, I might face 50: Should I let AI refactor this code? Is its output correct? Should I iterate again? Is this automation worth building? Who fixes the bugs later? Do I understand AI’s suggested approach? If not, how long will it take to understand?

Every new “possibility” is a new valve. Every valve asks: open or not?

A famous Israeli study of parole judges shows what this does to people: approval rates start at roughly 65% at the beginning of each session, then drop toward 0% as decisions pile up. After a break, they reset to 65%. The cases didn't get worse; the judges' brains got tired.

David Meyer's research is even more direct: task-switching can more than double completion time, because the brain incurs zero-progress "restart" time at every switch. And only about 2-5% of people are true "supertaskers"; everyone else performs worse when multitasking.
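The arithmetic behind "more than double" is easy to sketch. The numbers below are illustrative, not figures from Meyer's studies:

```python
# Toy model of switching cost: every switch between tasks adds a fixed
# "restart" overhead during which zero progress happens. Numbers are
# illustrative, not from Meyer's experiments.
def total_time(task_minutes: list[float], switches: int, restart: float) -> float:
    """Minutes of actual work plus zero-progress restart time."""
    return sum(task_minutes) + switches * restart

focused = total_time([30, 30], switches=1, restart=0)       # finish A, then B: 60 min
interleaved = total_time([30, 30], switches=11, restart=6)  # ping-pong: 126 min
assert interleaved > 2 * focused  # "more than double" falls out of the overhead alone
```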

That’s the trap. AI lets you open more valves, but the pipe hasn’t changed. Every new valve doesn’t add output — it just thins what each one gets.
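The valve picture can be made literal with one line of arithmetic. This is a toy model, not a measurement:

```python
# Toy model of the fixed pipe: a day's attention budget divided across
# however many "valves" (decision threads) are open. Opening more valves
# doesn't add water; it thins what each valve gets.
def per_valve_attention(budget_hours: float, open_valves: int) -> float:
    """Hours of attention each open decision thread receives."""
    return budget_hours / open_valves

before_ai = per_valve_attention(8.0, 7)   # ~1.1 hours per thread
after_ai = per_valve_attention(8.0, 50)   # 0.16 hours: under ten minutes
```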


III. The Data Doesn’t Lie

If it were just me, fine — maybe I’m doing it wrong. But the numbers say otherwise.

APA’s 2023 Work in America Survey (n=2,515):

  • 38% of employees worry AI will make their jobs obsolete
  • Among those worried: 51% say work hurts their mental health, 64% feel tense or stressed daily, 37% report emotional exhaustion
  • Even among those NOT worried about AI replacement, stress is rising — 56% feel micromanaged by tech surveillance

Microsoft’s 2024 Work Trend Index:

  • 75% of knowledge workers now use AI at work
  • 78% bring their own AI tools (BYOAI)
  • Yet 59% of leaders admit they can’t quantify AI’s productivity gains
  • 60% of companies lack an AI vision or plan
  • 46% of global workers are considering quitting within the year — an all-time high

Upwork’s 2024 Research (the most damning):

  • Over 50% of AI-using employees report feeling “overwhelmed”
  • Workers with AI tool access report more burnout, not less

Gallup 2021 Global Emotions Report:

  • 41% of adults worldwide report stress — all-time high
  • 42% report worry — all-time high

Back in 2007, Tarafdar named five flavors of technostress: overload (work faster), invasion (always on), complexity (steep learning curves), insecurity (will I be replaced?), uncertainty (this tool will be obsolete next quarter).

Sound familiar? AI cranks every single one up.


IV. History Repeats

This isn’t the first time.

In 1987, Nobel economist Robert Solow wrote a line that became known as the “Solow Paradox”:

“You can see the computer age everywhere but in the productivity statistics.”

Erik Brynjolfsson quantified this in 1993: computing power grew 100x in the 1970s-80s, yet labor productivity growth fell from over 3% in the 1960s to roughly 1%. More computers didn’t mean more productivity — at least not in the short term.

Same story with steam and electricity: the productivity payoff took decades. Early electrified factories bolted electric motors onto steam-era floor plans and wondered why nothing changed. Only when manufacturers redesigned the whole factory around electric power did electricity actually deliver.

Today’s AI era is replaying this cycle:

Tool explosion → Cognitive overload → Organizational adaptation lag → Productivity stagnation

75% are using AI, but 59% of leaders can’t tell you what it’s actually doing for them. More tools, more exhaustion. The problem isn’t AI — it’s that our habits, organizations, and brains haven’t caught up.


V. Knowing When to Stop

So what do we do? Reject AI? Obviously unrealistic. Use AI harder? The data already shows that road leads to burnout.

Lao Tzu said:

“He who knows when to stop does not find himself in danger.” — Tao Te Ching, Chapter 44

And an even sharper one:

“Less brings gain; more brings confusion.” — Tao Te Ching, Chapter 22

Two and a half thousand years apart, same answer: do less, not more.

When Steve Jobs returned to Apple in 1997, the first thing he did wasn't launching new products; it was killing 70% of the product line. He later said:

“People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas.”

Focus isn’t saying yes to the goal. Focus is saying no to a hundred other good ideas.

Obama wore the same gray or blue suit every day. Zuckerberg wore the same gray T-shirt. Not because they couldn’t afford variety — because they understood a fact: every trivial daily decision draws from the same pool of limited cognitive resources. Reducing unimportant decisions protects the quality of important ones.

My own recent realization: Claude Code can help me with dozens of things — note automation, code refactoring, knowledge curation, data analysis, documentation. But when I try to advance all these possibilities simultaneously, I’m more exhausted than before AI. Every new automated pipeline means one more system to maintain, one more potential failure point, one more “is this AI output correct?” judgment call.

The moments that actually work are when I decide not to do something. Not because I can’t — precisely because I can, which makes “not doing” an active decision that must be deliberately made.

Jacob Jacoby's consumer-research experiments showed the same thing: giving shoppers more brand information led to worse decisions. Barry Schwartz called it the "paradox of choice": past a certain point, more options just mean more anxiety and more regret.

AI has handed us more options than we’ve ever had. But our ability to choose well? Still the same.


Closing

Let me return to Zhuangzi.

“To pursue the limitless with the limited is perilous” — but Zhuangzi didn’t say “so do nothing.” He was talking about nourishing life. Nourishing what? The pipe itself.

Here’s what I keep coming back to: the more you CAN do, the less you SHOULD do.

Not rejecting tools. Using the power of tools to do fewer, deeper things — not more, shallower things. The pipe diameter won’t change. The water pressure will keep rising. The only thing you can control is how many valves you choose to open.

Eric Schmidt once warned that too much information could get in the way of “deep thinking, comprehension, and memory formation.” Clay Shirky put it better: the problem isn’t information overload — it’s filter failure. Our filters can’t keep up.
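In Shirky's framing, the fix lives in the filter, not the firehose. A minimal sketch of the idea — the scoring rule below is a placeholder assumption; the point is that the keep-count is set by your attention, not by the inflow:

```python
# "Filter failure" repaired in miniature: score what arrives and keep only
# the top k, where k is fixed by attention, not by how much arrives.
# The scoring function here is a placeholder assumption.
def filter_top_k(items: list[str], score, k: int) -> list[str]:
    """Keep the k highest-scoring items; deliberately discard the rest."""
    return sorted(items, key=score, reverse=True)[:k]

inbox = ["urgent bug", "newsletter", "AI tool launch", "teammate question"]
kept = filter_top_k(inbox, score=len, k=2)  # placeholder score; k stays 2 as inbox grows
```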

AI generates information faster than anything before it. But the hard part was never generating more — it’s throwing most of it away and actually paying attention to what’s left.

Simon made it clear in 1971: in an information-rich world, the scarce resource isn’t information — it’s attention.

So in an AI-rich world, what’s scarce?

The moments you decide not to use AI.


Written April 8, 2026. An evening when I used AI to organize 300 note files — and found myself more exhausted than before I started.
