The Revenge of the Humanities: Why Everything I Learned Outside the Classroom Is Now the Only Thing That Matters
For most of the past decade, the career advice was consistent: learn to code, get technical, stop wasting time on things that cannot be measured. That advice is aging badly. Here is what changed between 2024 and 2026, and what it means for anyone building a creative practice.

For most of the past decade, the career advice was consistent and delivered with the confidence of people who had recently discovered spreadsheets. Learn to code. Get technical. Stop wasting time on things that cannot be measured. The message was clear: soft skills were a consolation prize for people who could not do hard skills.
That advice is aging badly.
The Collapse of the Safe Bet
Something quiet happened between 2024 and 2026 that very few people in the technology industry wanted to acknowledge publicly. The unemployment rate for computer engineering majors climbed to 7.8%, the highest of any major except anthropology. McKinsey's new chief hiring priority, after decades of recruiting the same profile, is now explicitly "liberal arts majors, whom we had deprioritized." Cognizant's CEO told Fortune he is going directly to liberal arts schools and community colleges because those graduates can do something that narrowly technical candidates often cannot: think across domains, communicate under pressure, and navigate situations where the right answer is not in the documentation.
This is not a pendulum swing or a trend piece. It is a structural consequence of what AI actually automates. And once you see the logic, it is impossible to unsee.
What AI Replaced Was Not What We Thought
The Noema Magazine piece by Nils Gilman makes the argument precisely: the knowledge worker's primary function was routing. Taking information from one place, processing it through human cognition, and outputting it through a keyboard. That function is being automated. Not because humans became less intelligent, but because the cost of that specific cognitive operation is collapsing toward zero.
When the cost of one input collapses, the value of the complementary input rises. This is basic economics, and it applies here with unusual clarity. The more AI commodifies content generation, code drafting, data analysis, and document production, the scarcer, and therefore more valuable, the things AI cannot do become: judgment under ambiguity, persuasion, trust-building, the ability to read a room that does not exist in any training dataset.
Anthropic co-founder Jack Clark put it bluntly in the New York Times: "good taste and intuitions about what to do" will become the limiting resource. NVIDIA's Jensen Huang described the future definition of smart as the ability to "infer the unspoken around the corners" and "feel the vibe." These are precise descriptions of capabilities that no large language model currently has, and that are very difficult to acquire through technical training alone.
The FOMO Trap
There is something worth saying honestly here, because most people writing about AI either perform hostility or perform enthusiasm, and both performances are dishonest.
The FOMO around AI has been manufactured at industrial scale. Every week brings a new announcement designed to make you feel that if you did not adopt this specific tool by Thursday, you will be irrelevant by Monday. That anxiety is real; I have felt it myself. And it has produced an enormous amount of performative adoption: people using AI to appear productive rather than using it to actually produce.
But AI hostility is equally unconvincing, and ultimately just as unproductive as the hype it reacts against.
The honest position is somewhere more specific. AI has made genuine contributions to how I work. Suno turned half-finished riffs into complete pieces. The value was not the output itself. It was hearing an idea fully realized for the first time, a kind of acoustic prototype that opened up directions I could not have heard otherwise. Claude Code is helping me build a data science project that exceeds my current technical knowledge. I am not pretending otherwise. It fills in the parts I do not yet know while I hold the direction.
None of this is a pose. These are real tools solving real gaps. And none of it required surrendering the part that matters most.
The distinction I keep returning to is this: I do not give AI the last edit. I do not let it substitute for thinking. I use it the way a musician uses a sequencer: it executes, but it does not decide. The decision is mine, and staying close to the decision is not a romantic attachment to difficulty. It is the only way to keep building the judgment that the work requires.
What AI Cannot Document
The Sound Vault started as an experiment. A place to document music I was already listening to, to write about artists I was already following, to build a record of something I cared about without needing anyone's permission to care about it. What happened next was organic in the precise sense: it grew because the content was real, and the readers who found it recognized something in it that automated content cannot produce.
The work of documenting music through AI-assisted research is, for me, genuinely enjoyable. Finding a thread in an artist's discography, tracing an influence across decades, making a connection that has not been written about in exactly this way before: that process is not something AI does for me. It is something I do with AI alongside me, handling the retrieval while I handle the interpretation.
What AI cannot do is feel the weight of why a particular record matters. It cannot bring the experience of having heard a piece of music at a specific moment in your life and understood something you could not articulate at the time. That experience is not data. It is the material that interpretation is made from, and it is irreplaceable by any system trained on the aggregate of what other people have written about the same record.
This is the liberal arts argument made in practice rather than in theory. The skills that make writing about music worth reading are the same skills that make any writing worth reading: a sensibility developed through real experience, genuine curiosity that has no performance value, and the willingness to arrive at a conclusion that is yours rather than the output of something trained to produce conclusions that feel plausible.
The Education Nobody Valued and Now Everybody Needs
Washington Monthly reported in April 2026 that nearly 90% of employers now seek evidence of problem-solving ability in candidates, while around 80% value strong teamwork and written communication. The same surveys show a large gap between how much employers value these skills and how confident they are that candidates actually have them. The gap is widest in precisely the capabilities that a narrowly technical education tends to underweight: critical thinking, contextual judgment, and the ability to operate without a clear answer.
The World Economic Forum found that 22% of jobs worldwide will change in the next five years, and that the primary deficit is not technical knowledge but human-centered skills: agency, creativity under constraint, persistence in ambiguity, and the ability to act when no clear answer exists.
The Global Times reported in March 2026 that the emerging demand is not for liberal arts graduates over STEM graduates but for people who can operate in both directions: "What we are seeing is the end of rigid disciplinary labels, and the unstoppable rise of interdisciplinary, hybrid talent." The newly coveted roles (AI narrative designers, human-AI ethics consultants, algorithm governance specialists) require rigorous humanistic capacity plus enough technical literacy to understand what the systems are actually doing.
This is exactly the position that anyone who has spent years at the intersection of multiple domains, without the security of a single label, finds themselves in right now. It is not a comfortable position historically. It is, however, the one the economy is moving toward.
The Risk Nobody Is Talking About
The argument for liberal arts skills in the age of AI is correct, and it is also dangerously incomplete.
McKinsey estimates AI agents and robots could generate nearly $3 trillion in annual value for the U.S. economy by 2030. But the IMF's research confirms that where judgment defines the difference between high- and low-skilled workers, AI will disproportionately benefit those who already have higher skills, widening productivity differences and income disparity. The judgment economy rewards judgment. But access to the kind of education, experience, and cognitive flexibility that builds judgment has never been evenly distributed. Making judgment the primary economic resource does not solve that problem. It may accelerate it.
Workforce analyst Brent Orrell identified the specific mechanism clearly: AI is hollowing out exactly the entry-level analytical tasks that have traditionally served as training grounds for professional judgment. The junior analyst who learns to think by doing junior analysis loses that training ground when AI does it instead. The people who already have judgment become more valuable. The people who were supposed to build it through those roles may never get the chance.
Noema's Gilman asks the question directly: "are the senior-most people in organizations going to use AI to pull up the ladder behind them?" This is not rhetorical. It describes the structural risk of a transition that could easily produce a two-tier economy in which the judgment economy is genuinely great for people who already have judgment, and genuinely brutal for everyone who was supposed to acquire it through the work that no longer exists.
What This Moment Actually Requires
The practical implication is simpler than the theoretical argument suggests.
The most valuable thing you can do right now is refuse to let AI do your thinking. Use it for everything it is good at. Let it draft, retrieve, process, and synthesize. Then make the decision. The reason this matters is not ethical, though it has ethical dimensions. It is practical: the only way to build the judgment that the economy is about to pay a premium for is to exercise it. Every time you outsource the hard part, you trade an appreciating asset for the output of a depreciating one.
That means being willing to make mistakes, to produce things in public that are not perfect, to keep building even when the tools change faster than you can absorb them. That willingness is not recklessness. It is the only way to stay close enough to your own judgment to know when it is working.
The second implication is that communication, empathy, and the capacity to build things with other people have not become soft skills. They have become the hardest skills in the room. Not because the skills themselves changed, but because everything that could be automated has now been automated around them, leaving them as the visible remainder: the thing you cannot replicate with a good prompt.
Humanity will find ways to express itself that cannot be measured in tokens. In art, in music, in sports, in the kind of craftsmanship that lives in the hands and the ear and the eye. The mathematical mind has never been the whole of what we are. The age of AI may be, among other things, what finally makes that obvious.
Sam Harris said recently: "My expectation now is that there's gonna be something like the revenge of the humanities. What the world is going to need are well-educated generalists with good taste."
That revenge is not coming. It is already here.
Noema Magazine. Why a Liberal Arts Education Will Soon Be More Valuable Than Ever. noemamag.com
Fortune (2026, March). Liberal arts degrees have long paid the worst salaries. That may be changing. fortune.com
Washington Monthly (2026, April). What AI Can't Do. washingtonmonthly.com
McKinsey Global Institute (2026, January). Human skills will matter more than ever in the age of AI. mckinsey.com
IMF Finance and Development (2025, June). Machine Intelligence and Human Judgment. imf.org
Global Times (2026, March). AI doesn't favor arts or science; it demands hybrid talent. globaltimes.cn
World Economic Forum (2026, January). In the age of AI, human skills are the new advantage. weforum.org
London Business School (2025, November). Why human judgement is essential in the age of AI. london.edu
Related reading:
- The Polished Lie: Why Great AI Execution Still Feels Wrong -- on the gap between technically correct AI output and work that actually carries something: what the difference reveals about what interpretation requires.
- Claude Code Changed How I Think About Marketing, Not Just How I Build -- the specific experience of using AI as an execution layer while keeping judgment in-house, and why the distinction matters more than the tool.
- The Rise of Dopamine Culture: How Algorithms Rewired Everything We Create, Consume, and Feel -- the structural argument for why platforms optimizing for engagement are in tension with the sustained attention that builds real judgment.

