About that viral AI article
This article is currently going viral. Multiple people sent it to me within the first hour I was out of bed. You can read it for yourself, but the takeaway is easy to summarize:
AI wins. You lose (if you’re not on team AI).
It’s a bad article. I don’t know why Matt Walsh thinks it’s good, but one guess might be that there’s a certain symmetry between apocalyptic AI hype and apocalyptic political activism. People like Walsh really need to understand that anybody my age (37) has been assured on a daily basis for most of their adult life of two things: 1) the digital/streaming/non-ownership/AI age is all there is and there’s nothing after this, and 2) the next election is the most important election of our lifetime, and if you get it wrong, there’s nothing after it. Call me stupid, call me stubborn, but you can’t call me inexperienced.
(Walsh understands this, of course.)
I will freely admit to having economic, ideological, and aesthetic biases against AI hype. I’m an editor who provides for his family by working with books and words. I don’t like the idea of my phone conversations with authors being replaced by an LLM, or of the hours spent pacing, searching for the best way to structure an argument, being flushed away in favor of 15 seconds of prompting. I believe that digital life carries unique dangers and tilts the human spirit away from its creational design. And I have an aesthetic objection to a world without writers, artists, and counselors. The world that the foremost AI architects most desire is a world I recoil from.
When I express disdain for articles like this—and trust me, there are lots of them—I always hear from a few folks who say something like, “But have you seen what the latest models can do?” In most cases, the answer is no, I haven’t. But I think these friends are giving me the wrong answer to the right question. When a CEO fantasizes about a world without human relationships, the revulsion that occurs in many of us is not due to ignorance about what this CEO’s technology can accomplish. Pointing out to me the power of the models when I say I don’t want to live in Big Tech’s kingdom is like showing me how sleek and efficient a Glock is when I lament violence. Yes, OK, you have a point: It works! And people love tech that works!
Of course, my big problem with the article is its hilariously self-assured predictions. I commend this author for skillfully evading any hint that predictions like his have been made before and not materialized. Politicians do this, too. Every day for four years, I am told what will happen to my family, my city, and my country if we allow the wrong person/group/legislation to succeed. And of course, bad ideas and candidates have consequences! But that’s not what I’m told. I’m told that I won’t have a country anymore. I’m told I’ll be driven to abject poverty and enslaved…unless I can convince all my friends to vote the right way.
You know what happens at the end of four years?
Nothing.
You know what happens right after that?
The same predictions.
Look, I am not saying, and I don’t believe, that AI is a nothingburger. It’s disruptive, yes. It will change industries, yes. And it can be helpful…yes! And I’m not smart or well connected enough to tell people who are smart and well connected in this area what can and cannot happen.
I’m just a normal person who has heard plenty of sales pitches before. And I cannot stop thinking about how many of these “purely telling you what’s going to happen” takes sound like sales pitches. Like any clever sales pitch, the AI stuff pivots quickly whenever people ask to see what’s being promised. Freddie deBoer puts it well:
The constant two-step is exhausting. They make absurdly outsized claims about what AI is and does or will do; when it’s pointed out that these tools are at present very limited, certainly relative to the hype, the response is some version of “Well let’s be realistic now!” These are transformative technologies, but when we ask to see the transformation we’re accused of asking for too much. I can’t stand it anymore.
When people ask for money and support based on what they claim will inevitably happen, it’s more than reasonable to ask to see the scoreboard. And if they don’t want to talk about that, or if they straw man that request (“Why do you think AI means nothing at all??!?!?”), the red flags need to come up.
Anyway. Here’s where I am these days:
AI’s potential to transform society as a whole has been talked about nonstop for at least three years. And so far, the main things we have to show for it are porn bots and data centers.
There is a concerted ongoing effort to lie to people about what AI is and is not doing right now. Ross Douthat is one of my heroes, but even he got bait-and-switched by what appeared to be a community of sentient AI bots but was actually a clever simulation run by humans in real time. It’s impossible for 99% of people to verify the claims they’re seeing, and the people making those claims know this.
There is a difference between what AI can do and what AI will do. It’s fine to be a fan of what AI can do. It’s fine to think these abilities will grow and spread in significant ways. But that’s different from confidently telling people that the CPU will give birth any minute now.
What AI takes away will always be harder to express than what it adds. What it adds is material and instant. What it takes away is invisible and immeasurable (you cannot quantify the thinking you didn’t do or the conversations you didn’t have).
The geopolitics of AI are so liquid and volatile right now that we have no idea what kind of vulnerabilities are or will be exposed. This is the one area that AI hype men struggle to pretend to know about, and for that reason, they avoid talking about it. But it’s common sense. If your economy runs on AI, and your enemy’s weapons also run on AI, you’ve got some really important issues to work on really fast.
I think people really want to believe in AI’s power to change the world partially because they also believed that about social media. Instead, we all ended up with no friends, addicted to the same 3-4 streaming shows, and frustrated that our 20s passed us by while we were scrolling. AI feels like a chance to rewrite our story. And I suspect Big Tech feels similarly…which is concerning.
I’m 37. My entire life, Big Tech has done one thing consistently: used the word “inevitable” as a pretext for slowly ensuring that I own nothing and pay for the privilege. The religion of Silicon Valley is a hellfire-and-damnation altar call, and speaking for myself, I’m not gonna walk the aisle again. If AI destroys my life, it’ll be because, for once, the people incapable of moral feeling got it right. So be it. But it won’t be because I get intimidated every 5 years by the voice coming from behind the curtain.

I will say that you are in a uniquely human-centric profession: acquisitions editor, cultural commentator. You are rewarded for having good instincts and good taste. It is certainly true that your kind of work would be among the last affected by AI directly. But in law, design, customer service, HR, driving, some kinds of education, and so on, it will be a very different story. It will not be all bad or all good, but it will change everything.
You: “‘But have you seen what the latest models can do?’ In most cases, the answer is no, I haven’t. ... Pointing out to me the power of the models when I say I don’t want to live in Big Tech’s kingdom is like showing me how sleek and efficient a Glock is when I lament violence.”
Matt: “Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone.”
You freely admit that you're writing from a position of voluntary ignorance, which is exactly what Matt Shumer warns against doing.
I don't want to live in Big Tech's kingdom either. But what you're doing here is like confidently declaring that the loud purring at the door is a dozen housecats and not a lion. If you can really live without ever opening the door, I guess it doesn't matter. But most of us can't. And we'd like to look through the peephole before we do. To unwrap the metaphor, what would "never opening the door" look like in the modern world? Think the Amish. Even the most countercultural among us aren't the Amish.